Information?: An Inquiry

Tuesdays, 9:30-11 am
Science Building, Room 227

Schedule | On-line Forum | Evolving Resource List

For further information contact Paul Grobstein.


Discussion Notes
1 July 2004

Participants: Al Albano (Physics), Doug Blank (Computer Science), Peter Brodfuehrer (Biology), Anne Dalke (English, Feminist and Gender Studies), Wil Franklin (Biology), Paul Grobstein (Biology), David Harrison (Linguistics, Swarthmore), Mark Kuperberg (Economics, Swarthmore), Jim Marshall (Computer Science, Pomona/Bryn Mawr), Liz McCormack (Physics), Lisa Meeden (Computer Science, Swarthmore), Eric Raimy (Linguistics, Swarthmore/Trico), Ed Segall (Edge Technical Associates), Jan Trembly (Alumnae Bulletin), George Weaver (Logic/Philosophy)

Summary by Paul Grobstein
(presentation notes available)

Intending to lay a foundation for discussing Chaitin's work on randomness and algorithmic information theory, Jim presented an array of material on the history of logic and its connections to computer science (see the first five sections of Jim's notes) rich enough to occupy the group fully for this week. The conversation will continue with Chaitin's work next week.

Jim traced the effort to develop "formal axiomatic systems" to problems that emerged in the late 19th century in the exploration of set theory. Basically, these explorations ran into "paradoxes" or "antinomies" (see paradoxes for more on this distinction and related matters) such as the apparent ability to specify "the set of all sets that are not members of themselves". There is something "wrong" with this specification, since one cannot decide whether such a set is or is not a member of itself. If one assumes the set is a member of itself, the specification asserts it is not. If one assumes it is not, the specification asserts that it is. Hence the specification itself creates an internal contradiction, apparently related to its self-referential character.
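In standard set-builder notation (a textbook rendering added here for reference, not notation used in the discussion), the paradoxical set and the resulting contradiction can be written:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \;\iff\; R \notin R
```

Either truth value assigned to "R is a member of R" immediately forces the opposite one, which is the contradiction described above.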

Problems of this sort led Hilbert and others to revisit the foundations of logic with the objective of creating systems in which one could be assured that contradictions would never arise. Such systems, it was hoped, would avoid paradoxes (including those related to self-referentiality), and could be formalized in terms of a finite alphabet of symbols, a grammar specifying which strings of symbols are admissible, a set of axioms (admissible strings accepted without derivation), and rules of inference for deriving further admissible strings (Jim's characterization, followed by some elaborations that occurred in discussion; to be verified?).

The ambition was to create a formal axiomatic system adequate to support mathematics in the sense that it would be both consistent (for no string can both that string and its negation be derived from the axioms) and complete (one can always derive either a given string or its negation; equivalent?: all admissible strings are "connected" in the sense that one can derive from the axioms every admissible string or its negation). What is particularly interesting/significant about this ambition is not that it succeeded but that it failed of its own weight (a failure anticipated by Poincaré, whom we had earlier found in the midst of problems with the Boltzmann derivation of entropy) and that, despite this, formal axiomatic systems continue to be the touchstone for thinking about information (and cognition?).
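The two requirements can be stated compactly (a sketch in standard logical notation, where "⊢ s" means "the admissible string s is derivable from the axioms"; the symbols are this summary's shorthand, not the discussion's):

```latex
\text{Consistency:}\quad \neg\,\exists s\,\bigl(\;\vdash s \;\wedge\; \vdash \neg s\;\bigr)
\qquad\qquad
\text{Completeness:}\quad \forall s\,\bigl(\;\vdash s \;\vee\; \vdash \neg s\;\bigr)
```

Gödel's result, taken up next, is that a system rich enough for arithmetic cannot have both properties at once.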

Gödel showed in 1931 that any formal axiomatic system rich enough to handle arithmetic must be incomplete, i.e., that there are "always true statements that cannot be proven". This (common) phrasing created problems for some people (what is the criterion for "truth" in this sense? is it something outside the system? does it depend on language? on humans?). An alternate phrasing was suggested that seemed (at least for the moment) acceptable: what Gödel proved was that there will always exist admissible (see above) sentences/strings that are not "connected" to other sentences/strings, in the sense that neither they nor their negations can be derived using the inference rules. The alternate phrasing avoids the appearance of an external standard of "truth", but it also makes less obvious a link between Gödel's proof and the "self-referentiality" problem that is probably important and is more apparent in the common phrasing. Typically, one points to the sentence "This statement is unprovable" as an instance of a "true" but not provable sentence in the Gödel sense. If the rephrasing does capture the essence of the proof, then one needs to better understand the relation between the categories of "not connected" and "self-referential" (see forum). In any case, what Gödel showed is that the "self-referential" problem that arose in set theory is actually an outcome of a formal axiomatic system rather than something that can be eliminated by it.
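For reference, the self-referential sentence at the heart of the proof is conventionally written (a standard textbook formulation, not taken from the discussion):

```latex
G \;\leftrightarrow\; \neg\,\mathrm{Prov}\bigl(\ulcorner G \urcorner\bigr)
```

where Prov is the system's own provability predicate and ⌜G⌝ is the numerical code ("Gödel number") of G. If the system is consistent, neither G nor its negation is derivable; G is "admissible but not connected" in the alternate phrasing above.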

Turing, in the context of the theory of computability, did essentially the same thing that Gödel had earlier done in the context of formal axiomatic systems: show a concrete set of limits to what could be done starting from particular presumptions. Indeed, the starting presumptions are largely formally equivalent in the two cases, so there is substantial interchangeability between the analyses in the two contexts. A "Turing machine" is a device that has a formal axiomatic description and operates deterministically on numbers. What Turing established was that one can list all conceivable Turing machines in a way that gives each a distinct integer label; to put it differently, there are countably infinitely many different Turing machines. But there are uncountably infinitely many real numbers. It follows that some real numbers are "not computable". Turing further showed that real numbers can be associated with mathematical "facts", and so not all mathematical facts are derivable from formal axiomatic systems.
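A minimal sketch of the counting argument in Python (illustrative only: "machine descriptions" here are simply finite strings over a fixed alphabet, and the second function is Cantor's diagonal construction rather than anything specific to Turing machines):

```python
from itertools import count, product

ALPHABET = "01"  # any fixed finite alphabet will do

def enumerate_descriptions():
    """Yield every finite string over ALPHABET, shortest first.

    Every Turing machine has a finite description, so a listing of
    this kind assigns each machine a distinct integer label: the set
    of machines is countable.
    """
    for n in count(1):
        for chars in product(ALPHABET, repeat=n):
            yield "".join(chars)

def diagonal_digits(digit_of, n):
    """Cantor's diagonal: given ANY countable list of infinite binary
    expansions (digit_of(i, j) = j-th digit of the i-th listed real),
    build the first n digits of a real that differs from the i-th
    listed real in its i-th digit -- so it appears nowhere in the list.
    """
    return [1 - digit_of(i, i) for i in range(n)]
```

Since every countable list of reals misses some real, the reals are uncountable; since the machines are countable, some reals have no machine that computes them.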

At this point, the conversation returned to self-referentiality and "recursion", via a link that the humble recorder (yours truly) seems to have missed (and would like to have reconstructed by someone, please? is there an argument that only with recursion does one get uncountably infinite sets?). In any case, self-referentiality/recursion was seen to "create problems" (if one is a logician, mathematician, physicist?) or "open possibilities" (if one is a linguist, neurobiologist, humanist?). It was suggested that the issue here was one of using "information" in two different ways, as both "data" and "process". In DNA, for example, the base sequences are both a source of instructions for amino acid sequences and are themselves acted on by enzymes for which they are the source of instructions. There are here some interesting resonances with the "decoder" notion of information under consideration from earlier conversations, particularly if one adds an additional complexity: that decoders function both as "filters" and as "generators".
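A toy illustration of the same text functioning as both "process" and "data" (this summary's example in Python, not one raised in the discussion):

```python
import inspect

def greet():
    return "hello"

# As process: the code is executed.
print(greet())                     # -> hello

# As data: the same code is read back as a string and operated on,
# loosely analogous to enzymes acting on the DNA that encodes them.
source = inspect.getsource(greet)  # works for functions defined in a file
print(source.count("return"))      # -> 1
```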

Jim provided a concrete example of recursion in connection with Turing's "halting problem". It is possible to show that there cannot exist a Turing machine that will answer the question of whether an arbitrary Turing machine with an arbitrary input will arrive at an answer ("halt") or keep going indefinitely. The proof (see halting problem in Jim's notes) involves a self-referential/recursive relationship among Turing machines which in turn leads to a "logical contradiction" (a "formally undecidable" proposition?).
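The standard proof sketch renders naturally as Python pseudocode (halts is the hypothetical decider, and the names are this summary's; the point of the construction is precisely that halts cannot exist):

```python
def halts(program, data):
    """Hypothetical decider: True iff program(data) eventually halts.
    Turing's proof shows no such function can exist."""
    raise NotImplementedError

def paradox(program):
    # The self-referential step: ask about a program run on its own
    # text, then do the opposite of whatever halts() predicts.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately

# paradox(paradox) halts  <=>  halts(paradox, paradox) is False
#                         <=>  paradox(paradox) does not halt.
```

Whichever answer halts gives about paradox(paradox) is wrong, so no Turing machine can implement halts.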

A key question that emerged from the conversation was whether the "limits" established by Gödel and Turing are to be seen as absolute limits, relevant to all material entities (including humans), or as limits specific to "formal systems" (of which humans might or might not be a subset). In this regard, it seemed worth looking more closely at what creates the limits and what allows/would allow them to be transcended. Here a key issue is the wish for "consistency". Is it conceivable that one could trade some decreased amount of consistency for some increased amount of "completeness"? How might one do this? Formal systems are "deterministic". Might one find in other systems (humans?), as in physics, some essential degree of indeterminacy? Depending on how indeterminacy is introduced, it is likely to reduce consistency ... and, conversely, to enhance completeness? Are there other ways to achieve this? Is there any sense in which "formal systems" neglect a bidirectional categorizing/decategorizing process that is distinctively important for thinking about information? Or, perhaps, actually create the setting for a kind of recursion/self-referentiality that itself transcends the limits of the formal system and, in so doing, makes information significant?

