Information?: An Inquiry
Thursdays, 9:30-11 am, Science Building, Room 227
For further information contact Paul Grobstein.
Summary by Paul Grobstein
(presentation notes available)
Jim built on the background of last week's presentation/discussion to characterize the way in which Gregory Chaitin's work extends that done earlier by Gödel and Turing (see Jim's notes beginning with "algorithmic information theory").
A fundamental idea that Chaitin uses is that of a "universal Turing machine". Such an imaginary device is an extension of the equally imaginary "Turing machine". A Turing machine consists of a read/write head operating on a one-dimensional tape of indefinite length, which carries a sequence of symbols drawn from a finite set. The machine has a finite number of "internal states" and a finite number of stored instructions; at each step it deterministically reads the current symbol and, based on that symbol and the current internal state, writes a new symbol and/or moves the tape in either direction and/or changes the internal state. "Turing computable" is currently understood to mean any computation that can be carried out by an appropriately configured Turing machine, and is by many people equated with "computable" in general. Turing showed that a single device, the "universal Turing machine", could be configured in such a way that it would mimic the activity of any Turing machine, so the universal Turing machine becomes, by itself, a standard for determining "Turing computability". There is a close equivalence between Turing computability and the capabilities of formal axiomatic systems, so not only the capabilities but also the limitations of the former are directly relevant to the latter.
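The read/write/move cycle described above can be sketched in a few lines of code. This is a minimal illustrative simulator, not Turing's original formulation: the transition table, the blank symbol "_", and the example "flip" machine are all assumptions chosen for the sketch.

```python
# A minimal Turing machine simulator (an illustrative sketch).
# The transition table maps (state, symbol) -> (new_symbol, move, new_state).
def run_turing_machine(transitions, tape, state="start", max_steps=1000):
    """Run until the machine enters 'halt' or the step budget runs out."""
    tape = dict(enumerate(tape))  # sparse tape of indefinite length
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            # read off the final tape contents, dropping surrounding blanks
            return "".join(tape[i] for i in sorted(tape)).strip("_")
        symbol = tape.get(head, "_")  # '_' stands for the blank symbol
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += {"L": -1, "R": 1}[move]
    return None  # did not halt within the step budget

# Example machine: flip every bit on the tape, halting at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
```

A universal Turing machine would play the role of `run_turing_machine` itself: it takes a description of a table like `flip`, encoded on its own tape, and mimics it.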
An interesting issue that arose at this point related to the number of internal states required for a universal Turing machine. The issue wasn't fully explored, but it was accepted for the moment that a universal Turing machine could use the tape to mimic internal states, and so needn't itself have a number of internal states greater than or equal to that of every machine it mimics. If so, this may have significance in thinking about issues of information "compression" that arose in earlier discussions.
A second issue that arose at this point had to do with the definition of "randomness". Of general significance is Chaitin's assertion that many numbers are profoundly "random". This assertion doesn't relate to how the numbers are produced but rather to the numbers themselves, as represented in decimal form. The decimal expansions of many numbers are "patterned", in the sense that one knows from initial sequences of digits what the subsequent sequences are (this is true of all rational numbers, numbers that derive from dividing one integer by another). There are two kinds of "unpatterned" numbers, ie numbers whose decimal expansions are uncorrelated (knowing the first x digits provides no information about the identity of the x + 1st digit). Both are "statistically random" in this sense, but one set is "Turing computable" (pi is an example) and the other set is provably not (by a diagonalization argument; see last week). Chaitin terms the latter numbers "incompressible", and it is these numbers that he takes as central to his assertion that "randomness" is endemic to arithmetic/mathematics.
An important point that came up in discussion is that "incompressibility" is not a property that one can impute to a given number simply by looking at it. One can test a particular sequence of digits for statistical randomness, but there is no comparable test for incompressibility; the fact that one doesn't at present have a computational algorithm for a particular sequence does not establish that such an algorithm won't be found in the future (as happened with pi). This might incline one to argue that "incompressible" numbers don't in fact exist. But the diagonalization argument says they do. One might also argue that numbers for which one has no algorithm are in some sense not meaningful. Chaitin provides a compelling case that such numbers not only exist but have quite fundamental meaning. A still broader perspective would be the assertion that it makes no sense to take Turing computability (or formal axiomatic systems) as the criterion for "reality". If one can't get there (undecidable propositions, incompressible numbers) from here, one shouldn't conclude that "there" doesn't exist. One ought instead to notice the limitations of one's tools and develop new ones that don't have those limitations.
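The asymmetry described here can be demonstrated with any off-the-shelf compressor (zlib is used below purely as a stand-in for "some algorithm"): a small compressed size proves a sequence is compressible, but a large one proves nothing, since a cleverer algorithm might still exist. Note that the "coin flips" below come from a pseudorandom generator, so they are in fact compressible in principle (down to the seed), which is itself an instance of the point.

```python
import random
import zlib

def compressed_ratio(digits: str) -> float:
    """Ratio of zlib-compressed size to raw size. A low ratio demonstrates
    compressibility; a ratio near 1 is inconclusive -- some other
    algorithm might still compress the sequence."""
    raw = digits.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

patterned = "01" * 5000                # obviously patterned sequence
random.seed(0)                          # pseudorandom "coin flips"
coin_flips = "".join(random.choice("01") for _ in range(10000))
```

Running `compressed_ratio` on the two strings shows the patterned one shrinking dramatically while the pseudorandom one resists zlib, even though a program a few lines long (the generator plus its seed) reproduces it exactly.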
Chaitin's case for the existence of incompressible numbers derives from an exploration of the halting problem as applied to a universal Turing machine. He imagines all possible arbitrary strings that can be put on the tape of such a machine, created by flipping a coin to generate sequences of digits, and asks what proportion of these result in the machine halting (the machine is configured in such a way that any string that continues beyond a sequence that by itself halts does not halt). This number Chaitin calls Ω (omega). Ω, Chaitin shows, is well-defined but fundamentally not Turing-computable. Moreover, it is "normal", ie statistically fully random.
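A finite caricature of this construction can be computed. The "machine" below is invented for illustration and is vastly simpler than a universal Turing machine, and the step budget stands in for the undecidable halting test, so the resulting fraction is only an approximation from below; the real Ω admits no such shortcut.

```python
from itertools import product

def runs(program, max_steps=100):
    """A deliberately toy 'machine' (an assumption for illustration): bits
    drive a single counter. '0' increments and advances; '1' decrements,
    looping back to the start while the counter stays positive. The
    program halts when control runs off the end. Returns True if it
    halts within max_steps (a stand-in for the undecidable halting test)."""
    counter, pc = 0, 0
    for _ in range(max_steps):
        if pc >= len(program):
            return True
        if program[pc] == "0":
            counter += 1
            pc += 1
        else:
            counter -= 1
            pc = 0 if counter > 0 else pc + 1
    return False

def halting_fraction(n, max_steps=100):
    """Fraction of all coin-flip strings of length n that halt."""
    programs = ["".join(bits) for bits in product("01", repeat=n)]
    return sum(runs(p, max_steps) for p in programs) / len(programs)
```

For this toy machine every string of length 2 halts, but at length 3 the string "001" loops forever (each pass grows the counter), so the fraction drops below 1. Chaitin's Ω is the analogous quantity for a genuine universal machine, where no step budget can ever settle every case.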
Chaitin's case for the significance of incompressible numbers relates to diophantine questions and to a proof that there does not exist a general mechanical procedure for their solution. It is possible, Chaitin shows, to construct a diophantine equation that corresponds to Ω in the sense that the digits of Ω are generated by a series of solutions to the equation. The inference Chaitin draws is that arithmetic (and hence mathematics) is itself "incompressible", in the sense that one cannot derive all of its consequences (solutions to the diophantine equation) from an initial set of axioms using a fixed and finite set of rules (while one can construct an algorithm to successively approximate the solutions, one can never know how close one is coming to them). Chaitin concludes that mathematicians need to be "experimentalists" as well as logicians; that random, incompressible numbers are part of the "way things are" and will never be accounted for in any other way. A broader conclusion might be that the aspiration to express science in terms of mathematics is flawed because of the demonstrable limitations of that particular tool. And the same for formal axiomatic systems as a basis for thinking. One need not follow this to the conclusion that there are things that can be handled by formal axiomatic systems and everything else is just "the way things are". Other tools might in principle be developed that, together with formal axiomatic systems, would continue to reduce the space of "the way things are".
From here the discussion went in several different directions. Might it be wise/appropriate for computer scientists to add Ω-computability to Turing computability? What is the relation between Chaitin's results and Wolfram's program? Wolfram works with phenomena that evolve deterministically but are not "compressible" in the sense that the only way to get the state of the system at any given time is to iterate the needed number of cycles. They are, though, compressible in the sense that the entire evolution of the system is known if one knows the starting conditions. Finally, is there a way to tie these considerations back to the information problem? Is it, for example, possible that "information" is in some sense related to the spaces left open by formal axiomatic systems and by Turing computability?
© 1994- , by Center for Science in Society, Bryn Mawr College and Serendip
Last Modified: Wednesday, 02-May-2018 10:51:19 CDT