Serendip's Bookshelves

Stephen Wolfram, A New Kind of Science, Wolfram Media Inc., Champaign, Illinois, 2002

Commentary by David Berger. David, Swarthmore College 2004 (Economics major with minors in History and Math), attended the New Kind of Science 2003 Conference in Boston in July. David's interest was stimulated by conversations with Mark Kuperberg, Professor of Economics at Swarthmore, and Steve Sigur, a high school math teacher. These notes were prepared for a meeting of the Emergence Working Group at Bryn Mawr College, and are also available as a PDF file.




A New Kind of Science (NKS) is Stephen Wolfram's attempt to revolutionize the theoretical and methodological underpinnings of how we study the universe. Though this endeavor is incredibly ambitious and thus should be approached critically, Wolfram's book is not all talk: it contains many startling and thought-provoking results and conjectures.

Notion of Computation

Classification of computation (output)

  1. Almost all initial conditions lead to a single, homogeneous state
  2. Almost all initial conditions lead to repetitive or nested structures
  3. Almost all initial conditions lead to output that looks random
  4. Almost all initial conditions lead to output that has aspects that are random but with definite localized structures that interact in random ways
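
To make the four classes concrete, here is a minimal sketch (my own, not Wolfram's code) that evolves elementary cellular automata from random initial conditions and prints the result as text. The specific rule numbers chosen to illustrate each class are my assumption of commonly cited examples; rule 30 (class 3) and rule 110 (class 4) are the ones Wolfram himself emphasizes.

```python
import random

def step(cells, rule):
    """One step of an elementary cellular automaton with the given rule number (0-255)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def show(rule, width=64, steps=20, seed=0):
    """Print a few steps of evolution from a random initial condition."""
    random.seed(seed)
    cells = [random.randint(0, 1) for _ in range(width)]
    print(f"rule {rule}")
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)

# Candidate examples of classes 1-4 (the rule choices are my assumption,
# except rule 30 and rule 110, which Wolfram highlights):
for rule in (250, 108, 30, 110):
    show(rule)
```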

    Prevalence of Universality

    Implications

    Principle of Computational Equivalence

    The major assumption of his book: "all processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations." (715)

    "it is possible to think of any process that follows definite rules as being a computation- regardless of the kinds of elements involved." (716)

    - thus we can view processes in nature as computations

    Therefore "almost all processes that are not obviously simple can be viewed as computations of equivalent sophistication." (717)

    - this follows because Wolfram believes that all systems showing class 3 and 4 behavior are universal and are therefore computationally equivalent.

    - this implies that there is an upper limit to computational sophistication, and that the threshold for reaching it is relatively low

    More specifically, the principle of computational equivalence says that systems found in the natural world can perform computations up to a maximal ("universal") level of computational power, and that most systems do in fact attain this maximal level of computational power. Consequently, most systems are computationally equivalent. For example, the workings of the human brain or the evolution of weather systems can, in principle, compute the same things as a computer. Computation is therefore simply a question of translating inputs and outputs from one system to another.

    - if some system seems complicated then it is most likely universal
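
    As a toy illustration of "translating inputs and outputs from one system to another" (my own example, not from the book): rule 90 updates each cell to the XOR of its two neighbors, so ordinary logical XOR can be computed by encoding the input bits as cells, running one step of the automaton, and decoding the middle cell.

```python
def rule90_step(cells):
    """One step of rule 90: each cell becomes the XOR of its two neighbors."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def xor_via_rule90(a, b):
    """Compute a XOR b by encoding the bits into a tiny rule 90 tape."""
    tape = [0, a, 0, b, 0]      # encode: inputs at positions 1 and 3
    tape = rule90_step(tape)    # run the "other" system for one step
    return tape[2]              # decode: the middle cell is a XOR b

for a in (0, 1):
    for b in (0, 1):
        assert xor_via_rule90(a, b) == (a ^ b)
print("one rule 90 step reproduces XOR on all four inputs")
```

    The point is not that this is an efficient way to compute XOR, but that what counts as "the computer" and what counts as "the encoding" is simply a matter of translation between systems.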

    The PoCE allows for an explanation of complexity: systems seem complex when they are of equivalent sophistication to the systems we use for perception and analysis. Thus,

    "observers will tend to be computationally equivalent to the systems they observe- with the inevitable consequence that they will consider the behavior of such systems complex." (737)

    Our processes of perception and analysis are themselves computations of equivalent sophistication; even the most powerful computers, and our own minds, must obey the PoCE. "Can these (computer and brain) be more sophisticated? Presumably they cannot, at least if we want actual results, and not just generalities." The difference is that actual results require definite physical processes and are therefore subject to the same limitations as any other process.
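
    A rough way to see this point (my own illustration, not from NKS) is to use a general-purpose compressor as a crude stand-in for our methods of perception and analysis: output the compressor can reduce looks simple, output it cannot reduce looks complex. Comparing a class 1 rule with rule 30:

```python
import zlib

def run_ca(rule, width=256, steps=256):
    """Evolve an elementary CA from a single black cell and return the rows as bytes."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(steps):
        rows.append(bytes(cells))
        cells = [
            (rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]
    return b"".join(rows)

# Each cell is stored as a whole byte, so even "random" output compresses a
# little; what matters is the contrast between the two ratios.
for rule in (250, 30):
    data = run_ca(rule)
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"rule {rule}: compressed to {ratio:.1%} of original size")
```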

    Generation of Randomness

    1. Externally imposed

    2. Randomness from initial conditions

    3. Intrinsically generated

    - Wolfram believes that the third option underlies most of the generation of complexity in the physical world, because of the ease with which nature seems to do it. Now that we know that enormous complexity can be generated from very simple rules, it seems plausible that this is what nature does as well. This fits in with the idea of natural selection: nature just generates all possible outcomes, and the ones that are feasible survive over time.
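
    The canonical example of intrinsically generated randomness is the center column of rule 30 started from a single black cell: the rule and the initial condition are as simple as possible, yet the sequence of center cells looks random. A minimal sketch (the width and step count here are arbitrary choices of mine):

```python
def rule30_step(cells):
    """One step of rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 201, 100
cells = [0] * width
cells[width // 2] = 1               # simplest possible initial condition

bits = []
for _ in range(steps):
    bits.append(cells[width // 2])  # record the center cell at each step
    cells = rule30_step(cells)

print("".join(map(str, bits)))      # a random-looking sequence of 0s and 1s
```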


    Problem of Randomness:


    Computational Irreducibility and the Problem of Prediction

    If one views the evolution of a system as a computation, where each step in the evolution can be thought of as taking a certain amount of computational effort, then traditional mathematics suggests that there should be a way to find a shortcut and know the outcome with less computational effort. Wolfram's examples suggest that this is not always true: such behavior is computationally irreducible.

    - This implies that even if one knows the underlying rules of a system, predicting what the system will do can still take an irreducible amount of computational work.

    - To make meaningful predictions, "it must at some level be the case that the systems making the predictions outrun the systems it is trying to predict. But for this to happen the system making the predictions must be able to perform more sophisticated computations than the system it is trying to predict." (741)

    The PoCE says this is not possible; thus there are many systems for which systematic prediction cannot be done, because their behavior is computationally irreducible.
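
    The contrast can be sketched as follows (the doubling example and the helper names are mine, for illustration only): a computationally reducible system has a shortcut formula that jumps straight to step t, while for rule 30 no such shortcut is known, so obtaining the state after t steps appears to require doing all t steps of work.

```python
def doubling_shortcut(x0, t):
    """Reducible: repeated doubling has the closed form x0 * 2**t -- one jump, no simulation."""
    return x0 * 2 ** t

def rule30_after(cells, t):
    """Irreducible (as far as is known): the only way to get step t is to run all t steps."""
    n = len(cells)
    for _ in range(t):
        cells = [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]
    return cells

print(doubling_shortcut(3, 1000).bit_length())  # answered instantly by the formula

state = [0] * 101
state[50] = 1
print(sum(rule30_after(state, 1000)))           # found only by simulating 1000 steps
```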

    Implications

    Sources

