Ways of Making Sense of the World:
From Primal Patterns to Deterministic and Non-Deterministic Emergence

The world as we perceive it is neither fully disorganized (Figure 1), beyond our ability to identify any overall pattern in it, nor fully organized, describable by us in terms of some single simple pattern (Figure 2). Instead, we are faced with, and find ourselves trying to make sense of, a world that most typically includes both disorganization and a variety of patterns that may (Figure 3) or may not (Figure 4) themselves obviously fall into larger patterns.

In this exhibit, we describe, illustrate, and compare three general approaches to making sense of the world using simple computer models. One approach presumes that primal spatial patterns are the explanation for all organization, and that these patterns need to be uncovered by removing obscuring disorganization. A second approach treats both pattern and disorganization as the outcome of historical processes that follow simple and well-defined deterministic rules, and seeks to determine the starting conditions and rules which yield the current observations. A third approach, which we call "non-deterministic emergence", similarly adopts an historical perspective but identifies disorganization as largely the result of random (non-deterministic) processes that are a starting point as well as a continuing contributor to the historical process. From this third perspective, the task is to understand how random processes can yield varying degrees of organization.

Moving beyond primal patterns as a way to make sense of things

"In the beginning was the Word"
Inquiry approach 1

Goal: to characterize the underlying pattern that everything actually relates to

Perception of time/evolution: either absent or a parameter along which things get closer to the ideal

Status of disorganization: a barrier to detecting the underlying pattern

One general approach to making sense of such a world starts with the existence of patterns, and treats disorganization as something secondary: either an inherent noisiness in the world that degrades the patterns to varying degrees, or imperfections in our perceptual abilities that obscure the patterns. The task of the inquirer, in either case, is to try to correct for the source of apparent disorganization so as to discern the actual underlying patterns.

The history of humans making sense of the heavens provides one good example of both the strengths and the weaknesses of this approach. Orderliness in the relative motion of celestial bodies was taken as evidence of underlying primal patterns or "celestial harmonies". These helped to make sense of the world despite their inability to account for all aspects of the motion of celestial bodies.

As perceptual abilities were sharpened by technological advances, the emphasis switched from celestial harmonies to Newtonian mechanics, which focused on simple things interacting by simple rules. This made it possible to account with greater precision for more observed motions.

Significantly, the Newtonian approach itself ultimately proved inadequate to account for many of the details of the masses, orbits, and orbital locations of our particular solar system. To account for these, one needs to move not only beyond the notion of ideal patterns in space but also beyond that of simple interactions of simple things. One needs an additional temporal and historical dimension: the reason for the currently observed distribution of large objects in our solar system is not that it fits a particular spatial pattern, nor that it follows necessarily from a particular set of rules, but rather that it emerged from several billions of years of successive interactions beginning with much smaller and simpler entities.

Emergence as an alternative to primal patterns

"Simple things interacting in simple ways can yield surprisingly complex outcomes"

A contrasting general approach to making sense of the world begins not with primal spatial patterns as the explanation but rather as the problem to be accounted for. Inherent in this approach is both a temporal/historical perspective and the idea that patterns reflect interactions of smaller entities that make them up: a pattern on a larger scale exists not because a particular distribution of smaller entities fits a particular pre-existing template pattern but rather because the pattern itself has been created over time by the interactions of smaller entities. This "emergence" approach was used effectively by Keynes in economics, by Darwin in biology, and has become increasingly important in recent years in a variety of fields of inquiry.

The general emergence approach has the appealing feature of not being dependent on assumptions about primal template patterns, and so avoiding the question of what is responsible for the existence of such patterns. At the same time, it doesn't necessarily do away with infinite regress questions (what accounts for the smaller scale entities and how they interact?). And it leaves open the question of what the relation is between disorganization and pattern. Why do we typically see a world that consists both of patterns and of disorganization? If disorganization is neither a perceptual artifact nor a degradation of pattern, what is it?

"Deterministic" emergence

In his A New Kind of Science, Stephen Wolfram outlines a general approach to making sense of the world that is worth understanding and taking seriously, both for the possibilities it opens up and for others it neglects. Wolfram's program is an instance of the more general emergence approach, using the basic insight that one may indeed get a quite substantial array of patterns out of simple things interacting in simple ways. While this insight is not at all unique to Wolfram, he systematically explored it in a way that leads both to some important additional understandings and to some interesting new issues. We will refer to the Wolfram approach as "deterministic emergence" and use both the potentials and limitations of "deterministic emergence" to clarify and further characterize and explore an alternative approach to making sense of the world ("non-deterministic emergence") that we believe is also worth taking seriously.

Inquiry approach 2

Goal: to characterize the starting conditions and rules (initial pattern) from which everything else follows over time

Perception of time/evolution: essential as the parameter underlying emergence

Status of disorganization: along with later patterns, the product of the initial pattern

Cellular automata are a widely explored instance of "simple things interacting in simple ways can yield surprisingly complex outcomes," with the additional feature that both the things and their interactions are "deterministic," i.e., any given set of things, interactions, and starting conditions will always yield the same result. Among the simplest of cellular automata is a linear array of elements, each of which can exist in one of only two states, together with a rule set that transforms any given linear array into a new one based on the state of each element and its two neighbors. Since each element can be in either of two states, there are 2 x 2 x 2 = 8 possible states for an element together with its two neighbors. A rule set specifies what the subsequent state of the element will be for each of the eight possible states of that element and its neighbors. There are two possible outcomes for each of the eight possible states, and therefore 2^8 = 256 possible rule sets. Wolfram systematically explored the pattern-generating capabilities of each of these 256 rule sets (as you can do yourself by clicking here) and a variety of others. From these explorations came some important conclusions about deterministic systems:

  1. Simple interactions of simple things do indeed yield patterns that one doesn't expect (assuming you have never seen it happen before).
  2. Simple interactions of simple things yield results that cannot be determined in advance, i.e., for some starting conditions and rule sets there is no way to determine what will result except by trying it out.
  3. Simple interactions of simple things can produce both patterns and what appears to be disorganization.
  4. Making things and their interactions more complex doesn't increase the complexity of the resulting patterns.
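The rule-set mechanics described above can be sketched in a few lines of code. The following is a minimal illustration, not the applet itself: it builds the 8-entry lookup table for one of the 256 rule sets (numbered in Wolfram's convention) and applies it to a linear array. The wrap-around (circular) boundary and the choice of rule 110 are assumptions for the sake of the example.

```python
# Sketch of an elementary (two-state, nearest-neighbor) cellular automaton.

def make_rule(rule_number):
    """Turn a rule number (0-255) into a lookup table mapping each of the
    eight (left, center, right) neighborhoods to the cell's next state."""
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step(cells, rule):
    """Apply the rule once to a linear array, treated as circular."""
    n = len(cells)
    return [rule[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Rule 110, starting from a single dark cell in a 9-element array:
rule = make_rule(110)
row = [0, 0, 0, 0, 1, 0, 0, 0, 0]
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row, rule)
```

Printing successive rows one under another reproduces the familiar two-dimensional pictures of cellular automaton behavior: each row is one state, and time runs downward.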
Since cellular automata, and the other examples Wolfram explored, are "deterministic systems," the state of the system at any given point in time is completely and fully determined by its state at each preceding point in time. In this sense, nothing that the system does is actually "surprising." One may have to run the system to see what happens but everything that happens is a direct and necessary consequence of the initial state of the system and the rules that govern how it moves from one state to the next. Nothing is brought into existence except that which was inherent in the system at the outset. There are no "choice" points or branch points at which more than one possible next step exists.

With this understanding in mind, let's look a bit more at each of the conclusions.

Conclusions 1 and 2. " ... yield patterns that one doesn't expect ..." and " ... yield results that could not be determined in advance"

The "surprise" here is of two sorts. The first has to do with what an observer expects and doesn't expect of simple interactions of simple things. If one thinks that pattern depends on a designer or architect, one is surprised to see it appear in the absence of either. Once one gets used to the emergence perspective, there is less surprise on this count.

The other source of surprise is that there exist processes for which every step is fully defined ("algorithmic" processes) but where the outcome can't be known except by carrying out each step. If one has, for one reason or another, come to believe that there is an "equation" for all well-defined processes, one that will allow one to determine the state of the system at all future times by plugging in a value of time, this is not only a surprising but an important new understanding. If one has a greater familiarity with the work of Turing and others, the point is not less important but is less surprising.

Wolfram makes more of this point, saying he suspects it "is the ultimate origin of the apparent freedom of human will ... even though all the components of our brains presumably follow definite laws, I strongly suspect that their overall behavior corresponds to an irreducible computation whose outcome can never in effect be found by reasonable laws." We'll return to this issue below.

3. "... produce both patterns and what appears to be disorganization ..."

At this point, we need to be more explicit about what one means by "pattern" as well as by "disorganization." Most people looking briefly at Figure 1 would agree that it is "disorganized" in the sense that it exhibits no pattern. But some may see patterns in it from the outset, and others may begin to see patterns if they look at it long enough. The human brain is designed (by evolution) to find patterns, and so one's judgement that something is disorganized because it has no pattern is not a very reliable one.

Figure 1 is, however, "disorganized" or "unpatterned" in a less subjective sense. Each element that makes up the figure can be in either of two states (black or white), each has a 50% probability of being in either state, and the state of any one element is completely independent of the state of any other element. To put it differently, there is no correlation whatsoever between the state of one element in the image and the state of any other element. Knowing the state of one element does not give one any information useful in predicting the state of any other. In this sense, Figure 1 is fully disorganized, lacking any pattern; the distribution of black and white is random. Figure 2 is at the other end of a spectrum. Once one knows the distribution of blacks and whites over a small part of the figure, one knows it over the entire figure. There's a pattern. The same is true of Figure 3. Figure 4, on the other hand, seems to have local areas of pattern as well as local areas lacking pattern.
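This "statistical" sense of disorganization can be made concrete with a small sketch (grid size and random seed are arbitrary choices): generate a grid in which each cell is independently black or white with probability 0.5, and check that neighboring cells agree only about half the time, i.e., that knowing one cell gives no help in predicting its neighbor.

```python
# Minimal sketch of statistical disorganization: independent 50/50 cells.
import random

random.seed(0)
n = 200
grid = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]

# Fraction of horizontally adjacent pairs of cells that are in the same state.
pairs = [(grid[r][c], grid[r][c + 1]) for r in range(n) for c in range(n - 1)]
agreement = sum(a == b for a, b in pairs) / len(pairs)
print(f"neighbors agree {agreement:.3f} of the time")  # close to 0.5
```

An agreement fraction near 0.5 is exactly the "no correlation" condition described above; for Figure 2 or Figure 3, by contrast, the agreement between suitably chosen pairs of cells would be far from 0.5.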

The surprise here is in important ways the converse of the surprise associated with point 1. Simple interactions of simple things can yield patterns that one doesn't expect AND yield disorganization, absence of pattern. Not one or the other but both. It is not only order that can result from simple interactions of simple things but lack of order, statistical randomness, as well. Perhaps then, as Wolfram suggests, simple deterministic interactions of simple deterministic things can produce everything we experience around us, everything we could experience? Perhaps deterministic emergence actually could be a sufficient basis for making sense of the world?

At this point we need to flag the notion of "statistical" randomness. What one can conclude from observations on cellular automata (and in other mathematical explorations) is that deterministic systems are capable of producing results that look to be indistinguishable from the results of non-deterministic systems (ones in which the current state allows several different possible next states). Another way to say the same thing is that any known output of a system, no matter how much pattern it has or doesn't have, can be generated by a deterministic system. That is not quite the same thing as saying that there is in principle no distinction whatsoever between deterministic and non-deterministic systems. We'll discuss this further below; it's a critical issue in deciding whether indeed "deterministic" emergence is an adequate basis for making sense of the world.

4. "Making things and their interactions more complex doesn't increase the complexity of the resulting patterns"

Here too we need to be more explicit about terms, in this case what is meant by "complex." Wolfram's point is that something as simple as a cellular automaton is capable of doing exactly the same set of computations as any serial computer, no matter how sophisticated. And that certainly may surprise people unless, again, they are familiar with the work of Turing and others following him. A "Turing machine" is likewise a system of simple elements interacting deterministically in simple ways and, like a cellular automaton, it is capable of doing exactly the same set of computations as any contemporary (or conceivable) serial computer. The point is an important one if one is thinking about computers, or about information processing more generally. And it is an even more important one if one thinks that the brain is subject to the same limitations as a cellular automaton or a Turing machine. But do we need to presume it is? Or can there be things in the world, our world, beyond those that a cellular automaton or Turing machine is capable of creating? That's the question to which we now turn.

"Non-deterministic" emergence

Inquiry approach 3

Goal: to contribute to the ongoing creation and exploration of possibilities

Perception of time/evolution: essential as the parameter underlying emergence

Status of disorganization: the starting point and continuing contributor to exploration

If one thinks of the problem of "making sense of the world" as one of finding the "answer" then both the pattern approach and the deterministic emergence approach provide a guide to where one should be looking. The pattern approach encourages one to try to get through the noise to discern the pattern which explains why everything is (or should be) a particular way. The deterministic emergence approach encourages one to try to find the starting conditions and rules (itself a pattern) from which everything that is, both pattern and disorganization, has emerged. But perhaps there is another way to "make sense of the world"? One that doesn't presume there is a secret pattern to be found either at the starting point or at the end? Perhaps one that does away with the primacy of pattern altogether?

"... we know all atoms to perform all the time a completely disorderly heat motion, which, so to speak, opposes itself to their orderly behavior and does not allow the events that happen between a small number of atoms to enrol themselves according to any recognizable laws. Only in the co-operation of an enormously large number of atoms do statistical laws begin to operate ... All the physical and chemical laws that are known to play an important part in the life of organisms are of this statistical kind; any other kind of lawfulness and orderliness that one might think of is being perpetually disturbed and made inoperative by the unceasing heat motion of the atoms" ....... Erwin Schrödinger, What Is Life?, 1944

Perhaps what's important about the world isn't any particular pattern that it somewhat imperfectly reflects, or any particular pattern that it started with, but instead an ongoing exploration of possible forms of existence, an exploration that both originates in and continues to depend on randomness? If so, perhaps it is randomness that yields pattern rather than randomness being something that obscures patterns? And the reason why we see both patterns and disorganization is that without randomness there would be no patterns?

This idea is not so far-fetched as it might at first seem. It actually is random motion that drives, for example, the spreading out of cream in coffee. And it is random motion and interactions that produce pattern in chemical reactions. And it is the random falling apart of the sun that drives life on the earth. Furthermore, randomness in the form of mutations underpins the diversity of life on earth, perhaps the best example of the mix of pattern and disorganization that we see all around us.

The idea that the world is about exploration, and hence that randomness rather than pattern is fundamental, also opens some intriguing new questions. Wolfram classified his cellular automata in terms of their differing abilities to create pattern and disorganization. But one might instead ask of them, how good are they at exploring? And would they be better if, instead of being fully deterministic systems, they incorporated some degree of randomness in their behavior?

This question can be addressed with a further useful simplification of the cellular automata phenomena explored by Wolfram. Let's focus attention not on their two-dimensional appearance but rather on the successive one-dimensional states they generate. Each of these is a linear array of light and dark patches. If n is the number of elements in the array, then there are 2^n possible different arrays. If one is interested in the exploratory abilities of cellular automata, what one wants to know is how many of the 2^n possibilities are realized by any given cellular automaton.

The applet to the right makes it possible to answer this question, by keeping track of the number of different one-dimensional arrays that are generated by a cellular automaton operating on a nine-element array using any of the 256 possible rule sets discussed earlier. Several points quickly become clear by playing with it ...

  1. Some rules quickly settle into repetition of the same linear array and so don't ever exhibit most of the 2^n possible outputs.
  2. Other rules never successively repeat the same linear array but do eventually begin cycling on a longer period, i.e., they repeat some linear array previously exhibited and from there on repeat a pattern of successive arrays. This is inevitable given an array of fixed length. Since there are only a finite number of possible arrays, the automaton must eventually start repeating itself.
  3. Most rules settle fairly quickly into either generating repeatedly the same one-dimensional array or into relatively short cycles. Hence most rules do not generate a significant number of the possible states, and are not good explorers.
  4. A small number of rules are distinctive in the length of their cycles and so in the number of states exhibited. Even these, however, do not generate all possible states from any of the starting conditions we have tested.
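The exploration measure described above can be sketched in code independently of the applet. The following is an illustrative version (the circular boundary, the particular rules tried, and the single-dark-cell starting condition are assumptions): run an elementary cellular automaton on a nine-element array and count how many distinct one-dimensional arrays it visits before it first repeats one.

```python
# Sketch: how many of the 2^9 possible arrays does a given rule explore?

def make_rule(rule_number):
    """8-entry lookup table for an elementary rule, Wolfram's numbering."""
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step(cells, rule):
    """One application of the rule to a circular linear array."""
    n = len(cells)
    return tuple(rule[(cells[i - 1], cells[i], cells[(i + 1) % n])]
                 for i in range(n))

def states_explored(rule_number, start):
    """Number of distinct arrays generated before the first repeat."""
    rule, row, seen = make_rule(rule_number), tuple(start), set()
    while row not in seen:
        seen.add(row)
        row = step(row, rule)
    return len(seen)

start = (0, 0, 0, 0, 1, 0, 0, 0, 0)
for r in (0, 90, 110, 30):
    print(f"rule {r:3d}: visits {states_explored(r, start)} of {2 ** 9} arrays")
```

Because the array has only 2^9 = 512 possible states and the rule is deterministic, the loop is guaranteed to terminate, which is just point 2 above restated as code; comparing the counts for different rules gives a rough ranking of their exploratory abilities.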
It is interesting that the rule categories Wolfram developed are clearly different than the ones we would create using our criteria, and it would be interesting to explore further the relation between how each of the criteria categorizes particular rules. For present purposes, what is even more interesting is the following conjecture based on our observations:

there does not exist a single set of rules and initial conditions that will generate all possible linear arrays

Though we cannot yet prove this, nor yet provide a compelling reason why it should hold, it suggests the interesting possible conclusion that deterministic systems may be fundamentally limited in their exploratory capacity relative to non-deterministic ones. A linear array of elements in which the state of each element is independently and randomly varied will over time generate all possible linear arrays. That a deterministic cellular automaton won't may well be related to the limitations on logical systems first described by Gödel and to the similar limit on computational systems established by Turing. And it provides a further reason to approach understanding the world not as a problem of locating a basic pattern but rather as one of understanding (and participating in) the process of exploration and creation.

Moving on ....

If making sense of the world involves ongoing exploration/creation, an interesting question is what methods/procedures facilitate that process? Random variation will generate all linear patterns but will take a long time to do it. Will some combination of deterministic and non-deterministic processes do it more rapidly? One can try that out using the applet to the right. In this case, one can add varying amounts of indeterminacy to the cellular automaton and see whether that enhances its exploratory capacity, and whether there is some optimal level of indeterminacy in different cases.
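The mixed deterministic/non-deterministic idea can be sketched as follows, again independently of the applet. After each deterministic step, each cell is flipped with some small probability p, and we count the distinct arrays visited within a fixed budget of steps. The rule number, the p values tried, the step budget, and the starting condition are all illustrative choices, not the applet's actual settings.

```python
# Sketch: does adding indeterminacy to a cellular automaton help it explore?
import random

def make_rule(rule_number):
    """8-entry lookup table for an elementary rule, Wolfram's numbering."""
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def noisy_step(cells, rule, p, rng):
    """Apply the rule to a circular array, then flip each cell with prob. p."""
    n = len(cells)
    nxt = [rule[(cells[i - 1], cells[i], cells[(i + 1) % n])]
           for i in range(n)]
    return tuple(c ^ (rng.random() < p) for c in nxt)

def explored(rule_number, p, steps=2000, n=9, seed=1):
    """Distinct arrays visited in a fixed number of (possibly noisy) steps."""
    rng = random.Random(seed)
    rule = make_rule(rule_number)
    row = tuple([0] * (n // 2) + [1] + [0] * (n // 2))
    seen = {row}
    for _ in range(steps):
        row = noisy_step(row, rule, p, rng)
        seen.add(row)
    return len(seen)

for p in (0.0, 0.01, 0.1):
    print(f"p = {p:4.2f}: {explored(110, p)} distinct arrays")
```

With p = 0 the system is the deterministic automaton, confined to its cycle; with p > 0 it can jump off the cycle, and comparing the counts across p values is one simple way to look for an optimal level of indeterminacy.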

a cellular automaton can do anything but it can't do everything

randomness is not something to be accounted for but something to be used to account for things

business of inquiry isn't to "explain" but to participate