Emergent Systems Working Group

April 15, 2004
Mark Kuperberg
Canonical Emergence


What is Emergence?

Answer: Something Emerges

This implies two things:

1) There has to be a lower level from which the "something" can emerge

2) The "something" may be considered a higher level of organization and/or structure. I am agnostic as to whether this higher level is "real" or the result of our perception. The point is that we would not categorize a phenomenon as emergent if it did not have a higher level.

A) Canonical Emergence

1) The lower level is populated by many agents who follow simple rules. The agents interact with one another locally and have only local knowledge of their environment. They have no understanding of the "something" that they are creating.

Examples: Social insects, neurons, genes in evolution

2a) The "something" that emerges comes as a surprise to the observer precisely because nothing about the agents at the lower level would lead one to believe that they would create the "something". I do not want to overemphasize surprise as a defining feature of emergence (though I will appeal to it several times in what follows). More generally, surprise is simply the hallmark of learning something: you know you have learned something new when you are surprised by it.

Favorite Quote:

"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest.

The individual only intends his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention... By pursuing his own interest he frequently promotes that of society more effectually than when he really intends to promote it."
Adam Smith, The Wealth of Nations, 1776.

2b) The "something" that emerges may or may not have optimality properties. For an emergent process to have optimality properties, two additional characteristics must be present:

i. There must be a diversity generator - something that generates divergent behavior or characteristics in the agents.
ii. There must be a grim reaper (there you are Paul) which acts as an editor and eliminates some agents based on fitness criteria.

The classic examples of emergent processes that generate optimal outcomes are competitive markets and evolution. Models without such properties include segregation, stock market bubbles, and epidemics.
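To see how these two ingredients interact, here is a minimal sketch (my own illustration, not anything presented at the meeting): random mutation plays the diversity generator, removal of the less fit plays the grim reaper, and the population's fitness climbs toward an optimum that no individual agent is trying to reach.

```python
# Minimal sketch (my own illustration): diversity generator (mutation) plus
# grim reaper (selection) pushes a population toward an optimum that no
# individual agent is aiming at.
import random

random.seed(0)
GENOME_LEN = 40       # fitness = number of 1-bits, so the maximum is 40
POP_SIZE = 100
MUTATION_RATE = 0.02

def fitness(genome):
    return sum(genome)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(60):
    # Grim reaper: keep only the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]

    # Diversity generator: each survivor's offspring carries random mutations.
    offspring = []
    for parent in survivors:
        child = [1 - bit if random.random() < MUTATION_RATE else bit
                 for bit in parent]
        offspring.append(child)
    population = survivors + offspring

    if generation % 20 == 0:
        print(f"generation {generation:2d}: best fitness = {fitness(population[0])}")

print("final best fitness:", max(fitness(g) for g in population))
```

Drop either ingredient and the effect disappears: without mutation there is nothing for selection to choose among, and without selection the mutations just accumulate as noise.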

B) Is Canonical Emergence the only kind of emergence?

Agents and the Lower Level:

1) Are the agents only allowed to have knowledge of their immediate environment?

No: Cells in a maturing embryo interact locally, but contain the entire genome; economic agents can have knowledge of the overall economy. While agents can have global knowledge and can act on that knowledge, they cannot base their actions on an attempt to create the "something"; otherwise, there really are not two distinct levels, and there are no real surprises.

2) Do there have to be a lot of agents?

Yes. A big part of what defines the two levels is that there are many "things" at the lower level, but one "something" that emerges.

3) Do the agents have to follow the same rules?

No

4) Do the rules have to be simple?

No, but they cannot be so complex as "go build the 'something'".

Structure and the Higher Level:

My conclusion up front: to some degree, structure is always going to be in the eye of the beholder. It may not be useful to attempt a full definition, or a complete description, of what constitutes structure. Still, some classes of "something" need to be ruled out; otherwise, everything constitutes emergence.

1) "Somethings" to rule out:

Statistical regularities do not qualify as structures:

i) In many competitive environments, various outcomes follow a power law. Examples: income, wealth, hits to web pages. Should this statistical regularity qualify as a "something"? I am dubious. If you look hard enough, you can find patterns in almost anything.

ii) While emergenauts have focused on power laws, a much more ubiquitous and fundamental distribution is the normal distribution, which was specifically derived as the outcome of many small errors. Should anything that follows a normal distribution be classified as emergent?

I would say that the normal distribution itself can be considered an emergent phenomenon, but not the things that fit it. So, for example, the height of a group of people is not an emergent phenomenon just because it follows a normal distribution. But the fact, formalized in the central limit theorem, that one very specific distribution characterizes the sum of many independent random variables is an emergent phenomenon.
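As a small concrete check of this regularity (a minimal sketch of my own, assuming nothing beyond the standard central limit theorem): each term below is uniform, nothing like a bell curve, yet their sums are approximately normally distributed.

```python
# Minimal sketch (not from the original talk): sums of independent,
# non-normal random variables look approximately normal.
import random
import statistics

random.seed(0)

# Each term is a small uniform "error"; none of them is normally distributed.
def one_sum(n_terms=50):
    return sum(random.uniform(-1, 1) for _ in range(n_terms))

samples = [one_sum() for _ in range(10_000)]

# The higher-level regularity: the sums cluster symmetrically around 0
# with a bell-shaped spread, even though each term is flat (uniform).
print("mean  ~", round(statistics.mean(samples), 3))
print("stdev ~", round(statistics.stdev(samples), 3))

# Fraction within one standard deviation: about 0.68 for a normal distribution.
s = statistics.stdev(samples)
within = sum(abs(x) <= s for x in samples) / len(samples)
print("within 1 stdev ~", round(within, 3))
```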

2) "Somethings" not to rule out:

Ted Wong says: "When a rough stone wheel is worn smooth and efficient by use, is the smoothness an emergent phenomenon? I say no." I say yes. Think of an oblong stone worn smooth by a stream: the first time you saw one, it came as a complete surprise that a stone could look like this. It was the result of a gazillion water molecules, each of which had no knowledge of, or intention to create, a smooth stone.

C) The System as a whole (System is understood to mean both the lower level and the structure that emerges):

What role does randomness play?

This is a complex and interesting question:

1) Cellular automata illustrate that emergence does not require randomness - agents can follow completely deterministic rules

2) A common view is that in emergence the structure that emerges is more "organized" - less random - than the behavior of the agents at the lower level. In this view, self-organization is synonymous with emergence. So, in the segregation model, while you may not like the outcome, the agents cluster - self-organize - into "like-minded" groups following an initial random placement.

3) In Stephen Wolfram's world view (see last section), deterministic rules can lead to randomness. For Wolfram, the interesting rules are the ones that lead to a "structure" that is more random than the behavior of the underlying agents. This appears to be the polar opposite of 2) above.

4) Information theory (about which I know virtually nothing) may, in a loose sense, support Wolfram's view. According to information theory, a message is most dense (i.e., has maximal information) when it is indistinguishable from randomness. So, one could say that the most complex systems will appear random.

5) A resolution of this paradox may be to distinguish between two different types of emergent processes:

a) Type 1 has disordered agents, perhaps behaving randomly, at the lower level, but generates a more organized structure at the higher level. Most of the work in emergence seems to concentrate on this type.

b) Type 2 processes start with orderly agents, perhaps completely deterministic as in cellular automata, and generate randomness. This seems to be the paradigm that Wolfram is working in (see the sketch below).
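A minimal sketch (my own illustration, not from the talk): Wolfram's Rule 30 is a completely deterministic one-dimensional cellular automaton, yet the column of cells running down the center of the pattern looks statistically random; a crude Shannon-entropy check comes out close to the one-bit-per-cell maximum.

```python
# Minimal sketch (not from the talk): Rule 30, a deterministic 1-D cellular
# automaton whose center column nonetheless looks statistically random.
from math import log2

STEPS = 300
WIDTH = 2 * STEPS + 3        # wide enough that the pattern never wraps around
RULE = 30                    # Wolfram's rule number

# Rule 30 as a lookup table: new cell value for each (left, center, right) pattern.
rule_table = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
              for a in (0, 1) for b in (0, 1) for c in (0, 1)}

row = [0] * WIDTH
row[WIDTH // 2] = 1                      # start from a single "on" cell
center_column = [row[WIDTH // 2]]

for _ in range(STEPS):
    row = [rule_table[(row[i - 1], row[i], row[(i + 1) % WIDTH])]
           for i in range(WIDTH)]
    center_column.append(row[WIDTH // 2])

# Crude randomness check: Shannon entropy of the center column.
# A fair coin would give 1.0 bit per cell.
p1 = sum(center_column) / len(center_column)
entropy = 0.0 if p1 in (0, 1) else -(p1 * log2(p1) + (1 - p1) * log2(1 - p1))
print("fraction of 1s:", round(p1, 3))
print("entropy (bits per cell):", round(entropy, 3))
```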

Is it turtles (emergence) all the way down?

Ants exhibit emergent behavior, but the ants' brains have neurons which themselves exhibit emergent behavior, so is it emergence all the way down? While it is certainly possible to build emergent systems on top of one another, to conclude that it is emergence all the way down is to make everything emergent.

Favorite Quote:

"To a little boy with a hammer, the whole world is a nail"

Let's avoid that (Sorry Doug and Paul).

To what degree can the levels be separated?

By this I mean: can meaningful laws exist that relate only to the structure, without making reference to the rules that the underlying agents at the lower level follow? Wolfram and Doug Blank say no; they believe in what I will call irreducible complexity. I, however, hope the answer is yes. Upper-level laws not based on the rules followed by the underlying agents will not be a complete description of reality, but they can still be true and useful.

Example: Boyle's Law (and, more generally, the ideal gas law) accurately describes how the volume of a gas responds to different pressures and temperatures without making reference to the underlying gas molecules.
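For concreteness (standard textbook physics, not something spelled out at the meeting), the higher-level law can be written with no reference to individual molecules:

\[
P_1 V_1 = P_2 V_2 \quad \text{(Boyle's Law, at fixed temperature)}, \qquad PV = nRT \quad \text{(ideal gas law)}.
\]

Here P, V, T, and n are bulk quantities (pressure, volume, temperature, amount of gas); nothing in the formula refers to the positions or velocities of individual molecules.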

To me, this is a critical, maybe THE critical, question. If the levels cannot be separated, then there is no way to understand the behavior of the system without replaying all of the individual interactions that generated the structure. While this is a full employment guarantee for intellectuals and computers, to me it would be a sad outcome.

It may be useful to briefly explain Wolfram's rationale for believing in irreducible complexity. So here is Wolfram's World-View:

1) Everything is a computation.
2) The complexity of a system is determined by the complexity of the computations that it can perform.
3) Systems that can perform equivalent computations are of equivalent complexity.
4) There is an upper bound to the kind of computations that can be performed, and this upper bound is reached very quickly in nature. Therefore, most systems in the natural and social world are already at maximal complexity.
5) To be able to predict the outcome of a system, you have to be able to "outcompute" it.
6) Since most systems in nature, including the human brain and the biggest theoretical computer, are already at the upper bound of complexity, it is impossible for the human brain to outcompute most systems in nature. Therefore, prediction is impossible.

In a sense, this is where chaos and emergence meet. While both fall under the rubric of complexity, to me they are very different, except:

a) The hallmark of chaos theory is that the outcomes of the equations that govern the system are so sensitive to initial conditions that, in practice, one cannot predict the outcomes (since you will never have exact enough knowledge of the initial conditions). A small sketch of this sensitivity follows this list.

b) If the levels of emergent systems cannot be separated, then emergence shares this unpredictability. The only way that one could predict things would be to replay all of the lower level interactions, and in practice this would be impossible.
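A minimal sketch of the sensitivity described in a) above (my own illustration, using the standard logistic map rather than any system discussed at the meeting): two starting points that differ by one part in ten billion end up completely different after a few dozen steps.

```python
# Minimal sketch (my own illustration): the logistic map x -> 4x(1 - x) is a
# simple deterministic rule, yet trajectories that start almost identically
# diverge completely, so long-run prediction from imperfect measurement fails.
x, y = 0.2, 0.2 + 1e-10      # two initial conditions differing by 10^-10

for step in range(1, 51):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
```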

My conclusion: emergent phenomena may fall into two types:

1) Reducible - those for which the levels can be separated and laws can be established that govern the structure without reference to the agents at the lower level.

2) Irreducible - those that cannot be understood except with reference to the entire past history of the underlying interactions.

It would be interesting to know if this dichotomization maps directly into the Type 1 and Type 2 processes discussed in the section on randomness. The irreducible processes do seem to fit nicely into Wolfram's generation of randomness from deterministic rules. The Type 1 processes, however, may not all be reducible.

So, for example, in the segregation model, the exact pattern of segregation can only be determined by running the model, so, in this sense, the model is Type 1 (the generation of organization from disorderly underlying behavior) but also irreducible. Still, there may be general laws that can be formulated about the segregation model that predict the degree of segregation from the underlying preferences of the agents, the percent of empty spaces, and the initial distribution of agents. Such a law would in my view illustrate that the levels can be meaningfully separated, since the exact pattern of segregation (as opposed to the percentage of segregation) may not be of importance to us.
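Such a higher-level law is easy to imagine in miniature. Here is a minimal sketch of a Schelling-style segregation model (my own illustration, assuming the standard setup rather than whatever exact variant the group ran): agents move when too few of their neighbors share their type, and a single "percentage of segregation" number can be read off at the higher level without tracking the exact pattern.

```python
# Minimal Schelling-style segregation sketch (a standard-model assumption,
# not necessarily the exact variant the group discussed).
import random

random.seed(1)
SIZE = 30            # grid is SIZE x SIZE
EMPTY_FRAC = 0.10    # fraction of empty cells
THRESHOLD = 0.30     # unhappy if fewer than 30% of occupied neighbors match

def random_grid():
    def cell():
        r = random.random()
        if r < EMPTY_FRAC:
            return None
        return 'A' if r < (1 + EMPTY_FRAC) / 2 else 'B'
    return [[cell() for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(grid, x, y):
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0):
                out.append(grid[(x + dx) % SIZE][(y + dy) % SIZE])
    return [n for n in out if n is not None]

def unhappy(grid, x, y):
    me, nbrs = grid[x][y], neighbors(grid, x, y)
    if me is None or not nbrs:
        return False
    return sum(n == me for n in nbrs) / len(nbrs) < THRESHOLD

def segregation_index(grid):
    """Average fraction of like neighbors: the higher-level 'percentage of segregation'."""
    scores = []
    for x in range(SIZE):
        for y in range(SIZE):
            me, nbrs = grid[x][y], neighbors(grid, x, y)
            if me is not None and nbrs:
                scores.append(sum(n == me for n in nbrs) / len(nbrs))
    return sum(scores) / len(scores)

grid = random_grid()
print("initial segregation index:", round(segregation_index(grid), 3))

for step in range(50):
    # Each unhappy agent moves to a randomly chosen empty cell.
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    random.shuffle(movers)
    for (x, y) in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey], grid[x][y] = grid[x][y], None
        empties.append((x, y))

print("final segregation index:  ", round(segregation_index(grid), 3))
```

Re-running this with different THRESHOLD and EMPTY_FRAC values is exactly the kind of experiment that a candidate upper-level law (degree of segregation as a function of preferences, empty space, and initial placement) would have to summarize.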


Additions, revisions, extensions are encouraged in the Forum and/or at emergent.brynmawr.edu

Participants for April 15, 2004: Mark, Al, Karen, Deepak, Mike, Will, Ted, Anne, Doug, Alan, Tim, Jim, Jim, Paul (14)



