Full Name:  David Rosen
Username:  david@wolfire.com
Title:  Book report on Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds by Mitchel Resnick
Date:  2006-03-29 18:11:33
Message Id:  18721
Paper Text:
Emergence 2006
Reviews of Relevant Books
On Serendip

In his book Turtles, Termites, and Traffic Jams, Mitchel Resnick introduces the idea of decentralized systems that give rise to different layers of emergent phenomena. He mostly addresses why decentralized systems are important and how we can teach decentralized thinking to children; he does not really write about how emergent systems work, or why. This book is an effective introduction to the basic ideas of emergence from an educational standpoint. Resnick divides the book into five sections: Foundations, Constructions, Explorations, Reflections, and Projections.

In the "Foundations" section, he describes the trend towards decentralization in five areas: organizations, technologies, scientific models, theories of mind, and theories of knowledge. He refers to the dissolution of the USSR and the decentralization of IBM as examples of the trend away from centralized power structures. However, I do not entirely agree with his claim that organizations are become more decentralized; when I look at mega-corporations like PepsiCo and Disney, I see gigantic hierarchical structures that are constantly growing and acquiring new companies. On the other hand, he is correct about new academic theories. New ideas in biology, cognitive science, and physics are resting more and more on decentralized self-organizing systems.

Moving onto "Constructions," he describes StarLogo, the system he created to help students design and observe decentralized systems. He believes that this will help the students understand these systems much better than if he just explained how they work or demonstrated them. He quotes Confucius: "I hear, and I forget. I see, and I remember. I do, and I understand." To me, this raises questions about the usefulness of this book; most of it consists of telling us about certain Starlogo simulations, and while Resnick provides Internet links to let us try the simulations ourselves, these links no longer work. He emphasizes that in StarLogo, the patch system makes the environment more important, and the turtles make object-oriented programming more intuitive to students.

In the "Explorations" section, Resnick shows us nine examples of simplified simulations of real-world phenomena that result in emergent behavior. He starts with slime mold aggregation using diffusing pheromones, and then goes through ant foraging behavior, traffic jams, termite wood chip collection, segregation, predator/prey ecology, forest fires, and trees. There is also a section on geometry, which seems really out of place in this book; it does not really have anything to do with decentralized systems or emergent behavior. All of the examples are fairly interesting and well-explained, but it is never clear why they are important except as tools to help students understand the basic ideas behind decentralized systems.

The "Reflections" section discusses why the "centralized mindset" is so prevalent, and provides "heuristics" for thinking about decentralized systems. Resnick urges us to keep in mind that positive feedback can be constructive, randomness can help create order, it is important to separate different levels of behavior, and the environment plays an important role in agent behavior.

The last section, "Projections," is only two pages. Resnick argues that in order to overcome the centralized mindset in children, it is important to integrate education about emergence and decentralized systems into the academic curriculum from an early age.

Overall this book is an effective introduction to the basic ideas of decentralized systems and emergent behavior for someone with little experience in the field, or who is interested in teaching these ideas to students. It felt rushed towards the end; the examples and sections kept getting shorter, until the final chapter was only two pages long and had almost no content. It would also have been much more effective if the website it linked to still existed, so that readers could try out the simulations it referred to. To me it was most interesting that the book was written in 1994, and we are still running the exact same simulations in 2006. It makes me wonder if Mitch Resnick was just far ahead of his time, or if the field has really been essentially stagnant for the past twelve years.



Full Name:  Laura Cyckowski
Username:  lcyckows
Title:  Emergence by Steven Johnson
Date:  2006-03-30 15:15:06
Message Id:  18748
Paper Text:


Emergence 2006

Reviews of Relevant Books

On Serendip


'Emergence' is a relatively new and popular term for complexity that results from bottom-up or self-organization, and it is used in many fields, from the natural sciences to computer science to economics. However, recognition of such emergent phenomena pre-dates the term itself and counts among its contributors many influential thinkers from a variety of fields, such as Turing, Darwin, and Engels. In Emergence: The Connected Lives of Ants, Brains, Cities, and Software, author and web technologist Steven Johnson explores the work of thinkers like these, as well as many other current models of emergence in the fields of biology, sociology, and technology. His emphasis is on the role of locally interacting agents that yield bottom-up forces, which in turn produce global behavior as well as other emergent properties.

The book begins with a model that has played a leading role in understanding and elucidating many characteristics of complex systems: ant colonies. Although an ant colony requires a queen for perpetuating the population, her role stops there. The queen, despite what her title implies, plays no role in orchestrating the behavior of the colony. Instead the activities of the colony result from interactions between individual ants. Each ant lives by a simple set of rules that guide her behavior. For instance, an ant's decision to forage depends on the frequency of contact she has with other ants in her immediate surroundings, rather than on any knowledge of what the colony as a whole is doing. The result is what an onlooker might observe as intentional behavior or hierarchical organization. Many behaviors of the colony, though, such as allocation of duties or strategic placement of midden and ant corpses, result from this kind of collective behavior or "swarm intelligence". Attempts to identify any true queen, directing ant, or "pacemaker" element prove fruitless.
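
The foraging rule described here is simple enough to sketch in a few lines. The toy simulation below is a hypothetical illustration of ours, not anything from Johnson's or Gordon's work; the contact rates and threshold are invented numbers. Each ant decides whether to go out based only on how many returning foragers she has recently bumped into, with no view of the colony as a whole.

    import random

    N_ANTS = 200          # ants in the nest
    THRESHOLD = 2         # contacts needed before an ant decides to head out

    def contacts_felt(foragers_out):
        # Each ant samples ten brief encounters; the more foragers already out
        # (and returning), the more likely each encounter is with a forager.
        p = min(1.0, 0.05 + foragers_out / 500)
        return sum(random.random() < p for _ in range(10))

    foragers = 20
    for minute in range(120):
        # Every ant applies the same purely local rule each minute.
        foragers = sum(contacts_felt(foragers) >= THRESHOLD for _ in range(N_ANTS))

    print(foragers, "ants are heading out to forage after two hours")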

Johnson connects these ideas of emergent organization to city development. Urban organization too exhibits patterns formed by lower-level interactions. While top-down forces (zoning laws, planning commissions, etc.) are without question present, there also exist bottom-up forces that together are responsible for certain patterns. Interactions between individuals shape and define communities, form segregated populations, and so on. Local forces between businesses, like competition for customers, help shape commercial distribution. Johnson cites Jane Jacobs, a writer on urban planning, who emphasizes the importance of "sidewalk culture." This alludes to the belief that certain pre-planned layouts are destined to fail for lack of the lower-level interactions among individuals that are needed for the layout to survive and flourish. The point rests not literally on sidewalks themselves, but on the idea that sidewalks serve as a venue for interaction between individuals and then among communities. Interaction, in turn, has the potential to modify subsequent behavior and interactions, as Jacobs stresses: "Encountering diversity does nothing for the global system of the city unless that encounter has a chance of altering your behavior." (96)

Johnson goes on to describe other instances of bottom-up organization in the biology of morphogenesis and the brain, in areas of computer science including learning, artificial intelligence, and genetic algorithms, as well as in the current technological world, including the Web, computer software, gaming, and even the media. Through these various discussions he hits on the most salient features of emergent organization and behavior, the foremost characteristic being that of locality.

The fact that agents-- whether they be ants, neurons, human individuals, or agents of another sort-- base their behavior on their local environments without requiring knowledge about the system as a whole is perhaps the foremost reason why emergent complexity is so interesting. The "organizing force" is then a decentralized one, with no single agent being in charge. Again, an ant's only source of information is the other ants she comes into contact with, and she can retain that memory only for a short period of time (eliminating the possibility of surveying every single ant before deciding on a behavior). Likewise, a city business may settle based on other businesses within, say, a half-mile radius while paying no heed to similar businesses that are located tens of miles away. Johnson points to the example of morphogenesis in the context of embryonic development to further emphasize the role of locality: how does a (precursor) cell specialize? At first glance, it may seem like DNA is orchestrating overall development. No doubt DNA is influential in development, but every cell contains the same DNA. The question becomes, how does a cell know which part of the very large code to follow? The answer, not surprisingly, lies in surrounding cells and the (semiochemical) signals that they emit. This illustrates further that agents within emergent systems not only "think" locally-- that is, base their behavior on the local environment-- but also act locally. An embryonic cell, once specialized, may send out messages to its neighbors to influence their development, much as it was once influenced, but the receiving cells are only those nearby rather than the entire group of cells.

If one looks hard enough, emergent phenomena are to be found in many places: the unique patterns of snowflakes, the alternating stripes of zebras, or the sand dunes of the desert. But the truly remarkable instances of emergent phenomena are those systems that are adaptive, and monitoring a system over time can lead to interesting observations. Johnson appropriately raises another intriguing aspect of emergent systems-- their evolution and ability to "learn". Tracking an ant colony over time shows that its behavior, measured by activity and interaction with neighboring colonies, changes as it grows in size and in age. The older a colony, for example, the more likely it is to avoid confrontation with another colony by foraging along new routes. The striking thing is that an ant's lifespan is at longest twelve months, while a colony can exist for over a decade. Therefore, there is nothing inherent in the ant that determines the colony's behavior; each ant, no matter when it exists in the lifespan of a colony, follows the same rules. The colony's behavior emerges as a function of its evolution over time. "The persistence of the whole over time-- the global behavior that outlasts any of its component parts-- is one of the defining characteristics of complex systems," Johnson states. (82) These types of adaptations by emergent systems provide new ways of thinking about learning. Learning is frequently thought of as something done by a self-aware organism. But Johnson stresses, "...learning is not just about being aware of information; it's also about storing information... respond[ing] to changing patterns." (103) A system might react to some sort of change-- a change in local behavior, or in the number of agents-- and then exhibit a new type of global behavior. If this new type of global behavior allows the system to continue to exist over time, the change can be viewed as adaptive. "Learning" boils down to adaptation, and storing information is simply the perpetuation and stability of a group. Akin to the case of locality, changes in behavior by agents are directed towards the short term, not the long term. Johnson refers to this consequence as a "latent purpose". For example, the aim of an individual setting up a business in a particular location is financial success during the span of an individual lifetime; he or she is ignorant of, or unconcerned with, any long-term organization of the city over the span of many lifetimes.

Johnson goes on to explore the existence of bottom-up forces in the technological world. He cites coverage by the media and national news as operating on a positive feedback mechanism driven in a bottom-up fashion by local stations. On the other hand, on-line programs may operate based on negative types of feedback. Sites that recommend new products, like Amazon.com, adapt over time to an individual's taste based on user input. The gaming community alone boasts many instances of bottom-up, emergent forces. Many games today involve not a specified aim or objective, but rather an exploration of possibilities by the player. SimCity, for example, very closely parallels real urban development. The user has only indirect control over what happens in a city, the rest being determined by lower-level parameters in the game. Johnson also explores many issues about the Web. Does it self-organize? Not quite yet, according to Johnson. Although it is similar to many emergent systems in that it involves a large number of connections, it lacks the property of bi-directionality. That is, the input is largely in one direction-- pages link to other pages without mutual linking. However, many smaller web communities are self-organizing: they allow users to explore particular sites and then provide a personal rating, something that eventually affects the likelihood that someone else will come across the same page or community.

Although Johnson stresses some of the practical applications of emergent systems, some of the broader implications of emergence he touches on cannot be ignored. For one, knowledge and science itself are easily thought of through the lens of emergence. Individual interaction and engagement act as the lower level agents which give rise to the collective property of knowledge. Knowledge is preserved or "stored" through existence of a group of humans. In his discussion of cities, Johnson notes the advancement of ideas due to the existence of urban life, or rather, groups of humans in close contact.

For the reader interested in biological implications of emergence, Johnson highlights different ways of thinking about more abstract ideas. Personality, for example, can be thought of as the collective sum of a number of biofeedback mechanisms, mechanisms controlling things not usually thought of as directly related to personality, such as adrenaline. But more interesting is Johnson's discussion of human consciousness. Though "the jury is still out" on the subject, a reasonable model of consciousness and much of human intelligence is that of an emergent entity somehow brought into existence by locally interacting simpler elements-- neurons. One step up from self-awareness may be awareness of others, as in the case of humans and primates. Johnson suggests that thinking of self-awareness as preceding awareness of other individuals may be backwards. By first recognizing and forming expectations of others, individuals then become aware of their own existence. Of course, a third alternative exists as well. Self-awareness and awareness of others may emerge simultaneously at some threshold, akin to a phase transition that transforms a simple system into an emergent one.

Johnson's survey of a wide range of models reveals that emergent organization appears in many contexts, both natural and man-made (though this raises the question of how much influence we can directly exert on social phenomena and organization, despite what we may think). Emergent phenomena show that decentralized, distributed forces are in fact very real and powerful. Although many ideas, like theories of the mind, remain for the moment philosophical in nature, such imagined models of emergence promote new ways of exploring emergent systems and raise new questions: how might an emergent system form expectations or match its state with others, and so on. As Johnson encourages, emergence provides us with tools to implement in our daily lives as well as new routes for science to explore.



Full Name:  Laura Kasakoff
Username:  lkasakof@haverford.edu
Title:  Er, Does Not Compute. Algorithms, Consciousness, and the Human Mind (An Emergent Reaction to Roger Penrose's The Emperor's New Mind)
Date:  2006-03-30 17:04:15
Message Id:  18750
Paper Text:


Emergence 2006

Reviews of Relevant Books

On Serendip

When I was in elementary school I hated doing my homework. (Some things never change.) Instead of just focusing on finishing my work, I would daydream about a computer that would do all of my homework for me. When I told my parents about my computer idea, my mother said it would just be easier to do my homework than create such a computer, and my father said it would be dishonest since the computer's completion of my assignments would not be my own work. However, this did not put an end to my fantasies of homework evasion. My dream changed to one of a computer that would work just like my brain. If asked to write a paper on the symbolism of birds in Arthur Miller's "The Crucible", it would write the paper exactly how I would if I took the time to author the paper myself. Would it really be dishonest to hand in work done by a computer that would produce precisely the quality of work that I would create myself?

I hoped that by reading Roger Penrose's "The Emperor's New Mind", I might gain insight as to how the human mind could be emulated by a computer. Surely a great mathematician like Penrose would show how a computer algorithm could function like a brain (and give new life to my childhood musings). I had not realized that Penrose actually believes that there is more to consciousness and human mathematical intuition than could ever be computed through an algorithmic process. While he does appreciate the eternal, ethereal nature of mathematics, it is his respect for the human brain's ability to comprehend innate mathematical concepts that leads Penrose to the conclusion that the brain cannot be reduced to a computational procedure.

Penrose denies what is known as strong artificial intelligence: the belief that we could program any computer to have intelligence given the correct algorithm, that is, that our mental activity is the step-by-step execution of some complex algorithm. Philosophically, Penrose tries to discredit strong AI with the example of Searle's Chinese room. We are asked to imagine ourselves in a room where we can have no contact with the outside world, and we are given a story written in Chinese along with a yes-or-no question to answer pertaining to the story. We are also given a (presumably long) set of instructions, an algorithm, in English. Assuming that we can't speak Chinese, we will be acting like Schank's computer program, answering yes-or-no questions that test for understanding of a story. After we complete the English instructions we will arrive at the correct answer, but would we really want to say that we had any understanding of the Chinese story originally presented to us?

Penrose hopes that Searle's hypothetical Chinese room will convince us that an algorithm alone cannot equal the understanding of human intuition. Penrose is angry that even Searle and others have been conditioned by computer people to concede that everything, including the human brain, is a digital computer. Penrose is attacking the school of thought proposed by Wolfram and his "digital determinism" by claiming that, despite popular belief, not "everything is a digital computer" (23). Also, Penrose does not understand how proponents of strong artificial intelligence can be happy with themselves since they end up subscribing to a very extreme form of dualism, the belief that separate from the body and the material brain, there exists a mind that has no physical component. Penrose points out that "[t]he mind-stuff of strong AI is the logical structure of an algorithm" (21). This is not the side of the mind-body debate that most strong AI supporters would want to champion.

Penrose insists that algorithmic processes cannot be the only components that lead to human consciousness because there are incomputable numbers, there are non-recursive mathematical problems that humans can solve but no computer can, and there is no complete, strictly formal mathematical system. He describes how Turing machines cannot solve the halting problem by introducing an incomputable number. He presents Goedel's theorem to show that any attempt to formalize mathematics will contain a statement that is not provable. Therefore, no matter what lens you use to examine the universe (mathematical or otherwise), there exists an object for which no calculation can be carried out. Nevertheless, I believe it is possible that emergent computer programs that try to model complex phenomena through simple rules, rather than increasingly complex mathematical equations, may give rise to artificial intelligence and mathematical intuition in computers.
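
The non-computability result mentioned here, Turing's halting problem, can be summarized in a short sketch. The function below is hypothetical by construction: assuming an oracle "halts" existed, the self-referential program built from it produces a contradiction, which is the heart of the argument Penrose retells.

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually halts.
        Turing's argument shows no such always-correct procedure can exist."""
        raise NotImplementedError

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about program run on itself.
        if halts(program, program):
            while True:          # oracle says "halts", so loop forever
                pass
        else:
            return               # oracle says "loops forever", so halt at once

    # Asking whether contrarian(contrarian) halts contradicts the oracle either way,
    # so the assumed oracle cannot exist: the halting problem is not computable.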

Penrose's inspiration for his belief that the brain is not computable stems from his work with tilings of the plane. Questions about plane tilings ask whether a set of tiles covers the plane periodically, that is, whether the polygonal tiles repeat a given tiling pattern. There are many tile shapes that can tile the plane "periodically" or "non-periodically", but the interesting question that Penrose and other mathematicians worked on was whether there exists a finite set of shapes that can tile the plane only non-periodically. Penrose was able to find a set of just two tiles that tile the plane only non-periodically. This is an example of a non-recursive problem that cannot be solved algorithmically by any computer. Therefore, Penrose argues, since the brain can solve problems a computer can't, the brain cannot be emulated by a computer program.

Penrose says, "It is important to realize that algorithms do not, in themselves, decide mathematical truth. The validity of an algorithm must always be established by external means." Even if this is the case, couldn't we program a computer which possessed the consciousness to decide mathematical truths? After all, isn't our mathematical intuition an emergent system that develops from observation and practice? Penrose hopes that there really is a fundamental difference between our physical embodiment and the physical matter of a computer. Perhaps through a study of how environments affect agents, we can learn how the physiological components of our brain lead to certain thought processes, and then I do not see why, with further understanding, we should not be able to program these processes.

Of course Penrose spends a lot of time trying to define mathematical truth. I got the impression that Penrose sees mathematical truth as an emergent phenomenon itself. He says "We are driven to the conclusion that the algorithm that mathematicians actually use to decide mathematical truth is so complicated or obscure that its very validity can never be known to us...[But] mathematical truth is not a horrendously complicated dogma whose validity is beyond our comprehension. It is something built up from such simple and obvious ingredients..." (418). Penrose believes that humans are more fit than computers to comprehend these simple steps behind mathematical truth, but I believe with a deeper understanding of why we grasp mathematical concepts the way we do (perhaps through emergent modeling), we will be able to share this understanding with computers, too.

In fact, it seems that maybe Penrose himself has come to a contradiction because of his consideration of emergent phenomena. As an example of the emergent nature of mathematics, Penrose describes how the Mandelbrot set emerges from a simple set of rules, yet it produces such a beautiful pattern in the complex plane that Mandelbrot thought the computer he was working on had made a mistake when he saw it for the first time. He did not believe that such natural beauty could be inherent in such a simple mathematical set of rules. If Penrose believes in the inherent, god-given nature of mathematics, I think he should consider that the human brain could come from a set of simple rules as well. Of course, the rule set might not be as simple as the one that generates the Mandelbrot set, but examples like the Mandelbrot set show us that although the human brain is both miraculously beautiful and complex, the rules governing it need not be. Why does he think the most complex phenomena are so far from being computable? It is those things we can't easily grasp, paradoxical tricks that elude human intuition, that create incomputable messes. The brain and the algorithm behind it might just be the next of God's mathematical jewels waiting to be uncovered.
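
The "simple set of rules" behind the Mandelbrot set is literally a one-line iteration. Here is a minimal sketch (the grid bounds and iteration cap are arbitrary choices of ours, not anything from Penrose's text) that draws a crude picture of the set by repeatedly applying z -> z^2 + c and asking whether the orbit stays bounded.

    def in_mandelbrot(c, max_iter=50):
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:        # orbit escapes: c is outside the set
                return False
        return True               # orbit stayed bounded (up to max_iter)

    # Crude character-art rendering of the set on the complex plane
    for row in range(21):
        y = 1.2 - row * 0.12
        line = ""
        for col in range(61):
            x = -2.1 + col * 0.05
            line += "*" if in_mandelbrot(complex(x, y)) else " "
        print(line)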

I believe Penrose takes too great a leap from saying that some things are not computable to saying that the brain is one of these things. Why should something as familiar as our human brain be one such incomputable thing? We have seen in emergent phenomena like Conway's Game of Life and Langton's Ant that completely deterministic systems are not necessarily predictable. So perhaps our actions themselves are all predetermined by the mathematical equation of our brain. Our sense of free will could stem from the fact that the algorithm is so complex that, while our future is destined to be a certain way, we cannot predict the outcome of our lives. After all, our brains are but a network of neurons out of which emerges our consciousness. Non-recursive mathematical problems that can only be solved non-algorithmically by the human mind, such as the Penrose tilings, are Penrose's main justification for his conclusion, but I hope that it will be possible, through emergent modeling of the brain, for a computer to show mathematical intuition even for non-algorithmic problems.
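
Langton's Ant, one of the deterministic systems mentioned above, makes the point concretely: two rules, yet the long-run behavior (including the "highway" that eventually appears) is effectively impossible to foresee without simply running the system. A minimal sketch, with an arbitrary step count:

    def langtons_ant(steps=11000):
        black = set()            # cells currently black; everything else is white
        x, y = 0, 0              # ant position
        dx, dy = 0, 1            # ant heading, initially "up"
        for _ in range(steps):
            if (x, y) in black:
                dx, dy = -dy, dx         # on black: turn left...
                black.remove((x, y))     # ...and flip the cell to white
            else:
                dx, dy = dy, -dx         # on white: turn right...
                black.add((x, y))        # ...and flip the cell to black
            x, y = x + dx, y + dy        # move forward one cell
        return black

    print(len(langtons_ant()), "black cells after 11,000 steps")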

In the prologue of "The Emperor's New Mind", we meet a young boy in a futuristic society who is about to ask an omnipotent computer a question at its grand "turning-on" ceremony. In the epilogue we discover the boy has asked the computer how it feels, and the audience at the ceremony laughs. I think it is funny that Penrose should use such a story to try to convey the uniqueness of consciousness and "feeling" to human beings (or at least living creatures with some capacity to verbalize). I find the question "How do you feel?" to be one of the most annoying questions because it is nearly impossible to answer. At any given moment one feels so much more than what can be conveyed through the standard {sad, happy, tired...} responses. Perhaps the most intelligent computers in the world are unable to answer how they feel, but in all honesty, can humans truly give a complete survey of their respective internal states? Perhaps the experience of emotions is itself an emergent phenomenon in which the interaction of neurons and biochemical reactions and (Penrose's dear friend) quantum physics conflate to produce our emotional consciousness and self-awareness. If this is the case, it makes sense that through computer modeling and animal robotics and other more emergent approaches to artificial intelligence, maybe one day the computer version of my brain will write this paper for me!



Full Name:  Sunny Singh
Username:  ssingh@haverford.edu
Title:  The Universality of Emergence
Date:  2006-04-02 11:14:36
Message Id:  18787
Paper Text:
Emergence 2006
Reviews of Relevant Books
On Serendip

The field of emergence is "emergent" on two separate levels. In the literal sense, emergence is the process by which simple agents, adhering to a simple rule-set, coalesce into complex systems. In and of itself, the actual study of emergence also exhibits emergent characteristics, as demonstrated in Steven Johnson's Emergence: The Connected Lives of Ants, Brains, Cities, and Software. Johnson manages to provide the reader with a primer on how the emergent train of thought can be applied to numerous seemingly disparate subjects. What soon becomes evident in Johnson's broad discourse is that whether in software development, the study of city and socioeconomic structure, neurology, or myrmecology (the study of ants), to name a few, an emergent point of view proves to be an effective tool for studying these complex, and often chaotic, systems. By providing both the historical motivations for studying emergence and firsthand accounts of those at the forefront of emergence in each of the aforementioned fields, Johnson delivers substantial evidence that agents on a low level have the ability to self-organize and exhibit macroscopic behavior through feedback networks and adaptation. It is due to this rather ubiquitous nature of emergent tendencies that a colony of ants, for example, may be able to provide more insight into the lives of humans than meets the eye.

Ant colonies have been known to be highly organized and efficient in nearly every facet. Upon initial inspection, it appears as though these colonies materialize through a top-down, hierarchical construction. The societal structures in which humans live are products of a top-down hierarchical system. Local and federal governments are appointed by the individuals in a society, and these leaders subsequently impose the laws by which the society must abide. Since the very world in which people live is immersed in centralized structures, it is only natural to assume that the same applies to ant colonies. Through his dialogue and interaction with Deborah Gordon, a renowned behavioral ecologist, Johnson presents the reader with intriguing facts about the delegation of work amongst ants in a colony.

Contrary to naïve belief, the queen does not dictate the actions of each individual ant in the colony; given the size of the ant population, it is evident that such a task would be unfeasible for a single queen ant to undertake. This lack of leadership does not imply that anarchy reigns supreme in the colony. Rather, Johnson presents the notion of an ant colony as an archetypal example of bottom-up organization. Much in the same way that slime mold cells release cyclic AMP to assemble other cells—which, in turn, "triggers waves of cyclic AMP that wash throughout the community" (Johnson 14)—ants rely heavily upon pheromone trails left by other ants. The presence of a pheromone trail may indicate to an ant that a fellow worker has found a luscious cornucopia of food; because of this feedback, that ant will follow the path to the food and similarly leave a pheromone trail for more ants to follow. In this sense, the individual ants are not independently scouring for food throughout the entire day. Although they do act independently of one another at first, the ants eventually converge into a single, coherent being. What is even more startling about the emergent behavior of the ants is that the paths they create are often geometrically optimal. In describing one of the colonies she has been studying, Gordon states, "they've built the cemetery at exactly the point that's furthest away from the colony. And the midden (waste) is even more interesting: they've put it at precisely the point that maximizes its distance from both the colony and the cemetery" (Johnson 33). Similar to the way biological evolution motivated John Holland and Danny Hillis to develop genetic algorithms and genetic programming, respectively, the uncanny potential of pheromone trails was used to attack a daunting conundrum in mathematics—the traveling salesman problem.

Marco Dorigo, an artificial intelligence researcher at the Free University of Brussels, exploited the pheromone schema in an attempt to map out the shortest path between 15 different cities. Dorigo essentially sent out a swarm of individual virtual salesmen, all of whom would explore the various ways of visiting each of the cities. When a solution was found, the agent would retrace its path and leave a pheromone trail for the other agents to follow. After several generations of salesmen scouring the map, following thick pheromone paths, and retracing efficient paths, an optimal solution eventually emerges. Johnson furthers the idea that close interaction between agents in a system has implications on a larger scale. In fact, Johnson believes that it is in the best interest of neighborhoods to interact in order to materialize into a vibrant city.
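
The mechanism Dorigo exploited can be sketched in a few dozen lines. The toy version below uses made-up random cities and invented evaporation and deposit rates; it is a sketch in the spirit of ant colony optimization, not Dorigo's actual Ant System code. It shows the core loop: ants build tours biased toward short, heavily marked edges, and good tours reinforce the pheromone that guides later ants.

    import math, random

    cities = [(random.random(), random.random()) for _ in range(15)]   # 15 made-up cities
    n = len(cities)
    dist = [[math.dist(cities[i], cities[j]) for j in range(n)] for i in range(n)]
    pher = [[1.0] * n for _ in range(n)]        # pheromone level on each edge

    def build_tour():
        tour = [random.randrange(n)]
        unvisited = set(range(n)) - {tour[0]}
        while unvisited:
            i = tour[-1]
            candidates = list(unvisited)
            # prefer edges that are short and carry a lot of pheromone
            weights = [pher[i][j] / (dist[i][j] ** 2 + 1e-9) for j in candidates]
            nxt = random.choices(candidates, weights)[0]
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    def tour_length(tour):
        return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

    best = None
    for generation in range(50):
        tours = [build_tour() for _ in range(20)]       # a cohort of virtual salesmen
        for row in pher:                                # pheromone evaporates...
            for j in range(n):
                row[j] *= 0.9
        for t in tours:                                 # ...and shorter tours deposit more
            amount = 1.0 / tour_length(t)
            for k in range(n):
                a, b = t[k], t[(k + 1) % n]
                pher[a][b] += amount
                pher[b][a] += amount
        champ = min(tours, key=tour_length)
        if best is None or tour_length(champ) < tour_length(best):
            best = champ

    print("shortest tour found:", round(tour_length(best), 3))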

In situations where neighborhoods are secluded from one another, the only things that "emerge" are xenophobia and an overall lack of emergence. In cities where neighborhoods are both adjacent and within walking distance of one another, the individuals in the city will most likely communicate when they cross paths on a sidewalk. The local interaction between the different groups is crucial if the groups are eventually to coalesce. Although this emergence may not be immediate, such coalescence may come to fruition on longer timescales. Thus, given enough interaction and social feedback, a united city working as one can form from a set of disjoint groups.

Johnson stresses that although emergence seems applicable in almost every situation, there are certain conditions that must be met in order for there to be emergence in the first place. One of his most interesting arguments is that the vital communication must be between agents working on the same level. Emergence is virtually nonexistent when agents from different scopes attempt to interact. Johnson gives a compelling argument using cars in a city. A car which travels on a stretch of highway does not provide feedback to the neighborhoods it passes—cars and neighborhoods are on different scales. Rather, the car will provide feedback for the other cars zipping down the highway, and vice versa. The feedback on this level and in this scenario leads to the emergence of traffic jams.
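
This kind of car-to-car feedback is easy to see in a toy simulation. The sketch below is loosely in the spirit of the well-known Nagel-Schreckenberg traffic model, not anything from Johnson's book; the road length, car count, and hesitation probability are invented. Each driver reacts only to the gap in front of them, yet stop-and-go jams emerge on the ring road.

    import random

    ROAD, CARS, VMAX, P_SLOW = 100, 35, 5, 0.3
    positions = sorted(random.sample(range(ROAD), CARS))    # cars on a circular road
    speeds = [0] * CARS

    for step in range(200):
        moved = []
        for i, x in enumerate(positions):
            gap = (positions[(i + 1) % CARS] - x - 1) % ROAD   # empty cells to the car ahead
            v = min(speeds[i] + 1, VMAX, gap)                  # speed up, but never hit the car ahead
            if v > 0 and random.random() < P_SLOW:             # occasional human hesitation
                v -= 1
            speeds[i] = v
            moved.append((x + v) % ROAD)
        positions = moved                                      # all cars update in parallel

    stopped = sum(1 for v in speeds if v == 0)
    print(stopped, "of", CARS, "cars are standing still at the end of the run")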

Despite the fact that traffic jams emerge from the feedback between cars, Johnson suggests that feedback could also be used to combat such predicaments. By implementing a feedback network between cars and stoplights, it is possible that a homeostatic timing scheme could be derived which would optimize the movement of the cars such that jamming would be minimized.

The recurrent thread that seems to resonate all throughout Johnson's work is the reliance that emergent systems have on feedback. Individual agents are able to converge and attain macroscopic homeostasis through interaction with similar agents on the microscopic scale. The fact that emergence is exhibited on so many unique levels is a testament to how powerful it is. Johnson does a marvelous job of lucidly presenting this budding idea and demonstrating how far-reaching its implications are. Notwithstanding Johnson's vast array of emergent examples, it would be a grave underestimate to believe that the study of emergence has reached its pinnacle. Emergence has opened the eyes of those who have studied it and has inspired fascinating work in a large number of circles. For many, mankind's current understanding of emergence is merely a scratch on the surface of something bigger. Regardless of whether emergence holds the key to true artificial intelligence, or whether the field itself emerges into the highly coveted 'Theory of Everything', it is wise for researchers and enthusiasts alike to continue pushing the limits of emergence.




Full Name:  Jesse Rohwer
Username:  jrohwer@haverford.edu
Title:  Commentary on Douglas Hofstadter's Gödel, Escher, Bach: an Eternal Golden Braid
Date:  2006-04-02 15:31:52
Message Id:  18790
Paper Text:


Emergence 2006

Reviews of Relevant Books

On Serendip

Gödel, Escher, Bach is an entertaining and thought-provoking exploration of several related mathematical, philosophical, and computer science themes cast in a popular science perspective. Published in 1979, it received the Pulitzer Prize for general non-fiction in 1980. Throughout the book, Hofstadter illustrates such concepts as unpredictable determinism, self-reference and self-representation, Gödel's incompleteness theorem, intelligence, and consciousness through a combination of prefaces consisting of dialogues between fictional characters, analogies to Bach's music, Escher prints, and paintings by René Magritte, and lucid direct exposition.

One of the most prominent themes in GEB is a reductionist explanation of consciousness and human intelligence. Hofstadter states that "to suggest ways of reconciling the software of mind with the hardware of brain is a main goal of this book." Although some people are still debating the question of whether or not conscious experience can be explained as an epiphenomenon of relatively well-understood microscopic physical processes (i.e. as a secondary, emergent property—"it is not built into the rules, but it is a consequence of the rules"), acceptance for this description is certainly more widespread today than it was a quarter century ago, when Hofstadter wrote GEB. For this reason, I was less interested in this theme than in some of the others. However, because all of the concepts that Hofstadter presents are interrelated, by addressing a few of what I perceive to be Hofstadter's most interesting themes—unpredictable deterministic systems, levels of complexity, and the relationship between self-reference and incompleteness—and by discussing some of the questions that GEB raises, I will also address the problem of consciousness.

Hofstadter doesn't discuss cellular automata, which we have found in class to provide an excellent example of how simple deterministic systems can have unpredictable and complex behavior. However, he does explore the unpredictability and complexity of emergent behavior manifested in ant colonies, intracellular enzymatic processes, and neurons in the human brain. It is only through the interactions of a multitude of ants, no single one of which possesses an internal plan for the often complex design of the anthill, that such vast (relative to the size of each individual ant, at least) and intricate (arches, mazes of tunnels, and towers) structures are eventually built. The neuronal example of unforeseeable complexity arising from simple parallel agents is also fascinating. Hofstadter points out the difficulty of localizing higher cognitive processes due to the fact that any individual neuron may interact with thousands of others, which in turn interact with thousands of others, all in parallel, to produce complex mental behavior. Finally, Hofstadter relates this emergent complexity to creativity, making the point that determinism does not rule out creativity because there is more than enough pseudorandomness in any sufficiently large deterministic system to give rise to unpredictable, "creative" results. Although the evidence of creativity in computer programs has been doubtful to date, the development of more powerful computers and correspondingly more complex, unpredictable programs is promising. Hofstadter explains that "When programs cease to be transparent to their creators, then the approach to creativity has begun."
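
For readers who have not met cellular automata, a few lines are enough to show what "simple deterministic but unpredictable" means. The sketch below runs an elementary one-dimensional automaton; rule 30 is an illustrative choice of ours, not something taken from GEB. The update rule is trivial, yet the triangle of cells it prints looks disordered and resists prediction short of running it.

    RULE = 30                                   # Wolfram's rule number for the update table
    width, steps = 63, 30
    cells = [0] * width
    cells[width // 2] = 1                       # start from a single "on" cell

    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        cells = [
            # look up the new state from the 3-cell neighborhood (left, self, right)
            (RULE >> (cells[(i - 1) % width] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
            for i in range(width)
        ]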

Another interesting theme pervading Hofstadter's work is that of levels of complexity. Hofstadter explains that it is almost always necessary to introduce higher-level concepts to make the task of understanding complex systems tractable. In some cases, these approximations are almost perfect, as in the case of the gas laws—solving an equation in terms of pressure, temperature, and volume of a gas will always yield an answer that does not deviate perceptibly from complete accuracy. However, at the microscopic level there are no such things as pressure or temperature—these concepts have been invented because calculating the velocity vector of every individual gas particle would be a near-impossible task. Another classic example of the utility of higher-level grouping, or "chunking", is in weather prediction. We base our calculations on cold fronts, hurricanes, and other macroscopic concepts, but to the individual atoms that make up the atmosphere there is no "cold front" or "hurricane".

The theme of levels was particularly interesting when applied to intelligence. Hofstadter is a professor of computer science and cognitive science at Indiana University, and his knowledge in the field of artificial intelligence shows. He employs the idea of "symbols" to describe mental concepts which must exist in the brain as some pattern of neural connectivity and firing, just as a cold front is a pattern of atmospheric activity. Furthermore, he concludes from a comparison of human intelligence (based on complex neural structures that differ from person to person) to the intelligence of lower animals (in one example the solitary wasp, which has many fewer neurons than a human and which demonstrates seemingly intelligent behavior that turns out on closer inspection to be nothing but a very simple and inflexible predetermined program) that many interacting neurons are needed to achieve the capacity for logical manipulation of symbols, and that this ability is in turn necessary for human-level intelligence. Related to this conclusion is his assumption that computers will eventually achieve human-like intelligence, but only through mimicking the human brain architecture. He goes on to erroneously predict that no special purpose chess program will be capable of beating the best human players because only a general purpose artificial intellect, based on the emergent properties of neural networks, would be intelligent enough. This thinking is confused; chess is a game with simple, well-defined rules, making it closer to arithmetic than to the types of pattern recognition tasks humans are specialized to perform. Since computers can be applied to arithmetic tasks with far more efficiency than humans can, a computer with enough circuitry to rival human intelligence should outperform a human in chess; Deep Blue did just that to Kasparov, and with significantly less processing power than the human brain is estimated to have. However, this minor misunderstanding aside, Hofstadter's belief in the importance of emergent systems for generating high general intelligence is supported today—the most prominent contemporary view in artificial intelligence research is that it will be necessary to mimic the human brain's architecture in order to emulate its intelligence.

Finally, Hofstadter brings Gödel into the picture with an examination of the incompleteness theorem and its implications. He explains the concepts of self-reference and self-representation, and the fact that a formal system powerful enough to refer to itself cannot be both complete and consistent. He postulates that consciousness is probably somehow the result of the brain's ability for self-reference, which is not an uncommon explanation in cognitive science. This is all familiar ground, and Hofstadter does a good job of presenting the material, but the end effect of GEB is to leave me searching for explanations to unanswered questions. That is, what does Gödel's incompleteness theorem imply about reason itself? Are all attempts to fully understand reality futile? And what are the implications of our own ability to create systems as logically complete (or incomplete, depending on how you look at it) as our own world? Doesn't this mean that we could be part of a larger system about which we know nothing? Could it be that we can know something about this larger system, such as that it must be more complex than our own world? These are the types of philosophical questions that reading GEB evoked, and they are reflected in some of Magritte's pipe series that Hofstadter includes—for example, a painting of a pipe with the caption "Ceci n'est pas une pipe," i.e. "This is not a pipe." At first we may think, 'of course it's a pipe,' until we realize that what Magritte means is that it is really just a painting. Another painting features a room with the painting of the first pipe on an easel and a "real" pipe floating above it—now there are three layers of "reality" evident: our world, the world of the painting, and the world of the painting within the painting. It forces the viewer to confront the subjective nature of reality—do we necessarily exist in the 'highest' layer?

Another slightly less philosophical but still disturbing question that Hofstadter raised in my mind was whether or not computers will eventually attain or surpass human intelligence. Hofstadter predicts that they will attain intelligence comparable to that of humans, and says he is unsure of whether or not they will ever exceed it. To me, it seems obvious (although maybe this is the result of having read Hans Moravec's papers on AI) that computers will be capable of exceeding human intelligence in the not-too-distant future. The question is whether or not they should be allowed to reach this point. Considering the state of chaos of computer software even today—flawed programs, viruses, countless opportunities for humans to exploit the oversights of software developers to accomplish their own often malicious ends—I think it would be entirely foolish to delude ourselves into thinking that we could "control" whatever new intelligence we create. And once the hardware is powerful enough and the programming techniques have been perfected, what's to prevent someone or something from creating, either accidentally or intentionally, a hyper-intelligent electronic entity with malicious intent? Is it our fate to be destroyed or replaced by our creations? Should we accept this possible outcome? Should we embrace a transition from our biological origins to the perpetuation of the human race through artificial progeny? Personally, I think not, but reading GEB has made me realize that if we are not careful, we may not have a choice.

I highly recommend Hofstadter's Gödel, Escher, Bach: an Eternal Golden Braid. It is well-written, interesting, informative, and thought-provoking.



Full Name:  Leslie McTavish
Username:  lmctavish@brynmawr.edu
Title:  Finding Order in the Universe
Date:  2006-04-03 06:59:56
Message Id:  18799
Paper Text:
Emergence 2006
Reviews of Relevant Books
On Serendip

Steven Strogatz is currently a mathematics professor at Cornell University. He studied at Princeton and Cambridge and received his PhD from Harvard. He also spent five years teaching mathematics at MIT. In addition to teaching for more than twenty years, he has been studying the role that sync plays in subjects such as human sleep cycles, three-dimensional chemical waves, biological oscillators, and fireflies. His book Sync is the story of his personal journey from grad school up until the time that he wrote the book, detailing many of the intriguing discoveries he and others have made. What makes the book enjoyable to read is not only the wide range of intriguing subjects that he covers, but also the engaging and sometimes humorous stories about the people who have influenced his life. These people are his teachers, colleagues, students, and other scientists who are all exploring what seems to be emerging as a common thread throughout the universe. This thread is sync, and it can be found almost anywhere one takes the time to look. It is in fireflies in the jungles of Thailand, the beating of the human heart, power grids, lasers, and superconductors.

One drawback to the book is that some of the topics he covers involve complex mathematical and technical explanations that can be difficult, if not impossible, for someone without the appropriate background to understand. Most of the time, however, Strogatz explains fundamental processes with such simplicity that someone without prior knowledge can understand the majority of the subjects he covers. For example, one of the topics that he explains in a fair bit of detail is oscillation. Oscillation is at the heart of many synchronous systems, and he uses the simple image of flushing toilets to provide an articulate explanation of how the process works. Strogatz uses fireflies not only as an example of oscillators that synchronize but also to point out the importance that sync has in other applications.

Fireflies are a perfect example of oscillation, and they have been the subject of a great deal of study. In the jungles of Thailand, flashing fireflies are able to spontaneously synchronize. It was thought that this was the only place on earth where this was happening, until a woman in Tennessee reported that she had witnessed the same thing happening there as well. When it was announced that the government was spending tax money to study the synchronous behavior of fireflies, one Representative from Wisconsin was outraged. However, Strogatz points out that understanding the principles of sync has allowed engineers to identify and solve traffic jams on the internet, and firefly enzymes are being used in the testing of drugs to treat tuberculosis.
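
The firefly story is usually formalized as a population of coupled oscillators. The sketch below is a minimal version of that idea, in the spirit of the Kuramoto model that underlies much of Strogatz's research; all constants are arbitrary choices of ours, not values from the book. Each oscillator has its own natural frequency but is nudged toward the rest of the group, and the population drifts into step.

    import math, random

    N = 100
    freqs = [random.gauss(1.0, 0.05) for _ in range(N)]          # natural flashing frequencies
    phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]  # start out of step
    K, dt = 1.5, 0.05                                            # coupling strength, time step

    def coherence(phases):
        """0 means completely scattered phases, 1 means perfect synchrony."""
        re = sum(math.cos(p) for p in phases) / len(phases)
        im = sum(math.sin(p) for p in phases) / len(phases)
        return math.hypot(re, im)

    print("coherence at start:", round(coherence(phases), 2))
    for step in range(2000):
        updated = []
        for i in range(N):
            pull = sum(math.sin(pj - phases[i]) for pj in phases) / N   # nudge toward the group
            updated.append(phases[i] + dt * (freqs[i] + K * pull))
        phases = updated
    print("coherence after 2000 steps:", round(coherence(phases), 2))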

Strogatz was first introduced to sync while he was studying at Cambridge in 1981. He felt he lacked a clear direction for his career and was looking for some inspiration in a local book store. He came across a book with a title remarkably similar to the subject of his own thesis paper, and was intrigued by the originality of the author's ideas. The author was Arthur Winfree, and he was to become Strogatz's teacher, mentor, and friend.

Strogatz wrote to Winfree and began working with him that summer. His first project was to study the behavior of oscillators in a three-dimensional environment. Up until this time they had only been studied with regard to one dimension, time. But in reality these systems do not operate in a single dimension. The problem in attempting to perform these studies was that the mathematics involved was extremely complex, and computing capacity at the time was still relatively low. This was coupled with the fact that they were unsure of how to interpret any results of these calculations.

Instead of drawing on Strogatz's mathematical background, Winfree put him to work studying Zhabotinsky soup. It sounds like something invented for a science fiction novel, but it was developed in the 1950s by two Russian scientists. The remarkable thing about these chemicals was that they were capable of spontaneously creating waves similar to those that cause the human heart to beat. With the aid of this soup, Strogatz was able to model the activity of oscillators in a three-dimensional environment. In more recent years these same methods have enabled scientists to discover another type of wave that may help explain the reason behind sudden cardiac deaths.

Sync is involved in almost every system in the human body. Pacemaker cells in the heart all oscillate, and it is their synchronization that causes the heart to beat. Oscillating cells in the intestine synchronize into a rhythm that aids in digestion. The body's circadian rhythm is thought to be linked to pacemaker cells which control our sleep patterns. Scientists are studying the synchronous qualities of neural activity, hoping to gain a better understanding of how we learn, how we recognize odors, and how memories are formed.

All of these systems of oscillators, which function in sympathy with others in their geographic vicinity, are self-organized networks. They bear a striking resemblance to more complex networks that we are only beginning to comprehend. Massive power grids and the Web are examples of gigantic networks that have evolved without an organizer responsible for the overall design, yet they display the same spontaneous organization seen in the synchronizing oscillating systems of flashing fireflies and Zhabotinsky soup. This so-called ripple effect is what enabled just two failed power lines to disrupt power to over 7 million people in 1996, and the Love Bug worm to cause billions of dollars of damage worldwide.

The effect of sync also shows up in other surprising places. The Millennium Bridge in London, which linked St. Paul's Cathedral with the Tate Modern, opened in June of 2000. It was a radically new suspension bridge design. On opening day, once the ribbon was cut, the public streamed onto the bridge. It suddenly began to vibrate and sway from side to side, sometimes deviating up to 20 centimeters from its resting position. Engineers tried in vain to discover what miscalculation they had made, and the bridge was closed just two days later. It was later discovered that it was the synchronized footsteps of the people crossing the bridge that caused the effect. This was surprising for two reasons. One was the obviously unpredicted effect that the crowd walking across the bridge would have on its movement; the other was that 2000 people, unprompted in any way, somehow managed to move into perfect sync.

The role that sync plays in so many diverse systems in the universe is prompting people like Strogatz to study science in a whole new way. One can sense in this book the enthusiasm that Strogatz has for the work that he is involved with. He believes that a new era in scientific research is emerging that may lead to the discovery of the secret of the universe.




Full Name:  Julia Ferraioli
Username:  jferraio@brynmawr.edu
Title:  A Review of Holland's Seven Basics
Date:  2006-04-03 11:22:11
Message Id:  18800
Paper Text:
Emergence 2006
Reviews of Relevant Books
On Serendip

Different people will look at different aspects of life with varying preconceptions. One person will find a certain beauty, a pattern in the genes which make up an organism, but find utter chaos in the workings of a city. Another person might see it the other way around, and yet another might see both as incomprehensible. John Holland attempts to simplify these systems into basic abstract attributes. In his book, Hidden Order: How Adaptation Builds Complexity, the father of genetic algorithms steps through both the properties which he believes are common to all complex systems and the development and principles of his modeling system, Echo.

The central phenomena which Holland studies in his book are what he terms complex adaptive systems, or cas (not to be confused with the plural of CAs). A cas is a type of complex system that has many parts yet retains coherence despite the introduction of new elements or challenges. In short, it is a dynamic network with the ability to change in response to stimuli. Many phenomena may be considered complex adaptive systems, including ant hills, cities, the immune system, and the ecosystem. All of these examples have the ability to learn from past experiences. On occasion, they work without any apparent reason. They are so complex that we cannot predict the exact outcome of any event acting on them.

Holland postulates that there exist seven basic elements that characterize a cas: four of them properties and three of them mechanisms. Keeping the two types separate, I will attempt to simplify the explanations of these elements. The most basic of the properties, and in my opinion one of the most essential, is the property of aggregation. Aggregation is as much a property of a cas as it is an ability in ourselves. We have the ability to look at a collection of objects (or elements, if you prefer) and see past the specifics and generalize. Instead of seeing 10 different types of cars and thinking that each one is truly unique and none of them could ever fit into a more general description, we look at that collection and think, yes, they are all cars. So it also is with complex adaptive systems; their agents can be generalized into categories, and then everything in a category is treated the same.

The next property is that of nonlinearity. Linear equations follow the rule that the whole is equal to the sum of its parts. In contrast, nonlinear systems are more than the sum of their parts: instead of a simple summation, the behavior involves products of dissimilar variables, and this reveals far more about the cas than the sum alone would. Inevitably, nonlinear systems are also more complicated to analyze than linear ones. Flows, in terms of complex adaptive systems, work much as they do in everyday life. Resources flow over a network of nodes (agents) and connectors. They are customarily denoted as such: {node, connector, resource}. A more specific example would be {cities, roads, produce} to represent the flow of produce to cities. When we look at flows, an important concept to grasp is the multiplier effect, which happens when an additional resource or node is introduced into the system: the injection passes from node to node, and its overall effect on the flow is larger than the original input. Another concept is that of the recycling effect: how does the reuse of resources affect the system as a whole? These are both things to keep in mind when looking at the flow of a cas.
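
Holland's {node, connector, resource} notation and the multiplier effect can be made concrete with a toy flow. In the sketch below, the three-node network and the pass-on fraction are invented for illustration and are not taken from Holland's book; one unit of resource injected at a node keeps circulating, and the total activity it generates exceeds the injection itself.

    nodes = ["farm", "market", "city"]
    connectors = {"farm": "market", "market": "city", "city": "farm"}   # each node's outgoing edge
    PASS_ON = 0.6            # fraction of incoming resource forwarded to the next node

    injection = 1.0          # one unit of resource injected at the farm
    packet, here = injection, "farm"
    retained = {n: 0.0 for n in nodes}
    total_activity = 0.0

    for hop in range(60):                    # follow the resource around the network
        total_activity += packet             # every arrival counts as activity
        retained[here] += packet * (1 - PASS_ON)
        packet *= PASS_ON                    # the remainder flows along the connector
        here = connectors[here]

    print("injected:", injection)
    print("total activity generated:", round(total_activity, 2))   # the multiplier effect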

Then Holland discusses the property of diversity, in which each agent fulfills a function delineated by its interactions with the surrounding agents. As agents spread, they are modified and diversified: new interactions develop, creating new niches to be filled by different types of agents. If an agent disappears, it leaves a hole in the system; while the agent that takes its place may not be identical, it tends to fill the same niche and perform equivalent tasks. Patterns in complex adaptive systems are thus likely to persevere despite disturbances. The example Holland gives is water, whose fabric is easily disturbed but quickly reverts to its original form. While agents might die (or go extinct), new agents come into the system and preserve the integrity of the pattern.

Tagging is the mechanism of identifying an element of an aggregate. To some extent, this could mean no more than calling the aggregate by a specified name; in terms of a cas, however, tagging usually means putting some mark of identification on an agent, such as an actual tag on a wild animal. Tagging allows agents to "discriminate" between other agents, and allows the observer to discriminate among all of the agents. Internal models are unique to each cas and serve as the system's basic schema. An internal model takes input and filters it into patterns which the system can use to change itself; after one such occurrence, the agent should be able to anticipate the outcome of the same input if it occurs again. Tacit internal models only tell the system what to do at the current point, whereas overt internal models are used to explore alternatives and look ahead to the future. The mechanism of building blocks begins with the decomposition of a complex system into simple parts. These parts may then be reused and combined in any number of ways; this reusability leads to repetition, which leads to patterns. An example Holland gives is that of facial features: all faces can be dissected into elements such as eyes, nose, and ears, which can then be combined, mixed, and matched in a building-block fashion.

As is quite evident, these properties and mechanisms are essential to the idea of emergence and to modeling emergent phenomena, which is what complex adaptive systems came to be called. When looking at phenomena that have emergent properties, we do not ask what is unique about a given phenomenon; rather, we compare it to other emergent phenomena and ask what they have in common. In essence, we simplify it into a preexisting category; we use the property of aggregation. Many emergent phenomena display the property of diversity, but the most evident is the Game of Life. On a stable board there are often many types of agents, all working in conjunction with each other; each fulfills its own purpose, but that purpose depends on the purposes of those around it. Once a board stabilizes, it remains stable until an external agent acts upon it, after which it stabilizes again. The mechanism of building blocks seems so instinctive that it appears too obvious to include, yet its exclusion would be disastrous. What is an emergent phenomenon without a pattern? These patterns arise out of the building blocks inherent in ecosystems, in economic systems, in the immune system and in social systems.
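
As an illustration of the stable boards described above, here is a minimal Python sketch of a single Game of Life step (it is not from the book, and the "block" pattern is just one standard still life); it shows a configuration that persists unchanged until something outside the rules disturbs it.

from collections import Counter

# Minimal Conway's Game of Life step (standard B3/S23 rules), used only to
# illustrate that a stable configuration persists generation after generation.
def step(live):
    """live: set of (x, y) cells that are alive; returns the next generation."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the 2x2 "block", a classic still life
print(step(block) == block)                # True: the block is stable until disturbed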

Holland does an excellent job of communicating both the seven basics and how they are applicable in different settings and situations. Understanding these properties and mechanisms is essential to understanding emergence because without them we would have no hope of modeling emergent phenomena. Holland found this when he was developing Echo, his modeling system built around genetic algorithms. By using them, he was able to simplify the process of creating Echo and at the same time demystify complex adaptive systems for everyone else. Not only do these properties help us model emergence, but they clarify what forms emergence might take in the world and help us recognize them.




Full Name:  Bhumika Patel
Username:  b2patel@brynmawr.edu
Title:  Per Bak's How Nature Works
Date:  2006-04-03 13:10:58
Message Id:  18802
Paper Text:
<mytitle> Emergence 2006
Reviews of Relevant Books
On Serendip

At the center of Per Bak's How Nature Works is self-organized criticality, an idea that endeavors to explain why nature is complex rather than simple, as the laws of physics would imply. Bak claims that this is a new way of viewing nature, one in which nature is constantly out of equilibrium yet ordered in a critical state where anything allowed by the statistical laws can occur. Bak argues that the complexity of nature is a result of its predisposition to evolve into a critical state in which a negligible disturbance can cause events, referred to as "avalanches," of varying sizes. The critical state is attained through the dynamic interactions among individual components in a system; hence this state is self-organized as well as independent of any outside agent. Bak claims that self-organized criticality is thus far the only known mechanism for creating complexity.

Bak's paradigm for self-organized criticality is the formation of a sand pile, where the sand pile represents a complex system. Initially, as sand grains are trickled down, they remain close to where they land and the pile appears flat. As time progresses the pile becomes steeper, and individual grains begin to cause minor slides around the pile. When the pile reaches its maximal steepness, further trickling of sand leads to avalanches, slides that carry grains down most or all of the length of the pile. At this point the sand pile has reached the critical state, far from balance, and its behavior can no longer be explained in terms of the actions of individual grains; the avalanches are dynamic, and their behavior can only be explained by examining the properties of the pile as a whole.
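
A rough sense of the sand pile dynamics can be had from a short simulation. The sketch below is a minimal Bak-Tang-Wiesenfeld-style sandpile in Python; the grid size, toppling threshold, and number of drops are arbitrary illustrative choices rather than values from the book.

import random

# Minimal sandpile sketch: grains are dropped one at a time; any site holding 4 or
# more grains "topples," sending one grain to each neighbor. The number of topplings
# triggered by a single drop is the size of that avalanche.
N = 20
grid = [[0] * N for _ in range(N)]

def drop_and_relax(grid):
    x, y = random.randrange(N), random.randrange(N)
    grid[x][y] += 1
    avalanche = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4
        avalanche += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:      # grains falling off the edge are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return avalanche

sizes = [drop_and_relax(grid) for _ in range(20000)]
print(max(sizes))   # once the pile reaches the critical state, rare large avalanches appear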

Bak also points out that sand piles display their own punctuated equilibria, an idea that evolution occurs in spurts instead of gradually. As sand is being trickled down, there are long periods of time where there is little or no activity. These states of apparent equilibrium are interrupted by sudden bursts of sand slides that affect the whole system. Bak says that the avalanches observed in a sand pile are very similar to the punctuations in evolution. According to Bak, avalanches or punctuations are the trademark of self-organized criticality. Furthermore, Bak claims that since complexity is seen everywhere, nature functions at the self-organized critical state and complexity observed across the sciences can be explained in a manner analogous to the sand pile system.

Bak defines complexity as variability and says that a theory of complex systems has to be abstract: all possible scenarios are considered, and there are no references to individual components of the system. Bak believes the theory also has to be statistical and probabilistic, since specific details about the system cannot be obtained. Lastly, Bak believes that a theory of complexity must be able to explain general observations across individual sciences that cannot be understood within the realm of any particular science. The examples of general observations used by Bak are the occurrence of catastrophic events, fractals, one-over-f noise (1/f noise), and Zipf's law.

Catastrophic events are encountered because complex systems are composite: components of the system can affect each other through a domino effect, and earthquakes, for example, are caused by cracks proliferating through the earth's crust in this manner. It has also been observed that nature is fractal, where fractals are geometrical structures with similar features at varying length scales; specific examples in nature include the geometry of mountains, coastlines and trees. 1/f noise can be thought of as fractals in time, and it has been observed in systems as diverse as the flow of the Nile, light from quasars, and highway traffic. Zipf's law says that the magnitude of a system's element is related to the element's rank in the system; it has been applied to systems such as the population of a city in relation to the city's rank and the frequency of a word in relation to its rank. An observed trend in all of these phenomena is that they are emergent and can be described in terms of power laws. Mathematically, a power law appears as a straight line on a double logarithmic plot.
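
The power-law signature can be illustrated with a few lines of Python. The sketch below uses synthetic, heavy-tailed event sizes (not data from the book) simply to show what "a straight line on a double logarithmic plot" means.

import math
import random
from collections import Counter

# Synthetic heavy-tailed sample: floor(1/u) for uniform u gives counts that fall
# off roughly as a power law, which is enough to illustrate the log-log signature.
sizes = [int(1 / random.random()) for _ in range(100000)]

counts = Counter(sizes)
for s in (1, 2, 4, 8, 16, 32, 64):
    if counts[s]:
        print(s, round(math.log10(s), 2), round(math.log10(counts[s]), 2))
# The (log size, log count) pairs decrease roughly linearly: the straight line on a
# double logarithmic plot that signals a power law.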

Since all of these phenomena can be expressed as power laws, Bak argues that they are expressions of a single underlying theory: self-organized criticality. Next, Bak guides the reader through the process that led to his discovery of self-organized criticality, as well as the ways in which the theory has been tested on "real sand piles" and landscapes. For the rest of the book, Bak discusses earthquakes, evolution, economics and traffic jams as applications of self-organized criticality. Using earthquakes as a model, Bak says that the Earth's crust has self-organized to a critical state through plate tectonics, earthquakes and volcanic activity, and he uses the Gutenberg-Richter law as evidence that the crust has gone through this organizational process. Furthermore, Bak argues that earthquakes, volcanic eruptions, river network formation and avalanches are all interlinked; in this way, the Earth's crust can be thought of as a complex system at a critical state, where the criticality is maintained by several different phenomena. Bak also shows that pulsar glitches (sudden changes in a pulsar's rotational velocity), black holes and solar flares are phenomena that operate at the critical state.

Bak says that one can think of the Gaia hypothesis, the idea that all life on earth can be regarded as a single organism, as having self-organized criticality as its underlying principle. On this view, in the critical state all species can be represented as one organism following a single evolutionary path. In this system, an event can lead to the collapse of a large fraction of the ecological network and its replacement by a new stable ecological network; the replacement represents the "mutated" global organism. When the ecological system is at the critical state, all species affect each other yet act jointly as a single organism sharing the same fate. As evidence, Bak points to mass extinctions, in which a meteorite directly affects only a small portion of the overall organism and yet a large portion of it becomes extinct.

Bak then talks about the significance of power laws in connecting earthquakes and evolution. He says that, according to the power law, the longer a place has gone without an earthquake, the longer it can be expected to go before it experiences one; similarly, the longer a species has existed, the longer it can be expected to exist. Bak also mentions that John Conway's Game of Life displays criticality, but that once the rules set by Conway are changed, the Game of Life is no longer critical. He goes on to say that complexity is only observed in the Game of Life when it is in the critical state, since non-critical rules result in simple structures, and he uses this to stress the point that complexity only results from criticality.

There is a similarity between Bak's How Nature Works and Wolfram's A New Kind of Science in that both authors declare that they have come upon a new and original idea that explains all complex systems. Although Bak gives credit to work contributed by his colleagues, the reader is left with the impression that Bak is the one who connects all the pieces. The book itself is an interesting read, as Bak includes his insights into the scientific community. One can also argue that the models presented in How Nature Works are very vague and general, since they overlook important biological and physical factors. Regardless, as Bak points out in the preface of the book, the theory of self-organized criticality must carry some weight, since more than 2,000 papers had been written on self-organized criticality between the time the idea was proposed and the time this book was published, making the original paper the most cited paper in physics during that period.



Full Name:  Sarah Malaya Sniezek
Username:  ssniezek@brynmawr.edu
Title:  Emergence and Blink
Date:  2006-04-03 16:08:09
Message Id:  18805
Paper Text:

<mytitle>

Emergence 2006

Reviews of Relevant Books

On Serendip

Malcolm Gladwell's Blink is about one's "adaptive unconscious," which is responsible for making snap decisions through thin-slicing. Blink's three tasks are: 1) to convince the reader that snap decisions, made unconsciously, are every bit as good as decisions made cautiously and deliberately (consciously), 2) to teach the reader when to trust our instincts and when not to, and 3) to convince the reader that snap decisions and first impressions can be controlled through practice (14-15).

Gladwell discusses and gives many examples of "thin-slicing," "the ability of our unconscious to find patterns in situations or behavior based on very narrow slices of experience" (23). By narrow slices, he means minute details. For example, Gladwell describes a study done by John Gottman. Gottman developed a system by which, after observing a married couple talk for an hour, he can predict with 95% accuracy whether they will be divorced within 15 years. Gottman is able to train himself as well as others to thin-slice certain characteristics, such as contempt, and make a remarkably accurate prediction. Gladwell also describes how in WWII the British hired women interceptors to listen to coded German broadcasts; soon enough, these women were able to recognize the different German operators through their distinct ways of sending code. Another example of thin-slicing is a study by Gosling, which concluded that strangers do much better at assessing a person on the Big Five personality traits than that person's friends do. These people, John Gottman, the women interceptors, and the strangers, were able to attack "...the question sideways, using indirect evidence...and their decision-making process was simplified: they weren't distracted at all by the kind of confusing, irrelevant information that comes from a face-to-face encounter. They thin-sliced" (39). Snap decisions are made very fast from the thinnest slices of experience. The problem with thin-slicing is that many people are unaware of how they used it to make their snap judgment; thin-slicing is an unconscious thought process, and it occurs "behind a locked door".

Not being able to trace back how we made a snap decision is difficult for us to understand and can confuse us even more. Gladwell uses the example of Vic Braden, a top tennis coach, to show how difficult it is to get into our unconscious. Braden, while watching any tennis match, is able to call a double fault with remarkable accuracy before the ball is even hit. He was in complete shock at how he could do this; he could never get through his locked door to figure out why he could tell when a tennis player would double fault. "The evidence he used to draw his conclusions seemed to be buried somewhere in his unconscious, and he could not dredge it up" (50). Braden is not the only one who feels this way. Since it is difficult to backtrack to how one came to a snap decision, many people have their own theories about how or why they do what they do. Andre Agassi had this problem when describing how he performed athletically. Agassi, like many pro tennis players, would say that he was so good at hitting the ball because he rolled his wrist back. In fact, when Braden did extensive studies, he found that Agassi, and most other professional tennis players, almost never moved their wrists until after the ball was hit. It was surprising to Braden that so many professionals could be confused about why they performed better than others. In fact, we all have these storytelling problems. "We're a bit too quick to come up with explanations for things we don't really have an explanation for" (69).

It seems that we all have issues with explaining why we do things or how we come to certain snap judgments. Gladwell says that "...allowing people to operate without having to explain themselves constantly...enables rapid cognition" (119), and that having them explain themselves causes verbal overshadowing. This can be seen in an example Gladwell gives about picturing faces versus describing them. Gladwell states that the left hemisphere of one's brain thinks in words and the right hemisphere thinks in pictures. If one were to describe the face of the last waitress one had while eating out, rather than just picturing her, it would push one's thinking from the right hemisphere to the left. If one were then asked to identify the waitress in a lineup, one would refer back to the verbal description rather than to the picture held in mind. This causes a problem because recognizing faces is meant to be a snap decision, not a long, drawn-out one: people who describe the waitress first tend to be unable to pick her out of a lineup, while those who simply picture her can. "In short, when you write down your thoughts, your chances of having the flash of insight you need in order to come up with a solution are significantly impaired" (121).

Another significant issue with thin-slicing and snap decisions is priming. Priming occurs when certain associations are activated before one performs a task. Gladwell discusses how there have been many cognitive studies on priming and its effects on people, and these studies have shown that people are quite susceptible to it. An example is the Warren Harding error. Warren Harding was a president of the United States who won the presidential election largely on his looks; Harding's appearance, tall, dark and handsome, seems to be primed within our unconscious by our society. Studies have shown that most CEOs of the top Fortune 500 companies are just above six feet tall, suggesting that in our society we have been primed to go for the person who "looks" most qualified for the position. Many other studies, such as John Bargh's experiments on powerful associations, have shown the same priming effect: Bargh primed people with words associated with old people, and the participants' behavior changed to resemble that of older people. One major everyday example of priming is marketing. Marketers are well aware of priming studies and use the findings to their advantage; they understand that people hold certain unconscious opinions and make sure to exploit them. This is the dark side of thin-slicing: even if people consciously feel that they are not affected by racism, marketing, and so on, they are unconsciously affected, and there are tests to prove it.

Lastly, Gladwell gives examples of people who are good at thin-slicing in order to help the reader improve at harnessing it. Gladwell's main advice for improving snap decisions is either to pay attention only to important information and block out the rest, or not to allow yourself to know the unnecessary information in the first place. He also discusses how practice makes perfect and how experts seem to have an upper hand in snap decisions. He says, "This is the gift of training and expertise-the ability to extract an enormous amount of meaningful information from the very thinnest slice of experience...Every moment-every blink-is composed of a series of discrete moving parts, and every one of those parts offers an opportunity for intervention, for reform, and for correction" (241).

Since I came to Bryn Mawr and took my first College Seminar class with Professor Grobstein, I have thought about this sort of thinking. Why are we able to do things and make decisions without really thinking? Gladwell does an excellent job of describing how we come up with these snap decisions through the process of thin-slicing. I wonder, though, whether this thin-slicing really happens in our unconscious, or in some other, new part of our brains. Could it be that there is just something inherent within us that makes these decisions with whatever information it has, or does it really thin-slice through our experiences? Is thin-slicing really an unconscious experience, or is it an example of our brain analyzing a familiar experience so fast that our logical reasoning cannot keep up? And when thin-slicing happens, does it only take thin slices of our experiences and put them together, or does it know what material to thin-slice from experience? Does everything we have ever learned affect our snap decisions, or just things relating to the specific question?

Gladwell's Blink has many good points about how the "adaptive unconscious" functions and how it is important to human beings for survival, but it also has many flaws. Why does this thin-slicing cause us human beings so many problems? It can be affected by numerous things, but Gladwell states that we can control it if we practice. Is it truly possible to control our thin-slicing to make better snap decisions? I believe that this is partly true. Gladwell's description of the "adaptive unconscious" has really helped me put my prior thoughts about this quick decision making into a category. I truly believe that there is this sort of thin-slicing going on, but is it really in the unconscious, or in some other part of the brain? I would like to think that it is the unconscious, but it seems too simple for this adaptive unconscious alone to make up this part of our decision making. Maybe it could be a "pseudo-unconscious," partly unconscious and partly conscious. What really needs to be defined a bit more clearly is the difference between the unconscious and the conscious. Also, is it even possible to have something in the middle, and is this adaptive unconscious that part?

There are three main issues that I have with Gladwell's concepts. Firstly, Gladwell does not account for people who are not experts yet make good snap decisions. How is it that there are people, many of us, who are capable of making good snap decisions without having any expertise in the subject? Is it because of chance? It can't be chance. Is it because snap decision making is truly not unique to experts but rather to people who have learned to hone this skill? I am not quite sure why this is the case, but I would think that it is because the person is able to thin-slice the most relevant information and make the best decision. However, it seems that the more experience one has with a subject, the more accurate the thin-slicing.

Secondly, Gladwell says it is important to pay attention only to important information and block out the rest, or not to allow yourself to know unnecessary information at all. I am not quite sure how that is done. How could one know what information is necessary and what is unnecessary for making the snap decision? Is it possible that this is inherent in some people and not in others? I think this relates to my first issue, in that maybe the non-experts who make snap decisions well have some ability to just inherently know what information is necessary and what is not. I highly doubt that, but I suppose it is possible. Also, if snap decisions are made in the "blink" of an eye, how can one have the time to consciously filter information? I feel that the same function that makes the snap decision has to do the filtering. This is where I think experience relates to snap decision accuracy.

Lastly, is this ability to thin-slice and make snap decisions higher or lower than logical reasoning? In other words, is this type of thinking more basic than conscious logical reasoning, or more complex? Do other species use this same type of thinking? I would like to think that this type of thinking is on a higher level than what we understand, but then how is it possible for other species to also possess these qualities? I think that other species do function on this level and have this "adaptive unconscious." Maybe snap decisions are how species without a cerebral cortex make decisions. I am not sure about this, but I do think it matters what one means by "more elaborate." I feel that other species have more information to filter through than we do, and that is what makes their version more elaborate.

When reading this book I found myself trying to piece through all the information and connect it with emergence. What I have taken from it is that the process of thin-slicing seems so simple, yet it produces huge snap decisions that we are unable to understand; in that sense, snap decision making is a type of emergence. We make many of these snap decisions every day, but are unsure of how we got there. I feel that Gladwell did a decent job of trying to explain how we arrive at our snap decisions through thin-slicing our experiences, but he did not give a clear path back to figuring out how we got to them. It is all very general, and I think it is important that Gladwell has made a move towards trying to figure out how we reach these snap decisions. What I want to understand better is exactly what parts of this thin-slicing we take in to make these snap decisions. Is it possible that we could never thin-slice and still get the same results? Gladwell also suggests that we are able to improve at thin-slicing by trying to be more aware of its errors. If this is the case, then is it possible that all types of emergence can be improved and figured out by using our memory to think about the different things that might have made up whatever it is we are studying? I do not think this applies only to the "adaptive unconscious" type of emergence; I feel that it applies to the study of all types of emergence. However, I also feel that it is impossible to completely understand and figure out all the simple things that make up something vastly more complex. It is difficult to fathom how certain simple things interacted with each other and the environment. There are some cases where we could try to do this, but we will never have the complete answer; instead we will have it "less wrong".



Full Name:  Peter O'Malley
Username:  pomalley@haverford.edu
Title:  Sync: The Emerging Science of Spontaneous Order
Date:  2006-04-05 00:03:38
Message Id:  18837
Paper Text:

<mytitle>

Emergence 2006

Reviews of Relevant Books

On Serendip

In Sync, the mathematical biologist Steven Strogatz looks at emergence in a fundamentally different manner: through synchronization. He starts off with a number of examples, all of which involve the spontaneous emergence of synchronization from systems of what he terms coupled oscillators. The prototypical example that he offers is that of fireflies along riverbanks in Southeast Asia which blink in unison after dark. When these were first discovered, according to Strogatz, Europeans at home did not believe the journals of explorers describing these sights in awe. Over the years various explanations were proposed, including one in the 1920s that posited that the effect was an artifact of the human visual system, but none proved satisfactory. It was only recently realized that the fireflies sync themselves naturally: in one experiment, fireflies were isolated and then slowly reintroduced to each other. At first, each firefly blinked at its own pace, but then pockets of synchronization started to form until eventually they all blinked in unison. Strogatz then discusses various possible mathematical models for these fireflies, along with their realism and applicability. Finally, he extends the analogy to the so-called pacemaker cells of the human heart.

This is the basic format in which Strogatz tells his story of synchrony. He introduces a real-world example that he or others have found interesting—and it almost always is interesting—then proceeds to describe computer or real world models and experiments that have successfully or otherwise described the phenomenon, introduces the concepts behind any mathematical description of the system, including whether or not one is even possible, and concludes with an analysis of its usefulness, and any links that it has to other phenomena of synchrony.

It is the synchrony of the human heart with which Strogatz frames his book. He introduces the pacemaker cells of the heart, the sinoatrial node, early in the book as a group of relatively simple heart cells that, together, ensure that the heart beats in tempo. The challenge of synchrony here is to get all the cells to release their electrical charge at the same time, and to have this repeat steadily. A relatively simple model for these cells, proposed by Charlie Peskin, treats them as identical, coupled, oscillating RC circuits—something with which every physics student is familiar. Charge builds up in a circuit until it surpasses some threshold, much like a neuron, after which it fires and adds a little bit of charge to every other circuit. It is not difficult to show through a computer model that these circuits will synchronize spontaneously, and Strogatz even outlines, qualitatively, the mathematical proof that he and Peskin came up with.
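
The flavor of such a model can be conveyed with a short simulation. The following Python sketch is a pulse-coupled "integrate and fire" system in the spirit of the circuit model described above; the threshold, drift, and kick values are illustrative assumptions, not Peskin's or Strogatz's actual parameters.

import random

# Each oscillator's state drifts upward; on reaching the threshold it fires, resets
# to zero, and kicks every other oscillator upward. The values below are arbitrary.
N, THRESHOLD, DRIFT, KICK = 10, 1.0, 0.01, 0.02
state = [random.random() for _ in range(N)]

def tick(state):
    """Advance every oscillator one step and resolve any chain of firings."""
    state = [s + DRIFT for s in state]
    fired = []
    while any(s >= THRESHOLD for s in state):
        i = next(j for j, s in enumerate(state) if s >= THRESHOLD)
        fired.append(i)
        # the firing oscillator resets; every other one is kicked upward
        state = [0.0 if j == i else min(THRESHOLD, s + KICK)
                 for j, s in enumerate(state)]
    return state, fired

largest_group = 0
for t in range(20000):
    state, fired = tick(state)
    largest_group = max(largest_group, len(fired))
print(largest_group)   # groups absorb one another, so ever-larger simultaneous firings appear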

An effective analogy, and one to which Strogatz returns throughout the book, is that of runners on a track. Each runner has his own speed, which is analogous to the frequency of an oscillator, and all the runners shout at and are heard by every other runner, which is analogous to the coupling between the oscillators. Depending on the initial conditions and the setup of the coupling, a group of runners may synchronize into a single block all running at the same speed, fall into chaos with everybody running on her own, or anything in between. Strogatz tells of the work of Wiener, who came up with this analogy; of Winfree, who discovered that in many situations there is a sudden phase transition; and of Kuramoto, who showed that with an isotropic track and homogeneous rules between the runners, there was either a chaotic solution or a solution where the runners grouped into three packs of varying speeds.

A subtle but fundamental difference between this model and the RC circuit model lies in the coupling mechanism. Whereas in the circuit model every oscillator affects the phase of every other oscillator, in this model the oscillators affect each other's frequencies. Which of these is more correct depends, of course, on the physical system that one is trying to model. The frequency-modification model is the more applicable to another very real-world system: the human body.
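
A frequency-coupling model of this kind can also be sketched briefly. The Python fragment below follows the standard Kuramoto form, in which each oscillator's effective frequency is nudged by all the others; the coupling strength, time step, and frequency spread are illustrative choices, not values taken from the book.

import math, random, cmath

# Kuramoto-style sketch of the "runners on a track" analogy: each oscillator has its
# own natural frequency, and the coupling pulls it toward the rest of the group.
N, K, DT, STEPS = 50, 1.5, 0.05, 2000
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
omega = [random.gauss(1.0, 0.1) for _ in range(N)]       # natural frequencies

def order_parameter(theta):
    """r = 1 means perfect synchrony; r near 0 means the runners are spread out."""
    return abs(sum(cmath.exp(1j * t) for t in theta)) / len(theta)

for _ in range(STEPS):
    new_theta = []
    for i in range(N):
        coupling = (K / N) * sum(math.sin(theta[j] - theta[i]) for j in range(N))
        new_theta.append(theta[i] + DT * (omega[i] + coupling))
    theta = new_theta

print(round(order_parameter(theta), 2))   # with coupling above the critical value, r climbs toward 1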

Strogatz spends quite a bit of time on what he terms "human synchrony." The circadian cycle, which left to itself actually runs a bit longer than 24 hours, is fundamental to many human processes, not least of which is sleep. It turns out that one of the most reliable measures of the circadian cycle is body temperature, which fluctuates roughly like a sine wave over 24 hours. Humans in complete isolation tend to lengthen their circadian rhythm to around 26 hours, so the question is: why do humans' sleep and temperature cycles have a period of 24 hours? It must be the influence of the sun, and brain imaging and experiments on rats have shown that there is a command center for the circadian cycle in the brain, directly linked to the eyes.

The second half of Strogatz' book deals with non-biological synchrony, and there are many examples. Again, he starts with a historical example: Huygens, a Dutchman, and the negative feedback that synchronized his pendulum clocks. He proceeds to describe many examples, including lasers, the power grid, GPS devices, the moon's rotation, and Jupiter's orbit and its relation to Kirkwood gaps in the asteroid belt.

He spends the most time, however, on one fascinating example of synchrony: superconductivity. He goes into quite a bit of detail about how superconductivity works, perhaps too much, and describes the Josephson effects and superfluid liquid helium. He also relates the practical uses of superconductivity, including the SQUID, with its ability to detect tiny magnetic fields, and various imaging techniques. Strogatz also describes synchrony in a place where one would least expect it: chaos. Mathematically chaotic systems, of "butterfly effect" fame, will apparently synchronize with themselves. This has no known uses yet, as all the methods of encryption that have been devised with it are easily cracked with a Fast Fourier Transform.

It is with chemical synchrony, however, that Strogatz comes full circle and returns to the subject of the human heart. He tells a personal story of his work as a grad student with Winfree on chemical waves or, more formally, "wave propagation in excitable media." He and Winfree used a combination of mathematical models, computer models, and plain old experimentation to show that these waves in three dimensions form what are known as scroll rings. With a detour through topology and advanced computer modeling (in 1982), he describes all of the forms that emerge from simple waves, although these are not the relatively well-understood kinds of waves commonly studied by physicists. What is most important, however, is that these waves provide a model for how the pacemaker cells in a heart can get out of whack and how the disturbance can propagate, in what is known as tachycardia. It is not known, though, whether these waves can provide a model for, or be the cause of, fibrillation and cardiac death.

Strogatz closes his book with human sync: not in the biological sense this time, but in the social sense, and with the question of whether networks exhibit synchrony. He describes the small-world problem by using a common parlor game, six degrees of Kevin Bacon, in which players try to relate any given actor to Kevin Bacon in the fewest number of steps possible. There are two fundamental types of network that Strogatz describes: the tightly, locally clustered one and the random one. He explores the networks found in between, those that have some local clustering and some randomness, and how they apply to the phenomena of human sync. Although he presents no definitive results, he theorizes that the synchrony of networks may be fundamental to many phenomena of human sync: fads, traffic, the World Wide Web, the spread of disease, and even applause.

In Sync, Strogatz presents a fascinating and entirely readable book about emergent synchrony. He includes personal anecdotes about the scientists involved—apparently, the Nobel laureate Josephson is now exploring the physics of the paranormal—and expert descriptions of theories without a single equation—though I, personally, would have preferred to see a few. What I found particularly intriguing was the different take on emergence that Strogatz had throughout the whole book: he knew exactly what he was looking for, and it was synchronization. This allowed him to provide an entirely different list of emergent phenomena than we discuss in class, though he would call them "phenomena of synchrony"—one man's emergence is another man's synchrony. I recommend this highly interesting book to all who are interested in a different take on emergence.



Full Name:  Angad Singh
Username:  adsingh@haverford.edu
Title:  Tensions in Agency: Jane Jacobs and Emergent Thought
Date:  2006-04-05 18:24:50
Message Id:  18849
Paper Text:

<mytitle>

Emergence 2006

Reviews of Relevant Books

On Serendip

While Jane Jacobs' The Nature of Economies is replete with references to emergence theory, it focuses primarily on the application of emergent thought without actually referencing it as such. Explicitly referenced in only one instance(1), the novel nevertheless draws significant insights from what could be considered an emergent perspective or lens. This essay will uncover the theoretical underpinnings of Jacobs' novel as they relate to emergence. The exploration will focus on tensions in Jacobs' thoughts on agency(2).

One contentious issue in emergence theory is that of agents, or agency. In the manner of Armbruster, a character in the novel fixated on definitions, an agent could be described as an encapsulated entity operating under some set of rules. In the NetLogo modeling environment, a turtle is a constructed agent able to operate under unique parameters. In a similar sense, living organisms could be construed as agents operating under rules different from those that govern the inanimate. Jacobs' narrative ventures near acceptance of agents and agency, but does so in a nuanced and contradictory manner worth analyzing.

The Nature of Economies often describes animate objects, such as humans, as individuals capable of altering their environments. This line of thought is tempered, however, by a refusal to concede that agents are anything more than useful constructs. Nor does the book speculate far in the other direction by considering agents to be merely patterns, such as a glider in Conway's Game of Life (3). The opposing perspectives are both broached at times, and a delicate balance is achieved between the two. Jacobs, through the character Kate, asserts that inanimate and animate development depend on the same underlying processes. At the same time, she consistently refers to the animate and inanimate distinctly. The theoretical argument Jacobs appears to be making in this instance is: while there is no substantive difference between the animate and inanimate, the construction of animate agents is useful. While the construct of an agent is consistently applied throughout Jacobs' novel, her characters just as consistently argue that the same universal principles and processes govern animate and inanimate reality. What initially appears as an inconsistency in Kate's thoughts on agents is actually her concession that the false construct of encapsulated agents is nonetheless a useful one.

Though surrounded by intelligent and accomplished characters, the primary fount of wisdom in Jacobs' novel is Hiram, a proponent of biomimicry. Biomimicry is the use of natural processes to achieve desirable ends, such as using human hair to clean oil spills. The underlying supposition of biomimicry, however, is partially at odds with the conception of agency detailed above. Biomimicry presupposes a dichotomy between natural processes and unnatural human constructs; its very aim is to make human constructs and processes more natural. But if one holds that agents are but emergent patterns fundamentally based on the same principles and processes as non-agents, then the human agent is natural to its very core. As Kate argues, accusing human action of being unnatural or artificial is "like accusing spiders of artificiality because they're spinning something other than cotton, flax, silk, wool, or hemp fibers" (9). There is no alternative but to operate under universal processes and principles, for agents and non-agents alike. This claim, however, is not nearly as nihilistic as it may appear on the surface. Beneath the 'everything is natural' veneer lies prudent resolve.

This tension on the topic of agency could be construed as theoretically central to Jacobs' novel. Her characters consistently struggle to reconcile their wanton ambitions for development with a delicate and versatile conception of agency. Hiram and Armbruster debate the requisite preconditions and universal processes governing ecologic and economic development. The motivation for their analysis is one of free will, or agency: the purpose of understanding development is to better inform ecologic and economic decisions. While all of Jacobs' characters concede that development follows universal principles and processes, they also often retain faith in free will. In this perspective, an agent may consciously shape the direction of development while still following universal principles and processes. Though it is a delicate argument to make, Jacobs may be hinting at the difference between free will and encapsulation. If an agent is provided limited encapsulation and variability in its responses, it does not necessarily have free will; an encapsulated agent could simply be a more complex cellular automaton. As the characters struggle to determine the proper course for development, they are playing with notions of agency and free will (4).

A community of thriving agents requires equilibrium in number and environment to achieve dynamic stability. In the words of Hiram, "both the competition and the arena for competition are necessary" (123). A parasite killing its host effectively dashes its hope for dynamic stability. In the case of stability in number, Jacobs describes four methods by which it is achieved: bifurcation, positive-feedback loops, negative-feedback controls, and emergency adaptations. Bifurcations loosely fall under the category of a change in rule sets, such that the progression of an agent deviates by making a change in operation. The feedback controls are analogous to multi-leveled learning in NetLogo programming, where certain behaviors are either encouraged or discouraged. It is important to note that the feedback controls are not discriminatory, meaning that positive behaviors can be negatively mitigated while negative behaviors may be positively reinforced. The fourth method to achieve stability in number, emergency adaptation, again tests the emergent notions of free will, agency, and encapsulation.

Emergency adaptations are mobilized when an agent is confronted with a dire threat. Seasonal variations in temperature induce hibernation in certain species of bears, but periodic hibernation is not considered an emergency adaptation; abnormal, unique, or unexpected threats are what call for emergency adaptations. As described by Hiram, the primary emergency adaptations required of humans are speed and improvisation, traits that are in many individuals' repertoires. Under extenuating circumstances, agents make required and often drastic emergency adaptations to satisfy the demands of external pressures. Again, this presupposes an encapsulated agent reacting to pressures outside itself. In her subtle way, Jacobs is also suggesting that agents do less than switch rule sets: instead of switching to an entirely novel rule set, able agents simply begin operating under rules already within their repertoire that previously lay dormant. The activation of these rules occurs not through conscious decision but through changing environmental conditions.

This stands in marked contrast to the stance described earlier with regards to encapsulation and free will. The conversational, didactic structure of the text, however, lends itself to such tensions and internal contradictions. By placing certain arguments, whether explicit or otherwise, in the mouths of different characters, Jacobs is able to weave together disparate notions of agency. From insinuating agents are little more than patterns and expedient constructs to utilizing agents as discrete, autonomous entities, the inconsistencies of the novel provide readers with an opportunity to establish their own interpretation by reconciling the various viewpoints. Or maybe the lesson of the day is simply to utilize the most effectual notion of agency for each particular situation. Perhaps we should just relish its multiplicity and exploit it when suitable.

Endnotes
1. Jacobs describes the emergence of a multi-cellular organism from undifferentiated cells
2. As mentioned in an online forum, it is interesting to note that women and minorities are more likely to focus on the application of emergent analysis than on emergence theory.
3. In Conway's Game of Life, gliders can appear to behave as agents. Because they are not encapsulated and do not operate under different rule sets, however, gliders are little more than patterns misconstrued as discrete objects. The human predilection for pattern reifies their existence, imputing to them autonomy and agency.
4. It could be argued that the agents are not in fact operating under a different rule set. The encapsulated agent may operate under the same rule set, but because of different environmental conditions, it may activate a different subset of rules. A simplistic example could be an acidic encapsulation within a basic environment. While ions in both environmental conditions operate under the same rule set regarding charge transfer, only rules pertinent to the given environmental condition will be in effect. In this case, however, the characters give no indication of this argument.

Works Cited

Jacobs, Jane. The Nature of Economies. Modern Library, New York: 2000.



Full Name:  Ben Koski
Username:  bkoski@haverford.edu
Title:  Jacobs' Creative Self-Organization
Date:  2006-04-06 09:37:49
Message Id:  18873
Paper Text:
<mytitle> Emergence 2006
Reviews of Relevant Books
On Serendip

One of the most intriguing ideas presented in Jane Jacobs' The Nature of Economies is the concept of "creative self-organization" (137). In discussing the seemingly unpredictable behavior of many large, complex systems, Jacobs posits that "even if every single influence on some types of complex systems could be accurately taken into account, their futures would still be unpredictable" (ibid.). That is to say that even if we were able to measure all possible inputs into a complex system—and had the computational power to analyze all such inputs—we could not develop an algorithm to effectively model the system. Indeed, it could be said that such a system exhibited properties of emergence, since the only way to predict the state of the system at time t would be to actually run the system.

This, I believe, represents a major departure from paradigms of emergence that we have seen so far in our studies: though the scholarly work that we have seen grapples with the emergent properties of cellular automata, agent-based modeling, and evolutionary algorithms (that is, the idea that these systems are non-deterministic and so can only be predicted through simulation), this work does not account for a system that is "making itself up as it goes along" (ibid.). Conventionally, the study of "emergent" phenomena focuses on systems that cannot be predicted because the final output depends on a connected series of inputs. The output of a CA cannot be predicted deterministically because it depends on the repeated application of a ruleset on a series of intermediate outputs. The final result of an agent-based model such as Langton's Ant cannot be predicted deterministically because the state of the environment at time t requires the application of a rule over all previous time points.
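
The point about Langton's Ant can be made concrete with a few lines of Python. The sketch below is a standard implementation (not tied to any particular text) with no shortcut for predicting the ant's position: the only way to know the state at step t is to run every step before it.

# Minimal Langton's Ant: on a white cell turn right, flip it to black, move forward;
# on a black cell turn left, flip it to white, move forward.
def run_ant(steps):
    black = set()                      # cells currently flipped to black
    x, y, dx, dy = 0, 0, 0, -1         # position and heading (start facing "up")
    for _ in range(steps):
        if (x, y) in black:            # on black: turn left, flip cell back to white
            dx, dy = dy, -dx
            black.remove((x, y))
        else:                          # on white: turn right, flip cell to black
            dx, dy = -dy, dx
            black.add((x, y))
        x, y = x + dx, y + dy
    return (x, y), len(black)

print(run_ant(11000))   # after roughly 10,000 steps the ant settles into its repeating "highway"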

Yet we do agree that running an "emergent" CA or agent-based model over and over with the same parameters should yield the same results. We should be able to—and can—replicate Wolfram's experiments simply by applying the same ruleset to the same starting point over the same amount of time. Running a basic version of Langton's Ant with the same parameters should produce the same result at time t, since the output is merely the application of rules over connected inputs. Though different runs of an evolutionary algorithm may produce different "products," it will always yield the same "result" in that the algorithm will reliably end with the satisfaction of a fitness function. Furthermore, genetic algorithms could be viewed as mere applications of a simple rule to a series of connected inputs—albeit with unpredictability introduced. If we had a means of replicating random mutations in an evolutionary algorithm, we would be able to reproduce the results.

Jacobs takes a step beyond all three of these models with the idea that "a system can be making itself up as it goes along" (ibid.). There is no ruleset governing behavior; there is no concept of connected inputs leading to a final output. Indeed, the inputs do not even matter: whether we know the inputs or not, the system is still unpredictable. Unlike other sorts of emergent systems, the system is non-deterministic not because it depends on intermediate inputs that must be computed prior to the result, but because it is simply and utterly unpredictable. This unpredictability is the direct result of the fact that the system is "making itself up as it goes along:" the phenomenon is dynamically changing its rules, process, environment, and fitness function as it progresses. Therefore, no two runs of the system will ever be alike, because no two runs will ever be conducted under the same ruleset, process, or fitness function. The results of such a system are not replicable.

Though this concept of creative organization does not provide a very robust analytical framework for peeking inside these systems or understanding their mechanics, it does allow Jacobs to account for natural systems that exhibit unpredictable behavior. Jacobs' strongest example is that of weather. She draws on Edward Lorenz's famous weather forecasting experiments, in which Lorenz showed that past weather patterns serve as poor predictors of future weather, to demonstrate the inherent unpredictability of weather systems (135). She also makes use of examples from linguistics—"speakers make a language and yet nobody, including its speakers or scholars, can predict its future vocabulary or usages," Jacobs explains—and studies of processes in ecosystems to demonstrate the breadth and existence of systems operating under the control of "creative self-organization" (137, 158).

Though these examples and explanations make sense, there are still uncomfortable points in Jacobs' theory. The most glaring gap in Jacobs' framework lies in the idea of a system doing something: in constructing the idea of a system that is "making itself up as it goes along," Jacobs implicitly introduces some sort of hidden actor that does this "making." In order for something to be "making itself" do anything, there must be some sort of actor taking some sort of action. Whether conscious or not, holistic or narrowly defined, this implicit actor is doing something and participating in the direction of the system. By failing to address what exactly is "making up" the system as it goes along, Jacobs leaves us with some troubling questions. Is there some sort of a master conductor hidden deep within these systems? Moreover, if there is some sort of actor "making up" the system, could it be possible to determine the rule that this actor uses to "make up" the system? Is this agent not bound by certain limits? If we could determine this rule—or even just the bounds of the agent "making up" the system—wouldn't the system then be predictable?

Also unaddressed is the influence of agents within the system. At least two of the examples of "creative self-organization" presented—most notably the case studies from linguistics and ecology—are made up of a large number of independent, individual agents acting collectively. An ecosystem is comprised of plant and animal agents each making independent "plans for the future" that have collective influence; languages are shaped by a number of individual speakers, each contributing their own linguistic idiosyncrasies to the collective language. How can we be sure that this observed unpredictability is not merely the result of a diverse collective agency? Jacobs asserts that these systems themselves are unpredictable, but it is entirely possible that unpredictability results from the complex "co-development webs" that are necessarily present in a large group of independent agents (20). Her failure to address the influence of individual actors—either at the individual or collective level—casts some doubt on the idea of "creative self-organization."

Another important concept raised in Jacobs' work is that of development—economic or otherwise—as an emergent process. Traditionally, development is thought of as the "multiplication" of existing resources to create new, more diverse resources (19). Jacobs, however, posits that one can think of development as "differentiation emerging from generalities" (16). Each new generation—or differentiation—becomes new "generalities from which other differentiations emerge" (ibid.). The diversity generated by the emergence of these new generations is further heightened by "co-developments," or the development of parallel—but independent—processes (19). Jacobs' best example of the influence of co-development is that of a river delta: "A delta needs both water and grit," she explains. "Neither, by itself, can develop a delta and each, by itself, is a result of co-developments" (ibid.).

Jacobs' theory of development has important implications for the study and analysis of economic development. Perhaps most importantly, it debunks the "Thing Theory" (31). Many economic development experts were taught to suppose that "development is the result of possessing things such as factories, schools, tractors, dams, whatever..."—and often express puzzlement when the purchase of these "things" does not lead to successful economic development (ibid.). Jacobs, however, explains that "if the development process is lacking in a town or other settlement, things either given or sold to it are merely products of the [development] process somewhere else. They don't mysteriously carry the process with them" (ibid.). Thus, if economic development efforts are ever to be successful, they must understand development as a process, rather than a thing that can be influenced by the purchase or transfer of other things.

Another important point underscored by Jacobs is the chilling effect of discrimination on economic development. The process of creating new "differentiations" for development depends on the innovation and creativity of economic actors. Jacobs theorizes that discrimination inhibits economic development because a large part of a population "doing [menial work], are excluded from taking initiatives to develop all of that work"—and thus a large proportion of an economy's labor that could be used to develop new differentiations is wasted in inefficient menial labor (33). As Jacobs puts it, "people don't need to be geniuses or even extraordinarily talented to develop their work. The requirements are initiative and resourcefulness—qualities abundant in the human race when they aren't discouraged or suppressed" (ibid.). Limiting these qualities limits the ability of an economy to successfully develop.

To introduce her thoughts on emergence and economic development, Jacobs resorts to setting up her text as a fictional conversation between friends—ostensibly for the purpose of bringing "rarefied economic abstractions into contact with earthy realities" (ix). This decision, however, becomes a critical limit on her work, essentially reducing its import to that of a conversation between friends. Jacobs feels that the use of "dramatic dialogue" to present material excuses her from having to include footnotes and other direct references (ibid.). Though she does include further explanation and references for the most influential examples in the "notes" section, many of the smaller examples introduced in the book are completely undocumented. For example, Jacobs claims that "knowledge of how to choose good transit routes seems to be going extinct, too, judging from cities that construct expensive transportation lines along ridiculous routes, then wonder why they're underused" in an effort to demonstrate the importance of obsolete differentiations introduced by development, but she fails to include any references or further proof to back up this assertion (30). Similarly, Jacobs often relies on the roots and usage history of English words to make her arguments, but fails to cite linguistic authorities to give her observations weight.

Despite the lack of comprehensive references, Jacobs' The Nature of Economies does raise intriguing and important points—particularly in the fields of emergence and economic development.

Works Cited
Jacobs, Jane. The Nature of Economies. New York: Modern Library, 2000.



Full Name:  Joshua Carp
Username:  jm.carp@gmail.com
Title:  The Complexity of Cooperation
Date:  2006-04-08 14:22:04
Message Id:  18904
Paper Text:
<mytitle> Emergence 2006
Reviews of Relevant Books
On Serendip

In The Complexity of Cooperation, political scientist Robert Axelrod presents a collection of academic papers. The research behind said papers was conducted between 1986 and 1996 and appeared in a broad range of journals (American Political Science Review, Journal of Conflict Resolution, and Management Science, among others). All told, they consider abstract representations of conflict (the Prisoner's Dilemma and novel variants), alliances in war and business, and the spread of cultural values. That said, this variegated corpus is unified by a single common insight: that complex collective behaviors can be modeled—with often surprising verisimilitude—by simulating the interactions of their simplest constituents.


That insight, though crucial, is probably not original and certainly not unique, but Axelrod is uncommonly thoughtful about it. Computer simulations of complexity are, at least in some minimal intuitive sense, interesting, but their actual utility in doing science is not fully clear. Axelrod addresses this matter explicitly only briefly. Computer modeling, he writes, is in some ways akin to deductive modes of science: interactions among agents are specified axiomatically, as explicit assumptions, often without regard for real-world circumstances. At the same time, modeling draws from induction: the sorts of parameters included in a model, and often their values, are derived from (formal or informal) empirical study, and the output of the model must itself be analyzed inductively, since results typically cannot be gotten by reasoning from the initial premises alone. So Axelrod thinks of these simulations as in some sense distinct from traditional kinds of inquiry—something new emerges from the unification of two principles of science. Modeling may be useful, then, because it is a new and qualitatively different tool in our investigative arsenal.


More direct attention is given to the proper epistemological uses of computer modeling. In his earlier papers (with one exception, the papers in this book are presented in chronological order), he is largely concerned with the iterated Prisoner's Dilemma in one form or another1. This is an abstract situation that real social agents are not likely to encounter, especially not, as in the first paper, in the form of round-robin tournaments once per generation. It seems to follow from this that any knowledge gleaned from modeling this kind of game ought only to be considered relevant to the behavior of real agents in the abstract. In his second paper, Axelrod uses genetic algorithms to evolve strategies for noisy Prisoner's Dilemmas. In this variant, one in ten moves is “implemented incorrectly,” and the opposite of the player's choice is selected. In simulations of the noise-free Dilemma, “tit-for-tat” has consistently emerged as the best strategy overall. Tit-for-tat cooperates on the first move and on each subsequent move plays what its opponent played on the previous move. When noise is added, two modified versions of tit-for-tat perform best: generous TFT, which randomly “forgives” some defections by the opponent (i.e., cooperates following the opponent's defection), and contrite TFT, which “apologizes” after its own defections by cooperating. Axelrod attributes the efficacy of both TFT variants to their “error-correcting” properties: pairs of standard TFT players will both cooperate on every play, assuming each cooperates initially, but noise can disrupt this pattern. Both generosity and contrition can correct an unintended defection, returning the game to a configuration where net punishment is minimized. This is well and good, but the question of the model's practical applicability remains. Axelrod cautions against broad application of his results: they describe simple-minded automata, not genuine social actors capable of calculating complicated decision rules. The most appropriate use of this work, he writes, is in informing social science; reciprocity seems to be in a general sense a fruitful strategy for curbing mutually destructive impulses between interacting agents, and generosity and contrition may be of further help under certain conditions. But the values of the payoff matrix are chosen arbitrarily and have no relation to reality; real people and real nations are not truly expected to square off in neat round-robin tourneys; and conflict and cooperation are rarely limited to pairwise interactions. Our best hope is to extract some principle from these models general enough to capture interesting features of both the models and of real life.
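To make the strategies discussed above concrete, here is a minimal sketch of a noisy iterated Prisoner's Dilemma in Python. The payoff values, the 10% per-move noise rate, and the 0.3 forgiveness probability are illustrative assumptions rather than Axelrod's exact parameters; the point is only to show how tit-for-tat and a generous variant reduce to a few lines of rules.

import random

# Illustrative payoffs: temptation (5) > reward (3) > punishment (1) > sucker (0).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, opp_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not opp_history else opp_history[-1]

def generous_tft(my_history, opp_history, forgive=0.3):
    # Like tit-for-tat, but randomly forgives some defections.
    if not opp_history or opp_history[-1] == 'C':
        return 'C'
    return 'C' if random.random() < forgive else 'D'

def play(strategy_a, strategy_b, rounds=500, noise=0.1):
    """Average per-round payoffs when each chosen move is flipped
    ('implemented incorrectly') with the given noise probability."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if random.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return round(score_a / rounds, 2), round(score_b / rounds, 2)

if __name__ == '__main__':
    random.seed(0)
    print('noisy TFT vs TFT:          ', play(tit_for_tat, tit_for_tat))
    print('noisy generous vs generous:', play(generous_tft, generous_tft))

Run under noise, a pair of standard tit-for-tat players tends to fall into stretches of alternating retaliation after an accidental defection, while a pair of generous players tends to recover mutual cooperation, which is the error-correcting property Axelrod describes.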


This is Axelrod's official stance in his earlier papers. But two later papers belie the principle. Applying a strategy he describes as landscape theory, Axelrod constructs models of political and economic alliances, and then tests them against empirically-derived data. His models predict alliances among major powers in World War II and among firms vying to establish a standard implementation of the Unix operating system in the late 1980s. There is an interesting tension here: computer simulations are meant to model situations in general terms and to inform us about laws of interaction that transcend particulars. What are we to make, then, of simulations with (apparently) strong and (apparently) unintended predictive power for real events? It may be that simple simulations are more than good abstract analogues for social behavior. It may be that short-sighted actors bound to minimal rulesets are useful models of complex behavior not because they collapse across the putative cognitive sophistication of real actors but because real actors really behave simply. If this is the case, it is not surprising that empirically informed simulations can predict international politics. Where extant empirical data are adequate to describe the relevant parameters of some simulated situation, prediction seems likely, if not inevitable (the need for good data is not to be neglected, though: there might exist otherwise excellent rulesets that describe interactions in terms of constructs that cannot, presently or ever in the future, be measured).


Axelrod further develops his idea of “myopic” social agents in the papers that follow. His models of alliance are of particular use here. Those models assume that each actor has some known and constant propensity to affiliate with each other actor. Further, those propensities are weighted by the sizes of the actors: agents can tolerate siding against other, highly desirable agents when those potential associates are small. In a given configuration of alliance, where each actor is assigned to one side or the other, each actor has a level of frustration (or energy), representing his propensity to shift allegiance. The actor's frustration for each association is defined as the product of his propensity to ally with that associate, that associate's size, and the distance between the two actors (0 if they are on the same side, 1 otherwise). An actor's total frustration is the sum of his frustration with every other actor; the frustration of the system is the sum of the frustrations of all its actors. Stable configurations of the system are those that exhibit minimal systemic frustration—local minima of frustration. Put another way, alliances should cease to change once all actors become unwilling to change, i.e., when any possible action increases total frustration. These equilibrium states2 could conceivably be discovered by hill climbing, but the system of interest is small enough to map the landscape exhaustively, computing every point.
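The frustration calculation described here is simple enough to spell out in code. The following sketch uses a small invented set of actors, sizes, and pairwise propensities (not Axelrod's historical data) and enumerates every two-sided configuration to find the ones from which no single actor can lower the system's total frustration by switching sides, i.e., the local minima (the equilibria of footnote 2).

from itertools import product

# Invented toy data (not Axelrod's historical measurements): actor sizes and
# pairwise propensities to ally (positive = friendly, negative = hostile).
actors = ['A', 'B', 'C', 'D', 'E']
size = {'A': 3, 'B': 2, 'C': 2, 'D': 1, 'E': 1}
prop = {('A', 'B'): -2, ('A', 'C'): 1, ('A', 'D'): 1, ('A', 'E'): -1,
        ('B', 'C'): -1, ('B', 'D'): -1, ('B', 'E'): 2,
        ('C', 'D'): 2, ('C', 'E'): -1, ('D', 'E'): -1}

def propensity(i, j):
    return prop.get((i, j), prop.get((j, i), 0))

def energy(config):
    """System frustration: for every ordered pair, propensity times the
    partner's size times the distance (0 if same side, 1 if opposite)."""
    total = 0
    for i in actors:
        for j in actors:
            if i != j and config[i] != config[j]:
                total += propensity(i, j) * size[j]
    return total

def is_local_minimum(config):
    """True if no single actor can lower the system energy by switching sides."""
    base = energy(config)
    for i in actors:
        flipped = dict(config)
        flipped[i] = 1 - config[i]
        if energy(flipped) < base:
            return False
    return True

if __name__ == '__main__':
    # Enumerate every two-sided configuration, fixing actor A on side 0
    # so mirror-image configurations are not counted twice.
    minima = []
    for sides in product([0, 1], repeat=len(actors) - 1):
        config = dict(zip(actors, (0,) + sides))
        if is_local_minimum(config):
            minima.append((energy(config), config))
    for e, config in sorted(minima, key=lambda pair: pair[0]):
        print(e, config)

For a handful of actors, exhaustive enumeration like this is trivial; Axelrod's observation that the minima could also be found by hill climbing matters only when the landscape is too large to map completely.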


When the propensity matrix is populated with historical data on the major actors involved in the second World War and when sizes are supplied, two small (i.e., low-frustration) minima emerge. One, the smaller, describes alliances at the beginning of the war correctly, with the exception of assigning Poland to the Axis3. The other predicts an entirely different configuration—but a configuration that Axelrod regards, in retrospect, as not wholly implausible. Assuming that this second state was not an artifact of measurement or modeling, this has some interesting implications for international politics. According to this sort of model, real actors are completely blind to fitness landscapes; they make incremental decisions based on preferences for individual associates, without regard to the system as a whole. Real actors, then, are imagined to reach equilibrium by hill climbing. Given a fitness landscape with a large number of local minima, the final configuration that emerges may be largely a consequence of the initial state of the system. If initial states are random in such a situation, so are final states. Axelrod notes that a drawback of Nash equilibria is that they often describe landscapes with many local minima; perhaps, then, the course of history is not inevitable but instead stochastic.


The usefulness of The Complexity of Cooperation is threefold. Foremost, it presents a body of research that together represents a powerful application of agent-based modeling to a whole class of social problems. Beyond that, it offers thought (though no final resolution) on the proper purpose of this sort of modeling. Finally, and perhaps most usefully, it grants the reader access to nearly all the source code behind the research. Most of the models presented can be rerun, reanalyzed, and altered to fit whatever interests the reader brings to the material. All in all, the book is a rich resource with far more depth than can be covered here.

1The Prisoner's Dilemma, in its original and simplest formulation, describes a two-player game where each player may choose to cooperate or to defect. Payoffs for each combination of moves vary, but in all cases the temptation payoff (where the player defects and his partner cooperates) > reward (both players cooperate) > punishment (both players defect) > sucker (player cooperates, partner defects). In its iterated form, play lasts for an indefinite (randomly determined) number of rounds.

2Nash equilibria, formally.

3Poland had a negative propensity to ally with either the USSR or Germany, the poles of the alliance configuration. Since the USSR was the larger power, Poland chose to align itself with the lesser of its two enemies. Midway through the war, once Axelrod's measure of size rated Germany higher than the USSR, the model predicted that Poland would switch sides.




Full Name:  Kathleen Maffei
Username:  kmaffei@brynmawr.edu
Title:  Six Degrees: The Science of a Connected Age by Duncan Watts
Date:  2006-04-12 13:28:37
Message Id:  18997
Paper Text:

<mytitle>

Emergence 2006

Reviews of Relevant Books

On Serendip

1) Small World Phenomenon
a) 1967 experiment by psychologist Stanley Milgram
i) Small world phenomenon: strangers discover they have a mutual acquaintance
ii) Theory: in the network of social acquaintances, any particular person can be reached through a short number of steps from friend to friend
iii) Test: one letter to each of 100 random people with the goal of eventually reaching a target person. Letters could be passed along only to someone known on a first name basis.
iv) Result: average of 6 steps - 6 degrees of separation
v) Not so surprising? Consider a Branching Network - a person has 100 friends, each of whom has 100 friends, each of whom has 100 friends, etc.
(1) 1 degree of separation = 100 people
(2) 2 degrees = 10,000 people
(3) 3 degrees = 1,000,000 people
(4) Exponential growth of nodes
vi) Real world: many friends share friends - clustering - redundancy in the network
vii) Paradox: while the real world social network is highly clustered, it is still possible to travel the network in relatively few steps

2) Random Graphs
a) Formal theory of random graphs: 1959, mathematicians Paul Erdős & Alfréd Rényi
b) Random Graphs: a network of nodes randomly connected by links
c) Connectivity of Random Graphs
i) Imagine a bunch of buttons tossed on the floor, and imagine that you tie a random number of threads to different pairs of buttons. - When you pick up one button, the buttons that lift off the floor with it are its connected component.
ii) Tie only one thread in a set of buttons: largest connected component is 2 - as a fraction of a large network that's equivalent to zero
iii) Connecting every button to every other button would produce a complete graph (completely inter-connected network)
iv) While the average number of links per node is less than 1, network connectivity is statistically zero because randomly added links are most likely to connect isolated nodes
v) Phase transition at 1: As the average links per node exceeds one, the fraction of nodes in the network that are all connected - that are in the largest connected component - increases rapidly
(1) the critical point is 1: the threshold between relative isolation and a connected network
(2) Phase transitions occur in many complex systems: magnetism, outbreaks of disease, spreading of cultural fads, stock market trends
vi) Importance
(1) An isolated network means local events stay local, but in an interconnected network, local events may affect the entire network
(2) Global connectivity isn't incremental - it occurs rapidly
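A quick way to see this phase transition is to simulate it. The sketch below (Python, standard library only; the network size and degree values are arbitrary choices, not figures from the book) builds random graphs with a given average number of links per node and reports the fraction of nodes in the largest connected component.

import random
from collections import deque

def largest_component_fraction(n, avg_degree):
    """Build a random graph with the given average degree and return the
    fraction of nodes in its largest connected component."""
    links = int(avg_degree * n / 2)
    adj = [[] for _ in range(n)]
    for _ in range(links):
        a, b = random.randrange(n), random.randrange(n)
        if a != b:
            adj[a].append(b)
            adj[b].append(a)
    seen = [False] * n
    best = 0
    for start in range(n):
        if seen[start]:
            continue
        # Breadth-first search to measure this component.
        queue, size = deque([start]), 0
        seen[start] = True
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if not seen[nbr]:
                    seen[nbr] = True
                    queue.append(nbr)
        best = max(best, size)
    return best / n

if __name__ == '__main__':
    random.seed(1)
    for k in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0):
        print(f'avg degree {k:.1f}: largest component '
              f'{largest_component_fraction(10000, k):.2f}')

Below an average degree of 1 the largest component should stay a negligible fraction of the network; just above 1 it should grow rapidly toward encompassing most nodes, which is the critical point described in (v) above.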

3) Random-Biased Networks
a) 1950s: mathematician Anatol Rapoport considered social networks as he studied the spread of disease through human populations
b) homophily - the tendency of people to congregate with similar people
c) A person is more likely to befriend a friend of a friend than a complete stranger
d) Triadic closure - new links in a social network are more likely to create triads
i) Example: if A is linked to B and B is linked to C, when C gains a new link, it has a higher probability to link to A than to another node in the network
e) Unlike random networks, social networks will over time develop triads (a bias away from a random network)
f) Random-biased networks
i) utilize the power of random network theory, while accounting for some of the non-random ordering principles by which people tend to create links
ii) consider the ways in which the network evolves: a network's eventual configuration depends on its current configuration - the probability of a given configuration depends on the prior configuration
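One way to picture a random-biased network is to grow one. In the sketch below, the growth rule, sizes, and bias value are invented for illustration (they are not taken from Rapoport's work): each new node makes one random link and then, with some probability, closes a triangle by linking to a friend of a friend. Comparing the resulting clustering coefficient with an unbiased version of the same growth shows the effect of triadic closure.

import random

def grow_network(n, links_per_step=2, closure_bias=0.8):
    """Grow a network where each new link beyond the first either closes a
    triangle (connects to a friend of a friend) with probability
    closure_bias, or attaches to a random existing node."""
    adj = {i: set() for i in range(n)}
    for i in range(4):                      # small seed ring
        adj[i].add((i + 1) % 4)
        adj[(i + 1) % 4].add(i)
    for node in range(4, n):
        first = random.randrange(node)      # first link: random node
        adj[node].add(first)
        adj[first].add(node)
        for _ in range(links_per_step - 1):
            fof = {f for nbr in adj[node] for f in adj[nbr]} - adj[node] - {node}
            if fof and random.random() < closure_bias:
                target = random.choice(sorted(fof))
            else:
                target = random.randrange(node)
            adj[node].add(target)
            adj[target].add(node)
    return adj

def clustering(adj):
    """Average fraction of a node's neighbour pairs that are themselves linked."""
    total, count = 0.0, 0
    for node, nbrs in adj.items():
        nbrs = list(nbrs)
        if len(nbrs) < 2:
            continue
        links = sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
                    if nbrs[j] in adj[nbrs[i]])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
        count += 1
    return total / count

if __name__ == '__main__':
    random.seed(2)
    print('with triadic closure:', round(clustering(grow_network(2000, closure_bias=0.8)), 3))
    print('purely random links: ', round(clustering(grow_network(2000, closure_bias=0.0)), 3))

With the closure bias turned on, the clustering coefficient should come out substantially higher than in the purely random version of the same growth process, which is the triad-forming tendency described in (d) and (e) above.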

4) Universality Classes
a) Example: Magnetism
i) The direction of spin of an electron determines the orientation of its magnetic field. The spins of the electrons of a magnet are all lined up pointing in the same direction.
ii) Creating a magnet
(1) Electrons prefer to align their spins, but the interaction is so weak that each can only affect its closest neighbors - in other words, while each node's information about the network is local, magnetism requires global coordination
(2) Frustrated state: as groups of electrons form (all pointing in the same direction), neighboring groups tend to point in opposite directions, unable to affect one another's spins & balancing out magnetic fields
(3) To magnetize a piece of metal, you need an outside source of magnetism and a specific amount of energy (force or heat) to re-start the transitions - too much energy and all spins will flip around randomly
iii) Transition to magnetism: each node (electron spin) is still only able to act locally, but at the transition point they all behave as though they can communicate globally
(1) Correlation length: the distance at which each node appears to communicate
(2) Criticality: the critical point when the correlation length crosses the entire system - each node affects every other
(3) Global coordination without central authority
(4) Phase transition - a sudden transition state, as opposed to gradual change
b) Similarity to the spontaneous coordination of clapping crowds, freezing of liquids, transition to superconductivity, & random graph connectivity
c) Universality classes - classes of (perhaps dissimilar) systems which display common properties may be studied in abstract rather than in detail - we can gain understanding with simple models
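The magnetism example is essentially the textbook two-dimensional Ising model, and a small Monte Carlo sketch shows local rules producing (or failing to produce) global order. The lattice size, temperatures, and step counts below are arbitrary illustrative choices, not values from the book.

import random, math

def magnetization(lattice_size=20, temperature=1.5, steps=400000):
    """Metropolis simulation of a 2D Ising model: each spin interacts only
    with its four nearest neighbours, yet below the critical temperature
    (about 2.27 in these units) the spins tend to align globally."""
    L = lattice_size
    spins = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = random.randrange(L), random.randrange(L)
        # Sum of the four neighbouring spins (periodic boundaries).
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        delta_e = 2 * spins[i][j] * nb      # energy cost of flipping this spin
        if delta_e <= 0 or random.random() < math.exp(-delta_e / temperature):
            spins[i][j] *= -1
        # Note: the rule is purely local - no spin ever "sees" the whole lattice.
    return abs(sum(sum(row) for row in spins)) / (L * L)

if __name__ == '__main__':
    random.seed(3)
    for t in (1.5, 2.27, 3.5):
        print(f'T = {t}: |magnetization| = {magnetization(temperature=t):.2f}')

Well below the critical temperature the absolute magnetization usually settles near 1, and well above it the spins stay disordered and it hovers near 0; an individual low-temperature run can occasionally get stuck in a striped metastable state, which itself echoes the frustrated states described in (ii)(2) above.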

5) Small World Networks
a) Alpha Model: Duncan Watts and Steven Strogatz
i) Clustering
(1) How does the importance of mutual friends affect the creation of new links in a network? They began by graphing network development at the two extremes
(a) The top curve represents when as few as one mutual friend gives A & B a strong chance of linking
(b) The bottom curve occurs when all nodes are just as likely to link
(2) Once the extremes defined the boundaries of the range of possibilities, the intermediate values could be sketched in.
(a) Each curve defines a different rule for node linking based on the tendency for mutual friends to affect the chances of a link
(b) This family of rules can be expressed as an equation with a tunable parameter, alpha
(3) Mapping the average path lengths for networks created with a range of alpha values
(a) Low alpha values created highly-clustered networks of unconnected components
(b) High values of alpha created basically random graphs
(c) There is a critical alpha value that creates a network of numerous small clusters connected globally with a relatively small path length for reaching any node from any other
(4) Comparing average path length and clustering coefficient
(a) Path length (L) spikes at a critical alpha value
(b) There is high clustering for lower values of alpha
(c) To the left of the spike in path length, networks are fragmented (paths are short because they don't cross the network, only their clusters)
(d) To the right of the spike in path length, there is a region where clustering is still relatively high, but path length dramatically drops - small world networks
b) Beta Model: Duncan Watts and Steven Strogatz
(1) Networks on a periodic lattice - easier to understand than a random network
(a) Path length across the network is quite long when only neighbors are linked (left side)
(b) A completely random network on the right
(c) With only a few random rewirings, clustering coefficient remains high but path length dramatically drops (middle)
(2) Understanding Alpha
(a) Regardless of the size of the network, it takes only 5 random rewirings to reduce average path length by half
(b) Diminishing return: to reduce average path length by another half requires another 50 links
(c) Clustering coefficient slowly drops as links are re-wired randomly
(d) The space between the drop in path length and in clustering coefficient is where small world networks exist
(e) So, alpha from the last model was the probability that the network would have long-range random shortcuts, which have the effect of reducing path length
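The beta model is easy to reproduce in miniature. The following sketch (network size, neighbourhood size, and rewiring probabilities are arbitrary choices) builds a ring lattice, rewires each link with probability beta, and measures the average shortest-path length; the clustering coefficient is omitted to keep the code short.

import random
from collections import deque

def ring_lattice(n, k):
    """Each node linked to its k nearest neighbours on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def rewire(adj, beta):
    """Watts-Strogatz style rewiring: each link is redirected to a random
    node with probability beta."""
    n = len(adj)
    edges = [(a, b) for a in adj for b in adj[a] if a < b]
    for a, b in edges:
        if random.random() < beta:
            adj[a].discard(b); adj[b].discard(a)
            c = random.randrange(n)
            while c == a or c in adj[a]:
                c = random.randrange(n)
            adj[a].add(c); adj[c].add(a)
    return adj

def avg_path_length(adj, samples=100):
    """Average shortest-path length from a sample of start nodes (BFS)."""
    n, total, count = len(adj), 0, 0
    for start in random.sample(range(n), samples):
        dist = {start: 0}
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

if __name__ == '__main__':
    random.seed(4)
    for beta in (0.0, 0.01, 0.1, 1.0):
        g = rewire(ring_lattice(1000, 5), beta)
        print(f'beta = {beta}: average path length {avg_path_length(g):.1f}')

Even a small fraction of rewired links should cut the average path length dramatically while leaving most local structure intact, which is the small-world effect summarized in (1)(c) and (2)(a) above.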

6) Six Degrees from Kevin Bacon
a) Bacon Number
i) If an actor has been in a movie with Kevin Bacon, the Bacon # is 1
ii) If an actor hasn't been in a movie with Kevin Bacon, but has been in a movie with someone with a Bacon # of 1, their Bacon # is 2
iii) Etc
b) Distance Degree Distribution
i) The large majority of actors have a Bacon # of 4 or less
ii) The largest Bacon # is 10
iii) The average is less than 4
c) Small world networks: path length is close to that of a random graph, but the clustering coefficient is high
i) Movie actors
ii) Power grid dynamics
iii) Neural network of C. elegans (a nematode worm)
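Computing Bacon numbers is just a breadth-first search over the co-appearance network. The film and cast names in the sketch below are made up purely for illustration.

from collections import deque

# Tiny, invented film/cast data purely for illustration.
films = {
    'Film 1': ['Kevin Bacon', 'Actor A', 'Actor B'],
    'Film 2': ['Actor B', 'Actor C'],
    'Film 3': ['Actor C', 'Actor D', 'Actor E'],
    'Film 4': ['Actor F', 'Actor G'],          # disconnected from Bacon
}

def bacon_numbers(films, source='Kevin Bacon'):
    """Breadth-first search over the co-appearance network: an actor's
    Bacon number is the fewest co-starring steps back to the source."""
    costars = {}
    for cast in films.values():
        for actor in cast:
            costars.setdefault(actor, set()).update(a for a in cast if a != actor)
    numbers = {source: 0}
    queue = deque([source])
    while queue:
        actor = queue.popleft()
        for other in costars.get(actor, ()):
            if other not in numbers:
                numbers[other] = numbers[actor] + 1
                queue.append(other)
    return numbers

if __name__ == '__main__':
    for actor, n in sorted(bacon_numbers(films).items(), key=lambda kv: kv[1]):
        print(f'{actor}: {n}')

Actors with no chain of co-appearances back to the source simply never receive a number, so the distribution in (b) above describes only the connected part of the network.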

7) Scale-Free Networks
a) Degree distribution: distribution of # of neighbors per node
b) Poisson distribution: the mathematical form describing the degree distribution of a random graph - fairly normal distribution
c) Barabasi and Albert demonstrated that many networks do not follow the Poisson distribution - they follow a power law distribution
i) Power laws don't have a peak - they start at their maximum and decrease steadily out to infinity
ii) Power laws have a slower decay rate than normal distribution, so extremes are more likely
iii) Unlike normal distribution, power laws do not have cut-offs for value (scale-free), so any number of links may be possible in a power-law distribution
iv) Scale-free networks have some super-connected nodes (hubs) and many nodes with fewer links
(1) Internet
(2) Metabolic networks of certain organisms
(3) Airlines
d) Barabasi and Albert also demonstrated how scale-free networks develop over time
i) In a random graph, poorly-connected nodes are just as likely to make new connections as well-connected nodes, and everything evens out in the end
ii) Real life: the rich get richer - with resources, it's easier to accumulate more
iii) Preferential growth model: the evolution of real networks
(1) If a node has twice as many links as another, it is exactly twice as likely to attain a new link
(2) New nodes should be added - network growth
(3) Over time, a network evolved this way demonstrates a power law distribution
e) Drawbacks
i) Since most networks have a finite number of nodes, there must be a cut-off at some point
(1) In real life, that cut-off is far below the number of nodes, since a person only has the time & energy for a certain number of friends
ii) Barabasi and Albert assumed that creating and maintaining links comes at no cost
(1) this assumption works for some types of real-life networks like the Internet, but not for others like biological systems
iii) Information is assumed to be widely available, but in real-life systems information is usually local
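Preferential attachment takes only a few lines of code. The sketch below (network size and links per new node are arbitrary choices) grows one network where new links favor already well-connected nodes and one where they attach at random, then compares the tails of the two degree distributions.

import random
from collections import Counter

def grow(n, m=2, preferential=True):
    """Grow a network by adding nodes one at a time; each new node makes m
    links.  With preferential attachment the chance of receiving a link is
    proportional to a node's current degree ("the rich get richer")."""
    degree_list = list(range(m))      # node ids repeated once per link end
    for new in range(m, n):
        if preferential:
            chosen = set()
            while len(chosen) < m:
                chosen.add(random.choice(degree_list))
        else:
            chosen = set(random.sample(range(new), m))
        for target in chosen:
            degree_list.extend([new, target])
    return Counter(degree_list)       # approximate degree of each node

def report(label, degrees):
    counts = Counter(degrees.values())
    print(f'{label}: max degree {max(degrees.values())}, '
          f'nodes with degree >= 20: {sum(c for d, c in counts.items() if d >= 20)}')

if __name__ == '__main__':
    random.seed(5)
    report('preferential attachment', grow(20000, preferential=True))
    report('random attachment      ', grow(20000, preferential=False))

The preferential version should show a handful of hub nodes with very large degree, while the random version's maximum degree stays close to the average, illustrating the fat tail of the power-law distribution described above.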

8) Affiliation Networks
a) Duncan Watts, Steven Strogatz, and Mark Newman attempted to create a random network that would better account for social structure
i) People identify themselves with many different social groups
ii) The more groups people share, the closer they are, the more likely they are to be friends
iii) Define the groups and the individuals associated with the groups, and the distance between people will be defined by those associations
iv) Two types of nodes, actors and groups
v) Bipartite (two-mode) network
vi) Random affiliation networks will always be small world networks
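A bipartite affiliation network and its one-mode projection can be sketched in a few lines. The group counts and membership sizes below are invented; the point is only that linking people who share a group automatically produces heavy clustering, which is part of why random affiliation networks come out as small worlds.

import random

def affiliation_network(n_people=300, n_groups=60, groups_per_person=3):
    """Randomly assign each person to a few groups, then project the
    bipartite person-group structure onto a person-person network in which
    two people are linked if they share at least one group."""
    membership = {p: set(random.sample(range(n_groups), groups_per_person))
                  for p in range(n_people)}
    adj = {p: set() for p in range(n_people)}
    for a in range(n_people):
        for b in range(a + 1, n_people):
            if membership[a] & membership[b]:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def clustering(adj):
    """Average fraction of a node's neighbour pairs that are themselves linked."""
    total, count = 0.0, 0
    for node, nbrs in adj.items():
        nbrs = list(nbrs)
        if len(nbrs) < 2:
            continue
        links = sum(1 for i in range(len(nbrs)) for j in range(i + 1, len(nbrs))
                    if nbrs[j] in adj[nbrs[i]])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
        count += 1
    return total / count

if __name__ == '__main__':
    random.seed(6)
    print('average clustering of the projected network:',
          round(clustering(affiliation_network()), 2))

The projected network's clustering should come out far higher than in a random graph of comparable density, because every shared group contributes a small clique of mutual links.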

9) Searching Networks
a) Broadcast Search: each node sends a signal to every other node, ensuring that all nodes will be contacted during the search, and the shortest path may be found
i) Not an efficient way to search a network
ii) Likely to overload a system
b) Directed Search: in Milgram's experiment, each of the 100 starting people was given one letter only, and they were to pass it along to one person they knew who they supposed would be closer to the target
i) A directed search requires some kind of information about your neighbor - not about the whole network, but local information by which to choose a link to follow
ii) Directed searches may not find the shortest path
c) Jon Kleinberg wanted to know how individuals find the path
i) Uniform random connections: random links are as likely between any 2 nodes, as used in previous small-world networks
(1) Since nodes are locally informed, a node without a long-distance shortcut cannot assist in a directed search
(2) Proved mathematically that a network created with uniform random connections cannot support an efficient directed search
ii) People judge distance from one node to another in many different ways: social, physical, race, class, profession, education
iii) Kleinberg's model used a lattice and random links were created based on probability which increased as distance between nodes decreased
(1) Only one specific value of the exponent relating link probability to distance - 2, matching the dimension of the lattice - would produce short, searchable paths
(2) At this critical value, each node has as many local nodes as shortcuts to further nodes, making directed searches possible
d) Duncan Watts, Mark Newman, and Peter Dodds
i) People measure distance through a hierarchy of the social groups to which they belong
ii) Model of affiliation groups that accounts for social distance
(1) The higher up the hierarchy you have to go to find a common branch, the further apart two groups are in distance
(2) Used in an affiliation network, this accounts for affiliations of different strengths
(3) Two nodes may be close in one affiliation group and not in another
(4) Close affiliation in one context is enough to consider distance to be close
(5) It's the multi-dimensional nature of social relations that allows directed searches to occur
(6) The network was highly searchable regardless of the number of social dimensions or the homophily parameter (which tunes how likely nodes are to link to similar versus dissimilar nodes)

10) Epidemics and Failures
a) Biological diseases and computer viruses perform broadcast searches
i) The susceptibility of a node varies, depending on how contagious a disease is, or what kind of computer system the node runs
b) SIR model
i) Epidemiological model
ii) Members of the population are Susceptible (vulnerable but not yet infected), Infectious, or Removed (recovered or dead)
iii) Assumes random interactions between members
(1) Probability of infection is determined by the sizes of infected and susceptible populations
iv) Logistic growth
(1) Slow growth phase: too few infected people for the disease to spread quickly
(2) Explosive phase: sudden cross of threshold value
(3) Burnout Phase: leveling out - few people left to infect
v) Ignores population structure
c) Epidemics in Small World Networks
i) Comparison of infectiousness on types of networks
(1) Random graph represents the SIR model, with a logistic growth
(2) Lattice - disease can spread only in two dimensions - only very infectious diseases become epidemics, and they spread very slowly
(3) Clustered model: it took only a few random links added to a lattice to cause a jump in infectiousness approaching that of a random network
ii) Conclusions
(1) Locally, disease growth behaves as though it is on a lattice
(2) When a disease reaches a shortcut, then it behaves as though it is on a random network
(3) Focus disease prevention on shortcuts: airlines, livestock movement, HIV needle-exchange program (needles shared not only among friends but also between strangers)
d) Percolation Models
i) Each site (node) has an occupation probability, which represents susceptibility
ii) Each bond (link) is either open or closed, with a probability based on the infectiousness of the disease
iii) From a random start point, imagine fluid pumped into the system through open bonds to all other susceptible sites. All affected sites are considered part of the cluster, where infection occurs.
iv) This model helps demonstrate how these parameters affect the spread of disease
v) Epidemics depend on both factors to create a percolating cluster, without which outbreaks will be small and isolated
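The contrast between spread on a lattice and spread once shortcuts exist can be simulated directly. The sketch below uses a simple discrete-time SIR rule rather than the book's exact models, and the network size, infection probability, and number of shortcuts are arbitrary choices.

import random

def ring_with_shortcuts(n, k, shortcuts):
    """Ring lattice (each node tied to its k nearest neighbours on each
    side) plus a handful of random long-range shortcuts."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(shortcuts):
        a, b = random.randrange(n), random.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def sir(adj, p_infect=0.5, seed_node=0):
    """Discrete-time SIR: each infectious node gets one chance to infect
    each susceptible neighbour, then moves to the removed class."""
    susceptible = set(adj) - {seed_node}
    infectious, removed = {seed_node}, set()
    steps = 0
    while infectious:
        new_cases = set()
        for node in infectious:
            for nbr in adj[node]:
                if nbr in susceptible and random.random() < p_infect:
                    new_cases.add(nbr)
        susceptible -= new_cases
        removed |= infectious
        infectious = new_cases
        steps += 1
    return len(removed), steps

if __name__ == '__main__':
    random.seed(7)
    for shortcuts in (0, 20):
        runs = [sir(ring_with_shortcuts(2000, 2, shortcuts)) for _ in range(10)]
        avg_size = sum(r[0] for r in runs) / len(runs)
        avg_steps = sum(r[1] for r in runs) / len(runs)
        print(f'{shortcuts:2d} shortcuts: ~{avg_size:.0f} nodes ever infected '
              f'in ~{avg_steps:.0f} time steps on average')

With no shortcuts the infection can only creep along the ring, so outbreaks take many time steps; a handful of shortcuts lets the disease jump across the network and finish far faster, which is the argument above for focusing prevention on the shortcuts.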

11) Information Cascades & Collective Behavior
a) Financial crises like the stock market bubble have been documented for two centuries
i) What makes people act against common sense and follow the "rush"?
b) Cooperation
i) Diner's Dilemma: you go out to dinner with friends and you plan to split the bill evenly - what will you order?
(1) If everyone gets the cheap meal then the bill will be low - collective good
(2) If you're the only one to get the expensive meal, you're going to get it at a bargain
(3) If you're the only one getting the cheap meal, you're going to overpay for it
ii) Tragedy of the Commons: there's a common field free to all villagers to use to pasture their livestock
(1) Grazing one extra sheep will make it easier to feed/clothe your family (increase your personal wealth) - everyone has personal incentive to add livestock
(2) If everyone continues to add livestock, soon the pasture will be over-grazed and unable to sustain any livestock
(3) Individuals are concerned with their own interests, and only able to control their own actions, but are subject to the consequences of everyone else's decisions
iii) Social movements, such as the Leipzig parades that were pivotal in bringing down the Berlin Wall, are often decentralized and uncoordinated
c) Failures
i) Power grid failures - cumulative small failures, unexpected result
ii) Challenger shuttle - a convergence of minor, routine issues (normal accidents)
d) Information Externalities
i) 1950s experiments by social psychologist Solomon Asch: show a group of people a picture, and ask them a question about it for which the answer should be obvious.
(1) When all of the people except 1 are confederates - pretending to be subjects and unanimously giving the same wrong answer - the real subject agreed with the others and gave the false answer about 1/3 of the time
(a) There were signs of distress (perspiration, agitation)
(2) Regardless of the size of the group of people, when there was at least one other person giving the right answer, the subjects were most likely to give the right answer
(3) People pay attention to their peers when making decisions
(a) In trying to minimize our personal risks, we tend to rely on majority opinions
(b) Problem-solving mechanism: sometimes we lack information, sometimes too much information to process
e) Coercive Externalities
i) In Asch's experiment, when only 1 person was secretly instructed to give a false answer, the others laughed at him
ii) Spiral of Silence: West Germany, 1960s & 1970s, Elisabeth Noelle-Neumann
(1) Two political parties with a constant level of support by the electorate, yet the more vocal of the two was perceived to be the majority
(2) The perceived minority became less willing to speak out publicly, reinforcing the perception that they were in the minority and further silencing them
(3) The strongest predictor on election day wasn't which party each voter supported but which party each voter expected to win
f) Market Externalities
i) Unlike cars and copiers, technologies like fax machines are not self-contained - a fax machine is useless unless someone else you want to communicate with also has one
(1) Individuals make the purchasing decision, but the evaluation of that decision is based on what the collective is doing
ii) Complementarities: products that increase one another's value, like PCs and software
g) Coordination Externalities
i) Some decisions are affected by situations like the Diner's Dilemma and the Tragedy of the Commons
ii) Trade-off between personal gain and collective good
iii) To contribute to the collective good,
(1) an individual must care about the future
(2) and believe that participation will cause others to do so
iv) Individuals must pay attention to what others are doing: if enough people seem to be doing something for the collective good, then an individual will judge it worth doing
h) Information Cascade
i) A shock in the network becomes a cascade across the system
(1) Cooperation
(2) Financial crises
(3) Social fads
ii) Threshold models of decision making
(1) Asch's experiment demonstrated that it's not the absolute number of external influences, but the fractional number
(2) Size is important, therefore, only in proportion to how much influence each neighboring node will have
(3) Disease metaphor breaks down: a disease infects a node with a probability based on infectiousness & susceptibility - the same probability each time the node is exposed - whereas exposures in an information cascade accumulate
(4) Social contagion follows a threshold rule, where it takes a certain number of exposures for a node to switch from one Boolean condition to the other
iii) What are the consequences at the population level?
(1) Each node has its own threshold level - there will be a distribution of threshold values, with more nodes in the middle range and a few at the extremes
(2) A single node will start the shock to the network, lower threshold nodes will pass it along, and cumulative exposures may trigger slightly higher threshold nodes, etc
iv) What features of a social network allow cascades?
(1) Localized contagion happens in isolated groups: cults
(2) Innovator - the starting node
(3) Threshold is the fraction of a node's neighbors that must be active
(4) Early adopters - nodes whose threshold, as a fraction of their # of neighbors, is low enough to activate with only 1 active neighbor
(5) A node's degree (# of neighbors) becomes important
(6) A cascade can only happen if the innovator is connected to an early adopter
v) Percolation model: early adopters form percolating clusters.
(1) If the network has a percolating cluster, then a cascade is possible
vi) Phase diagram: all possible systems
(1) Horizontal axis: average value of threshold distribution
(2) Vertical axis: average degree (# network neighbors)
(3) Shaded region: where global cascades can occur
(4) Phase transitions occur at the upper and lower boundaries
(a) At the lower boundary,
(i) there are few neighbors and the threshold is easily met
(ii) the phase transition is the same as for biological diseases
(iii) as with disease, network connectivity is what restricts the cascade
(iv) as with disease, well-connected individuals help spread the contagion
(v) as with disease, cascades tend to be localized - propagates through the vulnerable cluster, but fewer connections keep it from spreading
(b) At the upper boundary
(i) greater connectivity always makes diseases more likely to spread, but makes global cascades impossible (since each neighbor's proportional influence is therefore diminished and the threshold is less likely to be met)
(ii) though cascades are rare, when they occur they tend to traverse the system since the network is highly connected
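A stripped-down version of the threshold cascade model makes the two boundaries concrete. In the sketch below, the threshold value, network sizes, and densities are invented for illustration: every node activates once the active fraction of its neighbours reaches its threshold, starting from a single random innovator.

import random

def random_graph(n, avg_degree):
    adj = {i: set() for i in range(n)}
    for _ in range(int(avg_degree * n / 2)):
        a, b = random.randrange(n), random.randrange(n)
        if a != b:
            adj[a].add(b); adj[b].add(a)
    return adj

def cascade_size(adj, threshold=0.18):
    """Threshold rule: a node activates once the active fraction of its
    neighbours reaches its threshold.  One random innovator starts active.
    Returns the final active fraction of the network."""
    active = {random.choice(list(adj))}
    changed = True
    while changed:
        changed = False
        for node in adj:
            if node in active or not adj[node]:
                continue
            frac = sum(1 for nbr in adj[node] if nbr in active) / len(adj[node])
            if frac >= threshold:
                active.add(node)
                changed = True
    return len(active) / len(adj)

if __name__ == '__main__':
    random.seed(8)
    for k in (2, 4, 10):   # sparse, intermediate, and dense networks
        sizes = [cascade_size(random_graph(2000, k)) for _ in range(5)]
        print(f'avg degree {k}: cascade sizes {[round(s, 2) for s in sizes]}')

With these made-up numbers, cascades in the sparsest network tend to be limited by connectivity, intermediate densities produce frequent global cascades, and the densest network rarely cascades at all because each neighbour's proportional influence is too diluted, mirroring the lower and upper phase boundaries described above.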

12) Adaptation
a) Toyota-Aisin crisis
i) 1980s Toyota manufacturing strategies
(1) Just-in-time inventory systems - produce parts as needed
(2) Simultaneous engineering - changes to design on-the-fly - adaptation
(3) High division of labor - member companies specializing in specific parts
(4) High level of cooperation between member firms, exchange of personnel
ii) Aisin - sole provider of P-Valves, specialized tools & expertise in design
iii) Aisin's only factory burned down overnight
(1) All production at all Toyota factories stopped
iv) Within 3 days, member companies coordinated & worked with Aisin engineers to produce P-Valves at near normal levels
b) Hierarchy functions when market is well-understood
c) Once there is ambiguity, decisions and problem-solving often have to occur at lower levels of production rather than through hierarchy
i) Communication in a hierarchy requires many steps through the structure to reach one node from another
(1) Ambiguity requires more communication
(2) Individuals have capacity constraints
(3) Hierarchy will bottleneck trying to resolve high levels of ambiguity
ii) Randomly adding shortcuts
(1) Dramatically reduces overall path length for some nodes, but still doesn't account for capacity constraints or for the stratification of information
iii) Team building: shortcuts between neighbors
(1) Local teams at each level of the hierarchy need to communicate / solve problems
(2) Effective where most message-passing is at the local level
(3) Example: members of the same work team or the same ISP
iv) Periphery: shortcuts at the top of the hierarchy
(1) When message-passing is between distant nodes, congestion is at the top of the hierarchy
(2) The top level is connected to create cooperation and sharing of the communication load
(3) Example: airline network, postal service
v) Multiscale connectivity
(1) Message-passing across each level
(2) All levels manage information
(3) Problem-solving is part of production



Full Name:  Stephanie Hilton
Username:  shilton@brynmawr.edu
Title:  Drowning in the data
Date:  2006-04-14 21:14:38
Message Id:  19038
Paper Text:

<mytitle>

Emergence 2006

Reviews of Relevant Books

On Serendip

Snap judgments, first impressions and the ideas that are made in a fraction of a second are all issues that Blink breaks down. Malcolm Gladwell brilliantly describes the way people can bring subconscious snap judgments into their cognitive processes to help make complex decisions in everyday life. In Gladwell's introduction he begins to explain the weight and importance of judgments made in split seconds, "Blink is concerned with the very smallest components of our everyday lives- the content and origin of those instantaneous impressions and conclusions that spontaneously arise whenever we meet a new person or confront a complex situation or have to make a decision under conditions of stress" (Gladwell, 16). To most people these first impressions seem meaningless. Gladwell tries to show that if the skills of effectively interpreting these snap judgments can be harnessed, the result will be of indescribable value in everyday life.
Gladwell spends most of the book giving examples of how simple rules and looking at problems objectively are the best ways to draw conclusions about decisions in life. Gladwell explains how overanalyzing a problem is ineffective because focus on the bigger picture is lost: "You know, you get caught up in forms, in matrixes, in computer programs, and it just draws you in. They were so focused on the mechanics and the process that they never looked at the problem holistically. In the act of tearing something apart, you lose its meaning" (Gladwell, 125). This idea of how simple rules lead to surprisingly complex outcomes is a main theory of emergence. When it comes to the complexity and elegance of how a flock of birds moves, it is generally concluded that the brilliance of the flow and direction is due to simple rules. Some believe that the flock simply follows a leader bird. When the flock is actually analyzed, there is no single bird conducting how it moves through the air. The question then arises: who is keeping these birds in sync with each other? The answer is that the birds are following a set of simple rules such as: mimic the movement of the birds to the left and right; if one bird decides to swoop down and land, then swoop down and land too. There is no rhyme or reason as to how these birds make these decisions, but it can be concluded that there is no conductor or leader in this situation. The idea that simple rules lead to complex outcomes is a main principle in emergence and in Blink. Gladwell gives three main examples in his book of how a series of simple rules can change everyday cognitive thinking.
Believe it or not, there are experts who can analyze a romantic relationship in a few minutes and determine, very effectively, whether the couple's marriage will last. Gladwell spends the first chapter of his book describing an experiment in which couples are watched closely and specialists break down every bit of their interaction to determine whether they really get along. The participants are hooked up to monitors that record their heart rate, how much they sweat, and even how much they wiggle around in their seats. Then the participants (who have been in a romantic relationship for quite a while) are asked to talk about something that has been bothering them lately. The topic shouldn't be trivial, but neither should it be a problem so deep that it would cause an argument. The subjects are videotaped and watched very closely. Every second of the video is broken down into audio tones, and each tone has a meaning. The scientists conducting this experiment came to very significant conclusions: the tone of someone's voice can tell a lot about what they're truly thinking. They developed a set of rules that breaks down exactly what each millisecond of tone means in a relationship. These experts can determine, with very high accuracy, whether a marriage will last past a few years. This is an emergent idea because of the theme of simple rules leading to complex conclusions. A relationship is one of the most complex human interactions one could imagine. To have a person put a couple in a room and tell them within about 5 minutes whether their marriage will last is incredible, but that's emergence.
There are a few specific taste criteria that go into deciding which foods are pleasing and therefore put out on the market. Gladwell met up with two food-tasting experts named Gail Vance Civille and Judy Heylmun. These women can eat a piece of anything and tell if the product will be a hit or a miss in the stores: "Every product in the supermarket can be analyzed along these lines, and after a taster has worked with these scales for years they become embossed in the taster's unconscious" (Gladwell 182). These experts follow a set of guidelines such as how much sweetener needs to be in a cola or how much crunchiness should be in a potato chip. The rules were studied for so long that now the decision of whether a product is "good" is instantaneous and usually right on target. These women, too, follow the ideas of emergence by deciding what simple rules will influence a whole population's preference for a product. They break down the seemingly never-ending question of "will people like it" in a bite. The idea is emergent because they're following simple textbook guidelines to determine what millions of people will prefer.
Psychics aren't the only people who can read minds; in fact, anyone can be trained to analyze a set of rules about facial expressions that allows them to read minds accurately. Gladwell worked with two brilliant men named Paul Ekman and Wallace Friesen who developed a system to accurately define every single facial expression. The thought of reading minds seems incredible; it's something out of movies and sci-fi books. When a problem seems overwhelmingly complicated, the idea of thin-slicing it in an emergent way into smaller issues is effective. In their work, these two men broke down the complicated issue of mind reading into simply reading combinations of facial expressions: "Ekman and Friesen ultimately assembled all these combinations- and the rules for reading and interpreting them- into the Facial Action Coding System, or FACS, and wrote them up in a five-hundred-page document" (Gladwell, 204). Each facial expression on its own isn't usually important. There comes a split second when someone is interpreting a question or comment and subconsciously gives away the combinations of expressions that dictate exactly what they're feeling. Someone trained in the FACS system can immediately pick up on these subtle nuances. With the FACS system, emergence solves another huge problem with mind-blowing simplicity.
A key theory of emergence that runs consistently through Blink is the idea that complex conclusions can be drawn from simple rules and practices. Ant colonies do amazing things by following the example of the ants around them. There is no clear leader telling them to build tunnels or anthills, but over and over again these processes get accomplished. A relationship can be broken down into tones in everyday interactions. An educated prediction as to whether a food will sell can be defined by a set of simple rules. A liar is discovered in a fraction of a second based on the moment when he or she makes a certain combination of facial movements. These ideas of split-second decision making and how simple rules can produce complex ideas are omnipresent in everyday life. If people educated in emergence can take a minute and listen to their intuition and thoughts on first impressions, the human race in general will be more inclined to make good decisions. The tip of the day is to take a step back and allow these snap judgments to surface: "If you get too caught up in the production of information, you drown in the data" (Gladwell, 144).


Full Name:  Flora Shepherd
Username:  fshepher@brynmawr.edu
Title:  Blink and Emergence
Date:  2006-05-24 13:32:24
Message Id:  19417
Paper Text:
<mytitle> Emergence 2006
Reviews of Relevant Books
On Serendip

Malcolm Gladwell proves in his book, Blink, that journalists can write accessible emergence books. His topic is snap judgements, or "thin slicing," wherein a person reaches an accurate answer to a complicated question in one to two seconds. Why does this work, when doesn't it work, and how can we improve this ability? These are all questions that Gladwell poses and explores. He uses examples from a wide variety of disciplines, from police violence to tennis, to illustrate the huge disparity between conscious thinking and our unconscious "computer's" fast processes. There are several parts of the book that I found fascinating. However, he offers only a small window into the many questions raised about human conscious thought. As Gladwell would encourage us to, we must read more about these topics to be able to thin slice properly about the questions he brings up.

I think that the most revolutionary thing in this book is his assertion that you can change your unconscious, your prejudices. It's especially interesting how he traces out where snap decisions and prejudice come from and how they can be avoided. I think that talk of race or gender bias can sometimes be incredibly frustrating because people use arguments like, "You're white. You don't understand." However, in this book, Gladwell shows that race is not an excuse. Anyone can learn to be more tolerant and intelligent simply by training their unconscious to follow the thought patterns they most approve of. This is great. It also provides an interesting analysis of social problems, and it gives educators specific goals to work towards. It is not enough to simply teach black history; you must teach positive role models. That will bring positive associations into one's head and will, in turn, combat racism better than self-conscious regulation.

I think that respect for experts is one of the biggest missing pieces in American society. When Gladwell frames respect for experts in the arena of food and fine art, of course it sounds easy to just listen to experts. However, in practice, we have such a strong sense of individualism that we resist listening to experts' advice. Don't we know what we like and what we think is good just as well as someone who has been studying an art his or her whole life? Perhaps only in athletic endeavors is respect easily given: you can either hit a hole in one or you can't. Anyone, by contrast, can say that greed is evil; it's much harder to analyze logic. The man who believes that the earth is flat believes his own conclusions more than a system of science he does not understand. Gladwell oversimplifies the distinction between logic and experience.

Yet how does all of this apply to emergence? Gladwell has an emergent view of thought. He certainly believes in a hierarchical model of the brain: a brain where a consciousness sits on an unconscious "computer." The conscious brain does not have to tell the body to continue pumping blood or to start to sweat, but it relies on this mechanism to keep itself alive. This concept is very useful to him and is well illustrated by his examples. However, his model of a brain that learns by soaking up knowledge for years and years does not always hold water. How could he explain child prodigies? Where do their insights come from? How could they possibly have spent more time processing information than their older counterparts? Are their unconscious "computers" (his words) just better at processing information than some adults'? If this is true, then couldn't some minds simply be more prone to prejudice than others? His argument is compelling and incredibly interesting and, as such, provokes a great deal of further thought.

This book has many attributes. It is a self-help book. I, personally, have already attempted to apply some of the principles he describes to my everyday life. According to Gladwell, reading accounts of successful students will put the image inside your head better than hours spent studying. Both methods are necessary to prepare well for an exam. In some ways, this closely parallels popular messages of self-help books: that "thinking good thoughts" will make good things happen to you. They may not make the universe work in your favor, but they certainly will influence human interaction, according to Gladwell.

In closing, I would just like to say how important it is that this book be taken as a whole. I think it is difficult to quote from this book without having the passages within the context of the rest of the reading. This book is truly one piece, and it would be difficult to pick and choose lessons from the text. All of the lessons work together to create a thoughtfully written book. You can tell that Gladwell is a great journalist, and now I want to seek out the rest of his work.