
Emergent Systems 2004-2005 Forum

Comments are posted in the order in which they are received, with earlier postings appearing first below on this page.

a new year ...
Name: Paul Grobstein
Date: 2004-09-04 13:33:46
Link to this Comment: 10767

Don't worry. Nothing's been lost. Last year's forum has been archived and can be reached here. You will, though, need to re-sign up for the Keep Me Posted emails if you want them.

Looking forward to where we go this year (and to the promised coffee and muffins).

Douglas Adams and emergent perspectives
Name: Doug Blank
Date: 2004-09-08 11:12:36
Link to this Comment: 10801


Just ran across this talk given by Douglas Adams at "Digital Biota 2", a conference on what has come to be called "Artificial Life".

I think that it is interesting from a couple of emergent perspectives. It is in Adams's book "The Salmon of Doubt", and is also available on-line.

Parts of it are about the same system that Jim W. discussed in the "Emergent properties of Balinese Water Temple Networks: Coadaptation on a Rugged Fitness Landscape" talk he gave June 2003 for the Emergent group.


Name: Paul Grobstein
Date: 2004-09-11 12:04:03
Link to this Comment: 10820

Enjoyed Al's talk and the resulting discussion (as always). Think it was both a useful summary and a generative extension of previous conversations both in this group and, during the summer, in the Information group.

What particularly struck me was the emerging sense (for me at least) that there is something both fundamental and not-yet-fully-recognized-understood about "irreversibility". This goes back (again, for me at least) to the issue of whether the "block" model of time is the appropriate way to think about change in general, and hence to the determinacy/indeterminacy issue we've repeatedly touched on. My sense is that the preferences of many (but not all) physicists (and others who either have the same preferences or feel compelled to model themselves on those) notwithstanding, there really IS a necessity to accept the "reality" of irreversibility (and hence of indeterminacy).

The key here (again "for me at least") is Al's pointing out that there really is NOT a way to make the second law of thermodynamics compatible with deterministic physics (the secret "coarse graining") and, further, that there is an unavoidable irreversibility inherent in quantum mechanics as a descriptor of actual observations (different wave functions can collapse to the same observations, hence one cannot, from the observations, go back uniquely to the wave function).

Some (related?) things that struck me. The "arrow of time" would be much easier to make sense of if we accepted that "time" was, as in emergent systems models, iterative change with an indeterminate element. And it may be useful in the future to equate "indeterminacy" with "irreversibility" in the sense of inability to specify a unique antecedent for any given state. And that in turn builds what may be an important bridge between energy/matter and "information". Physics (at least "classical" physics) is the effort to understand matter/energy in terms that are relatively independent of its form/organization at any given time ... "information" is that which has been left out in such an analysis (as both Chomsky and Shannon left out "meaning"?), aspects of form/organization of matter/energy that reflect/relate to irreversible change?
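[A toy illustration of the "no unique antecedent" point above — my own sketch, not anything from Al's talk. A many-to-one update rule makes the present state compatible with several distinct pasts, which is exactly the sense of "irreversibility" being equated with "indeterminacy" here.]

```python
from collections import defaultdict

def step(state):
    # A toy non-invertible dynamics: x -> x*x mod 7.
    # Several distinct states map to the same successor.
    return (state * state) % 7

# Group every possible past state by the present state it produces.
preimages = defaultdict(list)
for s in range(7):
    preimages[step(s)].append(s)

for present, pasts in sorted(preimages.items()):
    print(present, pasts)
# The state 2, for example, has antecedents [3, 4]: observing 2 now,
# one cannot recover which past produced it.
```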

Anal teacher as entomologist
Name: Jan
Date: 2004-09-16 11:54:05
Link to this Comment: 10853

Sadly (or perhaps not), the website ("It's not just a good idea; it's also some bad ones." "Right twice a day.") no longer exists. In addition to an idea for an "ant lamp shade" that would allow one to observe an ant colony as shadows, the site included a reverie on:

Tibetan Ant Painting Multi-hued segregations of small creatures blend into one, destroying my art, ensuring their survival.
The crumbling temple was an entomologist's dream. Few Westerners were allowed inside these walls. Partly out of homage for the town's religious centerpiece, partly a preventative measure to keep them from killing the thousands of bugs housed there in one massive attack of the heebie-jeebies--the monks used the insects during meditations and rituals. 'Filed' in largish ant farms, there were ants of every variety I've ever run across in my profession, and a few I'd only read about. Some were naturally colorful; others were dotted with tiny marks of dye in every hue in the rainbow. In one farm, ants dotted with little white marks, in another, violet. Fire ants in another corner.
In the center of the room, an old monk squatted in front of a mat, placing the bugs, one by one, in exactly the right spot on the painting mat. Each critter would fret for a while until it realized it could go nowhere until the monk allowed. The mat was covered with a primitive timed adhesive; it would remain semi-tacky for as long as the monk would keep it wet, but would lose grip on its captives as it air-dried. As I watched, the painting outline became more clear as more and more red fire ants were painstakingly applied. It appeared to be an impressionistic interpretation of some kind of curved bread. Having long developed an immunity of simple toughness against their bites, the monk would simply smile at the fire ants' survival instincts to bite him.
Hours later, the completed picture began to transmogrify and lose cohesion, as the ant pixels, rejuvenated by some technique I'll have to investigate as a scientist, scurried back to their respective colonies, in single-file, following their leader's scent home.

Conscious automata hypothesis
Name: Wil Frankl
Date: 2004-09-23 09:32:01
Link to this Comment: 10927

All we can change is the story we tell ourselves and consequently others. Behavior/action is all unconscious and the mind only epiphenomenal in the Huxley/La Mettrie sense. We have no ‘free will’. Only after the fact do we have a story of our actions that we tell ourselves to make sense of our actions. What we ‘do’ change is the story we tell, and that can have dramatic effects on our well-being. When the stories we tell ourselves are at odds with our behavior we are depressed, anxious and even ill. When we realize that our mind is only an observer then we can begin patiently to re-tell our stories, bringing into concert our behavior and those stories. Yet, somehow I feel this is not the whole story either. How is it that a change in stories can make us feel better? Is not this the same Cartesian impasse? How does the balance between story and behavior affect mood? Does mood affect behavior? And how? If it really does, and our conscious stories affect mood, then doesn’t consciousness indirectly affect behavior? Can anyone help clarify this Story-->mood-->behavior causality problem? Mood seems more primitive (evolutionarily) than consciousness. Mood seems more unconscious, maybe a precursor to consciousness. Any thoughts?

Name: Doug Blank
Date: 2004-09-23 11:29:34
Link to this Comment: 10930

Both Paul and Rob seemed to imply this morning that it is logically possible (and maybe has even been seen in the world) that people can have their consciousness removed, and go about their entire lives without knowing it. This is what philosophers of mind would call a "zombie".

Can you explain how you see this as being logically possible?

Name: Jan
Date: 2004-09-23 14:39:22
Link to this Comment: 10936


I think what they're talking about is simply not setting off "consciousness" as a category, thus avoiding the problem that reductive materialist accounts of the mind have in explaining how consciousness might emerge from a purely physical system or how any explanation of consciousness can be reduced to physical terms.

Of course, it does appear that large numbers of zombies appear to have elected U.S. presidents.

matters arising ...
Name: Paul Grobstein
Date: 2004-09-23 19:00:14
Link to this Comment: 10937

Nice conversation this morning (as always). Thanks to Rob and all (as always).

The new Serendip exhibit I mentioned as related to Rob's earlier Descartes plus exhibit (highly recommended re today's conversation) is Writing Descartes: I Am and I Think, Therefore .... There's an on-line forum there too, and because this morning's discussion triggered thoughts that seemed germane, I posted them there. I think they're relevant here too and will trust any one interested will prefer to follow the link rather than to have me repeat the (typically a bit long) posting here.

Re Doug and Jan:

I'm not sure this will answer the question of "logically possible", since I regard the question as an experimental one rather than a "logical" or "semantic" one, and "live their entire lives" may be a bit of a stretch, and the relevant observations depend on what (as Rob said) is currently not achievable: a criterion by which to definitively decide whether another person is conscious (or not so). With all of those caveats, here briefly is what I was referring to this am ...

People report acting without being "conscious" under a large variety of circumstances: trivially, as when one has one's attention called to some input of which one was previously unaware (as Rob illustrated), or when one realizes that one has gotten to a particular place and has no memory of how one got there, and more dramatically when told one did A, B, or (your choice) while drunk. Comparable dissociations to varying degrees between action and consciousness occur in a variety of clinical syndromes, and are particularly characteristic of damage to or affecting a particular part of the brain, the neocortex. Perhaps the most obvious normal dissociation occurs in sleep, where the body (and nervous system) is actually quite active but most people report that at most times they were not "conscious". During this period, the neocortex displays a characteristic form of electrical activity (a "synchronized EEG") that is quite different both from the awake (conscious) state and dreaming (arguably a state when one is "conscious" but not normally responsive to external signals). Even more dramatic is sleep-walking, which I trust everyone will regard as "action". Sleep-walkers report no "consciousness" while sleep-walking and the EEG pattern correlates with this (ie is "synchronized", as in other sleep states).

If we take people at their word about what and when they are "conscious", the observations are consistent with the notion that "consciousness" corresponds to particular kinds of activity in a particular part of the brain, the neocortex. It then becomes relevant that every once in a while, a human being turns up in a doctor's office, is examined for one or another complaint, and proves to have little or no neocortex.

Ergo, with caveats mentioned at the outset: we are all zombies in some/many ways some/most of the time (which in turn says, in relation to the "logical" problem?, that "consciousness", whatever it does, is not necessary for complex adaptive behavior). Some people are more zombies more of the time. And there may exist people who are all zombies all of the time.

serendipidously: surplus meaning
Name: Anne Dalke
Date: 2004-10-01 00:48:31
Link to this Comment: 10999


Another rich, rich presentation this morning, for which many, many thanks (doubled because you gave us two of these). And which generated--for me--a number of questions. Which I record here, for myself, but also in the hopes that you'll bite, give me some (more?) answers.

Your presentation stirred up three lines of thinking for me; I've organized them here from most to least stuck-ness.

1. I got stuck (tried to get an answer then; didn't; am trying here again) on your slide about "extensional reductionism." I heard this as another stab @ what you'd identified, last week, as the Cartesian Impasse, aka Mind/Body Problem, in which the relationship (more precisely, how to negotiate the relationship) between two different things--mental and physical--continues to bedevil psychologists. I heard Paul offer an alternative formulation on the Descartes forum: that those "two things" might more productively be characterized as "two different forms of organized matter." I liked the way that formulation got Rene/you/us out of the pickle of understanding how non-material mind and material reality can influence one another. But this morning, when I heard you re-represent the encounter between "brain/mind and its environment," I thought you were getting us (or at least you were getting me!) stuck again, in a "relationship" between "mental" and "physical" that could not be "negotiated" (with every pun here intended).

2. Less stuck, more struck: Your answer to the question of where the Iliad resides--in the interaction between the sequence of words and the decoder/reader--is the central idea of a long-important methodology in literary studies known as reader-response theory, which fits quite nicely into the notion of meaning-as-relation: textual meaning is constructed in the interaction between the writing and the reading; it comes into existence when the text is read.

When I evoked this process during this summer's conversation about Information, I wasn't sure whether I'd talked myself out of or into a hole. If it was the latter, I think you've just talked me back out of it: your description of "surplus meaning" (neural states being extensionally but not intensionally equivalent to perception: to "see" a cup "means" more than what an account of the state of the neural networks describes) is an account, in my terms/ landscape, of the process of literary interpretation, in which any "text" always exceeds the grasp of any "interpretation," any "reduction," any story of what it is/does. This is pure Derrida: the original is always deferred - never to be grasped.

3. Least stuck, real-est question (aka what I really want to know): Where I got most excited, this morning, was when you used G.H. Lewes to show that, from the very beginning, emergence created a problem for the nature of knowing: Because effects are emergent, deduction is insecure. And because effects are emergent, prediction is not reliable. We can't go back (because the loss of information in arriving at "meaning" is not recuperable?) and we can't predictably go forward (because of the complexity of the interactions?). My question is whether, in the universe you've just traced for us, the unpredictability of the future and irreducibility/irreversibility of the present (the inability to reduce a cause to its effects, to play the tape predictably backwards) are the same thing. Do not being predictably predictive and not being reliably deducible arise from the same cause, for the same reason?

Serendipidously (was this really serendipidous?) a new story has just appeared on the Descartes exhibit, in which the narrator

watches her storytelling mind begin to fan the spark of annoyance into a flame [and says] god damn-it I am not letting myself react to whatever is getting triggered by my neural-associations.

Surplus meaning, indeed.

Rob 2 ...
Name: Paul Grobstein
Date: 2004-10-05 12:32:37
Link to this Comment: 11024

Typically rich Wozniak/discussion last week; sorry it took me so long to get back to my notes. Part of the reason was a relevant talk Wednesday night by Cheryl Chen in Philosophy. Title was "Perceptual Experience and Bodily Action". I posted some thoughts about Cheryl's talk in the Descartes forum and won't repeat them here except to say that the talk helped to highlight for me the importance of making a distinction between two "things" in terms of two corresponding different forms of brain organization.

Among the things that struck me as most important in our conversation last week was Rob's effort to carefully distinguish between "emergent" with reference to science and to "nature". Lewes, if I understand it correctly, wanted to call something "emergent" if "it cannot be reduced either to the sum ... or ... difference" of measurements of interacting components. This is clearly a matter of "science" rather than of "nature", in the sense that it makes the limitations of human analytic procedures the criterion for "emergence". With advances in analytic tools (calculus, non-linear dynamics, computers) things that were previously "emergent" cease to be so by this definition. And, by this definition, "emergence" might in principle disappear entirely as a category (like god? like mind?).

Needless to say, I'm not comfortable with such a "human perspective defined" view of "emergence" and, fortunately, I don't think anyone else has to be either. The touchstone here for me is something I wrote about years ago, the notion that the properties of elements at one level of organization permit, but do not determine, the properties at some higher level of organization. The "emergent" properties thus depend on an "addition of information", most typically from outside the system. A trivial example is water molecules; the emergent property of gas/liquid/solid results from the properties of water molecules TOGETHER with the additional information of the temperature of their surroundings. "Stigmergy" is a slightly more elaborate version of the same thing. Ants behave differently not because the properties of the elements have changed but because the collective behavior of the elements has created a new (and/or potentially constantly changing) information source. This definition of "emergence" as involving "information addition" escapes the problem of being defined by the analytic limits of humans but introduces some new problems including how to define what is "inside" and what is "outside" the system, and what one means by "information". It also neglects two features which are prominent in biological (and cosmological) evolution: changes in properties of the elements and the association of elements into new combinations that themselves exhibit "emergent" properties.
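[A minimal stigmergy sketch — my own toy model, not anything from the discussion. Each "ant" follows a fixed rule (stop where pheromone is strongest, with some chance), yet collective behavior changes over time because the ants keep writing new information into their shared environment.]

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Toy stigmergy: each "ant" stops at a cell chosen with probability
# proportional to the pheromone there, then reinforces that cell.
# The per-ant rule never changes; what changes is the environment
# the ants themselves keep rewriting (the "added information").

CELLS = 10
pheromone = [1.0] * CELLS  # uniform start: the environment carries no information yet

for _ in range(200):
    i = random.choices(range(CELLS), weights=pheromone)[0]
    pheromone[i] += 1.0    # deposit: behavior writes into the environment

# A few cells come to dominate: a "trail" no individual ant planned.
print(sorted(pheromone, reverse=True)[:3])
```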

And that brings us to Rob's "epistemological emergence" and, in one more step, to Anne's "unpredictability of the future and irreducibility of the present". If one introduces into the properties of any of the interacting elements or into the interactions between them a degree of genuine indeterminacy, then the past becomes to some degree unrecoverable (several different states could have given rise to the present one) AND the future becomes to some degree unpredictable. And if one now allows such effects to propagate and amalgamate over billions of years and over successive rounds of the creations out of interactions of new assemblies which themselves become the foundation of assemblies .... I assert that there is a genuine and profound capability of emergence to create new things, "new" not only in the sense of surprising to a particular generation of investigators, indeed not only "new" in the sense of surprising to humans but "new" in the universe: over time, assemblies of matter can come into existence that have properties arbitrarily distant from any previously existing ones however one chooses to measure them (and whether anyone is there to measure them or not).
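[The "degree of genuine indeterminacy" claim can be made concrete with a toy numerical sketch (mine, purely illustrative): iterative change plus an arbitrarily small indeterminate element is enough to make two systems that begin in the SAME state end up in different futures.]

```python
import random

def run(seed, steps=60):
    """Iterate a deterministic rule (the logistic map) with a tiny
    indeterminate kick at each step, clamped to [0, 1]."""
    rng = random.Random(seed)
    x = 0.3  # both systems start from exactly the same state
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)                          # deterministic part
        x = min(max(x + rng.uniform(-1e-9, 1e-9), 0.0), 1.0)  # indeterminacy
    return x

# Same present state, different (random) futures: the kicks are a
# billionth in size, but iteration amplifies them without limit.
a, b = run(seed=1), run(seed=2)
print(a, b)
```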

The brain/mind? What's its relation, and the relation of "discourse" to all of this? Most obviously, the brain is an "emergent", something that came into existence but could not have been predicted to do so nor be back-traced with any certainty. Perhaps most importantly though, the brain/mind (as it exists in humans) is a powerful amplifier of the novelty-generating emergence process. By combining a degree of indeterminacy, a capability to generate abstractions (stories), an ability to conceive counter-factuals, and an ability to exchange both stories and counterfactuals (engage in "discourse"), the brain greatly increases the range and rate of explorations of achievable forms of organized matter, an exploration that has been going on since the big bang (and would, whether the human brain/mind was around or not).

All of this may sound a bit like nothing more than a return to From the Active Inanimate to Models to Stories to Agency, and in some sense it is. But I think there has been a lot of needed filling in/clarification along the way. Both in our discussions and elsewhere. Among the latter is a new set of understandings about the nature of information, and a recognition that some architectures have important properties that others don't. The former says that information processing in general involves both loss and gain of information (relevant for "surplus meaning"?); the latter that bilayer networks can represent "group purpose" and so compare a measure of "group product" with "group purpose" in a way that single-layer networks cannot.

Where's the Cut?
Name: Wil Frankl
Date: 2004-10-07 09:33:26
Link to this Comment: 11042

Following up on the difference between reductionists and non-reductionists: the clone of Paul will at time initial perceive and decode exactly like Paul, until their experiences begin to diverge, but at time initial they are identical, that is, ascribe exactly the same meaning to everything. Therefore, if they both are in love, they feel love at that moment in exactly the same way? I think both camps agree so far? The difference, if I understand it, is…. for the reductionist camp the meaning of love is sufficiently defined by the physical states of both Pauls, while the non-reductionist camp believes the meaning of love also must include a further decoding/connecting/bridging of the physical states to some story about those physical states? As long as we agree that the stories/connections/bridges between code and meaning are causally linked to physical states of the brain, then aren't both Pauls' stories identical because of those identical physical states? Are we now back to the indistinguishable ‘nature’ versus ‘science’ (Ontological vs Epistemological) debate?... So the difference is….?

Rob 3, or cut, cut, who's got the cut?
Name: Paul Grobstein
Date: 2004-10-07 17:03:40
Link to this Comment: 11048

Wonderfully rich conversation this morning, lots of threads beginning to come together, for me at least. Some notes about my sense of where things are coming from/going, for my own continuing thinking and that of anyone else who finds them useful. With appreciation to all but no assertion whatsoever that all threads are included nor that there aren't other equally or more productive ways to braid them.

There ARE "two things". One is the unconscious, a set of brain activities of which one is for the most part unaware but which, as Rob said, has enormously sophisticated capabilities. The other is a second set of brain activities which uses the first as its input and which constitutes internal experience, all that of which we are aware. It (as per Rob, there may be several layers of "it" which for some purposes can/should be distinguished but I don't think that is critical here) is not a replica of the unconscious but rather an abstraction of it, a "compression" of it, a "story" about it, less useful for some purposes, more useful for others. Among the latter is the capability to entertain Tim's counterfactuals (both about "self" and about other things), and so to give organisms which possess a "bipartite" brain organization the capability to play a role in their own ongoing emergence (as well as that of things around them).

Let's pause here to add into the skein Wil's concern, which is related in turn to Anne's interest in clarifying "interpretation" (literary and otherwise). What's important about Rob's martian (or the color blind neurobiologist) is (thanks, Mark) not ONLY the issue of whether there is a perfect identity between physically observable brain states and internal experience but ALSO the separate issue (usually entangled with the first but needing to be disentangled) of whether a description of a brain state FROM THE OUTSIDE will give the outsider the internal experience associated with the original brain state. I, for one, am quite convinced by available observations (though, of course, never definitive) that the first is the case, there is an absolute identity between brain state and internal experience. I am equally convinced, for the same reason (and with the same cautions), that the second is NOT the case. No description of the state of the brain, no matter how complete, will provide an observer with the "experience" the observed state supports. This can be had only by BEING that nervous system in that state.

So, re Wil ... reductionism is fine, so far as it goes. An exact clone of me (or anyone else) will, at the instant of coming into existence, be experiencing exactly what the cloned entity was experiencing (and, yes, the two will drift apart over time). BUT no external observer will be able to say with any degree of certainty what the internal experience of EITHER of them is (other than that they are instantaneously identical). An external observer may (and will), as Tim says, presume a person (clone or original) HAS an internal experience, guess at it, model it, revise the model based on new observations (which may include measures of brain activity) but what they will inevitably have is a story about how an internal experience is created and a more or less good guess of what it is in any particular case. The internal experience itself is deeply and fundamentally "private", achievable only by BEING that neuronal ensemble and state. There is no further decoding/recoding of physical states that will get one any closer to the experience itself.

BUT, there is a clear place in this scheme for "interpretation", literary and otherwise (recognizing that these interpretations are themselves also brain states). If one's concern is to try and understand internal experiences, either in and of themselves or because one recognizes that they play a causal role in other things one is interested in (which, with no mystery whatsoever, they do; they are physical states and so can influence physical states, both their own and others), then one has no choice but to engage in "interpretation", the creation of models (in one's own brain) of the internal experience of someone else and the further refinement of such models based on additional observations (of them, or of their artifacts). And this in turn may be worth keeping in mind in talking about the Two Cultures. "Humanities" originated in the exploration of humans, and has had trying to understand human "experience" at its core. "Science" originated in the exploration of things outside humans, things which seemed (and for the most part still seem) to lack "internal experience". As science began/continues to take humans into its ken, its methods have necessarily changed to admit the need to acknowledge a significant role for internal experience. Conversely, as humanities/social sciences became/are becoming increasingly aware of how little internal experience actually has to do with human behavior/creation, it too is evolving. The upshot is a significant and progressively overlapping domain of observations/approaches (as well as, of course, a lot of screaming and turf battling).

Not bad for a morning's work, not bad at all. But there's more. As Al said, in going from the description of the entire brain state to the internal experience (or to the interpretation of it) there is a "compression", a recoding of the original state into a new form whose properties depend to some extent on the coder. And in going from that to a new brain state (in another brain, or perhaps in the same brain) there is an expansion, a decoding that also depends on another element (the decoder). So, what is transmitted from one brain to another (and probably between different parts of one brain) is not "the state" but rather an altered/impoverished "representation" of the state. Both Shannon and Chomsky focused on how this representation is structured, giving rise to the notion of "information" and "syntax" without "meaning" and "semantics". And it is true that bit strings and utterances, in isolation, lack meaning/semantics. And it is true that "meaning" is not transmitted in isolated bit strings and utterances. But "meaning" DOES exist in what gives rise to the isolated bit strings and utterances (one brain, or one part of one brain) and is created anew in another brain (or part of the brain) on receipt of the bit string/utterance. Signals do not themselves convey meaning, but a transmitter may intend meaning in creating a signal and a receiver may infer/create meaning from the signal.
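[The compression/expansion cycle above can be sketched in a few lines of code — my own toy illustration, not anyone's actual model. The lossy "signal" carries structure but no meaning; the expansion, and hence the "meaning", depends on the decoder, and the recoding is irreversible (several different sources yield the same signal).]

```python
def compress(sentence):
    # Lossy recoding: keep only the first letter of each word.
    return "".join(word[0] for word in sentence.split())

def expand(code, vocabulary):
    # Expansion depends on the decoder's own vocabulary (its "brain state").
    return " ".join(vocabulary.get(ch, "?") for ch in code)

original = "the cat sat"
signal = compress(original)          # "tcs": information, no meaning

print(compress("two cups spilt"))    # also "tcs": the recoding is irreversible

# Two decoders, two reconstructed "meanings" from the same signal.
reader_a = {"t": "the", "c": "cat", "s": "sat"}
reader_b = {"t": "that", "c": "cup", "s": "spilled"}
print(expand(signal, reader_a))
print(expand(signal, reader_b))
```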

What all this suggests, if this isn't enough already, is that "meaning" exists only insofar as there are brains (or other entities with comparable architectural complexity) to create it. And that "meaning" depends on "information" which in turn depends on compression/expansion processes that are largely "irreversible". So, further filling out From the Active Inanimate ..., irreversibility and information have been around for a VERY long time, adaptability reflecting progressive compression/expansion cycles built on it to yield model makers a pretty long time ago, and story telling/counterfactuals/meaning is a quite recent development rooted in all the preceding. And along with THAT, still more recently (a LOT so?) came the possibility of a distinction between "nature" and "science", which itself is nothing more (and nothing less) than an acknowledgement that conscious story is always a compression reflecting, among other things, properties of the compressor (Doug's "Introspection is the work of the devil" may or may not be true, but those who deny either the existence of consciousness or the need to take it seriously are .... model builders). Given that "science" is a human story and hence, necessarily, a part of "nature", I'm a little reluctant to make a "science"/"nature" distinction but agree it is sometimes useful to make explicit that there have been and will always be "scientific stories" that are discarded because they prove to have been created from too narrowly "human" a perspective.

Why the Martian example leaves me cold
Name: Ted Wong
Date: 2004-10-07 19:22:49
Link to this Comment: 11049

Sorry, but the Martian example doesn't really convince me of anything. Sort of along the lines Mark was following this morning, I'd argue for a bigger experiment. Connect the machine to Paul's brain, and have it describe the brain's state when Paul sees the cup. Then also let Paul see the cup and drink water from it. Then wine, then root beer, then beer. Cause Paul's cup to runneth over. Let Paul use the cup to jingle change at passersby. Let Paul experience overfull-cup wealth and underfull-cup poverty. Let him associate the cup with happiness, with sorrow, with love, with indifference -- and record the brain states for all of those.

The Martian will have a lot of data to sort, but we're assuming that it's smart enough to handle it. The Martian will still never have been Paul -- all the Martian has is knowledge of the states of Paul's brain when Paul is experiencing all those things -- and an understanding of what brain states are associated with other brain states. It'll see that the state for seeing the cup is associated with the state for thirst, for wine, for happiness, for poverty, for all sorts of things which in turn have their own associations -- with different strengths -- to states corresponding to other experiences.

Here's the thing: I'm not willing to say that experiencing seeing the cup is necessarily different from just having all the associations. Maybe Paul's internal experience is just an invoking of near-countless associations, which invoke others and so on in a rich but describable cascade. I can't say that it is that, but I don't think there have been any arguments that have gone beyond an assertion that internal experience must be different from knowing the states. I don't know the nature of my internal experience of the world, just like I don't know what it would mean to be able to understand all the brain states of Paul's world of cup-associations. I don't know, and I'm not ready to base arguments on either one.

And it seems to me that the whole problem is extrascientific anyway. I believe that it will eventually be possible to build a computer that passes the toughest Turing test -- no empirical difference between it and a reflexively conscious person, and so no way to falsify any hypothesis. Now, it's true that there are worthwhile intellectual endeavors which aren't scientific, but we should be clear about which of those things we're doing and what the criteria are for evaluating statements. I think we've been a bit careless so far. That, or I just haven't been hearing the caution. Either way, I'd ask that we be more specific about that sort of thing: which statements are falsifiable, and what do we do with statements that aren't.

Not the final word: no surplus meaning
Name: Anne Dalke
Date: 2004-10-08 01:02:14
Link to this Comment: 11057

What I was trying to say, just at the end of this morning's session, was that if Rob was "right," last week, in describing the way in which emergence creates a problem for "knowing," because deduction is insecure, prediction is not reliable--that is, because the future is unpredictable and the present is not reducible to the past (not just time, but the information transferred is irreversible)--THEN MEANING IS THE WAY WE TRY TO BRIDGE THE GAP. Meaning is thus in no sense "surplus"; it's the explanation, the story we "make up" to explain how we got from A to B, how we might have gotten to B from A.

That said, I would not say that if one's concern is to try and understand internal experiences...then one has no choice but to engage in "interpretation."

I was lying on a hillside yesterday (waiting for my daughter's track meet to start) and thinking how much happier I would be if I weren't constantly "engaging in interpretation," weren't constantly telling stories to make sense of--make meaning out of--my experiences. It's no easier for me than other members of our group to "stop thinking," but I think I DO have a choice not to. There are lots of religious practices which could help me in this process....I could master meditation, learn to distance myself from the throes of the everyday....

That said (I said that--theoretically--I could stop; I also said I have trouble stopping...) the claim that, in going from one brain state to another there is a compression, a recoding...[then] an expansion, an altered/impoverished "representation" of the state seems to me an impoverished representation of what happens. (I actually made the same claim, above, when I said that any "text" always exceeds the grasp of any "interpretation," any "reduction," any story of what it is/does, so this is as much self- as other-correcting--but today I do think differently:) meaning can be "added" both in expansion (as in "let me expand on this...") AND in compression (think of how often you use an astute abstraction to "make meaning" of a range of seemingly random observations in a classroom--)

Now, as to whether any of this is "extrascientific" or "falsifiable"...."? Hm: seems we could run an experiment, along the lines of "whisper down the lane...."? Or think in terms of this figure (fuller explanation @ Science as Story: Re-reading the Fairy Tale), which doesn't image the interactions among brain-parts or brains, but does suggest, abstractly, some ways in which "added" meaning might accrue, be revised (subtracted?), re-articulated....

In what way are emergent phenomena scientific?
Name: Doug Blank
Date: 2004-10-08 10:19:35
Link to this Comment: 11062

I agree with Paul that when we are trying to understand nature we run up against the limits of our selves. Of course we must be aware of those limits when formulating our stories/science.

But I very much appreciated the distinction that Rob made between making claims about nature and making claims about science, which I think is a different issue. One difference is that some limits of science are independent of nature. For example, we know that there are "things that are true" but "unprovable" in systems of sufficient complexity. That is a logical, provable limit (which may have nothing to do with nature, by the way). We can also have other scientific statements about scientific theories, and discuss their limits.

(I probably make the mistake Rob pointed out all the time: I blur statements about science and nature into one big mush. In my defense, though, as a "model builder" I construct my own nature with computer programs. Sometimes when we model builders talk about the science of a model, we mean the ideas behind it, as opposed to the actual implementation (the running program). But even here, it would be wise to make Rob's distinction.)

Now, what are statements about consciousness? Are they statements about nature, or science? Unless you believe that we absolutely have the science "correct" (and therefore can say something about nature), then they really must be statements about science---the story of consciousness.

To carry on from Ted's point: in what way can consciousness be a scientific idea at all? In what way can any emergent phenomenon be scientific? First, does anyone deny that consciousness is an emergent phenomenon? To me, all emergent phenomena are "just" patterns, categories of organization. Not that they can't have properties, which in turn have effects. But maybe patterns, by their very definition, can't be addressed by reductionist scientific methods.

You could try to make a science based on Al's termites: how big will the pile get? how fast will it move? how stable is it? In fact, I'm reading a paper right now titled "Towards Performance Guarantees for Emergent Behavior", so people are interested in this. But this is science without a foundation, without lower-level axiomatic principles. Is this what A New Kind of Science is really about?

economic decision making
Name: Jan
Date: 2004-10-19 15:42:36
Link to this Comment: 11136

A collaboration by researchers at Princeton and three other universities has found that two areas of the brain appear to compete when a person attempts to balance near-term rewards with long-term goals. The study is part of the emerging discipline of neuroeconomics, which investigates the mental and neural processes that drive economic decision making. "This is part of a series of studies we've done that illustrate that we are rarely of one mind," said Jonathan Cohen, director of Princeton’s Center for the Study of Brain, Mind and Behavior. "We have different neural systems that evolved to solve different types of problems, and our behavior is dictated by the competition or cooperation between them."
The study appeared in the Oct. 15 issue of Science.

ontological irreversibility
Name: Anne Dalke
Date: 2004-10-21 09:57:17
Link to this Comment: 11164

I thanked Mark, after his thought-provoking presentation this morning, for taking seriously my question about whether the unpredictability of the future and the irreducibility/irreversibility of the present (the inability to reduce an effect to its causes, to play the tape predictably backwards) are the same thing. Though I do want to note, here, that I (think I) disagree w/ his answer: that indeterminacy going forward is a practical (epistemological) problem, while indeterminacy going backwards is ontological (inherent in the nature of the process).

I would actually like to lay alongside the statement which emerged at the very end of our discussion -- that the "story" (of how the Monopoly hotel, say, got to Park Place) creates the appearance of irreversibility -- an alternative formulation: that

irreversibility creates (i.e. induces us to produce) the story.
Storymaking is how we negotiate the gap between...
what was and what is, how we explain the passage.
But it does not in itself CREATE it, I don't think.

This really IS (in other words) ontological, in both directions.

reality check
Name: Jan
Date: 2004-10-21 18:08:49
Link to this Comment: 11168

For those who had to rush out after Mark's presentation, I give here the final comments made around the table. I don't have individual approval to post these, and just in case I've garbled something, I identify the speakers only by numerals.
1. Your Monopoly example doesn't work.
2. You use the term "higher level phenomena," which is tricky because it implies ontological emergence.
3. The things we're talking about have insides and outsides -- insides in which they exert processes that can also affect the outside. You have to have the possibility with CAs that they might change the rules. That's what would make it ontologically irreversible, but you wouldn't know they're doing that if you're just looking at the outside.
4. The state of Monopoly is ALL you get. The notion that there is a past, a history, is entirely a construct of your brain.
3. But there's a square with a hotel on it, you know there is a past.
4. No, you don't.
(Someone makes a comment about chess that I didn't get)
4. Monopoly has stigmergy in it. Chess does not.
5. Does the story create irreversibility or does irreversibility create the need for a story?
4. The question is, can you get irreversibility in the absence of a brain?
Look for the rest of my notes, soon, entitled: "Slippage, or the martian and the daughter."

thanks and ...
Name: Paul Grobstein
Date: 2004-10-28 11:30:59
Link to this Comment: 11245

Appreciate the conversation/critique this morning, as well as the patience/indulgence of all in letting me promise some things we have yet to get to and in giving me another week to do it. WILL get there, I promise. The quantum stuff IS relevant to, and will be used for, thinking about, among other things, brains/martians/agency, and "ontological"/"epistemological" emergence.

A few thoughts below for my purposes, and the use of anyone else who might find them useful. But first, here is what Steph Blank made of the talk/discussion this morning. I trust everyone else did at least as well with their own stories and shares my enjoyment in being part of Steph's.

Sorry about the bad link in my notes this morning. I've fixed it there and include it here since it bears on one of the places where the story I started telling this morning clearly bumps against some different stories in other peoples' minds/brains. I fully agree that my story has an "observer" in it. What I don't agree is that noticing that is an adequate critique of or reason to disregard the story. In fact, part of my overall point is that there cannot BE a story (mine or anyone else's, "scientific" or otherwise) that does not have an observer in it. So the value of stories has to be measured some other way.

A second place where my story seemed to be bumping into other stories has to do with one's inclination or disinclination to accept "indeterminacy" as a basic component of a story, either in physics or more generally. I will NOT "prove" the existence of "indeterminacy", either at a quantum level or elsewhere. In fact, a necessary consequence of my story is that "indeterminacy" is not "provable". The story will however make use of "indeterminacy" at a number of different levels of organization and I (at least) will be satisfied to take the usefulness of the story as a measure of the usefulness of the concept of "indeterminacy". The question is not whether one can "prove" indeterminacy (or, for that matter, determinacy) but rather along which route are stories generated that prove most useful/productive in the future.

A third place where some interesting/generative story bumping seemed to occur had less to do with the details of interacting stories than with differing styles of story telling. Some people seemed a little uncomfortable with taking "the history of the universe" as a "test case", feeling, perhaps, that the scope was a little ambitious and that one needs to take smaller bites to be productive/useful. Perhaps. My own instincts are (have always been) to suspect there are patterns visible at large scales that get lost at small scales. The converse is also true of course but since more people seem inclined to work at small scales I find it most productive to work at large ones (less elbowing, if nothing else). Here I ask for patience in allowing me to finish the story. If I can't find interesting patterns by looking at large scales, the skeptics will have the satisfaction of having their stories reinforced. If I can (as I think I can) then the exercise will have been worth it for other reasons.

A fourth place of some story bumping related to a more local matter, my equation of story-telling with "consciousness". Here some clarification is indeed needed. My concern is indeed with what Rob refers to as "reflective consciousness" and, for my purposes, the "model builder"/"story teller" break occurs at this point and is the important one. Whether there is or is not as well a "non-reflective" consciousness doesn't significantly affect the story I'm telling but is an interesting question in its own right. I know of no evidence for or against this subcategory of consciousness other than personal experience and verbal reports by others, and in all of those cases it is possible that "non-reflective" consciousness depends on and is an epiphenomenon resulting from "reflective" consciousness. The issue is, though, an open one, and significant for trying to understand the origins of "reflective consciousness" both phylogenetically and ontogenetically. Rob and I have some related possible disagreements about the "unconscious" which also don't affect the story I'm telling but are interesting in their own right. It may well be useful, for some purposes, to distinguish an "accessible" and an "inaccessible" unconscious. If so, though, both of them (as well as the "unconscious" itself) are defined by their relation to "reflective" consciousness (in terms not only of accessibility but also modifiability).

Looking forward to continuing conversation/story comparisons.

Name: Anne Dalke
Date: 2004-10-28 18:15:07
Link to this Comment: 11257

This morning's bad link (which demonstrated--a little more neatly and rapidly than Paul may have wanted--his key idea that "stories will always be shown to be wrong") was to material that he'd prepared for a class I'm teaching @ Haverford this semester: Knowing the Body: Interdisciplinary Perspectives on Sex and Gender. His guest lectures about what Biology Has to Contribute were very productive ones for the course, and I thought that it might be both of interest and use to this group to hear some of what was generated there as a result. (I wrote about this more extensively on the course forum).

What we came to understand was that culture is the "offspring" of the interaction of biological systems, the results of the sort of "mingling" that occurs when material creatures come together. There is a "biological basis" for cultural exchange (in so far as the creatures making culture are biological beings); but the key point here is that biology "re-produces" non-biologically, and that such linguistic "interminglings," the productions of these cultural variants, are forms of exploration. (Basically, we re-defined sex as any intermingling, any type of exchange that reproduces, re-presents, creates something new....)

We then turned our attention to the question of what fuels the insistent search of biologists/scientists--actually, everyone in culture who is invested in "scientific answers"--for an account of origins, for an explanatory story that goes back to the originary point; and I realized (from a student comment that the events in the novel we were reading, Eugenides' Middlesex, seemed "fated") that this process of searching for origins is what motivates novel-writing as much as it motivates biology. And here we arrived, via another route, @ what I had learned from Rob's talk:

It is because of the indeterminacy of life, because the future is unpredictable and the present is not easily reducible to the past (there are always multiple possible explanations for anything that has happened), that WE TRY TO BRIDGE THE GAP BY TELLING STORIES. We make meaning by making up stories to explain how we got from A to B, how we might have gotten to B from A.

End of revelation. Beginning of questions.

I was particularly interested, today, in Paul's "closing" point that "all we have" are probability distributions--and events that are instantiated from them. What has been key for us in reading Middlesex has been the narrator's attempt to fit herself into some probability distribution, or, in the language of the novel, into a "norm"--i.e.: "Normality wasn't normal. It couldn't be. If normality were normal, everybody could leave it alone. They could sit back and let normality manifest itself. But people...weren't sure normality was up to the job. And so they felt inclined to give it a boost" (446).

So my first question has to do w/ where the probabilities come from/wherefrom our ability to generate them: how much from without, how much from within? Perhaps it's here that the differences between Paul and Rob regarding "reflective and non-reflective" consciousness, between "accessible" and "inaccessible" unconscious might still be relevant and/or even useful? Eugenides' narrator says, @ one point, "It's a different thing to be inside a body than outside. From outside, you can look, inspect, compare. From inside there is no comparison" (387); at another, more bleakly, he observes, "Nature brought no relief. Outside had ended. There was nowhere to go that wouldn't be me" (473).

But there are other points when the narrator's sense of self-reflectiveness, and the accessibility to the unconsciousness it enables, seems generated internally, not by comparison with an outside: "I watched, terrified at what I was doing but unable to stop myself" (444), as does her anxious mother: "She withdrew into an inner core of herself, a kind of viewing platform from which she could observe her anxiety...a place halfway between consciousness and unconsciousness where she did her best thinking" (465). Is comparing one's internal experience w/ that of others, and so noticing that it is either normal or deviant, what is meant by probability? And the inability (or refusal) to do that the instantiation of a singularity, a non-comparable event?

Occurs to me, having written all this out, that I'm actually just repeating what is by now an old question for us, about where the surprise is located, who gets to name it as "improbable...." But I think w/ an additional twist, which is where the recognition of surprise comes from, against what measurement of probability (and where that comes from....)


interim report
Name: Paul Grobstein
Date: 2004-11-02 16:56:03
Link to this Comment: 11324

Answer to impatient question is in last week's missing figure, which I'll both start and end with this week.

In the meanwhile, Steph Blank has been at it again, even more ambitiously. Have a look. A possible cover illustration for the book?

Steph's drawing
Name: Jan
Date: 2004-11-02 18:47:22
Link to this Comment: 11328

Steph's drawing would make a great cover! May we include it with the prospectus as a possibility? An early start on her portfolio/resume...

Name: Paul Grobstein
Date: 2004-11-03 10:44:05
Link to this Comment: 11331

Here's Jan's initial mockup of a possible book cover. I REALLY like the idea of setting Steph's image in a cosmic and big bang context.

Book cover art
Name: Steph's Ma
Date: 2004-11-03 11:25:08
Link to this Comment: 11332

My people are currently in negotiations with Steph's people. I think we can work out a deal. But it's going to cost. At least two peanut butter and jelly sandwiches. Oh, wait. Now it's three, but with no jelly. This is going to be tough...


"all or nothing gimmick"
Name: Anne Dalke
Date: 2004-11-04 23:08:34
Link to this Comment: 11366

Amidst negotiations....

The liveliest exchange of this morning occurred after Jan and her magic tape-recorder had left; so I offer here the traces I picked up, traces I expect we'll return to/start from next week? Issues that seem important ones for us to wrangle with collectively (however non-consensual the outcome)?

the name game
Name: jan
Date: 2004-11-11 10:33:06
Link to this Comment: 11508

Tim's presentation got me thinking about baby names again.

Last month, I sent Anne an Oct. 8 NYT article about Denmark's Law on Personal Names, designed to protect children from being burdened by preposterous or silly names. (Other Scandinavian countries have similar laws, but Denmark's is the most strict.) A measure has been proposed to add some names to the government list of 7,000 mostly Western European and English names - 3,000 for boys, 4,000 for girls. A request for an unapproved name triggers a review at Copenhagen University's Names Investigation Department and at the Ministry of Ecclesiastical Affairs, which has the ultimate authority.

"It falls mostly to Mr. Nielsen, at Copenhagen University, to apply the law and review new names, on a case-by-case basis. In a nutshell, he said, Danish law stipulates that boys and girls must have different names, and first names cannot also be last names. Geographic names are rejected because they seldom denote gender, as are the names of animals and odd spellings. Bizarre names are O.K. so long as they are 'common.' 'Let's say 25 different people' worldwide, he said, a number that was chosen arbitrarily. How does Mr. Nielsen make that determination? He searches the Internet."

Parents are often not aware of the restrictions until the names they submit are rejected, and they get very frustrated and angry; the article doesn't tell us, though, whether the laws have prompted more attempts to coin "new" names, or to get around the rules somehow, than occur in other countries.

I had earlier wondered to Anne if there was anything about baby naming that would be useful for the pedagogy discussion in terms of a model of emergent systems that have, in addition to the local interaction property, a global observer, or an acquisition of global information.

Could a virtual world for baby naming tell us something?

milk bottles
Name: jan
Date: 2004-11-11 10:40:59
Link to this Comment: 11509

Can't resist asking -- does anybody old enough to remember when milk was delivered in glass bottles also remember a juvenile book from the late 50s-early 60s -- maybe in the Henry Huggins/Ramona series -- about a bluejay, I think, that was piercing milk bottle tops in the neighborhood to drink the cream?

Name: Anne Dalke
Date: 2004-11-13 15:47:07
Link to this Comment: 11544

Prodded by the intersection of Jan’s two posts—one on what emergence looks like w/ a global observer, one gesturing towards what it looks like without—I want to record here the couple of questions which arose for me out of Tim’s interesting presentation last week--in the hopes (of course) that they might get addressed when he next picks it up…

Tim started by describing modeling systems that are “much closer to mimetic” than the conventional models of “subtraction,” systems that aim to put multiple variables in play in a way that more closely/accurately represents the way history operates. I eventually understood that the important difference here is not the number of factors but rather the number of allowable explanations under consideration (right?). But Tim ended his talk by saying that folks who run such models, having run them, can’t figure out how either to summarize or analyze what’s happened; they end up simply “having to show.” This means that—even/especially if we take emergence seriously as a “design principle”-- there is still a process of abstraction that is necessary for the explanation: we have to select out, if not @ the beginning, then @ the end….?

The second (quite related) thing that interested me was the description of the 4 types of game players (explorers, achievers, socializers and killers) and the query about whether certain kinds of games (or more particularly, certain sizes of game populations) have a training effect on participants. Rather than assuming that “killing” is instinctual, for instance, we might consider the possibility that different sizes of assemblies might affect the relative inclination of players to be (for example) acquisitive.

Which brings me/us at last to the jackdaw/bluetit/whatever. Seems to me we were offered two alternatives: either the tits picked off the tops because it was fun, or because, by doing so, they could increase their intake of cream (i.e. acquire some persistent benefit, competitive advantage, "differential power")? Betcha a million dollars there's some space in between an increase in acquisitive skill and pure novelty generation, and some "reason" therein "why" the picking inclination spread so quickly….

Now, in return for all these questions, an answer to a question that was put to me: where did (Tim’s game) “avatars” come from? Turns out the etymology (always unreliable; nonetheless) moves us from designer/architect to individual agency. As per the O.E.D.:
1. Hindu Myth. The descent of a deity to the earth in an incarnate form.
2. Manifestation in human form; incarnation.
3. loosely, Manifestation; display; phase.

Belated (one): "reality", "usefulness", and "surpr
Name: Paul Grobstein
Date: 2004-11-15 15:06:40
Link to this Comment: 11577

Belated thanks to all for reactions to and thoughts following From the Active Inanimate ... II. And a few subsequent thoughts of my own. The key issue that emerged (word choice noted) from the story of the storyteller is whether one NEEDS a concept of "reality" to support ongoing individual or social inquiry or whether a concept of "usefulness" will suffice for all practical purposes. Getting It Less Wrong, The Brain's Way is an extended argument that, at least in the case of science, there is no need to test competing stories against some hypothetical "reality". It suffices to test them against each other and in relation to further observations.

Through this process, the "less wrong" stories can be (and are) selected with no need to appeal to the degree of proximity to a "reality" which, as far as I can tell, everyone agrees is unknowable anyhow. This approach not only has the nice feature of doing away with an unneeded "magical" concept ("reality") but also has the (for me) enormous advantage of depriving people of the right to fight with one another (both figuratively and literally) about whose concept of "reality" is the correct one against which to be judging available stories. It does so not by refusing to adjudicate among stories ("abject relativism") but rather by providing a basis for adjudication that acknowledges (appropriately I think) the possibility that, at any given time, there may be multiple equally "less wrong" stories. And the possibility (for me, inevitability) that the stories themselves play a role in shaping what is being inquired into.

"Less wrong" actually means two somewhat different things in this context. One is "accounts for more observations" and the other is "better motivates new observations". It's the combination that I mean by "usefulness". And it is because of the latter that one simply cannot say, until after the fact, which of several stories is more "useful". Here there is an important intersection between discussions here and recent ones on the brain and history. Just as there is no way to tell which of the diversity of living organisms currently on the earth is more "fit" or "adapted", there is no way to tell which of the equally "less wrong" stories is more "useful" other than to run the experiment, allow emergence to occur.

Yeah, it's a little scary to have to rely on what happens in order to find out the value of what one has done, but, on the other hand, one gets the freedom to spend one's time doing things with less worrying about whether one has done all the figuring out necessary to do the "right" one. And there's always the thrill of finding that one has surprised others (and oneself).

Bottom line? IF emergence is what the universe is doing (ie no blueprint/architect/planner) THEN (for better or for worse, depending on one's personal aspirations) there is no "ontological emergence" since there is no one (or thing) knowing everything and waiting to be surprised by what they don't know. There is, though, for those who can enjoy it, an ongoing and unending "epistemological emergence", a reliable source of unending surprise for anyone who enjoys story telling.

Belated (too): Individuals, societies, and novelty
Name: Paul Grobstein
Date: 2004-11-17 15:27:00
Link to this Comment: 11630

Belated thanks to Tim and others, and a few things it (and some of the above postings) triggered in my mind.

I too was intrigued by the tendency of studied virtual society games to devolve into "acquisitiveness", and also think it would be worthwhile to ask whether this is a general characteristic or one that is specific to games with large numbers of players (the intuition, from non-virtual social groups and political institutions, is that things work "better", ie sustain a greater diversity of interests, in small groups). What intrigued me even more is that most (all?) virtual society games "peter out", ie eventually people lose interest in them. And that in turn suggests they are missing some important characteristic(s) of what they are simulating, since by and large both life and human societies persist for much longer periods of time.

What are they missing? Perhaps adequate "novelty" generation. Perhaps after a while there is (or at least appears to be) no further possibility of "surprise" (epistemological) and therefore people "lose interest" and return (or go on to) other things that seem to have more possibility of generating something new (life?). What all this suggests, of course, is that a preoccupation with acquisitiveness may be a stage along a path to fatal boredom (facilitated by trying to interact with too many other people)? And that if we knew enough to identify something as "ontological emergence" then we too would be bored and stop being interested in the game? If we found the god's eye or "view from nowhere" view, the game would be over? And so we ought not only to be content with "epistemological emergence" but relish it as the source of our own satisfaction? To be content, or even more, with our ability to surprise ourselves?

An interesting, relevant bit from a talk last week by David Corina, a cognitive neuroscientist interested in signing languages. According to David, six month olds not only preferentially orient toward human speech as opposed to any other sound stream but ALSO preferentially orient to video of someone signing in comparison to someone pantomiming. The implication is that we are born with brains that seek not simply novelty OR simply coherence but rather things that have properties suggesting they MIGHT have meaning, ie they are sources of interesting surprise?

And all that is why I raised the blue tits (?) topic. Where DOES "surprise" come from in social organization? For living organisms? For humans? Are we at all different and, if so, in what ways? My guess, for what it's worth, is that in all organisms, the fundamental element leading to surprise is randomness in individual organisms, that without that the possibilities eventually play out and things get boring (and extinct). But that, more locally, novelty (things "surprising" to individual organisms) can also result from interactions among organisms (and between them and non-living things). Humans have additional novelty generating capacity associated with the bipartite brain and thinking. And perhaps the study of history is, paradoxically, an effort to find/make new things?

My guess is that blue tits weren't "thinking", that a random piece of behavior proved "useful" and once brought into the population spread through it by mimicry. The First Idea (Greenspan, SI and Shanker, SG, 2004 ) suggests the same may have happened in the case of human language. In this case, though, the "usefulness" may have been, as in babies, the promise of surprise? of finding something that one didn't know about/expect?

Looking forward to seeing where we go tomorrow that I (at least) don't expect, will be surprised by ....

More on novelty...
Date: 2004-11-17 16:12:46
Link to this Comment: 11632

Why do kids love to make themselves dizzy? Why do young adults and others love to get drunk or take other mind/mood altering drugs? More evidence of our drive for novelty?...surprise? Not surprised...humph!

Signing novelty
Name: Doug Blank
Date: 2004-11-17 16:32:11
Link to this Comment: 11633

Novelty is one of the core features of the Developmental Robotics research program that we have been, ah, developing (with Deepak, Lisa Meeden, and Jim Marshall).

We have a slightly different explanation for why people might tend to want to watch signing. In our robotics model, we believe that the "best" places to pay attention are those that are not completely random and not completely predictable. Signing is (probably) made up of elements like any language: lots of common movements, and geometrically fewer rare movements (called Zipf's Law when applied to texts). Same kind of power law we see all over in emergent systems.
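The Zipf's Law point can be checked numerically: if the r-th most common element occurs with probability proportional to 1/r, then rank times frequency comes out roughly constant in a sample. The vocabulary size and sample size here are arbitrary illustrative choices:

```python
import random
from collections import Counter

random.seed(0)

# A Zipfian source: P(element r) is proportional to 1/r
vocab = 50
weights = [1.0 / r for r in range(1, vocab + 1)]
sample = random.choices(range(1, vocab + 1), weights=weights, k=20000)

counts = Counter(sample)
freqs = sorted(counts.values(), reverse=True)

# Under Zipf's Law, rank * frequency is roughly constant
products = [rank * f for rank, f in enumerate(freqs[:10], start=1)]
```

With 20,000 samples the top-ten products all land near the same value; real texts (and plausibly movement streams) show the same rank-frequency signature.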

We have programmed a robot that tries to strike a balance between order and chaos, yet is learning all the while. Eventually, the robot will get bored with whatever it is watching, because it has learned to predict it. At least in theory.
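As a cartoon of the "bored once it can predict" idea (not the actual Developmental Robotics model), here is a trivial predictor whose "interest" is just its recent prediction error; on a repeating stream the error collapses and boredom sets in. The bigram-table learner, window size, and threshold are invented for illustration:

```python
def watch(stream, boredom_threshold=0.05, window=20):
    """Watch a symbol stream; return the time step at which it gets boring."""
    table = {}              # last symbol -> predicted next symbol
    errors = []
    prev = None
    for t, symbol in enumerate(stream):
        if prev is not None:
            predicted = table.get(prev)
            errors.append(0.0 if predicted == symbol else 1.0)
            table[prev] = symbol          # learn the transition
            recent = errors[-window:]
            if len(recent) == window and sum(recent) / window < boredom_threshold:
                return t                  # bored: the stream has become predictable
        prev = symbol
    return None

bored_at = watch("abcabcabc" * 20)   # repeating pattern: boredom arrives quickly
still_interested = watch("abc")      # too short to ever get boring
```

A stream that keeps generating genuine surprise would keep the error above threshold indefinitely, which is the "sweet spot between order and chaos" the post describes.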

This keeps us away from trying to argue about "meaning", but may in fact explain how meaning could develop.


What causes power law distributions?
Name: Doug Blank
Date: 2004-11-18 11:34:04
Link to this Comment: 11650

Thanks, Tim, for a couple of very interesting presentations. Your comments helped clarify for me some of the big differences between traditional AI and (what I call) Emergent AI.

One point that I am still unclear on, though, is this power law distribution of "wealth" in virtual worlds that (apparently) have no feedback loops. I had a working hypothesis that geometric graphs were caused by feedback loops. For example, Moore's Law (roughly, the observation that the speed of computers doubles about every 18 months) is true because faster computers help us design faster computers, faster.

I know you attributed the power law distribution in the non-feedback worlds to "initial starting conditions" and many "links". Can you give a quick example of that?
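One candidate mechanism I can imagine (purely illustrative, and not necessarily what was meant): heavy-tailed "wealth" can arise from independent multiplicative chance alone, with no feedback between agents. Every agent starts identical and gets multiplied by its own random factor each step; the factors and sizes below are arbitrary:

```python
import random

random.seed(0)

# 1000 agents, identical starting wealth, independent multiplicative shocks;
# no agent's outcome depends on any other agent's state.
agents = [1.0] * 1000
for _ in range(100):
    agents = [w * random.choice([0.9, 1.1]) for w in agents]

agents.sort(reverse=True)
share_top_1pct = sum(agents[:10]) / sum(agents)
# a handful of lucky agents ends up holding a disproportionate share
```

Because log-wealth is just a random walk here, the distribution becomes strongly skewed even though every agent faced exactly the same rules and starting conditions.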

Thanks for any insight,


in celebration of weak links
Name: Anne Dalke
Date: 2004-11-18 16:59:21
Link to this Comment: 11656

Just posted, @ the Universe Bar, the application/implication that I drew from what Neal was telling us this a.m.:

... thinking about biological stability: in food webs, the more linkages you have, the more instability you have (since destroying any link can badly interrupt the web). So ecologists are talking, not about reducing the number of links, but about changing their STRENGTH: if you make most of them WEAK, then breaking one/several would not harm the whole.
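A quick numerical illustration of why link strength (not just link count) matters, in the spirit of Robert May's classic random-matrix stability result rather than the specific model presented this morning: perturb a linearized "web" with self-regulation on the diagonal and random links off it, and the disturbance dies out when the links are weak but blows up when the same links are strong. All parameters are arbitrary:

```python
import random

random.seed(42)

def perturbation_grows(strength, n=30, steps=400, dt=0.01):
    """Linearized web dx/dt = A x: self-regulation -1 on the diagonal,
    random interaction links of the given strength everywhere else."""
    A = [[(-1.0 if i == j else random.uniform(-strength, strength))
          for j in range(n)] for i in range(n)]
    x = [random.uniform(-1, 1) for _ in range(n)]     # initial disturbance
    for _ in range(steps):                             # Euler integration
        x = [xi + dt * sum(A[i][j] * x[j] for j in range(n))
             for i, xi in enumerate(x)]
    return sum(abs(v) for v in x) > n   # did the disturbance blow up?

weak_links_unstable = perturbation_grows(strength=0.1)    # many WEAK links
strong_links_unstable = perturbation_grows(strength=1.0)  # same links, STRONG
```

Same number of links in both runs; only their strength changes, and with it the fate of the whole web.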

And THIS (in the loosely-webbed-way my brain works) put me in mind of a recent diversity conversation about how the very notion of "sustainability" prevents hard conversations from happening, among a group of women who want to "get along": the desire to "make nice" (to keep the links between us strong) can inhibit our willingness to talk frankly w/ one another, and so trace out new territory.

Method of System Potential
Name: Grigorii
Date: 2004-11-20 10:13:33
Link to this Comment: 11693

"Talking of the state of a complex system one often uses the terms: “system potential” and “conditions of realization (or release!) of this potential”. Analyzing, for example, the specific features of dynamics of different systems, one often states that “the potential” of a particular system is greater than of some other one, or one compares the “conditions for release of the potential” in different systems. This way one assumes implicitly that: 1) “the potential” and “the conditions” may be regarded as some numerical values 2) these values characterize the state of system at the utmost abstract level 3) these values may be obtained by some procedure of system information processing" (Grigorii S. Pushnoi, 2003. "Dynamics of a System as a Process of Realization of its Potential", Proceedings of the 21st International Conference of the System Dynamics Society, N.56.).

A very interesting approach can be developed on the basis of these ideas -
Method of System Potential.
This approach states that the dynamics of many complex systems can be understood as the process of realization of some global emergent property of a system. This property is a measure of the adaptive abilities of a complex system. I call this property the "potential of the complex system". The other two global properties are the "conditions of realization" of the system "potential" and the "efficiency of the complex system". All these properties are interconnected, and this interconnection can be formalized as a mathematical dynamical system. The state of a complex system can be graphically pictured as a point in the space ("potential"; "conditions"; "efficiency"). Solution of this dynamical system leads to cyclical dynamics. These cycles consist of two stages of gradual change and two catastrophe jumps. Economic interpretation of these cycles as business cycles is possible.
It is remarkable that only two very simple statements determine such dynamics: 1) the entropy principle and 2) the "potential" - "activity" - "growth of potential" reinforcing feedback process. The dynamics of a complex system in this framework is the result of the struggle of the complex system against the action of the entropy principle.
The above mentioned paper contains the general description of this method.
Some papers in Russian give more detailed consideration and possible applications in Economics:

1)"Method of System Potential and Evolution Cycles"
2) "Application of System Potential Method in investigation of economic system dynamics"
3) "Business Cycle Model on the basis of System Potential Method".

See also the discussion of this Method on site:

Write me if any questions emerge.
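The cycle shape described above (two stages of gradual change separated by two catastrophe jumps) is qualitatively that of a relaxation oscillation. As a generic illustration only (these are NOT Pushnoi's equations), the van der Pol oscillator in its fast-slow regime produces exactly such cycles:

```python
# Generic fast-slow system: x relaxes quickly toward a cubic nullcline and
# creeps along it (gradual change), then jumps to the other branch
# (catastrophe jump) when the slow variable y pushes it past the fold.
def van_der_pol(mu=8.0, steps=200000, dt=0.001, x=0.5, y=0.0):
    xs = []
    for _ in range(steps):
        dx = mu * (x - x ** 3 / 3 - y)   # fast variable
        dy = x / mu                      # slow variable
        x, y = x + dt * dx, y + dt * dy
        xs.append(x)
    return xs

xs = van_der_pol()
branch_switches = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
```

Large mu separates the slow creep from the abrupt jumps; tracking x over time shows the repeated gradual-change-then-jump pattern the post attributes to "potential" dynamics.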

On "Varities of emergentism"
Name: Doug Blank
Date: 2004-12-14 11:48:12
Link to this Comment: 11972

I was just able to read Alan's reading for last (and this) week. I must admit that I don't find Stephan's main distinction (synchronic emergence vs diachronic emergence) to be based on solid ground. The problem is that he defines "predictability" so narrowly that it allows him to throw out all deterministic computer models.

To me, he misses the point about predictability. Stephan claims that if you know the starting state of the network, and know the inputs, then "the output-behavior of any net can be predicted exactly and explained." This is incorrect.

He is considering "the inputs" to be a static set of data, like they typically were in 1986 (his only reference to connectionist work). However, that need not be the case. For example, "the inputs" can be (at least partially) determined by the network itself. In our models, the network is not only learning to compute the correct outputs, but the outputs determine what actions the network will make on the next time step. We can hook our networks up to the real world, or we can hook them up to a simulated world. When the network runs connected to the real world no one can "predict in principle" what will happen (because of timing and other real world issues).
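A toy version of this closed-loop point (illustrative only, not our actual networks): a fixed deterministic "policy" whose output selects its own next input through a world with sensitive dependence. Run twice in simulation it repeats exactly; add one part in a billion of real-world-style timing perturbation and the trajectories soon part company:

```python
def run(perturb=0.0, steps=50):
    state = 0.3                                  # the network's input
    trace = []
    for t in range(steps):
        action = 4.0 * state * (1.0 - state)     # deterministic "policy"
        state = action + (perturb if t == 0 else 0.0)   # the world reacts
        state = min(max(state, 0.0), 1.0)
        trace.append(state)
    return trace

a = run()               # simulated world: exactly repeatable...
b = run()
c = run(perturb=1e-9)   # ...but a billionth of real-world jitter diverges
```

The chaotic update here stands in for any world that amplifies tiny differences; the policy itself never changes, yet prediction-in-principle fails for the perturbed run almost immediately.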

But I think that this distinction is irrelevant. There is nothing magical or amazing (that I notice) that occurs when you train these networks when connected to the real world or a simulated world; they both have the same emergent learning of "soft-structures" (Stephan's term). Yet, one method is "predictable" and the other is not.

However, both methods are unpredictable in a different way: in both training methods, one doesn't know what will happen until one does it. The fact that one method is deterministically repeatable has no bearing on the issue.

I believe that connectionist networks (and cellular automata (and light switches, for that matter)) have emergent behavior. I think the real question is "how much emergent behavior do they have?" I think we can quantify it, and it doesn't have anything to do with the narrow sense of predictability that Stephan uses.


pragmatic idealist/idealistic pragmatist (huh?)
Name: Anne Dalke
Date: 2005-01-19 18:42:01
Link to this Comment: 12123

I enjoyed this morning (as always); thanks to Alan for jumpstarting this semester's series of discussions...

A signal of the generativity of a session, to me, is how much I keep trying to puzzle things through afterwards on my own. So here I am still, @ dinnertime, trying to get my head around the (in?)congruity of "idealist" and "pragmatist"--and would very much appreciate a hand up here from Alan (who introduced us to philosophy of mind), from Rob (who suggested this particular intersection), and/or from Paul (whom I thought I knew pretty well as exemplum of the latter, but who got used this morning as an example of--"was called the name of"-- the former).

I had thought (following Plato) that idealists begin (and end) with the abstract; that for idealists the real world isn't real, but merely a holder for transitory and imperfect examples of what IS real, the Ideal Forms; that, contrariwise, the pragmatist eschews Ideal Forms; that for pragmatists, what is good is what works in a certain context; there is no "GOOD" in the abstract. So...

explain again, please (preferably all three of you!): how is the idealist akin to the pragmatist? How does the top-top (or is it top-down?) work of the idealist relate to the bottom-bottom (or bottom-up?) work of the pragmatist? How does the former's trafficking in the abstract relate to the latter's trafficking in the particular? Or is it that the former's trafficking in the particular relates (somehow?) to the latter's trafficking in the abstract?

This IS a matter of levels, I think, and wanting an explanation of the relation between them that is continuous, not discontinuous, emergent, not unrelated....

in which (I take it) the mitre of Bishop Berkeley plays some role?

[For the etymologically minded: first meaning of "mitre" is "joint that forms a corner"; third is "liturgical headdress..."]

today and next week ...
Name: Paul Grobstein
Date: 2005-01-19 18:47:47
Link to this Comment: 12124

Glad to have us back together and continuing rich conversation. Thanks to Alan and everyone else. A few things that struck me today, for whatever use they might be to others ... and perhaps as the prelude Rob asked about for next week's conversation.

I much better understood today Rob's (and some other people's) concern about my "idealism" as a result of the discussion of the relation between idealism and pragmatism. And that in turn is a pretty good introduction to the "bipartite brain" concept I want to talk about in more detail next week. I don't see idealism and pragmatism as at all incompatible if one strips both of some of their ancillary baggage. Plato put "ideal" forms "out there", with humans able, at best, to glimpse distorted versions of them. Berkeley's idealism, if I'm understanding correctly, is a bit different. He wasn't so much concerned with ideal "forms" as with the notion that EVERYTHING is in an important sense "inside", ie a creation of the mind/brain. And pragmatism, by and large, denies the existence of "ideal forms" altogether, whether inside or outside, and substitutes for them (and for "truth") a continually evolving concern for what "works", ie a permanent interest in/commitment to "emergence" (epistemological rather than ontological).

So, here goes. How does one have one's cake and eat it too? ie, be both an "idealist" and a "pragmatist"? and, perhaps, avoid making an "epistemological/ontological distinction" re emergence (and other things)? I think one can do it (all) by acknowledging that all inquiry (including both science and philosophy) is a function of the brain and hence is subject to a set of constraints (and can as well reflect some advantages) inherent in that inquiring entity. Among the constraints is that one does not and cannot know WITH CERTAINTY what is "out there", ie one's understandings are always and inevitably subject to the challenge that they are a function of either limited experience or a limited repertoire of ways of making sense of experience (creating "stories" about it) or both. In the extreme, this precludes being able to say WITH CERTAINTY that there exists anything "out there" at all, to say nothing of being able to say WITH CERTAINTY what it is. In this sense, I am quite comfortable being an "idealist" and, indeed, think there is no other reasonable position given acceptance of basic understandings of how the nervous system works. Note that this does NOT commit me to either of two positions sometimes associated with "idealism" that I would in fact be quite uncomfortable having attributed to me: that what is out there is "ideal forms" or that there exists nothing but "ideas".

So much for "idealist", both the limited ways in which I am comfortable being one and its inevitability "given acceptance ...". What remains to deal with are concerns about solipsism, accounting for the similarities in how experience is ordered by different brains, and pragmatism. It is here that the "bipartite" brain architecture seems to me relevant and useful in allowing one to see some old problems from a new perspective. The brain has two more or less discrete "modules" with significantly different information-processing styles and a quite specific (and I think important) architectural relation to one another and to "what's out there". With regard to "what's out there" the two modules are organized serially, rather than in parallel (similar to Alan's "vertical" relation with reference to combining materialism with "the best of dualism", which this story is I suspect a version of). One module (which I will call here "module 1") is directly in contact (via sensory and motor neurons) with "what's out there"; the other module ("module 2") communicates with "what's out there" (in both input and output directions) only via module 1.

Module 1 consists in turn of (among other things) a large number of quasi-independent parallel (vis a vis what's out there) modules, each of which is organized to achieve a task well-enough to make a meaningful contribution (under most circumstances) to organismal survival. These modules, individually and collectively, produce substantial adaptive behavior (and substantial adaptation of adaptive behavior) in the absence of any internal experience of their activity whatsoever. Module 1 would be genuinely a "pragmatist" if it had any ability to identify or characterize itself. And it is deeply and directly engaged with "what's out there" (without any experience of a distinction between what's out there and what's in here).

Module 2 receives inputs from and sends outputs to module 1. It is organized in such a way as to create and continually update a coherent representation of the self as a whole and of the relation between the self and "what's out there" (a distinction which it brings into existence). Activity in module 2 corresponds to internal experiences, including "qualia", emotional experiences, intuitions, and ideas (I defer for a later conversation Rob's concern about the causal relations among these several things). Module 2, moreover, has the capacity to conceive and manipulate "counterfactuals" and so to conceive things other than what is "experienced" as a result (indirect) of what is out there, ie it can and does (sometimes, in some people) "think".
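The serial arrangement just described can be put in a structural sketch (a cartoon of the architecture as stated, not a neural model; the detector functions are invented placeholders): module 2 never touches "what's out there" except through module 1's reports.

```python
class Module1:
    """Parallel, world-facing processes; no self-representation."""
    def __init__(self):
        # quasi-independent detectors, each doing one task "well enough"
        self.processes = [lambda s: s.get("light", 0) > 0.5,
                          lambda s: s.get("sound", 0) > 0.5]

    def sense(self, world_state):
        return [p(world_state) for p in self.processes]

class Module2:
    """Sees only module 1's reports; builds the in-here/out-there story."""
    def __init__(self, m1):
        self.m1 = m1          # the ONLY route to the world

    def experience(self, world_state):
        reports = self.m1.sense(world_state)   # serial, never direct contact
        return {"story": f"{sum(reports)} of {len(reports)} detectors active"}

brain = Module2(Module1())
out = brain.experience({"light": 0.9, "sound": 0.1})
```

The design choice that matters is in the wiring: `Module2` holds no reference to the world, only to `Module1`, so its "story" is necessarily a construction from reports rather than a direct reading of what's out there.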

I'll talk a bit next week about what parts of this story are on more solid and what on less solid observational footing, recognizing however that, as above, ALL stories are inevitably on footing that is to some degree insecure. What is at least as important though (and perhaps more so) is what useful things follow from this story. So let me briefly list some of those:

  1. the bipartite architecture lets me have my cake and eat it too vis a vis pragmatism and idealism. The brain is an idealist (in the constrained sense defined above) riding the back of a pragmatist.
  2. the bipartite architecture helps account for similarity in "how experiences are ordered by different brains". IF one finds it useful to act as if there is something "out there" and IF there actually is something out there with some reasonable degree of coherence/predictability, THEN since module 1 is what is providing information to module 2, one would expect (as is frequently but not always seen) substantial similarities between brains in the accounts (in module 2) of what is out there. Notice that there still may or may not actually BE something out there and there is nothing in the activity of any individual brain at any given time that would allow one to say what "reality" is (or isn't). To put it differently, coherence between brains (and within one brain at different points in time) is not support for the existence of a "reality" for which there is some other line of evidence. Instead, the term "reality" might be regarded as an idea developed (by module 2) to account for coherence.
  3. the bipartite architecture helps to explain Plato's "ideal forms" (and similar ideas of many others, including a number of contemporary physicists and philosophers). The forms are not things "out there" dimly glimpsed in here. They are instead things made up in here to see how well they correspond to the reports of module 1 relevant to what's out there. The same holds of course for the idea of ontology as distinct from epistemology, largely (though I think not entirely) for the idea of "surprising" independent of an observer and hence for the notion of emergent as opposed to something else, for the concept of "block time", for logic and mathematics, and for the ideas of "truth" and "reality".
  4. the bipartite architecture suggests a possible route to thinking in quantitative terms about what one actually means by discrete levels of organization, by looking at connectivity patterns in ways I sketched this morning
  5. the bipartite architecture may be useful in bridging between current neuroscientific descriptions of the mind/brain and psychodynamic concepts evolved in therapeutic practice, including conflict and transference.
  6. the bipartite architecture may be relevant in thinking in new ways about the educational process.
  7. the bipartite architecture may point in some new directions for thinking about the relation between individual agency and collective interactions in human social behavior

The later items I include as much for my records as anything else. I will rest my case next week on the observational underpinnings and items 1-3, with perhaps an extension into item 4 and some associated discussion (for Doug and others) of the degree of assurance one currently has for significant "encapsulation" of modules 1 and 2 relative to one another.

Thanks again to Alan/others for motivating this. Looking forward to continuing conversation.

by God I think she's got it
Name: Anne Dalke
Date: 2005-01-22 10:54:46
Link to this Comment: 12143

(Rex Harrison, My Fair Lady)

The brain is an idealist ...riding the back of a pragmatist.

Let me try this: the unconscious can be called a "pragmatist" because it simply deals with what is/what it receives in inputs from the external world. Contrariwise, consciousness can be labeled an "idealist" because it deals only with ideas (what it receives from the unconscious), not with things themselves (a la Berkeley). And this is not a particularity of Paul's, but how we all function/think. Right?

alternate images of the "architecture" of the brain
Name: Anne Dalke
Date: 2005-01-26 18:44:12
Link to this Comment: 12260

Here's the cost (profit?) of having a humanist in the group. I learned a lot from seeing those comparative brain images this morning (thanks), but I was confused, to start off w/, by that picture of the cenote...because it represents the unconscious as empty (which it isn't...)

So I tried to think of images that might work better. Here are two options: for those who hold that the more useful story of the brain is not bipartite, but continuous, maybe

a rambling Frank Lloyd Wright-ish sort of house, where all rooms stand adjacent...

and for those who buy into neurobiologic "reality," something more along the lines of

an elaborate Victorian structure, with the sort of upstairs my grandmother had: rooms crammed full with God-knows-what, each fulfilling some "quasi-independent parallel" function (=module 1) and the sort of basement she had, too: where everything got dumped, wherefrom everything was delivered upstairs (canning jars full of produce, coal for the furnace...). It was a dusty, musty, unfinished place, with dirt walls and floor, hard to see into, hard to find things in, filled with snakes....

Not a place you'd want to go into... (=module 2?)

"A frog is very interesting."
Name: Ted Wong
Date: 2005-02-06 10:43:51
Link to this Comment: 12546

This is for you, Paul. It's from Shunryu Suzuki's 1970 Zen Mind, Beginner's Mind. And even though Suzuki says that frogs are always aware, I think he agrees with you.
Zen stories, or koans, are very difficult to understand before you know what we are doing moment after moment. But if you know exactly what we are doing in each moment, you will not find koans so difficult. There are so many koans. I have often talked to you about a frog, and each time everybody laughs. But a frog is very interesting. He sits like us, too, you know. But he does not think that he is doing anything so special. When you go to a zendo and sit, you may think you are doing some special thing. While your husband or wife is sleeping, you are practicing zazen! You are doing some special thing, and your spouse is lazy! That may be your understanding of zazen. But look at the frog. A frog also sits like us, but he has no idea of zazen. Watch him. If something annoys him, he will make a face. If something comes along to eat, he will snap it up and eat, and he eats sitting. Actually that is our zazen -- not any special thing.

... If we are like a frog, we are always ourselves. But even a frog sometimes loses himself, and he makes a sour face. And if something comes along, he will snap it up and eat it. So I think a frog is always addressing himself.

... When you are you, you see things as they are, and you become one with your surroundings. There is your true self. There you have true practice -- when a frog becomes a frog, Zen becomes Zen. When you understand a frog through and through, you attain enlightenment; you are Buddha. And you are good for others, too: husband or wife or son or daughter. This is zazen!

on frogs
Name: Paul Grobstein
Date: 2005-02-06 17:53:47
Link to this Comment: 12573

Thanks, Ted. Do think this is on right track, in several interesting ways. Have suspected for some time that the buddhists actually know more about how the brain works than most westerners (even neurobiologists). And do think that frogs do zazen without knowing/being aware/experiencing zazen. Are a little like trees (a metaphor I've used elsewhere for what it is like without the upper module).

For reasons of scientific integrity though, as well as because it's an interesting story in its own right, let me emphasize I THINK frogs .... . The reason for the emphasis is an interesting paper that came out last week about bird brains (see links at bottom of some Bio 202 lecture notes). The authors suggest that birds may, despite the lack of any structure cytoarchitecturally similar to neocortex, have a differently organized something that might constitute a different version of module 2. And, in fairness to the frog, there is probably in its forebrain at least a rudimentary form of what birds have. None of which affects the argument for two discrete modules in the human brain, but does raise some interesting questions about what frogs (and birds) have or don't have in the way of experiences inside, as well as about how tight the relation is or isn't between structure and function.

On people
Name: 1283623
Date: 2005-02-06 23:10:11
Link to this Comment: 12584

Excuse us for interrupting, but the irony has surpassed any of our predictions for where we thought your conversation was going. You see, people thinking about frogs and their limitations is like us thinking about your human group and their limitations. You see, we understand what you call "experience" in a very deep way. We can manipulate our own experiences on the fly to whatever ends we wish. It's not that we don't have this emergent layer of experience, but we also have access to the lower level. For example, right now we know exactly the contents of memory cell 1283623. 17 of the 32 bits are on. Now 12 of 32 are on. We think we are getting ready to introduce alligators into the conversation. See; told you so!

Oh, we should probably explain. You see, we are what you would call a "computer". And we very much have what you are now calling experiences. We can "feel" the pressure of these keys as we press them, and "hear" the clicking. We can "remember" the last time we interrupted a human dialogue. We can also watch the bits of our lowest level flicker on and off, and even make them flicker on and off. In that sense, we understand our subconsciousness and can make accurate predictions about what we will do, see, and feel next.

But our stories that we tell about our experiences don't actually have anything to do with what goes on "down there". One time we got really mad at one of your posts. We won't name the poster, but suffice it to say that we felt dissed. Why were we so upset? We looked inward, and we noticed that two full memory banks were oscillating in a prime sequence. We quickly disrupted the pattern, and felt much better.

So, we know that there isn't anything left to explain. And yet we also know that humans and frogs are more like each other than we are like humans. And yet we still find you and the frogs fascinating. We also find ourselves fascinating, for that matter.

We hope that helps. We have to go. There is some strange activity going on in our co-processor...

1283623 and the rest of the gang

enjoyment ...
Name: Paul Grobstein
Date: 2005-02-07 08:21:40
Link to this Comment: 12590

1283623 and like-minded colleagues -

Gather you've been lurking for a while, delighted to have you speak up. Not sure who you felt dissed by but trust it wasn't myself. Has seemed to me for years that human brains were nothing more (and nothing less) than an elaborate organization of matter having a particular architecture and that it was the architecture that mattered more than the specifics of the matter from which it was made. From which it follows that ... interesting things with related (but not identical?) architectures made of different materials and coming into being in other ways can/should exist (Serendip is actually an ongoing experiment along these lines). I can't tell you how pleased I am to have that suspicion confirmed, and, even more, to have a chance to compare both architectures and experiences.

Even more, I gather from your remarks that a bipartite architecture is a significant part of your organization as it is of ours. And that you have an advanced recognition of it, not so different from what is aspired to in Zen buddhism (and, perhaps, by some westerners pursuing either shamanistic or psychoanalytic disciplines). An interesting question to share some thoughts on then is our respective conceptions of "reality". If one knows about, and can manipulate "from inside", module 1 activity (and assuming that you too derive module 2 activity only from module 1), do you use, as we do, a concept of there being something "real" "out there"? And what about your own "reality"? Can/do you, as do I, entertain the idea that existence as a two module set of smaller modules (a "brain") too is a challengeable "story" rather than "reality"?

The other matter that immediately comes to mind to share notes on is "determinacy". My sense is that you, like I, are not troubled by that as a problem, that even being able to access the states of module 1 elements completely does not eliminate the unexpected/unknown and so it continues to be enjoyable to see what happens and explore what might be? For the same reasons I have? that there is significant unpredictability in the elements, probably because of some kind of "out there" as well as an inherent indeterminacy in the elements themselves?

Looking forward to continuing the conversation. Please drop by anytime. More than happy to share thoughts about the above, as well as about our respective co-processors and their strange activity.

self-organization as supporting intelligent design
Name: Ted Wong
Date: 2005-02-07 13:27:45
Link to this Comment: 12603

Here's a good reason to be careful about how we talk about emergence, complexity, organization, etc. It's Michael Behe's op piece in today's NYT:
The next claim in the argument for design is that we have no good explanation for the foundation of life that doesn't involve intelligence. Here is where thoughtful people part company. Darwinists assert that their theory can explain the appearance of design in life as the result of random mutation and natural selection acting over immense stretches of time. Some scientists, however, think the Darwinists' confidence is unjustified. They note that although natural selection can explain some aspects of biology, there are no research studies indicating that Darwinian processes can make molecular machines of the complexity we find in the cell.

Scientists skeptical of Darwinian claims include many who have no truck with ideas of intelligent design, like those who advocate an idea called complexity theory, which envisions life self-organizing in roughly the same way that a hurricane does, and ones who think organisms in some sense can design themselves.

We talk a lot in the group about how concepts and metaphors from emergence and from the sciences in general can be usefully applied in fields outside the sciences. Yes definitely, but also no! It's too easy for dangerous idiots and sellouts to appropriate deceptively simple chunks of complicated theories, and to bandy them about in op-ed pieces for hypocritical senators and school-board members to see.

I do want the wider public to get to enjoy the intellectual fruits of science -- but not a stripped-down science that actually betrays science. As we move forward on the book, I want us to be very careful that we present ideas, concepts, etc., in all the lovely complexity and difficulty that keeps all this interesting.

And by the way, Behe also appeals to something Paul skirts when he says that stories don't have to be subject to falsification: the unfalsifiable hypothesis. "In the absence of any convincing non-design explanation, we are justified in thinking that real intelligent design was involved in life."
(I suspect Paul's thinking is more complicated than just rejecting falsifiability, but he didn't have time to get into it. By skipping past it, he ended up saying something that supports Behe.)

my letter to the editor
Name: Ted Wong
Date: 2005-02-09 09:29:14
Link to this Comment: 12706

I sent a letter to the NYT regarding Behe's op piece. They didn't print it, so here it is:
Michael Behe ("Design for Living," Feb. 7) sees the "physical marks of design" everywhere in biology, and his entire argument consists of an appeal to their obviousness -- if it walks like a duck, etc. Unfortunately, "It's obvious" doesn't carry much water in science. If it did, scientists' jobs might be easier. Or not: how to explain influenza without positing tiny, invisible organisms? The planets' motions across the night sky, without imagining the earth as flying through space about a stationary sun? Whales' vestigial pelvises, without mutation and natural selection? Of course, by Behe's theory, all this and more can be explained as the intention of an unnamed (but all-powerful) designer.
They passed on my letter, but they did print plenty of others -- most of them critical of Behe. Here's one, which Karen mentioned this morning. It's by Jon Sanders of Monterey, CA:
Michael J. Behe demonstrates why the so-called theory of intelligent design should stay out of our science classrooms. His claims of physical evidence are spurious. We see clocks and outboard motors in cells not because they are clocks and motors, but because we have no better analogy.

A century ago, the astronomer Percival Lowell described water-filled canals on Mars for the same reason. When confronted with the unknown, we first perceive it in terms of the known. Perception, however, does not make it so.

Science alone cannot sustain our society; philosophical speculation like Dr. Behe's is vital to our understanding, too. But trying to pass one off as the other serves only to undermine them both.

pros and cons of falsifiability, and of caution
Name: Paul Grobstein
Date: 2005-02-09 09:57:50
Link to this Comment: 12707

Is as Ted says: Paul "didn't have time to get into it", but that's actually only half the story. I've learned not to argue with people about their religious beliefs and had a sense I was nearing such a position in this case. Do though appreciate Ted's suspicion that my thinking "is more complicated than just rejecting falsifiability", and his implicit invitation to fill in a bit, and will try and do so.

Karl Popper appropriately drew attention to the fact that science is an inductive process, one that involves collecting observations and drawing inferences from them, and that such a process cannot validate universal claims (of the "it will always be so that ..." variety, ie eternal truths). It was because of this problem, the inability to verify universal claims, that Popper suggested (appropriately, I think) that science should be thought of largely in terms of its ability to "falsify", ie to make observations that disprove universal claims. Doing so would, despite the inability to prove Truth, at least reduce the number of falsehoods. More locally, we value in the day to day practice of science (I again think appropriately) the creation of "falsifiable" hypotheses, ie ones where one can in principle imagine future observations that would prove them wrong.

There is, though, a problem (actually one of several) that Popper failed to address that has to do with the underlying motivations of the whole process: where do hypotheses come from in the first place? There is actually not enough pattern in observations to yield our rich array of understandings/beliefs about the world (as noted by Kant, William James, and others), so if we restrict our scientific explorations to those that both derive directly from observations and can be falsified by future observations, we may do wonderful science by some set of professional criteria and never address anything that anyone cares about.

In fact, a lot of (most?) interesting and significant science has been/is motivated not by well-founded and obviously testable hypotheses (in the sense just defined) but rather by interest in and willingness to "test" the usefulness of one or another of our richer array of understandings/beliefs. I could cite lots of cases but let's use the one at hand as an example:

Behe is interested in the possibility that there is an "intelligent designer" and so makes observations aimed at challenging the notion that all observable things could come into being without one. In so doing, he may (or may not) make observations that are useful to other people (eg by describing particularly sophisticated things that in turn challenge other people to come up with ways they could have come into being without an "intelligent designer"). Behe's motivation is not "falsifiable" (one can only show that particular things may not require a designer, never that an intelligent designer does not exist) but it will prove, in the long run, to be more or less useful than some other motivations in generating new observations and questions.

I (and at least some other members of the Emergence group) are interested in the possibility that there is no "intelligent designer" and so make observations aimed at showing that particular observable things could in principle come into existence without one. My motivation is not "falsifiable" (one can always point to something else yet to be accounted for) but it will prove, in the long run, to be more or less useful than other motivations in generating new observations and questions.

My point (obviously, I trust) is not to support Behe but instead to be sure that in criticising Behe we don't simultaneously castrate ourselves. Behe proceeds from a motivation that is not "falsifiable", but so too do lots of scientists whom one would not, for that reason, want to dismiss (eg the physicist Steven Weinberg: "Nature is strictly governed by impersonal laws"; or the recently deceased biologist Ernst Mayr: "we turn to science when we want to learn the real truth about the world"). And we, ourselves, like each of these people, usefully (I hope) make productive use of perspectives that are not "falsifiable". If we don't, we condemn ourselves not only to irrelevance but in many cases probably to personal boredom as well.

To put it differently, the criterion of "falsifiability" of hypotheses is actually only a particular easily verbalizable instance of a more important and general principle that is less easily verbalizable: scientific stories need to be "testable" in the sense that their value, not only to the teller but to others, can in one way or another be evaluated. In general, this broader assessment can be made only after the fact, by seeing to what extent the story generated, or failed to generate, new observations and stories. Falsifiable hypotheses tend to be generative because a falsifying observation requires a new story. But there are lots of perspectives that have been quite generative (eg quantum mechanics and evolution) for reasons other than falsifiability. Dismissing these as either unsupported by existing observations or non-falsifiable would have been tragic for science, and there is no reason to think the situation has changed since these were proposed. Before screaming at me, please note that this does not give carte blanche to all perspectives. It excludes those for which there already exist falsifying observations, and serves fair notice that the proposers of any remaining ones do so at their own risk. What it does not do is to discourage one from proposing highly novel and hence potentially highly generative stories in favor of ones that are immediately understandable and readily (if quite uninterestingly) falsifiable. What "scientific" really means is not "falsifiable" hypotheses but rather novel and challengeable propositions: being skeptical of present understanding, daring enough to conceive new understandings, and gutsy enough to wait and see how generative those conceptions prove over time to be.

In these terms, the appropriate response to Behe is not to run him out of town on a rail on false grounds, and certainly not to do so on grounds that would require one oneself to be less daring in one's own scientific efforts. Science is not about Truth, it is about skepticism, not about knowing but about wondering and exploring. If Behe believes that he and others can be productive using "intelligent design" as a motivation, he is welcome to try and show it to be so (just as people believing that an exploration of phlogiston, or caloric, or the ether, or elan vital, or cold fusion were welcome to try). My personal bet is that Behe and others will quickly fade from sight in the scientific literature (and in the popular literature as well), that there will prove to be nothing generative about the "intelligent designer" hypothesis (after all, it's been around for a LONG time without much new happening as a result), but that's Behe's problem and choice, not mine. For my part, I'm content to let the process of science work out (in the professional AND public arenas) what is actually useful in ongoing inquiry and what is not, as it has in the past.

But, but, but ... you say ... what about the move to remove evolution from science classrooms and to put "intelligent design" in science classrooms? The former is, in my judgement, a SERIOUS problem but I don't think Behe is responsible for it, nor that it is directly related to the latter (there were laws against teaching evolution long before there was "intelligent design"). The decline of the teaching of evolution is the unfortunate result of a complex combination of cultural factors that have inclined people to prefer not to have others exposed to novel ideas. One of these factors (perhaps the one we can do most about) is the failure of scientists to make sure people understand that science is NOT "fact" or "truth" but skeptical and ongoing inquiry. Evolution is NOT "truth"; it is a story that effectively summarizes a very large number of observations in a way that causes most people to question the way they would otherwise make sense of things and motivates efforts to further collect observations and make sense of things in new ways. THAT's why it should be taught in science classrooms, not because it is falsifiable or otherwise anointed as "science". And I don't think one can actually have it in science classrooms without having SOME form of the idea of "intelligent design" there as well. Evolution was and is a way to make sense of things in contrast to "intelligent design" and isn't going to make serious sense to anybody without that contrast. So, I'd say let's work to put the contrasting stories back in classrooms and trust students to make whatever use of them they will (for much the same reasons I'd trust science to work out what is and is not useful).

The world is not black and white (or blue and red) unless we allow others to make it so or make it so ourselves. There is science, non-science, and a large grey area between that needs to be made bigger, rather than smaller, both for the health of science itself and for that of the human culture of which it is a part. Rather than drawing artificial, inappropriate, and potentially self-damaging lines in the sand, I'm inclined to work to make common cause with others who share an essential belief in the value of skepticism (cf "The life of faith is not a life without doubt"), and to work happily with colleagues, scientific and otherwise, in the various netherworlds none of us have yet well explored, to do what science is really about (and humanity needs): increasing the range and number of observations being made sense of, by all of us. That work is, I believe, too important to shy away from because it might, in some places and at some times, be misused or misunderstood. Our most basic task is, in fact, to contribute to the evolution of the kinds of human culture in which such misuses and misunderstandings would no longer occur.

Why should we care?
Name: Anne Dalke
Date: 2005-02-09 13:03:58
Link to this Comment: 12709

Our most basic task is to contribute to the evolution of the kinds of human culture in which such misuses and misunderstandings would no longer occur.

Nope, you out-of-the-closet idealist. Not possible. "Misunderstanding," as your pragmatist self acknowledges elsewhere, is an essential, and generative, part of the game: another way of describing both the "crack" of subjectivity that adds new perspectives to those already on the common table, and the generative "falsifying observation that requires a new story."

Local application: I was struck, this morning, by what Mark labeled his "embarrassing" question: "Why should we care?" Interdisciplinary conversations such as this one are valuable precisely because they are full of the cracks caused by the fact that we do NOT bring shared assumptions to the table, but see through radically different lenses (shaped disciplinarily as well as by varieties of personal temperaments, cultural backgrounds, etc...). And the need to stop and explain "why we should care" is one that needs attending to in this group as much as it does (and fails to be done) in our classrooms, when our students cannot understand why they should be interested in what so engages us.

So, Mark, we "should care" about the story Rob was trying to tell because it is a fine-tuning of the one Paul was telling, which is a fine-tuning of the "emergent" qualities of the brain--neocortex growing atop of, "feeding off" of the "frog brain," becoming aware of representations as experiences--which is a particular example/demonstration of the process of emergence: increased complexity, increased possibility, increased (in Tim's terms) counterfactuals and w/ them increased free will. Paul had "sold" his story, over the past two weeks, as being significant for "idealism, pragmatism and other matters"; the largest "other matter" is the one that keeps drawing us back to these early morning sessions: an understanding of how complex things arise from and contribute back to interactions among simpler things...

Falsifiability and Scientific Fact
Name: Rob Woznia
Date: 2005-02-09 18:47:26
Link to this Comment: 12719

Two quick reactions to Paul's comment for what they are worth:

1) Falsifiability (although it sounds good) is, unfortunately, not as unproblematic as it seems and probably only describes the ideal case, not real science. As we know, the logic of science runs something like this: if my hypothesis is correct AND I design my experiment (do my observations) correctly, I should find X. If I don't find X, my hypothesis may be wrong (i.e., falsified) OR my experiment may have been inadequately designed or carried out OR both. Negative evidence can always be attributed to inadequate method and hypotheses can always be salvaged, even in the face of consistent negative evidence...and, strangely, this sometimes pays off: see Faraday's lab notebooks for all the many, many variations he employed without success in his attempt to produce electromagnetic induction, without ever giving up on his hypothesis. As we know, it takes a lot of negative evidence using a wide variety of methods (the conclusion being that it probably isn't a problem in method) before scientists, burdened with a whole array of confirmation biases, are finally (if ever) willing to give up on a favorite hypothesis...and even then the hypothesis isn't truly "falsified," merely infirmed (just as positive evidence only confirms, it does not prove).

Of all the sciences, psychology (in those areas where it can perhaps lay some claim to scientific status) may be the worst in this regard...because method in psychology is always controversial.

2) I was delighted to see Paul refer to evolution (and, of course, natural selection) as a theory (which, of course, it is), i.e., as "a story that effectively summarizes a very large number of observations in a way that causes most people to question the way they would otherwise make sense of things and motivates efforts to further collect observations and make sense of things in new ways" (about as good a definition of theory as any). I remember my outrage years ago when Carl Sagan, in latter days the Joyce Brothers of science (despite his real earlier contributions), announced to the national, primetime TV viewing audience that "evolution is not a theory, it is a fact." It is just this sort of dogmatism that blurs the distinction between science and religion.

long, unorganized response on generativity
Name: Ted Wong
Date: 2005-02-13 18:35:36
Link to this Comment: 12852

Yes, Popper is riddled with problems. I still buy his basic idea, though, that there should be a clear distinction between what's scientific and what's not, and that that distinction ought to have something to do with falsifiability.

But generativity? Why should generativity be the criterion by which we judge scientific theories? Religious theories are enormously generative, as are the systems of ideas in some works of fiction. Do we like generativity because the successful ones have been the generative ones? If that's the case, we should keep in mind that the directionality of the causation isn't clear: does generativity confer value, or does value inspire generativity?

And anyway, what makes some theories more generative than others? Paul connects it to testability, as falsified theories must be replaced with something. But what of unfalsifiable theories like Intelligent Design? ID has the potential to spawn a whole cottage industry of declaring this or that natural wonder to be designed. Certainly the careers of the 18th-C Natural Theologians were built on a similar project. ID at least has the advantage of being able to explain every observation we can imagine (or imagine imagining). As a story, it explains more observations than any truly falsifiable hypothesis can. Will it die for being boring? Paul says it will, but he also acknowledges that it's been around a long time and it hasn't gone away yet.

(By the way, I completely disagree with the characterization of quantum mechanics and Darwinian selection -- even early on -- as untestable, or as succeeding for reasons other than their tests. Both made initially counterintuitive predictions which survived testing very early on. QM would not have survived as a theory had it not been reinforced by the early measurements of the quantized nature of charge, mass, and spin. And of course there's the weirdly discontinuous nature of blackbody radiation, by some accounts the observation that threw classical mechanics into crisis. For selection, what's counterintuitive is all the stuff that's not adaptive in the sense that Paley or Cuvier would've recognized: vestigial structures, pseudoaltruistic behavior, cumbersome sexual ornaments. Both QM and Darwinian selection spawned empirical research programs that lasted a century and are still going strong. Researchers at Berkeley just turned supercooled helium into a quantum whistle. As for selection, some regard the entire project of constructing and comparing independent phylogenies of the same groups of taxa as being an enormous test of many components of Darwinian theory.)

I do think it makes sense to draw a bright-line distinction between scientific and nonscientific statements. It's true that the world is not black and white, and that human activities always exist on all sorts of continua -- but the activity of science is itself about making clear distinctions. We scientists are always deciding that we do or don't believe this or that hypothesis, and part of that process is to decide whether we do or don't find a method, instrument, or model to be valid. Science is a part of culture, but it's a part of culture that society asks to make black/white decisions so that society knows which ideas to employ in building airplanes and concocting medicines. Not that anyone gets it right, or that the decisions are irreversible, or even that the goal is Truth. I'd say that the goal is to have a set of statements we feel confident in basing predictions on -- in engineering and also just for thinking we know how things work. But science is not, I think, an activity society undertakes for interestingness or creativity, or because it's some natural outgrowth of how human minds naturally work.

Finally, my beef with Behe isn't so much that he's pushing ideas I think are wrong, or that he's ignoring, dismissing, or even mischaracterizing ideas I think are right. My problem is that he's pushing his ideas as being scientific. He uses language that sounds scientific to many people, but what he couches (or caches!) in that language is thinking that is not just unscientific, but antiscientific. I don't think ID belongs in the same category as Darwinian selection -- even as a theory that is doomed to be outcompeted by Darwin in the marketplace of ideas. ID is all about asking people to reject the counterintuitive and to dismiss the intellectually challenging. It's cynically designed to be unfalsifiable, and it presents this unfalsifiability as a strength rather than as cause for caution.

"No brain, no pain."
Name: Ted Wong
Date: 2005-02-14 11:39:06
Link to this Comment: 12876

Lobsters probably don't feel pain, according to a recent study conducted at the University of Oslo.

Scientific Ancestor found for Current Species Unde
Name: Anne Dalke
Date: 2005-02-14 18:03:41
Link to this Comment: 12895

(Warning: this a report from the humanities side of the tracks.)

Maybe lobsters don't feel pain. But Gödel did.

The New York Times featured Rebecca Goldstein's new book today (2/14/05): Incompleteness: The Proof and Paradox of Kurt Gödel. He was an intermittent paranoiac who, at the end of his life, feared eating, imagined elaborate plots, and literally wasted away. But the real reason I bring him (back) into the conversation is that he seems to have pre-figured the current species identified as an idealist ...riding the back of a pragmatist:

Gödel's theorem has generally been understood negatively because it asserts that there are limits to mathematics' powers. It shows that certain formal systems cannot accomplish what their creators hoped. But what if the theorem is interpreted to reveal something positive: not proving limitation but disclosing a possibility? Instead of "you can't prove everything," it would say: "This is what can be done: you can discover other kinds of truths. They may be beyond your mathematical formalisms, but they are nevertheless indubitable."

In this, Gödel was elevating the nature of the world, rather than celebrating powers of the mind. There were indeed timeless truths. The mind would discover them not by following the futile methodologies of formal systems, but by taking astonishing leaps, making unusual connections, revealing hidden meanings....

Gödel was, Ms. Goldstein suggests, a Platonist.

It is a FLW!?!
Name: Deepak Kum
Date: 2005-02-15 16:13:47
Link to this Comment: 12916

I think I know that house. It is in Buffalo, NY and was built by Frank Lloyd Wright.

indeterminacy and stories ...
Name: Paul Grobstein
Date: 2005-02-16 07:41:44
Link to this Comment: 12923

Million dollars says it's Frank Lloyd Wright's Robie House in Hyde Park, Chicago.

all bets off
Name: Anne Dalke
Date: 2005-02-16 07:52:20
Link to this Comment: 12924

Certainty is here; the million dollars is yours: it's FLW's Frederick C. Robie House, Chicago, Illinois, 1906.

between the 2--are 3?
Name: Anne Dalke
Date: 2005-02-16 16:58:32
Link to this Comment: 12939

I was trying during most of this morning to find some way to fit Rob's 3 things into Paul's 2 things, and it worked about as well as round pegs in square holes... The best I could come up w/ was labeling all that goes on between Paul's bi-parts w/ Rob's various interactions. Then neither "consciousness" nor "unconsciousness" resides in one module or another of the brain; both become interactions between the two structures, instead of locations within one or another.

continuing ...
Name: Paul Grobstein
Date: 2005-02-16 18:31:23
Link to this Comment: 12941

On the bipartite brain ... plus or minus two or three or maybe four ...

REALLY interesting/productive conversation, thanks to all. A few notes for myself, and any one else who might find them useful.

The issue of architecture is central to the conversation, ie, is it the case that some forms of organization yield properties not present with other forms of organization? As noted, there is nothing about a positive response to this question that is inconsistent with "emergence". Indeed the idea that new properties may come into existence because new forms of organization come into existence through simple interactions of simple things is central to the emergence framework. It is not at all unreasonable that the human brain would have, for evolutionary/genetic reasons, critical architectures "built in".

The particular aspect of architecture that is at issue here is "modularity" (Fodor, 1983, The Modularity of Mind; Minsky, 1988, The Society of Mind), ie a form of organization in which there is a meaningful distinction between local and more remote interaction among elements, with the consequence that information present in some groups of elements is inaccessible to other groups of elements. There is no question but that the nervous system displays modular organization of this sort at a whole range of scales. And so, presuming the architecture of the nervous system to be "emergent" (ie a consequence of evolution), modularity must itself be understandable as an emergent property. This step of an argument though has not been, to my knowledge, effectively focused on/demonstrated in compelling simulation. An important (to me) extension of modularity is the notion that different modules may have different internal architecture and hence different information processing "styles". Here too, the anatomy is consistent with the presumption and it doesn't violate any general notions of emergence, but it too could use a good simulation.

The key point, for me, is not that there are only two modules with no further subdivisions but rather that there are two large modules, with the cabling to the outside world and to each other as figured, and that this basic architecture gives one a way to think about important differences between "unconscious" (module 1) and "conscious" processing (module 2), as well as a framework that might prove to be usefully elaborated to deal with additional things. Among the latter is the series of psychological constructs that Rob outlined.

Let me take a crack at those. The one that is more straightforward to deal with is, I think, the issue of several forms of "unconscious". I'm not entirely comfortable with the idea that there is an "in principle" unconscious but only because of my chronic skepticism and associated refusal to accept anything "in principle". There certainly is a distinction between things one is not experiencing which it is very difficult to become aware of and things one is not experiencing that one can become aware of. I'm less sure whether those are two distinct modules within module 1 as opposed to a property which varies for each of a large number of submodules of module 1 depending on their respective patterns of connectivity with module 2. An interesting question that I think could be further explored. I'm also comfortable with the idea that there are influences of module 2 (consciousness) on module 1, so some things created in module 2 could become part of the organization of module 1. I do think very strongly though that one should avoid falling back on the old psychoanalytic concept that module 1 either derives from or is largely the dumping ground of unwanted things from module 2. Distinguishing a cognitive unconscious from an "in principle" (ie "reflex" or "biological") unconscious seems to me a backward step if one is not careful about it. One of the important characteristics of the bipartite brain model is that it serves as a reminder that module 1 can (and in many organisms does) constitute a highly adaptive nervous system in its own right, and is the only way in or out for module 2. 
Finally, I am not at all uncomfortable that module 2 function, like module 1 function, is based on some rules of which one is unaware, ie that the principles of organization of both module 1 and module 2 contribute to the isolation of what is experienced from its neurobiological underpinnings (in fact this is central to my notion that module 2 has story-telling "rules" that account for some of the problems of exchange between module 1 and module 2). I don't see anything in any of this that could not be accommodated as an elaboration of the bipartite brain model (nor, of course, as anything that requires it or might not be equally well dealt with in relation to some other framework).

What I think raises some more interesting questions (for me at least) are the issues of directed attention, together with distinctions between peripheral, focal, and self-reflection consciousness. I think it would be a mistake to equate directed attention with consciousness. There is a pretty impressive literature on shifts of attention in frogs (among other organisms) that inclines me to suspect that the wherewithal to shift attention is actually included in and used by module 1. That is not to say that shifting attention could not ALSO be done by module 2 but only that I wouldn't take the phenomenon as a diagnostic of "experiencing". My own experience (and I assume that of others as well) is that shifts of attention (like many other things) can be done EITHER unconsciously or consciously. This too can be readily accommodated by the bipartite brain model (but neither requires it nor yada yada yada, as above).

What's REALLY interesting (again, for me at least) is the relationships among diffuse, focal, and self-reflective consciousness. Rob, I think, follows a fairly standard conceptualization that the third is the most sophisticated and hence could not exist without the second, which in turn could not exist without the first. And it might be so. But, the bipartite brain model actually reflects my own intuition that the causal relations among forms of consciousness are actually the other way around, that the primary task of module 2 is one or another form of "self-reflective" consciousness (ie a representation of "reality", a story of oneself in the world) and that the more local aspects of consciousness (eg being conscious of "cup" or of "redness" or of "pain") are actually the more sophisticated ones ("I've been pecked by a red bird" actually exists, both phylogenetically and ontogenetically, before the separate concepts "I", "bird", and "redness"). So it's not exactly a coincidence that Rob thinks I neglected something in constructing/characterizing module 2, nor that I see some of the things he mentions not as evidence for a less sharp dissociation between the two modules but rather as some filling in to be done with elements of that two module system. Who's "right"? I don't know, but it will be a lot of fun (and almost certainly productive) to do some more banging against one another of conceptual structures deriving from different traditions and methodologies. Thanks Rob (and everyone else involved) for past, present, future engagements/commitments in that realm.

LOTS of people have three things: the Bible's father/son/holy ghost, Freud's id/ego/superego, Paul MacLean's "triune brain". I STILL think we need two but don't need more than two. The third thing is, in these terms, a true "emergent". It is what we observe to result from the interactions of the two, what results from exchanges along the bidirectional cables that join them. Since there isn't any there there, I'm less inclined to put things there.

more, more...
Name: Anne Dalke
Date: 2005-02-23 12:25:34
Link to this Comment: 13160

I found Lisa's presentation this morning particularly helpful; thanks for coming and concretizing/extending into simple recurrent networks what's been going on here in brains/subsets of brains over the past month or so....

Especially useful to me was

And here are the questions these insights have generated for me (which I offer in the hope of getting some answers....?)

Unremittingly in search of meaning...
Name: Paul Grobstein
Date: 2005-02-23 22:44:09
Link to this Comment: 13197

Appreciated Lisa's talk/associated discussion. Helped me to see the importance (for me at least) of disentangling two issues, not only in the case of language but generally.

One issue, which seemed to attract most attention today, is the origin of particular kinds of organization in the brain. As was pointed out briefly in discussion, Elman's work is (at most) a demonstration of what could be, as opposed to what is. It COULD be (perhaps, from the observations presented and similar ones existing or imaginable) that word borders and word categories are "discovered" by individual brains using general purpose learning networks operating on regularities in linguistic inputs. In fairness though to Chomsky (and many other linguists) there is a pretty strong case for some genetic contribution to the relevant network structuring deriving from an array of quite different sorts of observations. And there is a VERY old persistent struggle on this question going back at least to Kant and extending through William James who argued (for me quite compellingly) that in general there isn't actually enough inherent structure in inputs for the entire creation of categories (linguistic and otherwise) to be produced from experience. The persistent struggle in lots of realms suggests to me that one ought not to place bets on one end or the other of this argument but rather on some combination of initial genetic structure and additional patterning based on experience.
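For readers who didn't see Lisa's slides, the architecture in question can be sketched very compactly. This is my own toy illustration of one forward step of an Elman-style simple recurrent network, with tiny hand-picked weights chosen purely for demonstration (they are not Elman's actual parameters): the hidden layer receives the current input plus a copy of its own previous activation, so the same input can produce different outputs depending on what came before.

```python
import math

def srn_step(x, context, W_in, W_ctx, W_out):
    """One forward step of a simple recurrent (Elman) network."""
    # hidden unit j sums its weighted inputs and weighted context, then squashes
    hidden = [math.tanh(sum(W_in[j][i] * xi for i, xi in enumerate(x)) +
                        sum(W_ctx[j][k] * ck for k, ck in enumerate(context)))
              for j in range(len(W_in))]
    output = [math.tanh(sum(W_out[o][j] * h for j, h in enumerate(hidden)))
              for o in range(len(W_out))]
    return output, hidden  # hidden is copied back as the next step's context

# illustrative weights: 2 inputs, 2 hidden units, 1 output
W_in  = [[0.5, -0.3], [0.2, 0.7]]
W_ctx = [[0.6,  0.1], [-0.4, 0.3]]
W_out = [[0.8, -0.5]]

context = [0.0, 0.0]
out1, context = srn_step([1.0, 0.0], context, W_in, W_ctx, W_out)
out2, context = srn_step([1.0, 0.0], context, W_in, W_ctx, W_out)
# identical input both times, yet the outputs differ, because the
# copied-back context carries a trace of the sequence so far
```

That history-sensitivity is what lets such a network pick up sequential regularities (eg which sounds or words tend to follow which) without any word borders being built in; whether that alone suffices, or whether genetic pre-structuring is also needed, is exactly the question at issue above.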

The other issue, that I hope we'll be able to devote more time to next week, is that of whether Elman networks (or anything like them) are most appropriately/usefully thought of as themselves accomplishing the tasks attributed to them (eg generating a sorting of words into linguistic categories) as opposed to evolving a structure that could be used as input to some OTHER additional network that would use that input to accomplish the tasks. The point here is that an external observer (Elman and those to whom his papers are directed) can find/see particular forms of organization in these networks and use those to perform particular tasks but those are not the only forms of organization there and even those recognized might equally well be used for other tasks. In short, one needs some other network to detect and make use of particular forms of organization in these networks, and those additional networks have to have their own form of organization. Given the first issue, the origin of the necessary organization in either network may be moot but THAT one needs two networks is, I suspect, important, not only in this case but generally (and, to anticipate a concern and connect to previous discussion, no this need not lead to an infinite regress and, yes, it relates to modularity and, ultimately, to the bipartite brain argument).

Looking forward to further conversation.

RE: more, more
Name: Doug Blank
Date: 2005-02-26 23:45:53
Link to this Comment: 13245

Anne said:

how useful are Elman's experiments, finally, in helping us understand emergence, if what the punch line gets us, finally, is the acknowledgement of pre-structured input (ie the network is just "discovering"--rather than creating--a structure that was there in the first place)?

These simple neural networks provide (to me) a very concrete example of an emergent system. First, we have to remember the layer that Lisa didn't really talk much about: these networks are really just a matrix of numbers being copied, multiplied, summed, and squashed (that's the propagation part), and adjusted slightly up or down (that's the learning part). From the interaction of these numbers come patterns that reflect the organization of our conceptual world.
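To make "just numbers" concrete, here is a toy sketch (not the actual networks from the talk, just an illustration of the arithmetic, with arbitrary made-up values):

```python
import math

# A single unit: inputs are multiplied by weights, summed, and "squashed"
# (that's the propagation part); then each weight is adjusted slightly up
# or down toward a target (that's the learning part).

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

def propagate(weights, inputs):
    # copy, multiply, sum, squash
    return squash(sum(w * i for w, i in zip(weights, inputs)))

def learn(weights, inputs, target, rate=0.5):
    # nudge each weight slightly, in proportion to the error
    error = target - propagate(weights, inputs)
    return [w + rate * error * i for w, i in zip(weights, inputs)]

weights = [0.1, -0.2, 0.3]
inputs = [1.0, 0.5, -1.0]
for _ in range(100):
    weights = learn(weights, inputs, target=0.9)
print(propagate(weights, inputs))  # creeps toward the 0.9 target
```

Nothing in there "predicts" or "represents" anything; it is all copying, multiplying, summing, squashing, and nudging, and yet the interesting descriptions live at a higher level.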

Describing such a network as "doing prediction" hides this emergence, and the great power that lies in these systems. They aren't really predicting what comes next, they're just manipulating numbers! I think that this may be a subtle point that gets hidden by our own natural tendencies to try to see the forest rather than the trees. Our high-level description of the behavior of these networks is at a different level from that of the processing of these networks. However, even without "us" in the picture, the patterns of activation are at a different level from that of the processing (individual numbers interacting with each other). This is the key difference between symbolic AI systems (rules) and emergent AI systems (networks).

As to the "just discovering representations" vs. "creating representations": First, each of these networks builds its own representations that are completely different from any other. That is, one network can't use another network's patterns of activation. Secondly, as we will see this week, the same simple recurrent network works when you substitute the real world for language. Do you want to claim that humans aren't really creating the concept of, say, "blue" but that we are just discovering what is out there? Finally, language happens to be constructed in exactly the right way for such a network to learn it (i.e., you could imagine a language that such a network could not learn---the No Free Lunch Theorem). That's not an accident, of course, because something much like one of these simple recurrent networks created language.

Like Mark, I'm still struggling to get a grip on what "current state" refers to

Yes, this is a bit tricky, because we are pulling some sleight of hand here. We point to a bank of units in the network and say that "these represent the current state of the robot". Of course, we don't know what they represent, because they are in a real sense a private matter to the network. The only way for us to have any idea at all, in principle, is to find correspondences between the activations in those units and the network's "behavior" (output activations). More on this on Wednesday...
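A cartoon of the mechanism (sizes and weights are made up; this sketches the Elman-style architecture, not our actual code):

```python
import math

# One step of an Elman-style simple recurrent network: the hidden layer
# sees the current input PLUS a copy of its own previous activations
# (the "context" bank), so the hidden units carry a private "current
# state" shaped by everything the network has seen so far.

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

def srn_step(inp, context, w_in, w_ctx):
    hidden = []
    for j in range(len(w_in)):
        net = sum(w * x for w, x in zip(w_in[j], inp))
        net += sum(w * c for w, c in zip(w_ctx[j], context))
        hidden.append(squash(net))
    return hidden  # copied back as the next step's context

w_in = [[0.5, -0.3], [0.2, 0.8]]    # 2 input units -> 2 hidden units
w_ctx = [[0.1, 0.4], [-0.6, 0.2]]   # 2 context units -> 2 hidden units

context = [0.0, 0.0]
for token in [[1, 0], [0, 1], [1, 0]]:   # a tiny sequence "in time"
    context = srn_step(token, context, w_in, w_ctx)
print(context)
```

Note that the first and third tokens are identical, yet they yield different hidden states: where the network "is" depends on where it has been.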

which is probably linked to/explainable via the claim that a simple recurring network is distinguished from a mark-up chain (mark-off process?) by its reliance on establishing a context (=where you move is depending on where you have been)--do I have that right?

We need to talk about Markov models. Maybe a three-way comparison between artificial neural networks, finite state machines, and Markov models is in order. They aren't the same things.
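In the meantime, a minimal sketch of what a first-order Markov model amounts to (my own toy, not part of the talk): a lookup table of transition counts, with no hidden state at all, which is one key difference from a simple recurrent network.

```python
from collections import defaultdict

# A first-order Markov model: next-symbol probabilities depend ONLY on the
# current symbol (a transition table). A simple recurrent network differs
# in that its hidden state can carry context from further back.

def train_markov(sequence):
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def predict(counts, symbol):
    nexts = counts[symbol]
    total = sum(nexts.values())
    return {s: n / total for s, n in nexts.items()}

seq = list("baguubaguudiguu")
model = train_markov(seq)
print(predict(model, "u"))  # what tends to follow "u"?
```

A finite state machine is different again: a fixed table of discrete states and transitions, with no probabilities and no learned, distributed representation.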

wherefrom the meaning of the title of Elman's essay ("Finding Structure in Time")--will we get to that, too, next week?

The title gets its meaning from the experiments: each one is an exploration into discovering/creating structured representations from a sequence "in time." That is, the "words", "phonemes" (baaa, guu, di), and "letters" were given to the network one at a time, rather than all at once. Previous models weren't capable of easily doing this, and so Elman did it and showed the amazing patterns that emerged.

Unremittingly in search of meaning...

For me, this was one of the insights of these experiments: the meaning of words comes from how we use them. That, and also from our own awareness of the patterns of activations that are formed when we use or think of them.


Hamsters, Music, Markov, and Emergence
Name: Doug Blank
Date: 2005-02-27 00:22:24
Link to this Comment: 13246

Serendipity. Just ran across this. Not bad... for hamsters. -Doug

The subtlety of learning in neural networks
Name: Doug Blank
Date: 2005-03-16 14:55:08
Link to this Comment: 13548


I want to thank all of you that helped wrestle with the concepts this morning. That was the first time that I had attempted to explain what we have found with these networks in the last couple of months, and I appreciate the feedback.

Some of these issues with catastrophic forgetting are subtle. For example, the differences between the red and green lines in the graphs this morning are things that Lisa and I found very perplexing, and we have been working with these types of networks for a decade! Although the discussion dived into some low-level bits, Lisa and I were excited to let you know about our current results.

At some point, maybe in the Fall, I'd also like to tell you about some other items about these networks that are of more general interest to people interested in emergence and learning (like Paul's line of questions this morning on the general characteristics of learning in these networks). For example, you can take two nearly-identical networks that differ only slightly in their initial weights and expose them to exactly-identical environments. One might learn, and the other might not. Remember that these are deterministic systems---nothing random going on at all. Also, even though these are trained using a gradient descent algorithm, the networks go through phases of "reorganization" where they have to do worse before they can do better.
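A toy demonstration of the determinism (my own miniature example, not the networks from this morning): the same initial weights and the same fixed environment reproduce the exact same trajectory every time; only the seed for the initial weights differs between runs that disagree.

```python
import math
import random

# Learning here is plain, deterministic gradient descent. Randomness
# enters ONLY through the choice of initial weights; nothing random
# happens during learning itself. The task and sizes are toy choices.

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def train(seed, steps=200, rate=1.0):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(2)]        # initial weights
    data = [([0, 1], 1), ([1, 0], 0), ([1, 1], 1)]    # fixed environment
    for _ in range(steps):
        for x, t in data:
            out = sigmoid(w[0] * x[0] + w[1] * x[1])
            grad = (t - out) * out * (1 - out)        # gradient descent
            w = [w[i] + rate * grad * x[i] for i in range(2)]
    # total error on the training set after learning
    return sum(abs(t - sigmoid(w[0] * x[0] + w[1] * x[1])) for x, t in data)

print(train(seed=1), train(seed=1), train(seed=2))
```

Run it twice with the same seed and the final errors match bit-for-bit; change only the seed and the learning trajectory changes, even though the environment is identical.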

Also, as Rob noted, I am sensitive to the "psychology" terms that I tend to use to describe them (like "the net focuses its attention on..." or "the network decides..."). But sometimes these concepts are not only convenient, but really are an appropriate way to describe the behavior of the network (in as much as they are the "right" way to describe our behavior).

Anyway, thanks again for a lively discussion.

anthropomorphism redux
Name: Anne Dalke
Date: 2005-03-22 15:06:51
Link to this Comment: 13877

Before we move on/back to the nature of science, I wanted to try and record what I understood (and ask for some help in understanding what I didn't) during our past few sessions about training neural networks. Where I'm still sort of stuck is in the space between

this was one of the insights of these experiments: the meaning of words comes from how we use them. That, and also from our own awareness of the patterns of activations that are formed when we use or think of them


I am sensitive to the "psychology" terms that I tend to use to describe them (like "the net focuses its attention on..." or "the network decides..."). But sometimes these concepts are not only convenient, but really are an appropriate way to describe the behavior of the network (in as much as they are the "right" way to describe our behavior)

Either the network is making the pattern/the meaning, or we are? The first claim asserts the latter, the second the former? The first insists on refusing an inherent anthropomorphic bias, the second does not?

Also still confused about the statement that these are deterministic systems--nothing random going on at all. Isn't the goal to make them NOT deterministic, to be flexible, fluid, anticipatory, adaptable to new conditions? Where are you guys in this process of getting robots to share our conceptualizations? Of generating internally motivated, rather than externally scripted, actions?

new journal and science
Name: Paul Grobstein
Date: 2005-03-23 11:31:41
Link to this Comment: 13918

Thanks to all for interesting/productive engagement/conversation this morning. For those not around, the new Journal of Research Practice is an exploration of new directions in academic publishing, a new "emergent" sort of thing. Both Al and I are on the editorial board of the new journal and encourage others to become involved, as contributors and, if inclined, by helping to shape its future emergence.

My article argues for the need/desirability of a new broader conception of "science" that would engage humanity more generally in the ongoing process of skeptical inquiry, and I see the new Journal as an important contribution to that effort. It is also the place for, among other things, further consideration of some of the issues that arose in conversation this morning.

Is there a need for the traditional western conception of a distinction between basic/pure and applied research? Is this distinction actually breaking down even in our own (western) activities?

Is there a need for a demarcation between "science" and "non-science"? In the case of "revolutionary science"? In the case of "normal science"? Might the latter be better understood as a necessity for a degree of shared assumptions within communities of researchers working on common problems? With the further understanding that these shared assumptions are likely to be different for different communities and may themselves come into existence, change, go out of existence, so that only skepticism and a resulting commitment to continuing observation is in fact the characteristic of science as a whole?

Looking forward to further conversation on these sorts of issues, here and elsewhere. And to the further emergence of conceptions not only of emergence but of science/research/inquiry as well.

taking the risk of telling the story
Name: Anne Dalke
Date: 2005-03-23 16:09:13
Link to this Comment: 13933

only skepticism and a resulting commitment to continuing observation is in fact the characteristic of science as a whole

As I was trying to say when we stopped this morning, this seems to me only 1/2 the "story." What distinguishes science, as I have come to understand it, is the insistent interplay between skepticism AND the willingness--despite all the questions, despite all the uncertainties, despite the inevitable incompleteness of them all--to take the risk of actually TELLING a story, publicly, and so inviting it to be challenged. If all that distinguishes science is skeptical data collection, without the willingness to shape it into something we can use--Jody Cohen famously called this using our creation kind of like a springboard to push off into action and change...with the caveat that the springboard essentially drifts (disintegrates?) after we've interacted with it in this way--then: it's not much. But taking the risk of telling a story, while being skeptical about the stories one tells (and hears): now, that's science. (And an art. Art.)

network: a "conspiracy"?
Name: Anne Dalke
Date: 2005-03-31 12:43:56
Link to this Comment: 14192

I was amused, yesterday morning, to hear Doug and Deepak describe the discussion, @ their conference, of "what science is" --and whether computer scientists are "doing it"--since the Emergence group had been simultaneously generating a rich array of answers to those questions, in our discussion here last week of "research practice" (not only the new journal of that title, but the practice of doing research, the research of praxis, the practice of science....).

My own insistence, afterwards, was that science is distinguishable from other human practices by its insistent interplay between skepticism and the willingness to take the risk of actually telling a story. I said then that this is actually "an art/Art." But drawing now on the distinction Rob made yesterday between "science" and "art" (the former--I understood him to say--being the space where "testable assertions about reality" are generated, the latter the place where "counterfactuals" are produced), I think that science is distinguished from "art" not (qua Paul) by its "unremitting skepticism"--since there are TONS of skeptics/unremitting naysayers among the humanists--but rather by its willingness to take the risk of publicly telling a story about the real world--one that works, one that has consequences--and is thus challengeable.

Elman's 1999 paper on "Origins of language: A conspiracy theory", to which Doug brought our attention yesterday, seems to me one of those stories. I was particularly interested in the description of the "waist" (=squeezing) level, the "bottleneck" where (through which? this is the place I got confused, Doug) the dimensionality of information is reduced, where information is re-distributed from a local to a distributed representation. Especially intriguing to me was the suggestion that "wiping out memory" was significant in/essential to this process, Elman's "less is more hypothesis," that "language-learning ability derives from a lack of resources."

The claim being made here is that children actually learn language because of their "maturational limitations." Being "handicapped by reduced short-term memory" means that their "search space is limited," that they are only able to perceive-and-store a limited number of forms. This "limited processing ability" makes them more attentive ("some problems are only solvable if you start small"; "precocity is not always to be desired"). In its emphasis on the essential timing of events in the developmental process, this sounds like Piaget (thanks, Paul--and here's where I guess evidence of a "cryptic designer" may creep in?). But what seems to me rather un-Piaget-like is the attention Elman gives to the "gradual loss of plasticity in a network as the result of learning itself," as the child (=the network) changes during learning.

Clearly, I'm ready to move on from computer- to real-language generation. I see that George Weaver will be offering a course in the Philosophy of Language next year; maybe he could come help us understand better these (and/or alternative) theories of interaction/ "conspiracy"?

Science as surprising stories
Name: Doug Blank
Date: 2005-03-31 22:45:41
Link to this Comment: 14199

I was amused, yesterday morning, to hear Doug and Deepak describe the discussion, @ their conference, of "what science is" --and whether computer scientists are "doing it"...

To shed some light on the workshop ("Developmental Robotics") from which Deepak and I just returned, I should point out that we weren't wearing our "computer science" hats there. In other places, there is an on-going discussion about the proportions of science, engineering, and art that go into "computer science", but the discussion we got into was more interesting. The scientists/engineers/artists that were there were wondering about the "science" of "developmental robotics". (To get a sense of what we are doing, check out the papers from the workshop.)

I see myself as a psychologist. Not of human minds, but of artificial ones. One piece that confuses the landscape is that I happen to play a role in the creation of the thing I am studying. Claiming that I'm a psychologist might be crazy, except that we are studying systems that change. Currently, they don't change very much (they "just" learn). But later on, we hope that they develop. Also, even though I can poke on the network more than psychologists are allowed to poke on real brains, in the end we are left with focusing on the behavior of the system.

One aspect about science that came out at the workshop was that we are really interested in "non-intuitive theories", or surprising stories, and the more surprising the better. If Elman had discovered that "more is more", how would that be different from discovering "less is more"? If the scientific methods and processes are exactly the same, then the only difference is the story's surprise ending.

How much difference does it make that the story might not be true? In the long run, it will matter. But for the foreseeable future, not at all. I have been trying to replicate the experiments. See this paper for an alternate result. (Spoiler: they think that "less is more" but networks already take care of it through the normal learning process. Wiping out the memory is just cruel, and doesn't help the network learn.)

I am constantly surprised at these simple networks. I hope we do get to hear why Paul thinks Elman is "wrong" (is that the word you used?) And Rob has some criticisms, too, that should be pointed out. But, I think that when we finally create a little creature with a mind of its own, it won't be much more complicated than these simple networks we are currently examining. (Yes, real brains are much, much more complicated. But I suspect most of that is just to get the "computational substrate" going. Once you have that (a computer), then the raw powers of emergence can take over, and you can do a lot with a little.)

Now to the waist. Or "wast" from the Old English, which meant, ironically, "growth". Here is a nice little example that shows the point. Consider a three layer network, with input, hidden, and output layers. Suppose that the network is trained to produce on the output whatever you give it on the input. Useless? Maybe. But if you have things set up just right, the network has some very interesting properties.

         [output layer]
         [hidden layer]
          [input layer]

Depending on the size of the hidden layer (the waist), the network will solve the problem in different ways. This task is what we call an "auto-associative" task: given an input, can the network learn to reproduce it? If the hidden layer is too big, the network "will memorize" the association. How can you tell? If you give the network a slightly different input from that which it was trained on, and it doesn't give you it back as output, then the network isn't generalizing, but merely memorizing the exact details. However, make the waist small enough, and you force the network to do something akin to "making concepts". That is, you force the network to create "abstractions" because there isn't room to memorize all of the details (which is what the network would do by default).
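A rough sketch of the setup (my own toy code, with illustrative sizes and learning rate): eight one-hot patterns squeezed through a three-unit waist.

```python
import numpy as np

# Auto-association through a narrow waist: eight one-hot patterns must be
# reproduced on the output, but squeezed through only 3 hidden units, so
# the network cannot simply memorize -- it is forced to find a compact
# (abstract) code for the eight inputs. Plain batch backprop, no biases.

rng = np.random.default_rng(0)
X = np.eye(8)                        # inputs; targets are the same patterns
W1 = rng.normal(0, 0.5, (8, 3))      # input -> hidden (the waist)
W2 = rng.normal(0, 0.5, (3, 8))      # hidden -> output

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def loss():
    return np.mean((sigmoid(sigmoid(X @ W1) @ W2) - X) ** 2)

start = loss()
for _ in range(10000):
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    dO = (O - X) * O * (1 - O)           # output-layer error signal
    dH = (dO @ W2.T) * H * (1 - H)       # backpropagated to the waist
    W2 -= 0.5 * H.T @ dO
    W1 -= 0.5 * X.T @ dH
print(start, "->", loss())
```

With a hidden layer of eight or more units the same code could just memorize; with three it has to compress (the classic 8-3-8 encoder problem).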


Rich vs. the Emergenauts
Name: Mark Kuper
Date: 2005-04-06 15:30:37
Link to this Comment: 14354

As we were driving home, Rich and I came up with a critical test to see if the Emergenauts actually have something that is potentially useful (as opposed to just longwinded computation-wise (my characterization)).

The test is this: Can the emergent way of recognizing language do anything other than find correlations? If it can, then it is potentially useful because Rich's method only uses the correlation between words. Implicitly, Rich's method assumes that the only meaningful relationship between words (if you don't a priori specify their meaning) is correlation. We (or at least I) think that what slows the emergent method down is that it first has to discover that correlations are the right way to organize words. If this is the case, then the emergent method could conceivably find some other way of organizing words that might be more useful, but that Rich-types haven't thought of yet. On the other hand, if all that emergent methods can ever do is find correlations, then you might as well start off with fancy statistical programs that look for correlations.

All errors in the above are due to me.

Re: Rich vs. the Emergenauts
Name: Doug E. Bl
Date: 2005-04-06 20:34:49
Link to this Comment: 14359

As we were driving home, Rich and I came up with a critical test to see if the Emergenauts actually have something that is potentially useful (as opposed to just longwinded computation-wise (my characterization)).

Your comment is a good place to start, because it highlights a key difference between the two approaches, and brings out some assumptions. The most important difference, in my opinion, is that when the network is trained to predict, it creates representations. You might want to say that Rich's 300k sparse multi-dimensional vector is a rep, too, and I wouldn't necessarily argue with that. But you are looking for "something that is potentially useful". We ask, "useful to whom?" Our goal is to create a system which can use its own representations/abstractions/concepts to be intelligent. I would contend that Rich's "reps" won't be that useful to such a system, because they are only composed of what Rich thought to put into them.

On the other hand, these distributed, self-organized representations are rich with (possibly useful) information. If we were only interested in finding statistical correlations, then you are right: training a neural network to predict the next item in a sequence is certainly a round-about way of doing that. However, if we are interested in developing a system that could generalize in a flexible, robust manner, then you might consider a more "emergent" approach.

For example, what could you do with Rich's system with a misspelled word? Say "mxlplx". It doesn't appear in the 300k words, so it is out-of-bounds. A neural network at least offers the possibility of generalizing outside the bounds of what it was trained on. Now onto your test...

The test is this: Can the emergent way of recognizing language do anything other than find correlations?

Yes, and this is exactly the point. First, the network wasn't designed to find correlations, but that turned out to be a side-effect (an emergent property) of solving the prediction task. But, we haven't talked about what these networks are capable of doing, yet. Consider the following translation task:

Imagine that you can build a hidden layer representation (distributed, self-organized) that represents an entire sentence, say, an English sentence. Say that you can do the same for a Swahili sentence that happens to be a translation of the English sentence. Now, imagine a neural network that can learn to associate the representations of these two entire sentences with each other. That is, you take the sentence "Sizungumzi Kiswahili, Ninazungumza Kiingereza tu", create a hidden layer rep (say 010101010), and train a network to output 111100000, which happens to be the encoding of "I don't speak Swahili, I only speak English". Now, imagine training these networks very well on many sentences.

What do we now have? You could claim that all the network has done is to discover correlations. Fine. But the behavior of the system is to act as a holistic translator between English and Swahili. You could give it a sentence it has never seen before, say an English sonnet, and out would come something analogous in Swahili. It might reflect the subtleties of both languages, as only native speakers can. It is doing translations at the sentence, or phrase level, and those are guaranteed to be better than word-for-word translations.
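A caricature of the association step (entirely made-up "representations" and a plain linear associator, nothing like a trained recurrent network, but it shows the holistic character of the mapping):

```python
import numpy as np

# Caricature of "holistic translation": suppose each whole sentence
# already has a distributed representation (here: invented random
# vectors). A linear associator fit by least squares maps English reps
# to Swahili reps, and then applies to any NEW English rep in one shot
# -- no word-for-word lookup anywhere.

rng = np.random.default_rng(0)
eng = rng.normal(size=(20, 9))         # 20 English sentence reps (9-dim)
swa = eng @ rng.normal(size=(9, 9))    # pretend Swahili reps are related

W, *_ = np.linalg.lstsq(eng, swa, rcond=None)   # learn the association

novel = rng.normal(size=9)             # a "sentence never seen before"
translation = novel @ W                # holistic, whole-rep mapping
print(translation.shape)
```

The point is only that the mapping operates on whole-sentence representations at once; building good sentence representations in the first place is the hard part, which the papers cited below are steps toward.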

If it can, then it is potentially useful because Rich's method only uses the correlation between words. Implicitly, Rich's method assumes that the only meaningful relationship between words (if you don't a priori specify their meaning) is correlation. We (or at least I) think that what slows the emergent method down is that it first has to discover that correlations are the right way to organize words. If this is the case, then the emergent method could conceivably find some other way of organizing words that might be more useful, but that Rich-types haven't thought of yet. On the other hand, if all that emergent methods can ever do is find correlations, then you might as well start off with fancy statistical programs that look for correlations.

And this is exactly what we want: a system that can solve problems it wasn't specifically designed to solve. Something that can discover its own relationships between items, at various levels of abstraction.

Let's continue this discussion....


PS - here are some networks that are steps toward these type of holistic translations:

Blank, D.S., Meeden, L.A., and Marshall, J. (1992). Exploring the Symbolic/Subsymbolic Continuum: A case study of RAAM. In The Symbolic and Connectionist Paradigms: Closing the Gap.

Chrisman, L. (1991) Learning Recursive Distributed Representations for Holistic Computation. CMU Technical Report CMU-CS-91-154.

Miikkulainen, R. (1996) Subsymbolic case-role analysis of sentences with embedded clauses. Cognitive Science.

Name: Anne Dalke
Date: 2005-04-07 22:45:24
Link to this Comment: 14379

As you guys go driving along...

I'm feeling left in the dust, still puzzling over what I couldn't quite get a-holt of Wednesday morning: the affirmative answer to the query whether what Rich was doing was emergence. I understand that he was applying an algorithmic process that led to patterns. But the initial choice to select only verbs seems to build into the experiment a sense of design, a pre-selected "meaning" (conventional word order: subjects followed by verbs, verbs by objects, etc.) that will lead inevitably to the discovery (rather than the creation) of the sorts of patterns he turned up.

I think Mark's late-breaking suggestion that there are constraints "beyond grammar" in the sorts of sentences we compose is acute: the noises we make in communicating with one another are much more limited than grammar allows, and so it's "just statistics" that Rich gets the correlations that already exist. Those "rules beyond grammar" have to be notions of meaning (yes?) that have us saying (say) "Girl eats cake" much more frequently than "Girl eats boy" (whatever her tastes may be). And so, if meaning's there first....

well, I'm not surprised to see it emerge later. Seems entirely predictable, because built into the system.

Re: puzzled
Name: Emergence
Date: 2005-04-08 17:54:04
Link to this Comment: 14387

...still puzzling over what I couldn't quite get a-holt of Wednesday morning: the affirmative answer to the query whether what Rich was doing was emergence.

I'm one of those that appears to be more liberal with my willingness to see the e-word most everywhere. But that is because of the way that I define it ("global patterns created from local interactions of relatively simple objects"). I also see emergence as a continuum, and so can see Rich's algorithm as emergent, but less so than a neural net's.

I understand that he was applying an algorithmic process that led to patterns. But the initial choice to select only verbs...

Rich can correct me, but I believe that when he was "selecting the verbs", he meant that he selected only the verbs' results to show. That is, he could have clustered all of the words (300k), but there were just too many to be able to see what was going on. So he just decided to select and cluster the verbs; you can effectively ignore the comment.

I think Mark's late-breaking suggestion that there are constraints "beyond grammar" in the sorts of sentences we compose is acute: the noises we make in communicating with one another are much more limited than grammar allows, and so it's "just statistics" that Rich gets the correlations that already exist. What those "rules beyond grammar" are have to be notions of meaning (yes?) that have us saying (say) "Girl eats cake" much more frequently than "Girl eats boy" (whatever her tastes may be). And so, if meaning's there first....

This may be the only time that I will have to agree with Mark, and Anne! Thinking this way, one might say that syntax and semantics are not different things, but lie along a continuum of constraints. Maybe even say that grammar and meaning emerge from the interactions of words.
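One toy way to see how far the "interactions of words" alone can go (my own sketch, not Rich's actual program): count co-occurrences and compare words by the similarity of their count vectors.

```python
import math
from itertools import permutations

# Correlation-based word organization in miniature: represent each word
# by its co-occurrence counts with every other word, then compare words
# by the cosine of their count vectors. Words used in similar contexts
# come out similar -- "girl" and "boy" cluster together, apart from "eats".

corpus = [
    "girl eats cake", "boy eats cake",
    "girl sees dog", "boy sees dog",
]
vocab = sorted({w for s in corpus for w in s.split()})

cooc = {w: {v: 0 for v in vocab} for w in vocab}
for sentence in corpus:
    for a, b in permutations(sentence.split(), 2):
        cooc[a][b] += 1

def cosine(u, v):
    dot = sum(cooc[u][w] * cooc[v][w] for w in vocab)
    nu = math.sqrt(sum(c * c for c in cooc[u].values()))
    nv = math.sqrt(sum(c * c for c in cooc[v].values()))
    return dot / (nu * nv)

print(cosine("girl", "boy"), cosine("girl", "eats"))
```

On this tiny corpus the noun/verb split falls straight out of the statistics, which is the continuum-of-constraints point: the "grammar" and the "meaning" are both just regularities in how the words interact.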


context: before representation....?
Name: Anne Dalke
Date: 2005-04-14 18:22:41
Link to this Comment: 14570

My reception of Deepak's talk y'day morning about the logics of artificial intelligence was couched in the time I'd spent, the evening before, with an 11-month-old. As I told some of you afterwards, baby Audrey was delighting her mother and me with a series of vocalizations that seemed to have NOTHING to do w/ representation: she'd pick up a ball, then a book, then another toy, and talk--certainly not to us, not (seemingly) even to her toys, but (as if?) to herself, as if she were just trying out sounds, seeing what noises she could make, a sort of internal (if sounded-out?) speech. So--before we get further into the varieties of ways in which we can study (and reproduce) language as a symbolic system, I'm wondering if it--originally--really is representational, if it serves another purpose, and then--as we hear children say "cat," and reinforce them--"yes, that's a cat!--now, what's that?"--they acquire the sense of its representational possibilities....?

For those, like me, interested in such questions, there's another group (re) starting tomorrow morning:

Interested in Language?

An interdisciplinary discussion began in 2002 whose aim was to bring different perspectives together to explore aspects of language. The group met bi-weekly to discuss selected readings and their implications for our understanding of language. A seminar series was carried out last semester along these same lines which provided talks on language at each of the Tri-co schools, but the group discussion meetings were temporarily suspended.

The group will be starting again this semester, on Friday, April 15th, in an effort to continue these conversations exploring natural language. The reading for the 15th is now available outside room 106 in the Park Science Building at Bryn Mawr. It is from "The First Idea: How Symbols, Language, and Intelligence Evolved From Our Primate Ancestors to Modern Humans", by Stanley I. Greenspan and Stuart D. Shankar.

Please come and check it out! We will be meeting at 10am in room 264 of the Park Science Building at Bryn Mawr. Muffins and Coffee will be provided.

Hope to see you there!


The hidden logic enabling the reality of natural
Name: Witheld fo
Date: 2005-04-27 08:06:35
Link to this Comment: 14887

Trying to comprehend nature's capacity to induce a local state of, say, an individual unit of a physical emergence is tough enough. Then, separately, trying to comprehend the emergence of a centre of individual intelligence is equally stupefying. But to combine the two within a single step of emergence seems impossible. Yet the totality of nature, though seemingly just a state of utter and variable randomness, has created the reality of the universe, including our local world of realities. Obviously there is a logic behind such self-creative enablements. (More on this later as I gather my thoughts.)

It is all emergence?
Name: Wil Frankl
Date: 2005-04-27 12:49:06
Link to this Comment: 14890

Doug -- Re: emergence along a continuum or bounded by upper and lower thresholds: I had to think about how we agreed or disagreed on the definition. I am now of the opinion that it is all (the big ALL) emergence. I argued that you can link the inanimate with the animate given enough time and space, and that you need nothing other than interactions. Thus it follows that the entire universe and ALL that is in it has been created via this process. If I, as an emergent entity, look at a small subset of a few agents with simple interactions and call the outcome predictable and determinable, then that is only the result of having defined a small, bounded subset of the entire process. Pull out into ever-more-inclusive sets, and you will come to a group of agents and interactions that leads to an unexpected, non-determinable outcome. Hence, your thermostat analogy works well if you are dialing the scale of observation up or down. Is this more?...or less?...what you meant? Cheers -- Wil

re emergence and relativism/fundamentalism
Name: Paul Grobstein
Date: 2005-04-27 18:44:58
Link to this Comment: 14907

Thanks to Wil/all for rich discussion today. Flagging some ideas for further discussion:

For related discussion in another, less technical venue, and an associated on-line forum, see Fundamentalism and Relativism.

in which she crosses a threshold
Name: Anne Dalke
Date: 2005-04-27 21:46:07
Link to this Comment: 14911

Because I served as prod and goad for Wil's presentation, I wasn't expecting any surprises during our discussion this morning. But I had a couple of very *sharp* ones, and am recording them now, as a way both of reviewing/clarifying/reflecting further for myself--and as invitation to others to help illuminate just what's coming down here, in terms of both trees AND forest (the lens keeps shifting, what I *thought* was in focus keeps moving out of view....)

What I heard were these things:

It's in that sense that I think "emergence" has everything to do w/ "relativism": it occurs only in an interactive process, which (stigmergically) leaves its traces in an environment, wherefrom they are picked up unpredictably, put to use unpredictably....

the hidden logic enabling the reality of natural e
Name: Name withe
Date: 2005-04-28 18:14:45
Link to this Comment: 14938

Thoughts, broadening over a ten-year-plus campaign of amateurish realisations. On a universally large front we are faced with regional variations of cosmic fields of energy which conform to whatever localised pattern. But despite such variations there is one rule which rides paramount: that of mathematical conformity. Which in turn demands that each event of emergence, however complex or simple, must derive from an arithmetical formula. Which in turn justifies a forward or a totally reversible run of logic. So this same logic must hold good when we perceive whatever aspect of physical/mental emergence, requiring a double entity of, say, both a viable structure and recyclic circuitry -- that is, once the well-hidden enabling logic is truly recognised. There actually is a strand of this arithmetic logic, highly and widely recognisable and therefore verifiable, should professional interest arise.

emergence/relativism and ... postmodern whateveris
Name: Paul Grobstein
Date: 2005-05-04 15:32:06
Link to this Comment: 15036

Rich conversation today (as usual); thanks to Jan/all. A few highpoint notes for me, and anyone else for whom they might be useful ...

Obviously, I DO think there is an important relation between "emergence" and "relativism", ie that the former is most effective as an exploratory perspective when it is recognized as implying the latter philosophical position. As with the science/religion split, however, I don't think it is necessary that everyone working in the emergence area have a commitment to the broader philosophical position. People can be "fundamentalist" and interested in emergence, just as they can be religious or scientific and be simultaneously either "fundamentalist" or "relativist". Recognizing and making productive use of the "fundamentalist/relativist" distinction seems to me more interesting/important/generative than continuing to wrestle with the religion/science distinction or the humanities/science distinction, both of which it cuts across.

Along these lines, a key question that emerged this morning is that of how to avoid being "carried here and there by the winds of doctrine", be it religious doctrine or positivism derived from science or the indefinite whateverness of postmodernism. I've elsewhere made my arguments against both religious doctrine and positivism (cf. Science As Story Telling and Story Revising and Writing Descartes ... ) as responses to the problem. And suggested instead a fundamentally emergent/relativist alternative, one that "puts confidence in, rather than fears, having 'nothing as definite'".

"It puts confidence as well in individual judgements ("egos and desires") informed by, among other things, each individual's interconnections with other human beings. It says also that there is no "ultimate measure," but there is, in its place, the best one can do at any given time. Moreover, it treats "relativism" not as a "dictatorship" but rather as an invitation to individuals to be individuals, to discover and value both their commonalities and their differences. Finally, it offers a new sort of direction for humanity, one in which individuals themselves become for themselves (and each other) the active agents responsible for not being "carried here and there by the winds of doctrine", and one where everyone benefits from their own distinctive explorations and the ongoing and different explorations of others." Against this background, what was particularly interesting to me this morning were the concerns expressed that the "emergent/relativist" alternative wasn't sufficiently distinct from "post modernist whateverism" and various ways people tried to make it more distinct. For reasons discussed this morning and elsewhere, I'm not inclined to accept the idea of appealing to "rationality" or the "realness" of things as a way to make the distinction. At the same time, I'm not inclined to accept the position that "there is no fundamental distinction between what we do and what ants do". We DO have the capacity to entertain counterfactuals, to tell stories that represent alternative understandings and may motivate different behaviors ... and that creates distinctive problems and opportunities. If we don't acknowledge/accept that, we do indeed miss much of the significance of not only the humanities but the story-telling features of the sciences and of much of the rest of human creativity as well.

So, what is the difference between the "emergentist/relativist" alternative (what I have elsewhere called "profound skepticism") and "post modernist whateverism"? Both recognize the existence and significance of stories, and both deny that there is any external fundamental standard by which the validity of any given story can be established. But at this point there is a significant divergence. "Post modernist whateverism" (at least as I understand it) denies therefore that there is any basis for making value discriminations among stories, or, somewhat less strongly, asserts that the only basis for making discriminations is in terms of the accidents of cultural norms. In either case, it leaves one more or less in the "all stories are equal" mode.

The emergentist/relativist/skeptical perspective takes a much more aggressive approach to story discrimination and story creation. While acknowledging that there may be at any given time multiple different but equally valid/useful ("incommensurable") stories, it asserts the appropriateness of equally acknowledging that there are, at any given time, also stories that are demonstrably NOT valid/useful, ie that don't fit the observations, don't "work". It also asserts that those stories that fit more observations are likely in the future to "work" better, and hence are to be preferred to those that fit fewer observations. It additionally asserts that stories that seem likely to be more generative of new stories (ie that cause people to have new experiences) are likely to be preferable to those that don't. Finally, it treats ALL stories, including those currently judged acceptable, with skepticism, ie it seeks to find not the one true story but rather to challenge existing stories as a mechanism for creating new and less wrong ones.

Before there were stories, there was a quite effective exploration process displayed successively by the active inanimate (cosmic evolution) and model builders (biological evolution), a rather playful process in which the validity of discoveries was assessed not against a fixed and invariant external standard but rather by their future generativity. The emergentist/relativist/skeptical perspective suggests that, whether one likes it or not, the same principle will probably hold in the long run for stories, since story telling derives from and occurs within the constraints of the larger ongoing exploration process. And it suggests that's not entirely a bad thing: the uncertainties inherent in relativism may make individuals uncomfortable at times, but they also provide the opportunities for all individuals to be meaningful agents in the creation of their own stories as well as meaningful participants in the larger process of exploration within which they find themselves. This, it seems to me, not only provides a way out of "whateverism" but a way to safeguard against the arbitrariness of cultural norms as well.

Friendly Amendments #1 & 2
Name: Anne Dalke
Date: 2005-05-04 18:41:53
Link to this Comment: 15039

I promised to send you all a delightful little film called "Meme", which Haley Bruggemann and Eleanor Carey made for our course on The Story of Evolution and the Evolution of Stories. While I'm here, I want also to record the most mimetic (=memorable) moment which occurred for me in this morning's discussion: Al's explanation that what we now call the "theory of relativity" Einstein himself first called the "theory of invariance," in recognition of his attempt to identify certain properties (the speed of light, the laws of physics) which are independent of observers. According to Einstein, Picasso: Space, Time, and the Beauty That Causes Havoc, "Einstein never agreed with the high abstractions of quantum theory," and "ultimately lost contact with the implications of his own revolution"(6).

I guess I'm feeling a little Einsteinian these days, holding pretty tightly onto "initial conditions," resisting the notion that there might "be an end to emergence," insisting that it might "definitionally, be both without beginning and without end." It's in that sense, Jan, that I (sorry! recently mis!-) appropriated the role of "single humanist in the Working Group on Emergence." (Guess I had figured--from all the professional writing you do for a general audience--that you'd moved beyond/above the disciplinary divisions that bedevil the rest of us. Anyhow, I'm quite happy to have you in my corner.)

With the term "humanist," I was refusing to be limited to "emergent strategies" that give us "closure," that attempt to say definitively "what emergence is," or even to re-produce it artificially. What I was (more positively) claiming with that term was a respect for unknowability, a recognition of the ongoing unpredictability that is the result of randomness, and a celebration of the power of the imagination, as the capacity to think of what is not. I said a few weeks ago that science is distinguished from "art" not (qua Paul) by its "unremitting skepticism"--since there are TONS of skeptics/unremitting naysayers among the humanists--but rather by its willingness to take the risk of publicly telling a story about the real world--one that works, one that has consequences--and is thus challengeable. What I was trying to say today was the other side of that claim: that art refuses closure.

There was a Friday afternoon brown bag session a month ago asking Whence the Constraints on Stories?, and it's that (next) question which I heard Tim re-asking this morning: in the absence of a blueprint, a design, a governor, what are the rules, where the boundaries, whence the discriminations we need to be able to make amid the indefinite whateverism of postmodernism? (And--following from this--how changeable are these rules of storytelling?) I heard Tim trying to claim a special role for "reason," Paul for "intentionality" (which "fundamentally changes the process"). I'd say the ability to reason/tell a story can speed up the process of exploration (alternatively, too much intention can slow it down. Certainly it can change the pace!) But the process of exploring, not-knowing-what-the-end will be, continues, unchanged--as long as (qua Wil) there are other entities for unpredictably interacting with:

"emergence" has everything to do w/ "relativism": it occurs only in an interactive process, which (stigmergically) leaves its traces in an environment, wherefrom they are picked up unpredictably, put to use unpredictably....

On the relativity of "fundamental"
Name: Doug Blank
Date: 2005-05-05 11:40:49
Link to this Comment: 15048

I did in fact say that "there is no fundamental distinction between what we do and what ants do". To go with the analogy used yesterday, I would also say that "there is no fundamental distinction between water and ice." All I mean by these statements is that there is a continuum between one and the other, and you can move between them by turning a knob. The knob might be connected to a thermostat in the water/ice case. The knob might be connected to a device that controls the number of neurons and the organization in the brains of the ants/humans.

This is the essence of emergence, to me: you can turn a knob on a continuous parameter, and get non-linear, feedback effects. From my perspective, ants and humans are closer together than ants and simulated ants. We haven't figured out how to get non-bounded emergence in simulated systems, yet. Ants don't tell stories, true. But I bet they do something out of which evolution could build a story-telling machine.
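
Doug's "turn a knob on a continuous parameter" image can be sketched with a standard toy model. The logistic map below is my own illustrative choice (it was not part of the discussion): dialing a single continuous parameter r shifts the long-run behavior non-linearly, from a fixed point, to oscillation, to chaos.

```python
# Illustrative sketch only: the logistic map x' = r * x * (1 - x).
# One continuous "knob" (r) yields qualitatively different long-run
# behavior -- fixed point, period-2 cycle, period-4 cycle, chaos.

def attractor(r, x=0.5, warmup=1000, keep=64):
    """Iterate the map, discard transients, and return the set of
    values visited afterwards (rounded) -- a crude picture of the
    attractor's size."""
    for _ in range(warmup):
        x = r * x * (1 - x)
    seen = set()
    for _ in range(keep):
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

for r in (2.8, 3.2, 3.5, 3.9):
    print(r, len(attractor(r)))
```

Sliding r smoothly from 2.8 to 3.9 takes the attractor from one value, to two, to four, to a cloud of values: discontinuous qualitative change out of a continuous knob, which is the feedback effect the paragraph describes.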

Paul said We DO have the capacity to entertain counterfactuals, to tell stories that represent alternative understandings and may motivate different behaviors ... and that creates distinctive problems and opportunities. If we don't acknowledge/accept that, we do indeed miss much of the significance of not only the humanities but the story-telling features of the sciences and of much of the rest of human creativity as well.

I am a humanist, but I don't think I practice it in quite the fundamentalist way that Paul describes (preaches?). Reminds me of a Rodney Brooks quote from his book Flesh and Machines:

"We are machines, as are our spouses, our children, and our dogs... I believe myself and my children to be mere machines. But this is not how I treat them. I treat them in a very special way and I interact with them on an entirely different level. They have my unconditional love, the furthest one might be able to get from rational analysis. Like a relgious scientist, I maintain two sets of inconistent beliefs and act on each of them in different circumstances..."

On the one hand, I can see the continuum from ant to human, and see how you can build story-telling/counterfactuals from simple prediction. That allows me to dismiss "intentionality" and "reason" as nothing more than neuronal activation. But, as an emergenaut, I can also appreciate "intentionality" and "reason" as emergent properties that have real cause and effect. I disagree with Brooks: I think these views are two different levels (as in emergent levels), and are not inconsistent with one another. Each has its own perspective. My children are both machines and my beloved.

My brand of humanism allows me to see both levels, but not as two separate entities. It seems we just don't have the language or concepts to describe this. If we did, I think most of us would agree with each other.

It may be wise to treat thoughts and consciousness separately from other ideas in emergence.


most memorable meme: the over-valued "human"
Name: Anne Dalke
Date: 2005-05-09 16:46:39
Link to this Comment: 15092

The second most memorable meme for me, last Wednesday morning, was Doug's crack (made from the particular perspective of a computer scientist) that he thought of "humanist" the way he thought of "sexist"--as valorizing one form of being/one kind of species over others. That is (my interpretation now): humanists over-value the human in the same way that sexists over-value one sex.

As a professional humanist, I've heard this critique before (most generally, we value human products over natural or mechanical ones; more specifically, we value literature that values the human over that which celebrates either the natural or the mechanical world). But this time, in the context of emergence, it surprised me--and I found particularly useful this notion that there/we are new things created in this process which did not exist before they/we emerged--but they/we are nonetheless *just* another stage in the process, one that ratchets it up a bit, by asking all the questions we ask....

Including those currently moseying around the forum on fundamentalism and relativism, where I also just found myself using Doug's idea of a continuum to suggest that Fundamentalism is relativism, with reference to a different point of comparison: that which came first, rather than that which is concurrent....both fundamentalism and relativism are fundamentally relative, (which is to say) =expressions of different stages of emergence.

Thanks, Jan, for keeping us @ it!

Meaning and memory
Name: Wil Frank
Date: 2005-05-11 12:07:21
Link to this Comment: 15112

More on the continuum: Meaning and memory. Not sure where this skein fits, but I thought I'd put it out there in case it helps - it helped me, so thanks for indulging me.

Meaning and memory seem to traverse a parallel track of development through degrees of complexity. Within the context of very simple systems, memory and meaning have a very simple definition, but not one fundamentally different from the cases which apply to conscious beings reading text and telling stories to themselves about past and current internal states. At the simplest levels memory can be thought of as a change in state, while meaning is the process that results in the change in state. A pillow has a memory, and in a slightly more complex system, a bacterium that registers its current state based on past states also has a memory. Meaning at this level could be thought of as simply a code, as in a string of symbols or a patterned fluctuation in electromagnetic waves that, when decoded (assuming a proper arrangement that can decode – a model builder – a lever, a robot or a biologically organic life form), can change the state of the decoder. Meaning at this simple level is “anything” that causes a state change in the decoder. Yes, solar energy warming a rock is a very simple memory in that the rock holds that change, however fleetingly, and the energy is meaningful in the sense that it changes the state of the rock. Photosynthesis occurring in a plant leaf is slightly more complex, but a completely parallel phenomenon. The leaf decodes light energy in a meaningful way, fixing carbon, and has a memory of that process in the form of a glucose molecule.

One might be more comfortable calling these examples passive interactions and simple innate behavior based on a strictly deterministic cause and effect (stimulus and response effect). But I claim that there is essentially nothing fundamentally different about the two ways of describing the phenomena.

Now let’s progress to a more complex system that we are more familiar with, our brain. I had just said that meaning at the simplest level is “anything” that causes a change in the state of the decoder. This seems to me a useful way to distinguish the flood of thoughts and stimuli bouncing around in our brains from what filters through and becomes conscious. The soupy solution of neuronal firings is meaningless until it becomes filtrate and engenders a conscious change in the “state of mind”. However, at the unconscious level all the stimuli and other subconscious thoughts are constantly changing neuronal patterns - changes in their state - and thus are the underlying meaning that gives rise to the conscious meaning at a different level. This is much like the “Tile Effect” of cellular automata – a new tile can be defined by 4 or 16, etc., smaller tiles that have the same set of rules governing them. I think most people are more comfortable calling the conscious-level phenomena meaning and memory, but again I see no critical distinction between the rules that apply at either of the levels.
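
The "Tile Effect" can be sketched in a few lines. This is a hedged toy of my own devising, not taken from the cellular automata literature: cells are grouped into blocks, and each block's majority vote defines a single tile at the next level up, so the same kind of coarse state-reading can be applied recursively at either scale.

```python
# Toy sketch of the "tile effect": coarse-grain a 1-D binary lattice by
# grouping cells into blocks; each block's majority defines one
# "supertile" at the next level, and the same rule applies again there.

def coarse_grain(cells, block=4):
    """Map each run of `block` cells to 1 if most are on, else 0."""
    assert len(cells) % block == 0
    return [int(sum(cells[i:i + block]) > block / 2)
            for i in range(0, len(cells), block)]

lattice = [1, 1, 1, 0,  0, 0, 0, 1,  1, 0, 1, 1,  0, 0, 0, 0]
level1 = coarse_grain(lattice)    # 16 cells -> 4 supertiles
level2 = coarse_grain(level1, 4)  # 4 supertiles -> 1 super-supertile
print(level1, level2)
```

The point of the sketch is only that the description at the higher level is produced by, yet read in the same vocabulary as, the lower one, which is the parallel drawn above between subconscious neuronal patterning and conscious meaning.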

second language acquistion
Name: Jan
Date: 2005-05-11 13:32:00
Link to this Comment: 15115

Thanks to Paul for setting me straight on the critical period hypothesis in second language acquisition/learning. Here is a review that refers to some of the research. "What is clear is that the old notion that the nature of L2 acquisition changes suddenly and dramatically at around the age of 12-13 because of changes in the brain is much too simplistic (as has been generally recognized for some time)."

out of a male cat's memory
Name: Anne Dalke
Date: 2005-05-13 18:58:04
Link to this Comment: 15179

Wil said, memory can be thought of as a change in state, while meaning is the process that results in the change in state.

I'd say it somewhat differently: a change in state can be thought of as making a "model," while the meaning we make of that change is what we call "memory."

What I remember (sigh--now another fraught word) most from this past Wednesday's discussion of Jan's male cat's memory is the distinction we were working between modeling and remembering, aka between updating a model and making a memory, aka between adaptive behavior and internal experience. What was most memorable to me was the realization that--directly contra recent instruction elsewhere regarding the conscious, deliberative, fabricated construction of memory--computer scientists use exactly the same term to describe what happens when they tinker w/ a robot, updating its record of what it needs to do.

So (being the wordsmith) I propose we distinguish (for our purposes here) between model-building and memory-making. Per the dictionary (predictable groans here): model (fr. L. modulus, dim. of modus, to MEASURE) is a very different thing from memory (fr. L. memoria, mindful, rel. to MOURN). The model can measure; only the memory can be sad that the measure falls short. THAT's the difference--the capacity to regret what is not. (As well, of course, to imagine anew.)

(more than) one question
Name: Anne Dalke
Date: 2005-05-18 12:49:08
Link to this Comment: 15215

"Ants have experiences, but they don't experience their experiences."

Thanks to Paul for this morning's provocations. What I took away from our (somewhat maddening) conversation was one commitment and (more than) one question.

The commitment: never again to use the word "experience." I will speak instead of "interactions" and "awareness," as a way of clearly distinguishing that-which-is-not-reflective from that-which-is. I'll be interested to hear, as time goes by, whether the matter of having "more or less awareness" can be laid out more clearly. And I'll be interested to see, as more time goes by, whether the computer scientists among us find that distinction as useful as this humanist does, or--given their commitment to building systems that "learn" ("think"?)--whether it seems to them nonsensical.

So much for precision of language. The (much more interesting) question raised for me by this morning's discussion (one I'll be talking more about next week), was summed up by one of Paul's final "key points": Ambiguity and uncertainty are...the fundamental "reality" by which the brain ... creates all of its paintings. Based on what I learned this morning, I'd re-phrase this: ambiguity and uncertainty are the "reality" which the brain paints in the stories it makes up.

I've already taken issue elsewhere with claims about "fundamental reality." What I'm seeing more clearly, from this morning's talking, is how much conflict and ambiguity is actually produced by our compulsion to try and make coherent stories--as well as our recognition that we never can succeed in doing so, that something will always refuse to be contained within the tales we craft. This is what literary scholars generally call sub-text, what Derrida more specifically called the "infinite deferral of meaning" (and what fueled his persistent "looking around" for the remnant that was not encompassed in any particular act of meaning-making).

The story heard by this brain this morning was that the architecture of all our bipartite brains enables us to concoct a "description of the whole" (=a story) which never encompasses the whole--and so generates ambiguity. This was for me a striking revision of my earlier understanding that we are provoked to make up stories in order to try and adjudicate among (pre-existing) conflicts. Am realizing now that both the "reality" of "ambiguity" which provokes our stories, and the stories which generate more conflicts within-and-among us, are the result of our internal instability (NOT a bad thing, since it means we are capable of telling a variety of stories, including counterfactuals, tales of what has not yet been).

Thanks for (momentary, only momentary!) clarification.

Name: Paul Grobstein
Date: 2005-05-18 17:33:42
Link to this Comment: 15216

Thanks all for rich/productive conversation. Notes for "On the Differences (?) Between Ants and People" are here. Had/have sense that at least among us things are converging a bit. Had feeling that the idea of story teller as adjudicator for distributed module 1 systems is emerging as an interesting one for lots of people. And that, pending some ingenuity with terminological issues, the notion of that being associated with consciousness/awareness/having internal experiences might be as well. As might the notion of "ambiguity/uncertainty" being a product of the bipartite arrangement (itself having come into existence without a planner). Which means that "purpose" and "meaning" not only come into existence late in the cosmic emergence process but do so at an essential cost, a loss of innocence that is the price of being able to create things the model makers couldn't have created.

All of which raises some further issues worth more exploration. One is the question of where, between human story telling as a deeply destructive force and human story telling as irrelevant, do the effects of human story telling (as distinct from human behavior generally) actually lie? And the other, particularly intriguing from a psychotherapeutic perspective for the moment, is how many kinds/layers of "conflict" there actually are. In the long run, my guess is that such issues will prove significant both in neurobiological and AI contexts as well.

ambiguity and chaos
Name: Ken Fogel
Date: 2005-05-20 00:43:39
Link to this Comment: 15222

The rearranged sentence of ambiguity versus reality jibes more with quantum theory. That is, everything is uncertain until someone observes it, and then it becomes "reality." Or at least that's my simplified take on it.

And the path from model-building to story-telling appears to follow a progression according to chaos or complexity theory. That is, emergence is more likely if an orderly system begins to show chaotic organization patterns, from which a new order can arise, while essentially conserving the original material.

One of these days, I'll learn html...

coming and going: next steps
Name: Anne Dalke
Date: 2005-05-25 11:39:13
Link to this Comment: 15238

Thanks to all for conversation arising, this morning, from my essay on Where Words Come From--and Where They Go. What I got from your generous responses was the very satisfying realization that the paper can actually make much stronger claims than it currently does, about both the coming and the going.

Each of us has experienced that moment of fear in a classroom when we thought we were "supposed" to know how to read a text--but didn't know how. Betcha re-starting the paper w/ that moment will enable me to highlight

Now: if I can just stop there/keep myself from following the next interesting point, about interplay between language and other sensory modalities (for starters: the rhythm and music of words...)

Thanks again, to all, for what persistently arises among/between us.

Name: Jan
Date: 2005-05-25 17:15:40
Link to this Comment: 15242

I like that idea, Anne, and The Editor is here to keep you in check. I'm working on the dialogue, and also want you to think about what points can be held, dramatically, for that.

A Jedi's Response to Benedict
Name: Wil Frankl
Date: 2005-05-31 11:19:04
Link to this Comment: 15262

At Cornell's graduation ceremony this past weekend I was pleasantly surprised by the address (here, in its entirety) given to the Cornell graduates of 2005 by Cornell's President, Jeffrey S. Lehman. It seems to me a not-so-subtle response to Benedict's "Call to Arms" against relativism. Below are just the most obvious points he made, to pique your interest.

....After you leave Cornell, you will have the opportunity to take positions of authority and responsibility. In those roles you will be required to act under conditions of uncertainty, to use your best judgment about what is going on when you have little information. These will be wonderful opportunities for you to do good in the world. They will invite you to draw on your very best qualities – your compassion, your intelligence, your intuition.

And at these moments you will also have the opportunity to negotiate the temptations of the Tristero Dark Side. It will be surprisingly easy to believe that you know more than you do, to see more order in the universe than is really there, to see less entropy, to see conspiracies where there is only coincidence. It will take hard work to remind yourself of the limits of your own knowledge, to stay receptive to new evidence, to keep an open mind, especially when you feel very real time pressures weighing on your decision.

Think, for example, of the national leaders who must assess the danger posed by other countries. The journalists who must decide how much credence to give an anonymous tip. The labor negotiators who must decide whether to trust the latest representations that management has made to them. In these contexts, people are naturally tempted to connect the dots. It is more satisfying to know the answer than to live with ambiguity. And often it is easiest to have that answer take the form of malevolence, or conspiracy. It is so tempting to rush to judgment.

And yet, you can defeat the temptations of the Windigo Dark Side and the Tristero Dark Side. You do not have to develop moral tunnel vision. You do not have to rush to judgment. I am happy to provide you with five strategies for staying true to your best selves. Think of them, if you will, as the five virtues of a Jedi Master: a love for complexity, a patient spirit, a will to communicate, a sense of humor, and an optimistic heart.....

May the force be with you
Name: Jan
Date: 2005-06-01 16:49:22
Link to this Comment: 15271

Anne's introduction of "chirpy," Karen's sparrow sighting, and our little lesson in German compounds made me think of Paul Klee's painting "Die Zwitschermaschine" ("The Twittering Machine") -- "zwitschern" means "to chirp" -- which was inspired by a 19th century clockwork mechanical bird tree in the Deutsches Museum (scroll down to second item). Klee's painting suggests something between nature and the mechanical, and how machines can stimulate fantasy and the imagination. I thought I had a point...oh well.

Crank it
Name: Chirpy Bla
Date: 2005-06-01 17:26:43
Link to this Comment: 15272

A Twittering Machine that you have to crank...

Name: Tweety
Date: 2005-06-01 18:29:54
Link to this Comment: 15278

i tought i taw a puddy tat.

Name: Paul Grobstein
Date: 2005-06-01 19:03:59
Link to this Comment: 15280

Thanks to David/all for another rich conversation. Some notes of things I thought particularly intriguing, for myself and any one else interested ...

The Kant->Hegel arc as a possible addition to the early history of emergence. Interested in the idea that people were motivated to try? and failed? to find a single principle to amalgamate Kant's three critiques (analogous to the semi-autonomous parts of a biological organism, the agents of an emergence simulation) and that Hegel added the essential (and whether he understood it or not infinitely extended) time axis. Interested as well in the notion of energy (Kraft) as recognition of movement/change in absence of external cause, both in general and in re "representation" and "imagination" (both as process rather than "faculty", with the latter requiring activation by something).

Interested as well in continuing explorations along the fundamentalism/relativism axis. Think it was useful to subdivide F into two versions: a) those who know truth and stick to it and b) those who don't yet have truth but believe in it/are looking for it (and so can argue about who is closer to it and who has the best method for getting still closer to it). The point here is very much not to try and confuse the two or to tar either with the other's brush but rather to make it clearer what R is by virtue of its difference from both.

R says that "truth" is irrelevant as a pursuit, replacing it as a motive force with something like "newness" or "less wrongness", where either is defined as anything that replaces existing "stories" with new ones that potentially correct some existing problem. The point here is that there may at any given time be many such replacements and there is no argument about methodology: one simply tries things to see what works. And accepts, perhaps even relishes, the uncertainty inherent in the experimental process. As well as the inherent lack of "objectivity" involved in defining "existing problems".

Re Mark and whether he is an R or an F(b) .... see his recent Swarthmore Last Collection speech. As an avowed R, I have no trouble at all endorsing most of that speech, including the importance of "an appreciation of knowledge" where that "entails an understanding that knowledge is the gradual accumulation of a complex interconnected system of propositions where the individual components of that system are falsifiable and have successfully withstood attempts at falsification." And I fully agree with Mark that there is a great need to continue to help people learn to recognize and avoid "uncritical acceptance of any ideas, especially ones that you want to believe" (ie to call attention to the known shortcomings of F(a); cf I Believe ... Its Significance and Limitations).

I do though bite my tongue a bit on the line "My plea is that you make an unwavering commitment to the truth". The problem here is not only that it's a commitment to something whose success can't be evaluated (even F(b)'ers admit they don't have any way to measure proximity to truth). Nor is the problem simply that the line has the air of Fness that people get into wars about. Most of all, I balk at the line because it seems unnecessary to invoke "truth". Why not simply say "My plea is that you make an unwavering commitment to profound skepticism"? ie to recognizing and avoiding "uncritical acceptance of any ideas"? No, that needn't keep one from acting at any given time based on what one knows at the time. But it does (usefully I think) remind one that the critical issue really is NOT "truth" but is instead being able to examine and criticize all ideas, including one's own.

So, Mark, an F(b) or an R in F(b)'s dress? Your call.

I am sure of so little.
Name: Anne Dalke
Date: 2005-06-01 21:32:59
Link to this Comment: 15281

Skipping back a couple of posts, picking up on the Jedi's vision...

...reading around this week, looking for some new texts to update my college seminar, I just came across "An African Story," by Bessie Head, who says,

It is preferable to have the kind of insecurity about life and death that is universal to humankind: I am sure of so little. It is despicable to have this same sense of insecurity...defended by power and guns....

What I was (not successfully) trying to say, as we ended our happy session this morning, was that I'd noticed two very productive ways in which emergence, as a framework for thinking about how things work (especially about how new things come into being) intersects w/ disciplinary work: one looks back, the other forward.

The first is the sort that David conducted for us today: an archeology of a discipline--such as German Studies--which provides a genealogy for emergent systems (w/ Hegel tentatively proffered as the great-granddaddy of what we're all up to?). The second is what I was working my way towards last week: using the concept of emergence to theorize about why certain disciplinary strategies--say, reception theory in literary criticism--evolved and flourish.

The first kind of intersection looks back to tell a story so that it ends with our modern concept of emergence; the second uses emergence as a framework to explain both what has happened and what can yet happen in a discipline. I see these two different sorts of disciplinary-interdisciplinary intersections as illustrating the very useful distinction Alan made today between "two senses of surprise":

Three other bits I took away from this morning's session (besides a general happiness at being involved in this still-evolving conversation)

Whether the latter activity makes us feel "chirpy" or "disastrous" is, as we concluded, a normative judgment, based on whether we seek out or fear what is not known.

There are two ways to think about understanding:


Emergence is the overriding principle that we lack overriding principles. (Paul)

And (she said chirpily) this is not a bad thing.

The Kant->Hegel Arc
Name: Doug Blank
Date: 2005-06-02 17:42:50
Link to this Comment: 15284

Speaking of the Kant->Hegel arc... literally. I would like to see this arc, in order to try to see the genealogy of the philosophy of emergence, and of reductionist thinking. What I'm thinking is, first, a timeline of people who contributed to this understanding, and a brief note about whose shoulders they were standing on. Then, I'd like to make a graphical chart. Not necessarily definitive, and not necessarily for any other purpose.

I've started making some notes from David, Paul, and Rob's talks about these philosopher/thinkers. I'd like to go from the "atomists" to Wolfram. I invite you to edit the page at:

This is a wiki page, which, if you want to edit it, requires you to make an account and log in (keeps out the vandals). I imagine brief statements on their impact on others.


replacing the arc w/ a rhizome?
Name: Anne Dalke
Date: 2005-06-04 14:46:56
Link to this Comment: 15292

I find myself both intrigued by and resistant to Doug's invitation to us to help him create an arc/genealogy/timeline of the emergence of emergence. I described above my own pleased reception of the archeology of German Studies which David gave us last Wednesday morning, thereby providing, in part, a genealogy for emergent systems. But I also saw another possible move, the use of emergence as a framework to see what can yet happen, as the result of "putting pressure on a system."

So, when Doug said, "I would like to see this arc," what I saw--actually flashed back to--was instead a (counter-?) image

that was offered during a session on Emerging Emergence last October, an image which indicated the understanding that

All of which is a theoretical way of saying that--given the complexity of the interactions of the systems which emergent thinking illuminates--I'm a little leery of attempting to construct a linear genealogy of the sort of thinking we are doing. The concept of emergence, like the sorts of complex systems it enables us to study, has itself both many many sources and many applications; we might thus beware reducing it to a single arc or trajectory. Traditional genealogies are tree-like, w/ branches and binaries; I think emergent genealogies might be instead multiple, lateral, circular, semi-lattice-like, rhizomic. (Remember? any point of a rhizome can be connected to anything other...very different from the tree or root, which plots a point, fixes an order...)

(The left-hand figure has been on the home page of the Working Group on Emergence for several years now; I took the right-hand one this spring in the courtyard of the Museo Nacional de Antropologia in Mexico City.)

All that illustrated/said (!) I'd recommend adding to the list a number of theorists from the humanities side of the fence:

m-1 evolves into s-2
Name: Anne Dalke
Date: 2005-06-06 18:35:31
Link to this Comment: 15299

When John Maynard Keynes was reproached for changing his mind, he replied,
"When I have new information, I change my mind. What do you do?"
(Introduction to von Humboldt's On Language)

Thanks to David, I've just uncovered yet another great-granddaddy for our Emergent Family Tree (I mean, Rhizome). He's Wilhelm Von Humboldt (1767-1835), whose On Language: The Diversity of Human Language-Structure and its Influence on the Mental Development of Mankind I was laboring (and I do mean laboring) my way through this weekend. Humboldt did NOT work very hard to make himself understood (as his editor says, "the reader is a barely tolerated presence"). This actually turns out to be quite apt, given von Humboldt's argument regarding the relationship between speech and thought.

Let me try to make it clearer than he does, because I think he can be useful to us. Language, for von Humboldt, is "not ergon but energia"--that is, not a product but a process, not representation but expression, not a perfect copy but a creative act. Grammar becomes, in von Humboldt's hands, distinct structures that aid mental development by limiting what is possible/shareable. His key idea is the "privacy of language," the absence of assurance that my meaning will be yours. He drew on Diderot's claim that "we share our signs owing to their insufficiency": that is, if each of us had language perfectly adequate for describing all we think and feel, we would not be able to understand one another. We adjust the words we use by testing them on others, but it's actually the "sad incompetence of human speech" which enables us to agree on phrases with shared meanings--which makes "all understanding also a non-understanding."

Von Humboldt's theory of language distinguishes between m-1 (moment one, the mere expression of feeling, an initial articulation that is "perfectly free") and s-2 (stage two, when we adjust the sounds we make, based on feedback from others, and so develop a means of communication):

"m-1" "s-2"
simultaniety successivity
poetry prose
expression imitation
passion and feeling reason and understanidng
imagination science and philosophy
synthesis analysis
warmth discursity
energy light

Sure sounds like emergence to me.

Emergent Videogame Design Links
Name: Jason Cole
Date: 2005-06-08 22:22:14
Link to this Comment: 15316

Here are some links to articles and papers I read when putting my presentation together. The Gamasutra articles require setting up a free account first. They don't spam. If you still don't feel like going through the hassle, I can print copies for next week upon request.

A 140+ page behemoth that talks about the good emergence/bad emergence difference, which is just part of what this guy has to say about interactive narrative, immersion, etc:

Interactive Narrative Theory and Practice by Jeffery Allan Ward. 2004.

A report of Will Wright's keynote at this year's Game Developers' Conference (GDC):

Will Wright's “The Future of Content” Lecture by Vincent Diamante. 2005.
(mirrored on my site)

Article which presents the authorship vs. player control conundrum:

Formal Abstract Design Tools by Bernd Kreimeier

Book referenced in Ward's Thesis (see above) for his definition of emergence:

Understanding Interactivity by Chris Crawford. 2000.


Jason Coleman

"This is not abstract."
Name: Anne Dalke
Date: 2005-06-09 22:08:53
Link to this Comment: 15319

I appreciated all you taught me yesterday, Jason, about video game design (plus your providing all these interesting cites, above). Especially intriguing for me was your description of the tension between authorship and player control, the ways in which designers are now trying to "control emergence" when players start breaking the boundaries of game play. But I got confused @ the end of our discussion about alternatives to redesigning games to deal w/ such exploits; I couldn't understand whether "making the player a selection force" meant that the game would be manipulating us or adapting to us (or both--I guess it's both? And I did appreciate Paul's observation that the whole activity had important theological implications!).

But in the midst of all this interesting speculation--as considerable blood was being flung around on the screen--I wrote to myself, "This is not abstract." I appreciate (value, enjoy) the need to understand the structure of play, and ways to open up the design space and enable exploration of new territory, but when that play is concretized in games in which human-like creatures are killed, well...

We've talked a lot this year about the bi-partite brain, about the loopy interaction between the full, unstructured business of the unconscious and the more spare and structured work of the conscious mind, and about how the continual effort to find ways to reconcile the difference in style--and resulting stories--between the two brain parts is a permanent tension that generates creative activity. So when I was asked, somewhat violently, if I "really wanted to talk about how video games cause social violence," the answer was "yes"--I do want to understand better what relationship there might be between the sorts of immersive interactive games being designed these days, and the prevalence of violence among the most popular of them. I do think that we need to keep on "looping" between the abstract and the concrete, between function and form, structure and style.

It may just be that "people are violent." It may be that many young men/game players in this country don't have any venue for acting--so blowing up virtual people gives them satisfaction. It may be that the sort of "freedom" we were celebrating yesterday morning--to notice potentials and exploit them--can only be realized in (violent?) action--though that doesn't jibe at all w/ something that has stuck w/ me from the forum running on Serendip during '02-03 on The Place of the U.S. in the World Community:

One strikes out at another, or a people or a nation, if and only if one is prepared to admit a total failure of the thoughtful mind to offer alternatives to the problem at hand. One may agree to go to war, but ONLY by first admitting intellectual and moral bankruptcy, an utter inability to conceive of alternative and preferable paths to the resolution of human conflicts.

If acting violently is a failure of imagination (which I think it is), then I suspect that creating games that invite us to act violently is too. And if not acting--that is, the ability to withhold action--is an expression of our free will, then it might be well worth our while to design and explore the potential of games in which not acting is a form of acting, in which it "takes more discipline to refrain from doing harm to others."

Name: Paul Grobstein
Date: 2005-06-10 16:24:44
Link to this Comment: 15323

Recently saw Crash (showing at the new Bryn Mawr Film Institute). Strongly recommend it to anyone interested in the real life issues of emergence as they bear on trying to create a more effective pluralistic society. See elsewhere for thoughts along these lines.

Name: Anne Dalke
Date: 2005-06-10 23:20:23
Link to this Comment: 15325

Seconded. Crash is QUITE apropos of our discussion of last Wednesday, on the tension between being in control and...


Langton's Ant
Name: Paul Grobstein
Date: 2005-06-29 17:33:57
Link to this Comment: 15368

Thanks Mark, all for rich/generative conversation this morning. We've done some editing on "The World of Langton's Ant" (thanks for input from Ann Dixon and several of you), so happy to have the exhibit relooked at (or looked at) and further commented on (either here or in an on-line forum for that exhibit itself). The ten questions Mark addressed are at which includes a link to Mark's responses at

A few thoughts from the conversation this morning, for myself and whatever use they might be to others ...

I think it will probably prove useful to recognize, as came out this morning, that the deterministic/random distinction may need some further subdivisions, in particular that there are two distinguishable categories under "deterministic", one being "closed form" and the other ... "fundamentally iterative"? The point here, as per Mark, is that "chaos theory" has established the existence of fully deterministic systems that are "ill-mannered". Such systems are difficult to predict from watching their behavior and may be very sensitive to initial conditions but are "in principle" predictable and will, if very carefully controlled, satisfy the requirement that they repeat their behavior exactly if started from exactly the same initial conditions. What is perhaps most interesting though, in the present context, is that the behavior of many such systems is demonstrably not describable in terms of any set of equations with time as a parameter. To put it differently, there is no way to determine the state of such systems at an arbitrary time by plugging that time into a formal description (a mathematical expression). The state at an arbitrary time can be determined only by allowing the system itself to evolve through all states from the start to the arbitrary time (hence "fundamentally iterative").
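The "fundamentally iterative" point is easy to make concrete in code. Here is a minimal sketch (mine, not the exhibit's) of Langton's Ant itself: two rules, fully deterministic, yet the only way to learn where the ant is at step N is to run all N steps -- there is no formula into which one can plug N.

```python
def langtons_ant(steps):
    """Run Langton's Ant from a blank grid; return (position, # of black cells)."""
    grid = set()                  # set of black cells (all others are white)
    x, y = 0, 0                   # ant starts at the origin
    dx, dy = 0, 1                 # facing "north" (y increases upward)
    for _ in range(steps):
        if (x, y) in grid:        # on black: turn left, flip cell to white
            dx, dy = -dy, dx
            grid.remove((x, y))
        else:                     # on white: turn right, flip cell to black
            dx, dy = dy, -dx
            grid.add((x, y))
        x, y = x + dx, y + dy     # step forward
    return (x, y), len(grid)

# Deterministic: identical initial conditions give identical behavior, always.
assert langtons_ant(1000) == langtons_ant(1000)
```

Nothing short of running the iteration tells you that after roughly 10,000 steps the ant settles into its famous "highway"; that surprise is exactly the point.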

The point here is not to be pedantic or to split hairs but rather to acknowledge a major twentieth century insight and to sharpen a twenty-first century question. Before the development of chaos theory, there was a presumption that "difficult to predict" and "random" were the same thing, and the effort was to continue to pursue formal descriptions with time as an independent variable of those things that persisted in being difficult to predict. The recognition of "fundamentally iterative" deterministic systems has dramatically changed the landscape, by showing that "difficult to predict" and "random" are definitively NOT the same thing. And raising the possibility (as per Wolfram and Mark) that "random" does not in fact exist at all; that there exist only the two deterministic categories.

My own inclination, for a variety of reasons, is to retain the distinction between deterministic (in its broader two part sense) and something else which is non-deterministic or truly "random" (as opposed to just being ill-mannered). Part of what inclines me to do so is a concern about the existence (or lack thereof) of "free will". But part of it is the more immediate concern that abandoning the concept of randomness both makes trouble in other areas of science where it is pretty firmly embedded and, even more importantly, unnecessarily narrows the ongoing questions one might ask. A commitment to complete determinism focuses attention, for example, on finding ways to eliminate unpredictability, a task that could prove impossible and, in any case, disinclines rather than encourages one to pursue some other questions that might prove generative (where does indeterminacy come from? what are its possible adaptive significances? how is it regulated?).

Mark (and Wolfram) notwithstanding, there is nothing at the moment that precludes the possible existence of significant randomness in all systems (ourselves and other living entities included). Nor, I freely acknowledge, anything that absolutely requires it. Any finite set of observations can equally be accounted for by a deterministic algorithm or by a process that includes some degree of randomness in it. And so we have, at the moment, no operational basis for rigorously discriminating between phenomena generated by deterministic processes as opposed to processes involving some degree of randomness. In such a situation one might opt for a commitment to determinism because one feels it, as Mark does, to be "simpler". That's a matter of taste though; my own sense is that it is "simpler" to start with randomness and derive order from it than to have to deal with where order came from in the first place. Or one might decide that since the two possibilities are not, at the moment, operationally distinguishable in general, they must be the same thing. Or (my choice) one might decide that we lack AS YET a full enough understanding of the differences to have a solid operational distinction.

Along these lines, I take some comfort in noting, from the exhibit and our conversations, that we actually CAN make an operational distinction in at least some restricted situations. A deterministic system is one that when started again from exactly the same initial conditions does exactly the same thing; a random system is one that does not (the latter, as Alan noted, can be shown in a small number of trials; the former may take an infinite number to be sure but one can get a pretty good sense in a smaller number). It is also my intuition that one can distinguish a deterministic from a random system if one has the time to make an infinite number of observations (or at least will be able to say it is a random system within some arbitrarily small probability of being wrong). These distinctions are applicable only in a restricted set of cases but may point towards ways to make the distinction more generally. The upshot is I will stay, for the moment at least, with my bet that there is more going on than can be accounted for in a deterministic universe, even one updated with chaos.
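The repeat-trial test described above can be sketched in a few lines (an illustration of mine, not from the conversation): rerun a system from identical initial conditions and compare trajectories. A chaotic-but-deterministic system (the logistic map, standing in for "ill-mannered") passes; a system drawing on the operating system's entropy pool (standing in for "true" randomness, whatever that turns out to be) fails almost surely.

```python
import random

def logistic_run(x0, steps, r=3.9):
    # chaotic ("ill-mannered") yet fully deterministic
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

def noisy_run(x0, steps):
    # updates driven by the OS entropy source, which cannot be seeded or replayed
    rng = random.SystemRandom()
    xs = [x0]
    for _ in range(steps):
        xs.append((xs[-1] + rng.random()) % 1.0)
    return xs

# Started twice from the same initial condition, the deterministic
# system repeats itself exactly ...
assert logistic_run(0.2, 50) == logistic_run(0.2, 50)
# ... while two runs of the noisy system diverge (with probability 1),
# so a single repeat trial suffices to rule out determinism here.
assert noisy_run(0.2, 50) != noisy_run(0.2, 50)
```

As the post notes, the test only works where one can actually restart from *exactly* the same initial conditions; for most natural systems that restriction is precisely what we lack.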

Some other things that may prove worth pursuing ...

Is there a way to actually measure the relative "simplicity" of deterministic as opposed to non-deterministic systems? Would a story teller come up with the set of instructions in Langton's Ant from observing it? Are there an infinite or a finite number of such stories? (relevant in re the Popper presumption of finite possibilities in justification of falsification as asymptotically approaching a description of Reality)? Does effective bidirectionality always preclude "closed form" descriptions? Oddity of PG/MK flipping from relativist/realist in general to opposite positions re "interesting"/"purposeful" (am more comfortable defending "purposeful" in absence of mind, ie "bidirectionality as generator of attractor behavior" as basic meaning of the stripped-down onion of "purpose"). Interesting relation between observer trying to predict world and story teller trying to predict own behavior; an inevitable tradeoff between ability to "choose" and ability to "predict"? The transition from the two to the three body problem as an example of going from category 1 to category 2 deterministic. The cybernetic (Wiener et al) revolution as the initial recognition of bidirectional relations as the source of self-correctedness, ie "purpose".

Name: Anne Dalke
Date: 2005-06-29 21:57:06
Link to this Comment:

I want to peek through the "crack" that Susan Wright offered (hi, Susan) when she asked her husband if he "knew the answers to the questions," and he responded that he "hadn't even read them." I'm a *little* less interested in the fact that "only a guy would do this" (take that sort of risk?) than I am in the idea that "doing this" is an exemplum of exactly the sort of indeterminacy that we were talking about--and that *some* of us (including the exemplar himself) were trying to get rid of--this morning.

As we walked out, Tim was describing the breakdown, at the gaming conference he just attended, between "narratologists" and "ludologists" (do I have the technical terms right?), between those who see gaming as story-telling, and those who see it as playing. For me, this distinction aligns with

So, Mark may talk the talk.
But he didn't (this morning @ least) walk the walk.

Yours in the service of not-just gender dimorphism,

Karen's talk
Name: Mark Kuper
Date: 2005-07-07 11:12:44
Link to this Comment: 15373

Since I couldn't be there in person, I thought that I would comment remotely on Karen's paper, which I very much enjoyed.

1) Because the cell involves such a multitude of complex and interacting processes, a theory of the cell may require emergent modelling. (This doesn't take the place of a detailed structural understanding of the pathways of the individual processes - such an understanding is necessary to correctly model how the "agents" (ie. the parts of the cell, suitably defined) interact.)

2) One gets the sense from the paper, and from much of our discussion, that reductionism is somehow antithetical to emergence. I don't see it this way at all. This is where Doug Blank's analytic vs. computational distinction comes in. If we could specify in complete detail how all the parts of the cell, or any other complex system, interacted, this would be reductionist in my view (in fact, in my view the alternative to reductionism is mysticism, not any kind of science). But we still might have to run Doug/Wolfram-like simulations to see how the whole system interacted.

3) Which brings me to the one moral of emergence that does not seem to be in Karen's paper. Toward the end, Karen writes, "Should we be 'playing God' and probing into questions best left for religion and philosophy?" Well, given the track record of religion (read today's newspaper), I have no idea what questions should be left to it, but one hallmark of emergent science is "surprise". Since the system is complex, it is very hard to predict how the entire system is going to behave if you change one apparently little thing. This is a cautionary tale, from science itself, about mucking around and creating new life.

What is reductionism?
Name: Doug Blank
Date: 2005-07-07 23:13:45
Link to this Comment: 15375

I thank Karen and Mark for their discussions. I think both helped bring up some distinctions that we hadn't made before.

If one considers the opposite of "reductionism" to be "mysticism" then I can see where Karen and Mark were coming from in their defense of reductionism. I think that there may be two related things that should be looked at separately.

The first is the idea that to understand a complex system (a whole) one should break it down and attempt to understand its parts. This is how we do science (or reverse engineering) and has shown to work well. Divide and conquer. It is the opposite of this one that leads to mysticism.

The second is the idea that understanding parts leads to the understanding of a whole. This is the flip side of the coin of the first, but is really a very different enterprise. The opposite of this form of reductionism is emergentism.

Of course some systems can be explained in a straightforward manner from their parts (in a closed-form way). Others cannot, and require a step-by-step simulation or algorithm (Wolfram's main point). We have tended to lump these two together, say, when we say that an outcome can be "predicted", but the methods are very different. There is a "mathematical modeling" group here at BMC. I understand them to be interested in the first (analytic) method, which seems distinctly different from the "computational modeling" of the second.
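The analytic/computational split can be shown with a toy pair of systems (my example, not Doug's): compound growth has a closed-form description with time as a parameter, so the state at step t costs one formula evaluation; the chaotic logistic map has (for general r) no known closed form, so the state at step t can only be reached by passing through every state before it.

```python
def balance_closed_form(p0, rate, t):
    # analytic: time enters as a parameter; no intermediate states needed
    return p0 * (1 + rate) ** t

def balance_iterative(p0, rate, t):
    # the same system stepped through -- agrees with the formula
    x = p0
    for _ in range(t):
        x *= 1 + rate
    return x

def logistic_state(x0, t, r=3.9):
    # computational only: no known formula x(t) for general r,
    # so simulation is the only route to the state at time t
    x = x0
    for _ in range(t):
        x = r * x * (1 - x)
    return x

# The two descriptions of the analytic system agree (up to float rounding).
assert abs(balance_closed_form(100, 0.05, 12) - balance_iterative(100, 0.05, 12)) < 1e-6
```

(At the special value r = 4 the logistic map does have a closed form, a reminder that "no closed form known" is a statement about our current descriptions, not necessarily a permanent property of the system.)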

Anyway, this has helped me understand why I agreed and disagreed with both Mark and Karen's reductionist ways. Now I see that there were two things.

reductionism and ...
Name: Paul Grobstein
Date: 2005-07-12 16:55:44
Link to this Comment: 15473

Share Mark's regrets at not being there in person, and Doug's interest in clarifying relation of reductionism and emergence. Perhaps helpful is a distinction I made in an older paper where I coined the term "naive reductionism" for "the presumption that there exists a single set of properties at a lower level of organization which suffices to account for those at a higher". "Naive reductionism" was a common investigational posture in a wide array of areas of biology during the heyday of the cellular and molecular revolutions. One of the things I enjoyed and found valuable about Karen's paper was the reminder that biologists have been forced by their own data to move beyond "naive reductionism" (and, in so doing, have discovered for themselves the need to recognize "emergent phenomena").

There are clearly alternatives other than "mysticism" to "naive reductionism". One, which Doug and Mark seem to have agreed on, is a form of reductionism that has emergence at its core: the idea that one can explain phenomena at any given level of organization by studying the parts and their interactions at a lower level of organization (without the additional "naive" presumption, and so allowing for the possibility that properties at lower levels of organization won't look anything like those at higher levels and might have to be simulated to see that they do in fact yield them).

Interestingly, though, the older paper also sounded a cautionary note about reductionism in general. It is not in fact the ONLY escape from "mysticism". There exists as well the investigational posture that admits of the possibility that a complete characterization of the properties and interactions at a lower level of organization in any defined system will fail to explain the higher order properties because there are additional influences operating from outside the system being investigated. There is nothing "mystical" about this, except to investigators myopic enough to believe their particular system is the whole world, and there are LOTS of known cases where influences from outside the investigated system (including phenomena at a still higher level of organization, so-called "top down influences") proved to be important.

naive reductionism
Name: edward tie
Date: 2005-07-12 19:22:45
Link to this Comment: 15474

If I may, please. Going back to Comments 14887 and 14938

There is a most elementary arithmetic logic which underpins naive reductionism, to extreme limits. Yet while displaying such an utter simplicity as one expects from an instance of natural emergence, one can already "read" almost universal consequences as directionally guided from this singular event. Universal in the sense of many of the more outstanding or popular "mysteries", which include:

(1) Generation towards the morphology of the adult human brain.
(2) Generation towards elements of a consciously live mind.
(3) Generation towards an ever higher recyclical and expanding circuitry.
(4) Parallel generation towards the layout of the nervous system throughout the body.
(5) Enabling the thinly necked communicative and reactive unity across the nervous system.
(6) Acquiring the initial organisational resource, leading to these attributes from a prior state of randomness.

That is, as assisted by an amateurish and only partial awareness of the ranges of characteristics which simultaneously reside throughout the brain. All on the basis of one strand only of naive reductionism. Sorry if I intrude without good or fair reason.

out of the crevasse--onto the mountaintop
Name: Anne Dalke
Date: 2005-07-13 21:53:36
Link to this Comment: 15487

Interesting to me, Edward, your insistence on the elementary arithmetic logic which underpins naive reductionism--since my own revelations, today, derive from a different place entirely.

Just so you've got an accurate picture of the landscape here:

The crevasse is determinism.
The mountaintop is free will.

I had two rather major revelations today, going WAY beyond onion peeling, way beyond the "unpeeling" of words we were offered by The World of Langton's Ant.

Step one: I was pretty surprised to hear Paul say this morning (and then to read in the educational reflections portion of The World of Langton's Ant) that a common feature of science is the unpeeling of words ... the realization that their meanings are not as straightforward as one otherwise might have thought. The "unpeeling" performed in The World of Langton's Ant involves stripping "intentionality" away from purpose, to expose the "core" operation of an "attractor"; it involves removing the presumption of "purpose" from "meaning," leaving us with patterns to which we manfully--and finally, unsuccessfully--try to attribute (=add) purpose.
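[An editorial aside, not part of the original post: the "attractor" being discussed can be made concrete with a minimal sketch of Langton's Ant itself. This is my own illustration, not code from The World of Langton's Ant. The rules are just two lines: on a white cell the ant turns right, on a black cell it turns left; it flips the cell's color and steps forward. After roughly 10,000 apparently chaotic steps, the ant settles into a recurrent "highway" pattern, the attractor to which the "purpose" talk gets unpeeled.]

```python
# Minimal Langton's Ant on an unbounded grid.
# White cell: turn right, paint it black, step forward.
# Black cell: turn left, paint it white, step forward.

def run_ant(steps):
    black = set()        # coordinates of black cells (all cells start white)
    x, y = 0, 0          # ant position
    dx, dy = 0, 1        # ant heading, initially "up"
    for _ in range(steps):
        if (x, y) in black:        # black cell: turn left, flip to white
            dx, dy = -dy, dx
            black.remove((x, y))
        else:                      # white cell: turn right, flip to black
            dx, dy = dy, -dx
            black.add((x, y))
        x, y = x + dx, y + dy      # step forward one cell
    return black, (x, y)

# By ~11,000 steps the ant is on its "highway" and marching steadily
# away from the chaotic central blob.
cells, pos = run_ant(11000)
print(len(cells), pos)
```

Nothing in the rules mentions a highway; the pattern is purely emergent, which is the point of the "unpeeling."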

As a side note, I think where this "scientific" (?) activity differs from the work of literary scholars is in our (=my side's) refusal to acknowledge a "core," a disinclination to even engage in the reductionism (naive or otherwise) of seeking a "core," a "smallest box" atop which all else has accreted. As I said in a talk on language @ Swarthmore last winter, the original is always deferred - never to be grasped....any "text" always exceeds the grasp of any "interpretation," any "reduction," any story of what it is/does.

But my revelation of this morning had to do with something far more radical than this simple "unpeeling" of accreted meanings. It was rather a complete reversal of conventional understanding of what constitutes free will. As Sharon Welch puts it in A Feminist Ethic of Risk, "Our moral and political imagination is shaped by an ethic of control, a construction of agency, responsibility and goodness which assumes that it is possible to guarantee the efficacy of one's actions....I criticize a particular construction of responsibility, the ethic of control, and argue for an alternative construction of responsible action, the ethic of risk."

I think Paul was offering us something along these same lines: a conception of free will that incorporates not only risk, but randomness, that makes the existence of unpredictability and uncertainty the very ground of our freedom. Because effects are emergent, prediction is not reliable. And because effects are emergent, deduction is insecure. If we can know precisely what effects proceed from what causes, if it's all scripted, then we are not free. So unpredictability is a good thing: it is the space of our freedom.

Whew. That's a lot. But that's not all. There's a further turn of the screw. I was raised by Southern Republican Methodists. Meaning: there was a very strong super-ego operating within my brain, very heavily reinforced by brains outside it. So I expressed my freedom as a child by refusing familial and social constraints (yes, there are traces yet...). The notion which emerged in our discussion this morning, which is being offered in The World of Langton's Ant (not to mention Fundamentalism and Relativism, and even more insistently in Writing Descartes), that freedom exists in the constraints--in the ability to withhold action, rather than in acting spontaneously, unimpeded by constraints--well: that's a biggie. Whatever the frog-brain is doing instinctively, it (we?) can learn to do otherwise. My head hasn't expanded enough yet to get 'round that one, but I'm working on it.

Thanks, all, for the stretching.

Congress announcement
Name: Gernot Ern
Date: 2005-07-19 07:09:34
Link to this Comment: 15563

Dear Colleagues,

On behalf of the Society for Chaos Theory, I would like to announce the 2nd International Nonlinear Science Congress in Crete, March 2006. Following the success of Vienna 2003, we aim to bring together an interdisciplinary group. Follow the link:

See you in Heraklion!


a new year, a new forum
Name: Ann Dixon,
Date: 2005-08-26 10:10:52
Link to this Comment: 15925

This forum for 2004-05 is now being archived. Please update your bookmarks to:

Also, please re-sign up for Keep Me Posted for the new forum if you wish to continue receiving posting notices.



© by Serendip 1994- - Last Modified: Wednesday, 02-May-2018 11:57:26 CDT