Emergence, Week 3
Welcome to the on-line forum associated with the Biology 361 = Computer Science 361 course at Bryn Mawr College. It's a way to keep conversations going between course meetings, and to do so in a way that makes our conversations available to others who may in turn have interesting thoughts to contribute to them. Leave whatever thoughts in progress you think might be useful to others, see what other people are thinking, and add whatever thoughts that process generates in you.
As always, you can leave whatever thoughts occurred to you this week. But if you need something to get you started ...
Deterministic vs non-deterministic models? The importance of context in presenting models? General thoughts from experiences so far in building your own models?
Deterministic Model
The model I created is a representation of a rash forming in a particular manner. Under the requirements I set, the rash will always form the same way, which is what makes it a deterministic model. The thing I've been thinking about is how deterministic models interact with each other. What would happen if this one rash were to overlap or run into itself, or into another deterministic rash? Would they then dance together and spread more, or, because both of their programs have run through completely, would they merge and just end their spreading? It of course depends on the output the computer gives at the time: if there is more for the program to compute, it won't stop until it has finished.
Are they truly deterministic, or are there minor "surprises" that we don't see when two instances interact with each other?
To offer an opinion on the great question asked above, I think that they still _will_ be deterministic, but we won't know until we can test two of the same (or almost the same) models in close proximity to one another.
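Since the original rash model isn't posted here, the NetLogo sketch below is only a made-up stand-in, not the actual model: two patch-based "rashes" that each spread outward one step per tick and eventually meet in the middle. Because every patch decides its next state from the current state alone, and ties are broken by a fixed rule (rash A wins), the combined system contains no randomness at all and replays identically every run; swap that tie-break for a random choice and the "surprise" comes back exactly where the two rashes interact.

  patches-own [ rash next-rash ]   ; 0 = healthy, 1 = rash A, 2 = rash B

  to setup
    clear-all                      ; all patches start healthy (rash = 0)
    ask patch -10 0 [ set rash 1  set pcolor red ]
    ask patch  10 0 [ set rash 2  set pcolor blue ]
    reset-ticks
  end

  to go
    ;; phase 1: every patch decides its next state from the current state only,
    ;; so the outcome does not depend on the (random) order in which patches run
    ask patches [
      set next-rash rash
      if rash = 0 [
        if any? neighbors4 with [ rash = 1 ] [ set next-rash 1 ]   ; fixed tie-break: rash A wins
        if next-rash = 0 and any? neighbors4 with [ rash = 2 ] [ set next-rash 2 ]
      ]
    ]
    ;; phase 2: apply all of the new states at once
    ask patches [
      set rash next-rash
      if rash = 1 [ set pcolor red ]
      if rash = 2 [ set pcolor blue ]
    ]
    tick
  end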
Deterministic?
In class, we discussed deterministic versus non-deterministic models. A deterministic model is one that has the same outcome every time: it follows a set of rules and thus produces the same interactions each time we run it. While I was playing with NetLogo, I wrote a few simple lines of code and expected a deterministic model, but somehow got different outcomes when I ran the model multiple times. I had created 4 turtles, facing 0, 90, 180 and 270 degrees respectively, in the setup procedure. In the go procedure, I asked them to move forward 1 unit and then, if the patch is black, set it to yellow and turn right 90 degrees, but if it isn't, set it to black and turn left 90 degrees. (I think that's very similar to what Kathy had shown in class.) It is creating a bunch of different results and I am not sure why. There isn't any variability here – shouldn't it be doing the same thing every time? Any thoughts?
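Here is roughly what I wrote, reconstructed from memory, so the exact procedure names and details may not match my file:

  to setup
    clear-all
    create-turtles 4 [
      set heading 90 * who     ; the four turtles face 0, 90, 180 and 270 degrees
    ]
    reset-ticks
  end

  to go
    ask turtles [
      fd 1
      ifelse pcolor = black
        [ set pcolor yellow    ; on a black patch: paint it yellow and turn right
          rt 90 ]
        [ set pcolor black     ; otherwise: paint it black and turn left
          lt 90 ]
    ]
    tick
  end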
Here is an example model
Here is an example model that uses the "random-seed" command to illustrate what you describe...
Pseudo Non-determinism in NetLogo
The reason you're getting different results is that NetLogo itself uses randomness behind the scenes even when your code doesn't. Every time "ask turtles" runs, NetLogo shuffles the turtles into a new random order, and that shuffle is drawn from the same random number generator that commands like "random" use. Your four turtles all start out on the same patch, so whenever two of them end up dealing with the same patch in the same tick, the order in which they get to flip its color and turn changes what happens next, and the runs diverge from there. Unless you fix the generator with "random-seed", each run starts from a different seed, so the hidden shuffles, and therefore the outcomes, differ from run to run.
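If you want a run you can replay exactly, one fix is to pin the generator's seed yourself at the top of setup; the particular number below is arbitrary, and removing the line gives you a fresh, different run each time:

  to setup
    clear-all
    random-seed 137                               ; arbitrary fixed seed: the hidden shuffles now repeat every run
    create-turtles 4 [ set heading 90 * who ]     ; same four headings as in the model above
    reset-ticks
  end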
Deterministic Models
From the beginning of the course we have worked with the idea that complexity can arise from simple interactions. Last week it became even more convincing that simple deterministic rules can result in complex-seeming behavior, as seen in Langton's Ant and the Game of Life. The importance of the environment was also highlighted in Langton's Ant: the instructions of the agent do not change, and therefore the agent is not changing; instead, the environment changes as a result of the established rules, and those changes in turn affect the agent. It is also the case that changes in the environment can be produced by an agent as well as by an observer. As for last week's conversation about deterministic models, we were left with the following thought: maybe everything that can be done non-deterministically can also be done deterministically?
I think that it is possible, and even helpful, to deterministically model something that is thought to occur in a non-deterministic way. On the other hand, while many things can be modeled in a deterministic way, there are certainly things outside the deterministic range. A common objection to deterministic models is that they are limited in what they can show. However, this objection does not take into account the fact that deterministic models can teach us a lot and help us to understand things that are seemingly complex. If we accept that the point of a model is not to determine what is real, but rather to show what might be, then deterministic models may be considered good models.
On another note, I think that both deterministic and non-deterministic models can contain the emergent element of "surprise", because you can get unexpected results in both. However, in the deterministic model, what is initially "surprising" soon becomes predictable because of its basic deterministic quality: the model will do exactly the same thing again if restarted from the same starting point. Non-deterministic models, on the other hand, are able to uphold the element of "surprise" and unpredictability, which brings an important question to mind: does rule-based unpredictability still leave us in a deterministic mode?
Comparing Context: 'Vants' and Serendip's 'Langton's Ant'
My initial reaction to Serendip's version of Langton's Ant was 'Ahhhh! Too much text!' In part this was because we had been talking about the model in class, so much of the text was more or less repeating things we had discussed already. Also, I'm one of those people who really can't stand to read blocks of text on a screen. When I get long newsy emails from my friends I automatically drop them in a folder so they won't stay in my inbox, taunting me: 'I'm really interesting! You should read me!' and then never actually read them. Text-rich websites tend to elicit the same reaction. This is not to say that I have a chronically short 21st-century attention span conditioned by action movies and television advertising; I spent hours a day during winter break reading, and have spent hours at a time this weekend playing with NetLogo. I just don't like reading on-screen.
As an inveterate tinkerer, I appreciated the various controls provided by Serendip’s models, but what I really wanted was to see the code. With the Vants model I could not only think about emergence and toy with the controls, I could read the code and pick up programming tips, or even write in some new code and see what happened. I guess I’m a ‘learn by doing’ rather than a ‘learn by reading about’ sort of person (although I do read and save instruction manuals). Serendip’s models definitely have an interactive component, but they’re not nearly as ‘Tinker-Toys’ as a model that can be fundamentally altered.
Randomness and Deterministic Models
I've been thinking more about the question as to whether, were the Big Bang to be run again, every individual event would happen exactly the same as has happened this time around. On the surface, it would appear that there is too much random chance for that to happen, as we pointed out in class. However, the more I think about it, the more it seems vaguely possible that what we perceive as chance is actually predetermined and in fact deterministic. In the case of Langton's Ant, it looked at first like the ant was moving randomly until it began building the road. Upon replaying the model, we saw that the "random motion" could be re-created no matter how many times you run the simulation. The key here is that it took a second running for us to recognize it. Since we cannot and have not (to the best of my knowledge) replayed any sequence of time, we can't yet recognize that what we think of as chance isn't random at all. Assuming this is true, the Time Paradox makes sense. If one were to go back in time and change something in the past, the program would be altered and would fail to progress as planned. Alternately, if one were to get a glimpse of their future self and as a result try to change the course of their life, the program would also be altered.
The opening scene in Tom Stoppard's play "Rosencrantz and Guildenstern Are Dead" illustrates what I'm trying to express. The two title characters have been flipping a coin for a while, and every time it has come up heads. They talk about how it shouldn't be surprising each individual time the coin comes up heads, because chance tells them that it is just as likely to come up heads as tails. They also talk about how they could potentially be trapped in a time loop where they are not spinning a single coin multiple times but are replaying a single coin spin multiple times; hence the result is always the same. If that were the case, it would be a prime example of a deterministic system: if you replay an event in exactly the same way, it gives you the same result every time. This may be a simple example, but it does help support my idea that randomness is just a narrow perspective on a deterministic system. After all, without knowing the rules of Langton's Ant, we couldn't predict its outcome. Maybe we just aren't familiar enough with the underlying rules of the universe to have the perspective that would allow us to eradicate the concept of randomness.
deterministic and non-deterministic models, plus ...
Intriguing conversation today about deterministic/non-deterministic models/processes. Interested in the unanimity of the feeling that if the Big Bang were repeated, I might not be there lecturing the second time. Because there are too many variables to control? Because it wouldn't be as much fun as computer games? Because consciousness introduces something new? Because we don't like the idea of fate? But, maybe, anything that can be done non-deterministically can also be done deterministically? To consider further ...
On another (related) note: could one implement Langton's Ant in a Game of Life format, i.e., with (in the NetLogo terminology) only patches, no turtles?
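Here is one rough, untested sketch of how that might look, with the ant's position and heading stored as patch variables so that the only agents are patches:

  patches-own [ has-ant? ant-heading ]   ; the ant is just data carried by one patch at a time

  to setup
    clear-all
    ask patches [ set has-ant? false ]
    ask patch 0 0 [
      set has-ant? true
      set ant-heading 0                  ; 0 = up, 90 = right, 180 = down, 270 = left
    ]
    reset-ticks
  end

  to go
    ask patches with [ has-ant? ] [
      let new-heading 0
      ifelse pcolor = black
        [ set new-heading (ant-heading + 90) mod 360    ; on black: turn right ...
          set pcolor yellow ]                           ; ... and flip the color
        [ set new-heading (ant-heading - 90) mod 360    ; on yellow: turn left ...
          set pcolor black ]                            ; ... and flip it back
      set has-ant? false                                ; hand the ant to the next patch over
      ask patch-at-heading-and-distance new-heading 1 [
        set has-ant? true
        set ant-heading new-heading
      ]
    ]
    tick
  end

Note that this still singles out one 'active' patch per step rather than having every patch update synchronously from its neighbors the way the Game of Life does; making the rule fully synchronous is the harder, and perhaps more interesting, version of the question.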