
agent-based models vs. CAs

jrohwer:
Here is my attempt to better articulate my argument that agent-based models are representable as CAs. We already know this because a Turing machine can be built in Conway's Game of Life, but this shows that it's not very complicated to do for a simple agent-based model like Langton's Ant. Before the argument, the implication I'm going for: the distinction between CAs and agent-based models is in fact arbitrary, and although this does not mean the distinction is not a useful concept, we should recognize its subjectivity.

I think that any agent-based model can be represented as a CA. Granted, the rules will have to be more complex, but it is still entirely possible to rewrite agents and their environments as elements of a single CA.

Take Langton's Ant (and this is just one of several ways to do it; the added complexity can be represented differently in the rules). It can be rewritten as a 2D CA in which each cell has four states: on, off, on + ant, off + ant. Really this is just two boolean variables per cell instead of the one we are used to. The rule is: a cell maintains its state unless a neighboring cell's ant state is on and that ant is "facing" the cell, in which case the cell's ant state is turned on and its color is switched. The direction the ant is facing can be handled by giving each cell a four-valued variable (up, down, left, or right), setting all of these initially to whatever direction the ant starts out facing (up), and updating each cell's value as a function of the direction value of the cell that previously "was the ant."

Clearly, the same rules are applied at each iteration to every cell in this system and to nothing but the cells (no agent), and the system gives rise to exactly the same behavior as Langton's Ant. I challenge anyone to find an "agent" in this CA. There is no agent because there is no encapsulated information that is specially treated. Of course, it is more efficient (and probably easier to understand) to write this system as an agent-based model, but maybe this example can serve as a reminder that there is nothing agent-based models can do that CAs can't. A more direct and complete proof is the fact that a Turing machine can be built in Conway's Game of Life.
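To make this concrete, here is a rough Python sketch of one way such a pure-CA encoding could look. It is only an illustration: the grid size, step count, toroidal wrapping, and variable names are my own arbitrary choices, and the color-flip timing follows the usual Langton's Ant convention (the cell the ant leaves is the one that flips). Every cell carries the same three variables (color, ant, facing), and the same local rule is applied to every cell on every step; an ordinary agent-based run is included only to check that the two produce the same grid.

```python
import numpy as np

N = 64          # grid side length (arbitrary)
STEPS = 300     # number of updates (arbitrary)

# Directions: 0=up, 1=right, 2=down, 3=left, with (row, col) offsets.
OFFSETS = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}

def turn(facing, color):
    """Langton's rule: turn right on a white (0) cell, left on a black (1) cell."""
    return (facing + 1) % 4 if color == 0 else (facing - 1) % 4

def step_ca(color, ant, facing):
    """One synchronous update: the SAME local rule is applied to every cell."""
    new_color = color.copy()
    new_ant = np.zeros_like(ant)
    new_facing = facing.copy()
    rows, cols = color.shape
    for r in range(rows):
        for c in range(cols):
            if ant[r, c]:
                # The cell the ant is leaving flips its color.
                new_color[r, c] = 1 - color[r, c]
            else:
                # Gain the ant if some neighbor's ant, after turning, faces this cell.
                for d, (dr, dc) in OFFSETS.items():
                    nr, nc = (r - dr) % rows, (c - dc) % cols
                    if ant[nr, nc] and turn(facing[nr, nc], color[nr, nc]) == d:
                        new_ant[r, c] = 1
                        new_facing[r, c] = d
    return new_color, new_ant, new_facing

def run_agent_based(steps, n):
    """Conventional agent-based Langton's Ant, for comparison."""
    color = np.zeros((n, n), dtype=int)
    r, c, facing = n // 2, n // 2, 0
    for _ in range(steps):
        facing = turn(facing, color[r, c])
        color[r, c] = 1 - color[r, c]
        dr, dc = OFFSETS[facing]
        r, c = (r + dr) % n, (c + dc) % n
    return color

# Initial condition: blank grid, one ant in the middle, every cell "facing up".
color = np.zeros((N, N), dtype=int)
ant = np.zeros((N, N), dtype=int)
facing = np.zeros((N, N), dtype=int)
ant[N // 2, N // 2] = 1

for _ in range(STEPS):
    color, ant, facing = step_ca(color, ant, facing)

assert np.array_equal(color, run_agent_based(STEPS, N))
print("CA and agent-based runs produce identical grids after", STEPS, "steps")
```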

Comments

LauraKasakoff:

I agree. If CAs are supposed to be the representation of everything (life, computers, the Game of Life), it follows that an agent-based model emerges from a CA of higher complexity, as you have described (very nicely). My gut tells me that an agent-based model should also be describable as a one-dimensional CA. In fact, shouldn't anything computable be representable through a one-dimensional CA?
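For what it's worth, there is support for that intuition: Rule 110, an elementary one-dimensional, two-state, nearest-neighbor CA, has been proved Turing-complete (Matthew Cook's result), so in principle anything computable can be encoded in it. Here is a minimal sketch of Rule 110 itself; the width, number of steps, and single-seed starting row are arbitrary illustrative choices.

```python
import numpy as np

RULE = 110
# Look-up table: for each 3-cell neighborhood (read as a number 0..7),
# the new state is the corresponding bit of the rule number.
table = [(RULE >> i) & 1 for i in range(8)]

def step_rule110(row):
    """One synchronous update of an elementary CA row (wrapping boundaries)."""
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = 4 * left + 2 * row + right
    return np.array([table[i] for i in idx], dtype=int)

width, steps = 64, 32
row = np.zeros(width, dtype=int)
row[-1] = 1                      # classic single-seed initial condition

for _ in range(steps):
    print("".join("#" if x else "." for x in row))
    row = step_rule110(row)
```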
Kathy Maffei:

So I guess what you're saying is that it's not a matter of determining whether or not there's an agent in a CA, but that everything, including agent-based models, is in some way representable as a CA. So perhaps "agent" is just a convenient label for some collection of data (of CA cells, for example). It occurs to me that since a Turing machine can be built from a CA, wouldn't the halting problem also exist in the CA domain? And if that's so, then doesn't that prove that a CA isn't deterministic?
JoshCarp:

About the halting problem: all programs are wholly deterministic, but their behavior may not be predictable without running them. So a program can be irreducible without being non-deterministic.

Regarding agents, I think we agreed (or no one contested) that a 1D CA can compute anything that any other computer can compute. That includes agent-based models. In this sense, at least, agents and CAs are not distinct from each other. Like Jesse wrote, the apparent difference is encapsulation. But the example of representing Langton's Ant as a 2D CA shows that two systems can follow the same ruleset with different levels of encapsulation. I don't think "agent" or "CA" usefully describes a ruleset; I think terms of this kind describe implementations of rulesets, independent of what those rules do.

If we accept this, I don't know where we stand on the question of "cheating" with agent-based models. If we grant that agents are no more than encapsulated CAs, it doesn't seem like cheating to use them. But maybe we aren't justified in taking that encapsulation for granted. The book I'm reading for this class, The Complexity of Cooperation, uses agent-based models to describe the aggregation of simple agents into larger, more complicated ones. Perhaps we should give more attention to possible mechanisms for the emergence of agent-like systems from CAs.
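As a toy illustration of that determinism/unpredictability distinction (my own example, not something discussed above): the Collatz iteration below is completely deterministic, yet nobody knows how to predict its stopping time, or even prove that it always stops, other than by effectively running it.

```python
# A deterministic but (as far as anyone knows) irreducible computation:
# nothing here is random, yet predicting the stopping time appears to
# require actually iterating the map.

def collatz_steps(n):
    """Iterations of n -> n/2 (even) or 3n+1 (odd) until reaching 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

for n in (6, 7, 27):
    print(n, "->", collatz_steps(n), "steps")
```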
Kathy Maffei:

I don't think that the terms "agent" or "CA" describe rulesets, either. I was suggesting that since CAs can represent agent-based systems, the term "agent" (as we're trying to determine whether it applies to CAs) might be best described as a collection of data. Consider a glider, for example: it's a collection of data (with changing states) that moves from one set of cells to another through the domain. Your definition of "agent" as describing implementation also seems reasonable, and maybe not inconsistent with mine. So, back to Doug's question: "What is the difference between a glider in the Game of Life and Langton's Ant?" Would you say that your "implementation" definition implies that the glider and the ant are both agents?
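A small sketch of that glider point, for concreteness (the grid size and the specific glider placement are arbitrary choices): the five live cells of a standard glider in the Game of Life are nothing but cell states, yet after one period of four updates the same pattern reappears shifted one cell down and one cell to the right, which is what tempts us to call it a "thing" that moves.

```python
import numpy as np

def life_step(grid):
    """One synchronous Game of Life update on a wrapping grid."""
    neighbours = sum(np.roll(np.roll(grid, dr, 0), dc, 1)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

N = 16
grid = np.zeros((N, N), dtype=int)
# Standard glider pattern near the top-left corner.
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r, c] = 1

start = grid.copy()
for _ in range(4):
    grid = life_step(grid)

# After one period (4 steps) the glider has translated by (+1, +1).
assert np.array_equal(grid, np.roll(np.roll(start, 1, 0), 1, 1))
print("glider moved one cell down and one cell right after 4 steps")
```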
Leslie McTavish:

The halting problem is undecidable. That is not the same as non-deterministic. Maybe I'm not following your argument correctly?
PaulGrobstein:

Yes, Langton's Ant can be simulated by a CA. But it doesn't follow from this that "any agent-based model can be represented as a CA". Yes, agent-based models may be simpler/less time-demanding than CAs, and yes, they may be more "subjectively" appealing, but there are some additional issues at stake that have yet to be resolved.

One is the problem of determinacy and indeterminacy. A CA is by definition a deterministic system. An agent-based model may or may not be. Yes, we normally run them on a deterministic machine, and so the model implementation is at best "pseudo-random". As a result any observed behavior can in principle be simulated using a Turing machine (i.e., a one-dimensional CA). But that leaves entirely open two questions. One is whether that particular Turing machine/CA (of the inconceivably large number of them) would ever have come into existence were it not for the prior existence of the agent-based system. The issue here is not entirely one of "subjectivity". If we take our problem to be accounting for what we observe, then we may have to take seriously that agent-based systems existed prior to there being humans to observe them, and that they created phenomena that MIGHT have been created by a Turing machine, but not within the span of time available to create the phenomena we are trying to make sense of.

The other open question has to do with the possibility that agent-based systems (as opposed to our implementations of them) may in FACT be non-deterministic. As we talked through in connection with Voyage to Serendip, there clearly ARE situations where non-deterministic processes can do things that deterministic algorithms cannot. And, as we also talked about, there clearly are in existence things for which there is no deterministic algorithm (Chaitin's Omega).

An additional reason to suspect that there is something in agent-based modelling that is lacking in CAs has to do with Laura's point that causal bidirectionality between different levels of organization may bring into existence new properties that don't exist in "flat" architectures such as CAs (see also Josh). Here again, one may simulate "computable" characteristics of such systems on deterministic, serial machines, but there is an open question as to whether all such characteristics are "computable".

The upshot is that a deterministic universe (digital or otherwise) may have been the favored default view among scientists for several centuries, but there is increasing justification for denying it that status any longer. And, along with that, there are increasing reasons to suspect that the reducibility of agent-based models to CAs may also be a passing phase, related not only to our intellectual heritage but also to our current technology (the deterministic serial computer).