Serendip is an independent site partnering with faculty at multiple colleges and universities around the world.


Emergence-C, Confusion!

LauraKasakoff
During Monday's lecture Professor Blank told us about a split into two separate approaches to studying artificial intelligence: rational models and emergent models. Professor Blank writes that "These two paradigms, in my opinion, have little to do with one another. That is, emergent models can certainly show rational, rule-like behavior. But the implementation of emergent models have nothing to do with how rational models operate." I don't understand how this dichotomy is possible! I agree that emergent models can show logical "rational" behavior. We need only look to Langton's Ant and its rule-like behavior when building its road to believe it. However, I feel like rational models could have a role in the implementation of emergent models. The only reason I can find to support this intuition is the way in which I imagine the human brain works. It seems to me that the brain is a rational model of neurons which calculate and make decisions, while the way in which we experience consciousness is an emergent phenomenon resulting from the rational model of these neurons. It is entirely possible that I am thinking about these models in the wrong direction. I am hoping to start up a discussion on the matter to help me think more about it and relieve my irrational emergent confusion.


DavidRosen

I think Professor Blank was using the word "rational" in the sense of "logical". That is, the "rational" approach to AI usually focuses on explicit operations and symbols, recursively applying these operations until the goal is reached; the operations are designed in detail by the programmer. For example, chess-playing programs store all of the rules of chess, and then use these rules to evaluate as many board states as possible in the time allowed, in order to determine which move to perform. An emergent approach would use bottom-up systems like genetic algorithms and neural networks to solve the same problem. Other ways to think of rational vs. emergent might be symbolic vs. subsymbolic, algorithmic vs. learning, top-down vs. bottom-up, and so on.
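The "rational" style David describes can be sketched in a few lines of Python. This is an illustrative toy, not anything from the course: the rules of a simplified game of Nim (take 1-3 stones; taking the last stone wins) are stored explicitly, and the program exhaustively searches the tree of possible moves, exactly the kind of programmer-designed, symbolic search a chess program performs on a larger scale.

```python
# "Rational"/symbolic AI in miniature: explicit rules plus
# exhaustive search over game states, all designed by the programmer.
# Toy game: players alternate taking 1-3 stones; whoever takes the
# last stone wins.

def can_win(stones):
    """True if the player to move can force a win from this state."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # Try every legal move; we win if any move leaves a losing state.
    return any(not can_win(stones - t) for t in (1, 2, 3) if t <= stones)

def best_move(stones):
    """Return a winning move (1, 2, or 3) if one exists, else take 1."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return 1  # no winning move exists; make an arbitrary legal move
```

An emergent approach to the same game would instead train a network or evolve a strategy from experience, never storing the rules in this explicit, inspectable form.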
Doug Blank

Thanks, Laura, for your comments. I should say that most experts in the field would probably side with you and say "what in the world is he even talking about?! That doesn't make any sense!" You make a good point when you point out that (and here I paraphrase) I need a rational system to write the computer program in order to implement my emergent neural network. But the level of the computer doesn't have much to do with the level of the neural network. Likewise, the level of the neural network doesn't have anything to do with logic, or even "making decisions". All the network level is doing is adding, multiplying, and number squashing.

What the two systems do have in common is their behavior. Both can compute the same "functions". Maybe one does it through a lookup table, the other by adding, multiplying, and squashing. The network goes through a level of nodes and numbers, and this is the level that does the actual "thinking". In the lookup table, there is no other level. This is a critical difference, I think. You could say that neurons are being rational and deciding to make decisions. But the level at which they act is different from the level at which the meaning is actually computed.

Another way to say this is that I think Einstein would make a really lousy neuron. Why? Because he would be deciding whether to fire or not by trying to make sense of the signals around him. But the "sense" isn't made at this level; it is made at the level above this one: the network that Albert would be embedded in. It emerges from the network of the micro-Einsteins' interactions. So, a system that computes "emergently" (or, as David says, "subsymbolically") computes below the level of the meaning. On the other hand, rational systems compute at the level of the meaning. This idea has been developed more fully in cognitive science.
For example, see the paper Syntactic Transformations on Distributed Representations that discusses exactly the type of language-processing networks we explored on Wednesday. We should continue this discussion...
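Doug's lookup-table-versus-network point can be made concrete with a small sketch (the weights here are hand-picked for illustration, not learned). Both pieces of code compute the same function, XOR, but the table operates at the level of the meaning, while the network only adds, multiplies, and squashes numbers; XOR is not stored anywhere in it, and emerges from the interaction of its units.

```python
import math

# The same function -- XOR -- computed at two different "levels".

# Level of meaning: the answer is simply looked up.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def squash(x):
    """Sigmoid squashing function, mapping any number into (0, 1)."""
    return 1 / (1 + math.exp(-x))

def xor_network(a, b):
    """Below the level of meaning: only adding, multiplying, squashing.

    Hand-picked weights; a real network would learn these from examples.
    """
    h1 = squash(20 * a + 20 * b - 10)      # hidden unit ~ "a OR b"
    h2 = squash(20 * a + 20 * b - 30)      # hidden unit ~ "a AND b"
    out = squash(20 * h1 - 20 * h2 - 10)   # output ~ "OR but not AND"
    return round(out)
```

No individual unit "knows" it is computing XOR, which is exactly the point of the micro-Einstein story: each unit just squashes its weighted inputs, and the meaning lives one level up.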