I think that the reason most people, myself included, feel more comfortable with the idea of a model with multiple input/output boxes is that it permits more "internal" activity and allows for more complex behaviors. With the single box, or shall we say "function model", there is only one unique Y (output) for every X (input). This creates a problem for modeling behavior, since there are often situations where the same input will result in two different outputs, or where two inputs will result in the same output. A model with many different processing units helps explain this discrepancy and allows for a sequence of logical steps and "loops" which are more easily equated with thought.
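The contrast can be sketched in a few lines of code. This is just an illustrative toy, not any particular model of mind: the names `pure_model` and `StatefulModel` are hypothetical, and the "internal state" here is only a running sum standing in for whatever internal activity a multi-box model might have.

```python
# A "function model": the same input X always yields the same unique Y.
def pure_model(x):
    return x * 2

# A toy stateful model: internal state lets the same input produce
# different outputs on successive calls, since the output depends on
# the input AND on the history of previous inputs.
class StatefulModel:
    def __init__(self):
        self.memory = 0  # internal state accumulated across inputs

    def respond(self, x):
        self.memory += x         # each input alters the internal state
        return x + self.memory   # output depends on input and history

print(pure_model(3), pure_model(3))  # identical outputs: 6 6

m = StatefulModel()
print(m.respond(3), m.respond(3))    # same input, different outputs: 6 9
```

The point is only that once a model carries state between steps, the one-X-one-Y mapping breaks down, which is exactly the extra room for behavior the multiple-box picture provides.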

Thought is a fundamental part of our definition of a human being. Our capacity for abstract reasoning has long been considered what differentiates us from other creatures and machines. Whether or not other creatures have this ability may be a point of contention, but human beings have since biblical times dismissed the possibility of self-awareness in other animals. While this may be an example of gross human egoism (after all, it's easier to harness your plow-horse if you regard the animal as nothing more than a plowing machine), this distinction between humans and animals is deeply ingrained and has a profound effect on the way in which we view ourselves and our place in the world.

The idea that thought is a characteristic limited to human beings also helps to explain the pessimism, fear, and wonder surrounding the effort to create artificial intelligence. A machine with the ability to think would cease to be only a machine. Although it might still lack the telltale opposable thumbs, questions regarding its humanity would arise. The manufacture of another person tends to make us nervous. The monster of Mary Shelley's Frankenstein and HAL of Arthur C. Clarke's 2001 serve to illustrate that fact. We fear the consequences of such hubris; human beings have never looked lightly on usurping the gods. Our fear may be completely irrational in many respects. The creation of a thinking machine may prove impossible. If it is ever accomplished, we might discover that our lack of knowledge regarding the human thought process resulted in something that thinks in a manner foreign to us. We might also discover more about our own thought processes from a thinking machine.

Lots of interesting thoughts. And I agree with your analysis of why people tend to draw a sharp distinction between humans and other animals/machines. The question of whether it will continue to be useful/possible to maintain such a sharp distinction is a very interesting one, with lots of significant ramifications. I, for one, would like to have as many "thoughtful" entities around as we can manage. PG