

Biology 202
2000 First Web Report
On Serendip

Computers, Artificial Intelligence, the Brain, and Behavior

Andrew Jordan

It has long been suggested by many philosophers of mind that the computer might be a useful means of explaining human behavior and human brain function. Computers can already recreate many convincingly human-like behaviors, and many people both inside and outside of philosophy firmly believe that computers, at some point in the not too distant future, may actually be able to recreate human consciousness. The obvious upshot of drawing an analogy between a computer and the human brain is that computers are not mysterious to us in the way that brains are. A computer's inner workings, as something we created, are understood by virtue of having been produced. With the brain, however, we confront a somewhat alien artifact, and are forced to plumb its depths in order to explain how it might work. In what follows I will explore the computer analogy to see if it makes sense.

In his essay "Minds, Brains, and Programs" (1), John Searle draws a distinction between advocates of weak AI and advocates of strong AI. For the weak AI advocate, the computer is simply a tool that can be used to test various hypotheses about brain function. Strong AI advocates, on the other hand, believe that given the appropriate programming a computer actually becomes a mind, i.e., can properly be said to understand and to have other related cognitive states. One of the basic slogans of many so-called strong AI advocates is "take care of the syntax, and the semantics will take care of itself": if we provide the computer with the proper linguistic rules, then the properties appropriate to meaning will arise out of them. Computers can be said to recreate human languages, which seem at least to be rich in meaning, but fundamentally they operate in binary. The question for the strong AI advocate is then clear: does the human brain operate in binary, as a computer does? If so, the claim that given a complex enough computer we could create a human-like consciousness seems a bit more plausible.
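
To make that last point concrete, here is a minimal sketch (in Python; the sample sentence is my own invention, not drawn from Searle) of how even a meaningful-looking sentence reduces, at the machine's level, to nothing but bit patterns:

```python
# Reduce an English sentence to the raw binary a computer actually manipulates.
# The sentence is a hypothetical example; any text would do.
sentence = "I understand Chinese."

# Each character becomes a byte; each byte is just eight 1's and 0's.
bits = " ".join(format(byte, "08b") for byte in sentence.encode("utf-8"))
print(bits)  # e.g. "01001001 00100000 01110101 ..." -- syntax with no semantics
```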

What would be required if the brain is to be understood as a computational device like a computer? According to the so-called Church-Turing thesis, if we can come up with an algorithm, then we can come up with a Turing machine that can perform that algorithm. (2) The form the Turing machine takes can vary greatly. All that is required is something that can be recognized as analogous to the binary 1's and 0's that make up the base-level language of any Turing machine. As Ned Block points out in his paper "The Mind as the Software of the Brain," we could fulfill this requirement using some elaborate contraption of cats, mice, and cheese, where the cats open and close gates depending upon the behavior of the mice. (3) Now, is there a view of the brain that would give us this base-level language of 0's and 1's, such that the brain would be a Turing machine? The claim goes that if there is, then given enough technology we can create an artificial "human" having all the qualities we view as distinctly human, i.e., intentional states, existential concerns, understanding, etc. We'll come back later to whether this conclusion is valid given the formal language used by computers.
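
As a rough illustration of what that claim amounts to, here is a toy Turing machine in Python (my own sketch; neither Searle nor Block gives code). The point is that the substrate realizing the 1's and 0's is irrelevant: the same rule table could just as well be implemented with Block's cats, mice, and cheese.

```python
# A toy Turing machine: a tape of symbols, a read/write head, and a finite
# table of transition rules. Illustrative only.

def run_turing_machine(tape, rules, state="start", accept="halt"):
    """Execute rules of the form (state, symbol) -> (new_state, new_symbol,
    move), where move is -1, 0, or +1. Cells off the tape read as blank '_'."""
    tape = list(tape)
    head = 0
    while state != accept:
        symbol = tape[head] if 0 <= head < len(tape) else "_"
        state, new_symbol, move = rules[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol
        head += move
    return "".join(tape)

# Rules for a machine that inverts a binary string, halting at the first blank.
invert_rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_turing_machine("100110", invert_rules))  # -> "011001"
```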

Now, the simple answer to the question "what in the brain could serve as the base-level binary language?" is that the neurons themselves function as on/off switches. They are either in an excited state or in a resting state. The neurotransmitter cocktail released between neurons is either released or not, thereby causing another action potential or not. This may be an overly simplified view of neuron function, but it does at least seem to be a picture in which a binary language is present. For many cognitive scientists, that is enough to conclude that the brain functions like a digital computer, and that strong AI is therefore a reasonable stance to maintain. There are, however, at least two apparent chinks in the argument.
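
That simplified picture is essentially a classic threshold model of the neuron. A minimal sketch (Python; the weights and threshold are invented for illustration) shows how such a unit yields a strictly binary output:

```python
# A threshold-unit model of a neuron: it either fires (1) or stays silent (0),
# depending on whether its summed, weighted inputs reach a threshold.
# Weights and threshold below are hypothetical values chosen for illustration.

def neuron_fires(inputs, weights, threshold):
    """Return 1 (action potential) if the weighted input sum reaches the
    threshold, else 0 (resting state) -- a strictly binary output."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Two excitatory inputs and one inhibitory input.
print(neuron_fires([1, 1, 0], weights=[0.6, 0.6, -1.0], threshold=1.0))  # 1
print(neuron_fires([1, 1, 1], weights=[0.6, 0.6, -1.0], threshold=1.0))  # 0
```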

First, as I said above, if strong AI claims are valid, then there must be a way of explaining distinctly human cognitive states, most of which are very much related to language. Haugeland, in "Understanding Natural Language," gives a good outline of exactly what that would entail. (4) Computers, on his view, would have to be able to make sense of fundamental ambiguities in natural languages. For instance, in the sentence "he put his raincoat in the bathtub because it was wet," it is not clear what is wet given just the structure of the sentence; "it" could refer to the raincoat or to the bathtub. In addition, to actually create a true artificial "human" (making clear here that "human" is not a biological category, but one pertaining to entities with certain cognitive abilities), one would have to create something with an existential awareness, i.e., something aware of a social context, with a sense of humor, exhibiting emotional states, etc. Now, Haugeland's point is not that artificial intelligence is impossible; at least, that is not the conclusion that follows from his argument. His general goal is to show us just what would be required, and then to let us decide whether this seems like something that could be coded into a computer. Obviously his disposition is to say that it couldn't, because, as he puts it, "computers don't give a damn."

The stronger argument against strong AI comes from John Searle's paper "Minds, Brains, and Programs." His basic claim there is that we cannot arrive at an artificial intelligence that mirrors human understanding using solely a formal language structure. In other words, the mistake made by strong AI theorists is their belief that semantics can follow from a proper syntax. To demonstrate this, he embarks upon the famous Chinese room thought experiment, which undermines the idea that a computer that recreates human-like responses can be said to "understand."

Briefly, the experiment goes as follows. Suppose that a person who does not understand Chinese is locked in a room and receives, in sequence: 1) a batch of Chinese writing; 2) another batch of Chinese writing, along with a series of rules in English that correlate the first batch with the second; and 3) a third batch of Chinese writing, with instructions that correlate the first and second batches with the third and explain how to give back responses in Chinese to the symbols in the third batch. The people conducting the experiment happen to call the first batch a script, the second a story, the third questions, and the person's responses answers. The point Searle makes is that there is no understanding of Chinese involved anywhere in the process. A computer can likewise provide answers to questions given to it, but it has no understanding of what it is doing. Searle takes this to indicate that there is no way to get from a strictly formal language (a series of 0's and 1's, for instance) to the claim that a computer understands what it is doing.
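
A crude sketch of the room's rule-following in code (Python; the placeholder strings stand in for Chinese symbols, since the actual characters are beside the point) makes the argument vivid: the program below produces well-formed "answers" while containing nothing that could count as understanding.

```python
# A lookup-table "Chinese room": incoming symbols are matched against a rule
# book and the correlated symbols are handed back. The entries are invented
# placeholders; only the purely formal matching matters.

RULE_BOOK = {
    # "If you see these squiggles, hand back those squiggles."
    "question-symbols-A": "answer-symbols-B",
    "question-symbols-C": "answer-symbols-D",
}

def chinese_room(question):
    """Produce an 'answer' by pure symbol matching, with no semantics."""
    return RULE_BOOK.get(question, "default-symbols")

print(chinese_room("question-symbols-A"))  # -> "answer-symbols-B":
# a fluent-looking reply, produced with zero understanding of any symbol
```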

Obviously, then, if we can understand the brain as a binary system, something else must be going on in human consciousness that differentiates it from what a computer can do. Searle takes issue with the argument that the brain functions as a digital computer by claiming that simply being able to describe something as functioning in binary is not enough to justify the claim that it is a Turing machine. The fact that there are multiple ways of realizing the system of 1's and 0's should make us nervous about the claim that any binary system is sufficient to function as a Turing machine. Searle's view is probably congenial to neurobiologists, as he does not seek to undermine the notion that there is value in examining the physical pathways of the brain, or that, upon sufficient study, they will give us a good picture of human behavior. He says, after all, that human brains, like computers, are machines, but machines of a very specific sort, namely a sort that can think.

The other place to take issue with strong AI theorists, and with neurobiologists, is the conclusion that the "mind" is dependent on the brain, i.e., that there is nothing more to behavior than the brain. I won't pursue this argument fully here, but to demonstrate its possible validity, suppose that I am sitting in front of a computer typing. Everything the computer does is a result of inputs I feed it through various input devices such as the keyboard or mouse. My typing is, in this example, the analogue of the mind. The computer, with its hard drive, video card, sound card, and CPU, is analogous to the brain. Some outside, perhaps alien, observer could figure out exactly how the various electrical pathways in the computer function. The observer could even have a perfect knowledge of the inner workings of the computer. However, this knowledge would never lead the observer to any knowledge of the person typing. If the mind is separable from the brain in this way, then a true artificial intelligence could never be realized, as the something else (namely the mind, spirit, etc.) could never be realized in the physical object (the computer, or the brain). In any case, it seems that a computer might not be the best place to start in an exploration of human cognitive functions.

WWW Sources

1) John Searle, "Minds, Brains, and Programs."

2) John Searle, "Is the Brain a Digital Computer?"

3) Ned Block, "The Mind as the Software of the Brain."

4) John Haugeland, "Understanding Natural Language," Journal of Philosophy 76 (1979): 619-632.



