
The Story of Evolution, Spring 2005

Second Web Papers

On Serendip

Beyond Turing: Exploring the Inner Workings of Human Intelligence through AI

Rebekah Baglini

"Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt , and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificial signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants." - Jefferson Lister, 1949

Since the beginning of recorded history, the nature of the human intellect has been constantly questioned and explored, inspiring some of the greatest achievements in thought, art, and science over the last two millennia. The same questions asked by Plato, and later by Descartes and Hume, now embroil modern-day philosophers and scientists like Dennett, Minsky, and Hofstadter, and captivate the general public through popular culture. We care about artificial intelligence (AI) because its development has a significant bearing on the way we think about ourselves and our place in the universe. Although technology continues to advance at an astonishing rate, AI research has yet to determine whether true "machine thinking" is possible. More than half a century ago, Alan Turing wrote a brief article in Mind titled "Computing Machinery and Intelligence" (Turing, "Computing Machinery and Intelligence") in which he proposed what remains for many the holy grail of artificial intelligence: the Turing test. No machine has yet decisively passed a rigorous Turing test, and the decades-old debate continues as to whether the test should be considered a conclusive test for intelligence.

Turing's original name for what eventually came to be called the Turing test was the "Imitation Game," based on a popular party game of the time. In his Mind article, Turing described a test in which an interviewer, communicating only through some form of text-based messaging, attempts to tell apart a machine and a human who are physically hidden from view, simply by posing questions. The human tries to help the interviewer reach the right answer, while the machine tries to fool the interviewer; any machine that consistently succeeded would thereby demonstrate intelligence.
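
To make the protocol concrete, here is a minimal sketch of the game's structure in Python. It is purely illustrative: the respondent functions and the judge are stand-ins I have invented (only the sample question and the "Count me out" reply are borrowed from Turing's own article), not any real contestant.

```python
import random

def imitation_game(questions, human_reply, machine_reply, judge):
    # Randomly assign the two hidden respondents to the labels A and B,
    # so the judge sees only labeled text, never identities.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    labels = {"A": respondents[0], "B": respondents[1]}

    transcript = []
    for question in questions:
        for label, (identity, reply) in labels.items():
            transcript.append((label, question, reply(question)))

    guess = judge(transcript)  # judge names the label it believes is the machine
    machine_label = next(l for l, (who, _) in labels.items() if who == "machine")
    return guess == machine_label  # True means the machine was caught

# One toy round; every reply below is a canned stand-in.
caught = imitation_game(
    ["Please write me a sonnet on the subject of the Forth Bridge."],
    human_reply=lambda q: "I'd be happy to try, though I'm no poet.",
    machine_reply=lambda q: "Count me out on this one. I never could write poetry.",
    judge=lambda transcript: "A",  # a stand-in judge that always guesses A
)
print("machine caught:", caught)
```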

The Turing test raises serious questions and objections for a number of reasons. Probably the most common objection is that it tests only for the outward appearance of intelligence; it is a surface test that does not probe as deeply as would be required to detect true intelligence. It's true that a machine could pass a Turing test and an observer would still not know what was going on inside the machine's proverbial head: whether the machine engaged in processing, making connections, and formulating a new idea in the form of a relevant response, as humans do in conversation, or whether it was just blindly following some tricky code designed to simulate complex thought. Essentially, the question raised is this: is intelligence really only a matter of our perception? Clearly the definition of intelligence is slippery, and AI researchers and philosophers often disagree about what sort and what degree of intelligence AI research should seek to create.

Of course, machines have been able to fool people into thinking they are intelligent for decades; even in the 1960s, the simple ELIZA program managed to fool psychologists. Today, similar bot programs can still frequently fool those unfamiliar with programming and computer science. Although the Turing test standards to which today's most sophisticated programs are held are extremely rigorous, the discomforting idea of a machine simply designed to "fool" us, no matter how elaborately or elegantly, remains a concern for many who continue to argue that the Turing test can never be a valid test of AI.
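
ELIZA's trick was strikingly simple: keyword spotting plus pronoun reflection. The sketch below imitates that mechanism with a few invented rules; it is not Weizenbaum's actual script, but it shows how little machinery is needed to produce superficially attentive replies.

```python
import re

# Swap first- and second-person words so echoed fragments sound responsive.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# A handful of keyword rules; each captures the text after the keyword.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default keeps the conversation moving

print(respond("I am worried about my exams"))
# -> "How long have you been worried about your exams?"
```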

Other critics of the Turing test feel that while a machine's capability to pass the test is a genuine sign of intelligence, the test does not necessarily measure consciousness, desire, semantic understanding, emotion, or intention, leaving it very much incomplete.

Other objections focus on the test's exclusive use of language as a measurement, arguing that the Turing test could shortchange a machine that is extremely intelligent but unable to express itself linguistically. No one could deny that very young children possess intelligence before they are able to speak. And, as mentioned before, an ability to simulate human conversational behavior may be a much weaker standard than true intelligence, since the machine might just blindly follow a set of rules without understanding anything about them. Of course, a response might be: "But why can't it be the case that humans just blindly follow sets of rules?" The test's original name, the "Imitation Game," was apt; in the end, all a computer must do to pass is effectively imitate the way humans use language.

It is important, however, not to discount imitation: simple imitation is possibly the most important way that human children gather data about how to interact with their environment. Clearly, in the case of human children and most other intelligent entities, imitation is only the first step; the capacity to experiment, recall, compare, and innovate must be present for an intelligent entity to take the information it gathers through imitation and move toward more autonomous behavior. But it is a critical first step.

The Turing test was designed to measure a machine's ability to mimic a human, but humanlike intelligence is clearly not the only sort AI explores. Nevertheless, machines replicating the intellectual capabilities of humans remain the source of the most interesting and challenging debates in AI, and it is with anthropocentric AI that this paper is concerned. I will continue to base this discussion on the ideal of computers effectively using language, since not only is the Turing test based on language, but humans' ability to acquire language and communicate through it is probably the most commonly cited trait indicative of our intelligence.

The ability to construct syntactic sentences rests on an apprehension of sets of relatively simple rules used to process words and construct new sentences. These rules must be arranged in some sort of hierarchical structure, given that they must be applied in a particular order to yield proper results. Apart from the technically forbidding task of programming such a complex hierarchy of rules, there is little reason to believe that a machine could not be programmed with all of the rules necessary for processing and constructing proper sentences in a given language.
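
A toy context-free grammar makes the point: the hierarchy of rules is explicit, and applying them in order mechanically yields well-formed (if dull) sentences. The grammar and lexicon below are invented purely for illustration.

```python
import random

# Each nonterminal maps to its possible expansions; the hierarchy is explicit.
GRAMMAR = {
    "S":   [["NP", "VP"]],                 # a sentence is a noun phrase + verb phrase
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["machine"], ["sonnet"], ["interviewer"]],
    "V":   [["writes"], ["examines"]],
}

def generate(symbol: str = "S") -> str:
    # Nonterminals expand recursively, top-down; terminals are returned as-is.
    if symbol not in GRAMMAR:
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(s) for s in expansion)

print(generate())  # e.g. "the machine writes a sonnet"
```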

Syntactic ability alone, of course, indicates little about intelligence, as demonstrated by Searle's Chinese room thought experiment (Searle, "Minds, Brains, and Programs"). Imagine a person who understands no Chinese sitting in a room into which Chinese characters are passed. The person has a book containing a complex set of rules for manipulating these characters before passing the results (new sets of Chinese characters) back out. The idea is that, as in a Turing test, a Chinese-speaking interviewer would pass questions in Chinese into the room, and corresponding answers would come out, as though an intelligent, understanding entity inside had understood the questions and formulated answers on its own.

The point Searle is making is that an entirely unthinking entity following a rote set of rules could pass a Turing test, because the ability to manipulate linguistic data syntactically has no bearing on semantics. The machine (or the person in the Chinese room) need not have any understanding of what is being communicated.
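
Reduced to code, the room is nothing but a lookup table. The entries below are a hypothetical toy rule book of my own invention; the point is that no step of the program represents what the symbols mean.

```python
# Pure symbol shuffling: match the input shape, emit the paired output.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会，当然。",  # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(symbols: str) -> str:
    # No step consults meaning, because no meaning is represented anywhere.
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```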

Here we arrive at the much larger and more complex issue of semantics: what would we need to do to program a machine capable of distinguishing among all the nuanced meanings of the thousands of words and concepts humans use every day? How could we create a machine able to make abstract connections between seemingly unrelated concepts? (Remember Jefferson's sonnet criterion from the quotation that begins this paper: how could we get a machine to the point where it could in fact generate a comparison between an admired woman and a summer's day?)

Perhaps the way to approach these questions is to start at the beginning. Rather than attempting to program a system of vastly interconnected sets of data, metadata, metametadata, and so on, all processed by complex hierarchies of rules, let us instead attempt to create programs that mimic the way in which we apprehend and classify all of this data and develop and organize all of these rules. Such programs could run on robots requiring few features beyond those we have already achieved: sensory perception, memory, the ability to imitate, and the ability to favor or eliminate certain behaviors and ideas based on experience and the acquisition of new information. Imagine that such an astronomically complex machine could be designed: it is almost certain that so sophisticated a machine could pass a Turing test. We must then ask: would the processes it follows in generating its responses also be indistinguishable from ours?
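
The last of those abilities, favoring or eliminating behaviors based on experience, is the easiest to make concrete. The sketch below is a minimal, invented illustration: an agent keeps a running score for each action and nudges the scores toward observed outcomes. The action names, the reward signal, and the learning rate are all assumptions made for the example.

```python
import random

class ExperienceAgent:
    def __init__(self, actions, learning_rate=0.1):
        self.scores = {a: 0.0 for a in actions}  # learned preferences
        self.lr = learning_rate

    def choose(self) -> str:
        # Mostly exploit what experience favors; occasionally explore.
        if random.random() < 0.1:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def learn(self, action: str, reward: float) -> None:
        # Nudge the chosen action's score toward the observed outcome.
        self.scores[action] += self.lr * (reward - self.scores[action])

agent = ExperienceAgent(["imitate", "experiment", "recall"])
for _ in range(100):
    action = agent.choose()
    reward = 1.0 if action == "experiment" else 0.0  # stand-in environment
    agent.learn(action, reward)
print(agent.scores)  # "experiment" typically ends up favored
```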

I hold that even in such a machine some significant factors would be missing. All of its complex, preprogrammed abilities would be essentially meaningless without a single factor: self-awareness. Even to do something as simple as gather data from its surroundings (if it were a toddler, we would probably say "exploring"), the machine must be able to construct an internal model of its environment, of its place in that environment, and of the potential actions it might pursue there, in order to sift through all the sensory data and decide how to behave.

On the surface, the problem of self-awareness appears insurmountable. It is nearly impossible for us to imagine how to build a machine that is truly more than the sum of its parts and somehow reaches a stage at which it becomes genuinely conscious of itself. Other, similar questions arise: what about intentionality and purpose? Where could such phenomena arise, even in the most detailed model of human intellectual processes? New hypotheses, theories, and experiments in AI are conceived every year as we continue to ask the fundamental questions we have pursued since the beginning of history. It is frustrating yet fascinating to know that the human intelligence we so passionately seek to understand is itself the only means by which we may ever comprehend it.

Sources referenced and consulted


1. Dennett, Daniel. Darwin's Dangerous Idea. Simon & Schuster, 1995.

2. Hofstadter, Douglas. "A Coffeehouse Conversation." In The Mind's I, pp. 69-92. Basic Books, 1981.

3. Hofstadter, Douglas. Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1979.

4. Searle, John R. "Minds, Brains, and Programs." The Behavioral and Brain Sciences, vol. 3. Cambridge University Press, 1980.

5. Turing, Alan. "Computing Machinery and Intelligence." In The Mind's I, pp. 53-68. Basic Books, 1981.


Comments made prior to 2007

You've got a lot of great material on your website. I'm writing to correct one misstatement in Rebekah Baglini's paper that I see repeated frequently on the Web. The author of the Argument from Consciousness objection to machine intelligence is not Jefferson Lister but Geoffrey Jefferson. See the correction, copied below, that I've posted to the Turing Test wiki, which also had it wrong. I wonder where this misstatement started... Best regards and keep up the excellent work of promoting and sharing serendipity and free culture - I love Bryn Mawr! ... Eddie Shanken, 18 March 2007