Artificial Intelligence: Is Data Really 'Fully Functional'?


Biology 202
2004 Second Web Paper
On Serendip


Dana Bakalar

Every day, as I walk into Park Science Building and round the corner, I am faced with an intriguing poster about the possible personhood of machines. If a computer, robot, or android can pass the Turing test, the poster asks, can it then be considered a person? And if it cannot pass, can its personhood be discounted?

Since humans began to develop complex machinery, and more recently computers that mimic the human mind in many ways, they have been preoccupied by this question. Consider the science fiction series Star Trek: The Next Generation. One of the major characters is Data, an android. It (I will call Data "it" until we settle the question of its personhood) is generally treated as a person by its crewmates on the Enterprise, and people relate to it as if it were not only a person but a friend. But is Data really a person, and can we refer to it as "he"?

For this paper, I need to define several terms, or the discussion will be very confusing. I am defining a "person" as an entity with "a sort of awareness - of self, of interaction with the world, of thought processes taking place, and of our ability to at least partially control these processes. We also associate consciousness with an inner voice that expresses our high level, deliberate, thoughts, as well as intentionality and emotion" (2). I will refer to members of the species Homo sapiens as "humans." "Humans" are not necessarily "persons," but many or most are, and all "humans" deserve the presumption of "personhood."

Alan Turing believed that personhood could be tested for. He devised a test wherein a human subject sits in one room and interacts indirectly, such as through a computer terminal, with two tentative persons. One of these tentative persons is a human, and one is a computer, an artificial intelligence. The subject is allowed to communicate with both tentative persons, to ask questions, state feelings, and so on. If the human subject cannot identify which of the terminals represents the human, or if she determines that the AI is the human, then the AI has passed the Turing test and must be considered a person (2).
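To make the structure of the test concrete, here is a minimal sketch of the protocol as a toy program. Everything in it is invented for illustration: the canned respondents stand in for the human and the AI, and the naive judge stands in for the subject.

    import random

    # A toy sketch of the Turing test protocol described above.
    # The respondents and the judge are invented stand-ins.

    def human_respondent(question):
        canned = {
            "How do you feel today?": "A bit tired, honestly, but glad to chat.",
            "What is love?": "Hard to say. A pull toward someone, maybe.",
        }
        return canned.get(question, "Hmm, let me think about that one.")

    def machine_respondent(question):
        # A crude chatbot: the same evasive reply to everything.
        return "That is an interesting question. Could you elaborate?"

    def turing_test(questions, judge):
        terminals = [("A", human_respondent), ("B", machine_respondent)]
        random.shuffle(terminals)  # the subject must not know which is which
        transcript = {label: [(q, respond(q)) for q in questions]
                      for label, respond in terminals}
        guess = judge(transcript)  # the terminal the judge believes is human
        actual = next(label for label, r in terminals if r is human_respondent)
        return guess != actual     # True: the machine was mistaken for the human

    def naive_judge(transcript):
        # Guess that the terminal with the more varied answers is the human.
        return max(transcript, key=lambda label: len({a for _, a in transcript[label]}))

    print(turing_test(["How do you feel today?", "What is love?"], naive_judge))

Notice that the test says nothing about how the respondents work internally; it judges personhood entirely from the transcript. That is exactly the feature the Chinese Room objection below attacks.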

Let's say, then, that Data is subjected to a Turing test. If it passes (which it is almost certain to do, based on the way it is treated on the Enterprise), it will be a person, according to Turing. Can we then be sure that Data is a person and should have rights as such? Not really. One major argument against the Turing test providing indisputable proof of personhood is the Chinese Room paradigm.

This thought experiment was suggested by Searle in 1980. He asks us to imagine a room containing one human and a code book. Chinese writing is pushed under the door of the room by humans outside. The human inside does not speak or read Chinese, but the humans outside do. The code book contains a complex set of directions detailing how to "correlate one set of formal symbols with another set of formal symbols" (1). The human in the room can thus provide the correct answers to questions in Chinese without having any understanding either of the questions or of his responses. To summarize, the person in the room has a codebook which allows him to produce output that looks like understood Chinese. Applied to an AI, this experiment claims that an entity like Data could process input and provide output such that its shipmates would perceive it as a person, yet without having any consciousness or understanding of either the input or the output. Data could pass a Turing test, but pass it only because it is running a very convincing code.
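The mechanics of the room fit in a few lines of code, which is part of what makes the objection so unsettling. The codebook entries below are invented for illustration; the point is that the program matches symbols to symbols and comprehends nothing.

    # A minimal sketch of the Chinese Room: the codebook is a lookup table
    # pairing input symbols with output symbols. The entries are invented,
    # and the "person" applying them understands neither side.

    CODEBOOK = {
        "你好吗？": "我很好，谢谢。",  # "How are you?" -> "I am fine, thanks."
        "你是人吗？": "我当然是。",    # "Are you a person?" -> "Of course I am."
    }

    def chinese_room(slip_of_paper):
        # Purely formal symbol manipulation: no translation, no comprehension.
        return CODEBOOK.get(slip_of_paper, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # looks like understanding; it is only lookup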

The Chinese Room experiment cautions us not to conclude personhood when none may be present. There are several responses that challenge Searle's conclusion. The Systems Reply claims that while the human in the room cannot understand Chinese, the room and the human taken together as a system can. The Robot Reply says that if we could get a robot to act as if it were perceiving, understanding, and so on, then it would be; this reasoning is similar to that behind the Turing test. These replies raise interesting ideas, and there are many more of them to explore and consider.

Back to Data. If we cannot prove it is a person (it might just be a Chinese Room), can we assume that it is not one? I would suggest that we must err on the side of caution and assume that he (I will now call Data "he") is indeed a person. I say this out of fear. What would happen if he were a person but were not considered one? What would the ethical implications of this be? What about humans who do not seem to be persons, who could not pass the Turing test or who show very little intelligence? If an autistic human is unable to pass a Turing test, should we deny that human personhood? The ethics would be appalling: true persons would be denied their basic rights simply because we cannot prove their personhood.

"I think, therefore I am" is an interesting statement to apply to this discussion. Since we can perceive our own emotions and thoughts, we consider ourselves to be persons. We cannot directly observe the thought processes of other humans or of artificial intelligences, so we cannot prove that they are persons. In order to be safe, in order to keep society running, and in order to remain sane, we assume that other humans are persons, unless proven otherwise. Since we cannot prove that Data is not a person, we have the same evidence of his personhood and of the personhood of humans around us. The response to Searle that I want to emphasize is the other minds response. "If you are going to attribute cognition to other people you must in principle also attribute it to computers (1)."

So Data should be considered a person. But Data is a fictional android created by a fictional mad doctor who took the secret of how to construct a person to the grave. Can we now construct artificial persons? Is it even reasonable to believe that we will ever be able to? If we cannot create artificial persons, even in theory, then their potential personhood is moot.

Perhaps the largest problem in artificial intelligence, and in computing in general, is the frame problem. This problem was described eloquently in 1984 by Daniel Dennett, a leading author in the philosophy of mind. He tells a story in which scientists build a series of robots. The first, R1, fails in its task to survive because it does not anticipate the reactions that will be caused by its actions, or the secondary and tertiary reactions caused by those. The second robot, R1D1 (robot-deducer), fails because it does consider all implications, and is locked in an endless computation of all the possibilities. The third robot, R2D1, is programmed to decide which implications are relevant and which are not, and likewise fails as it sits and rejects the thousands it deems to be irrelevant. Dr. Westland of the University of Derby provides a more complete explanation of Dennett's story and of the frame problem on his website (4).

Westland explains that with robots, you start at zero. The things that seem obvious to a human, the things you never have to explain, need to be explained to a robot in detail. You do not have to tell a child, to use an example from Professor Grobstein, that opening the refrigerator door will not cause a nuclear holocaust in the kitchen. That possibility never occurs to the child; that is, it is rejected implicitly. With artificial minds, the implicit processing is not there, so the simplest tasks require the processing of impossible amounts of information (4).
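A back-of-the-envelope sketch shows how fast this gets out of hand. The branching number below is an assumption invented for illustration; the point is the exponential growth that trapped a deducer like R1D1.

    # A toy illustration of the frame problem's combinatorial blow-up, in
    # the spirit of Dennett's R1D1: before acting, the robot tries to trace
    # every chain of side effects out to some depth. The branching factor
    # is an assumed figure, chosen only for illustration.

    FACTS_PER_EFFECT = 5  # assume each effect touches about 5 other facts

    def implications_to_check(depth):
        # Each effect interacts with FACTS_PER_EFFECT facts, and each of
        # those interactions spawns its own effects, level after level.
        return sum(FACTS_PER_EFFECT ** d for d in range(1, depth + 1))

    for depth in (1, 2, 5, 10, 15):
        print(f"depth {depth:2}: {implications_to_check(depth):,} implications")
    # depth 15 already demands over 38 billion checks -- for a single action.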

But how do humans solve the frame problem? Where does our implicit programming come from? Nobody really knows. I would claim that since humans have solved this problem, the possibility exists, however remote it seems from here, that AIs could be developed which are not subject to it.

Data, to get back to our original example, seems to have solved it perfectly well, although he does sometimes need to be told simple things and taught like a child. Organisations such as IDSA and CSEM, and projects such as the SWARMBOT EU project, are pursuing exactly that approach (5): they are working on algorithms and neural networks that allow robots to learn.
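To give a sense of what "allowing robots to learn" means at the smallest scale, here is a classic toy: a single perceptron that acquires a rule from examples rather than having it written in by hand. The AND-gate task is a stand-in of my own choosing, not anything taken from the cited projects.

    # A minimal learning algorithm: the perceptron rule, which nudges its
    # weights toward every example it gets wrong until the rule is learned.
    # The AND-gate task is an invented stand-in for illustration.

    def train_perceptron(samples, epochs=20, lr=0.1):
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - out          # 0 when correct; +1 or -1 when wrong
                w1 += lr * err * x1
                w2 += lr * err * x2
                b += lr * err
        return w1, w2, b

    AND_GATE = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = train_perceptron(AND_GATE)
    for (x1, x2), _ in AND_GATE:
        print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + b > 0 else 0)

Nothing in the code states the AND rule itself; the rule emerges from the examples. Acquiring knowledge implicitly, rather than having every fact spelled out in advance, is precisely what the frame problem demands.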

Assuming we can develop intelligent robots that can learn and pass the Turing test, we should treat them as if they were people, because we do not know that they are not. In order to develop them, the frame problem must be mastered, perhaps through the use of learning algorithms. Who knows; one day we may be attending a march for robots' rights!

References

1) The Internet Encyclopedia of Philosophy, description of the Chinese Room argument

2) Brain Web Entrainment Technology, introduction to artificial intelligence

3) The Internet Encyclopedia of Philosophy, overview of AI

4) Dr. Westland's site, description of the frame problem

5) Learning Robots, site of the IDSA robot project

