Biology 202
1998 Second Web Reports
On Serendip

Will Androids Obtain Consciousness?
Deep Blue and the rise towards a conscious computer

Douglas Holt

In the Stanley Kubrick / Arthur C. Clarke 1968 epic film 2001: A Space Odyssey, the central character - the HAL 9000 computer - talks amiably, renders aesthetic judgements of drawings, and recognizes the emotions of the crew. However, HAL ultimately murders four of the five astronauts in a fit of paranoia and concern for the mission. Shortly after we meet HAL, he plays chess against Frank Poole. Kubrick chose chess in large part to show how “intelligent” HAL was; chess has long been held as the paragon of human logic and reasoning (15). HAL 9000 was the pinnacle of technology, a self-contained conscious machine.

One of the major impediments to a productive discussion of consciousness is disagreement as to what exactly is being discussed. Unfortunately, it is impossible to fix the referent of the term - one cannot point to consciousness the way one can point to a book or even a brain - so there is no simple way to resolve disagreements. The fact of consciousness should not be in doubt; there is something to be explained, not merely explained away (29). This lack of substance as to the direct nature of consciousness has been at the heart of discussions for ages. Consciousness is the "I" that we all know, from which we view the world and interact with it (19). It is easier to view consciousness as a set of many attributes that arise from it, or perhaps give rise to it, than as a single entity. Our greatest impediment to properly approaching consciousness is our inability to recognize “non-event” causation: cause (brain processes) and effect (consciousness) do not necessarily have to be two different things. Brain processes could cause “consciousness” in the sense that consciousness is itself a feature of the brain (29). This is similar to the light that arises from the heating of a filament: the properties of the filament do not describe the properties of the light. There is no set definition for consciousness, as there appear to be many influences and methods of explanation. Neuroscience and neuroreductionism provide physical or structural explanations by virtue of their nature, and may not be best suited to explaining the subjective nature that we seek to understand (29). Despite all of this, there have been many attempts to quantify the "conscious experience".

Theories of Consciousness:

One explanation of consciousness is based on the theory of experience, either through sense perception or reflection upon that experience. David Chalmers, a philosopher at the University of California, Santa Cruz, has a definition of consciousness based on qualia. From this, he has posed several questions meant to explain consciousness, such as: How does sensory information get integrated in the brain? How do we see and reach out for an object? How are we able to verbalize our internal states and report what we are doing and feeling? (4). Thus far, nothing in physics or chemistry or biology can explain these subjective feelings. "What really happens when you see the deep red of a sunset or hear the haunting sound of a distant oboe, feel the agony of intense pain, the sparkle of happiness, or the meditative quality of a moment lost in thought?" Chalmers asks. "It is these phenomena, often called qualia, that pose the deep mystery of consciousness" (4).

In a contrasting view, Francis Crick is a strong believer in the reductionist philosophy: "Your joys and your sorrows, your memories and your ambitions, your sense of personal identity and free will, are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules." This is not a new idea; it is materialism. What makes Crick’s argument so notable is his claim that advances in neuroscience show it is not too soon to start examining the scientific basis of consciousness (5).

Daniel Dennett, a philosopher at Tufts University, is a forceful proponent of the idea that consciousness is "no big deal". Scientists have shown that information coming into the brain is broken down into separate processing streams. But no one has yet found any "box" where all the information comes together, presenting a whole picture of what is being felt or seen or experienced. The temptation, he said, is to believe that the information is transduced by consciousness. However, it is entirely possible that the brain’s networks can assume all the roles of an inner boss. Mental contents become conscious by winning a competition against other mental contents, Dennett says. No more is needed. Consciousness is an epiphenomenon, a mere side effect (6).

Conversely, Roger Penrose, from the University of Oxford, is an advocate of the connection between consciousness and quantum mechanics. One form of dualism involves the mysteries of quantum mechanics. Penrose argues that consciousness is the link between the quantum world, in which a single object can exist in two places at the same time, and the so-called classical world of familiar objects, where this cannot happen. Speculation that quantum mechanics and consciousness are linked is based on the principle that the act of measurement, which ultimately involves a conscious observer, affects quantum events (7).

Meanwhile, Colin McGinn, a philosopher from Rutgers University, argues that people can never understand consciousness; the mystery is too deep. Since our brains are the products of evolution, they have cognitive limitations. Just as rats and monkeys cannot even conceive of quantum mechanics, humans may be prohibited from understanding certain aspects of existence, such as the relation between mind and matter. He says that for humans to grasp how subjective experience arises from matter might be like "slugs trying to do Freudian psychoanalysis - they just don’t have the conceptual equipment." Consciousness, in other words, may remain forever beyond human understanding (8).

The common thread to all these definitions of consciousness appears to be either some kind of inner awareness or personal experience, brought about by our ability to sense our external environment (29). Yet, by placing a primary focus on sensory input and processing we immediately come upon our first major obstacle, the binding problem. The binding problem is that we have no definitive explanation for how these bits of sensory information are integrated to form a complete whole, a single human perception or image of the world around us. To some neurobiologists, this synthesis of sensory information is what is known as consciousness (29). However, there is no box where sensory information from various modalities could be integrated to form a complete picture of the external world. Crick and Koch have suggested an explanation for this: the coordination of the senses results in essentially a “rhythm of electrical activity coordinated and synchronized by the thalamus.” They have suggested that herein lies consciousness, arising from oscillations in the cerebral cortex which become synchronized as neurons fire 40 times per second; two pieces of information are essentially bound in time by synchronous neural firing (29). Other neurobiologists have supported this idea of a sensory scan that would sweep through the cerebral cortex. Llinas proposed that the electrical scan stimulates all of the synchronized, active cells of the cerebral cortex, which at that instant are recording specific sensory information. These cells then respond by instantaneously sending signals back to the thalamus, all of the signals at that precise moment in time together reflecting the specific pattern of neural activity produced by a precise sensory stimulus. The data from all of the body’s senses could thus come together not in place, but in time: the time of the thalamus’s scanning cycle.
Consciousness is, by this theory, the dialogue between the thalamus and the cerebral cortex, as modulated by the senses (29).

By combining the various theories of consciousness, researchers have attempted to define consciousness in terms of its functional aspects. Paul Churchland, author of The Engine of Reason, has accepted this challenge. According to Churchland, the following are necessary in the explanation of a theory of consciousness:

1. Short-term memory and its decay
2. Directable attention, or conscious control over what we attend to and what we do
3. Multi-valent comprehension through "mulling" or reflection
4. Independence from sensory input
5. Disappearance of consciousness during sleep
6. Unity of senses over time (29)

Even so, we are still faced with the fundamental problem of defining consciousness within ourselves. If we cannot, how can we hope to determine whether there is consciousness within another entity, biological or silicon? Bernard Baars, a psychologist working at the Wright Institute, has developed an applicable operational definition. His definition considers people to be conscious of an event if (a) they can immediately afterwards say that they were conscious of it, and (b) we can independently verify the accuracy of their reports (28). While this is not a perfect definition of consciousness (it presumes several things, including volition, communication, and metacognition), it suffices to isolate consciousness for scientific study. The use of a functional definition of consciousness is the most attainable basis for the determination of machine consciousness.

Operational Consciousness:

In a similar vein, the definition of consciousness that I will present for the rise of computers into "artificial life" will be an operational one. Consciousness will be defined not only by what the machine can do, but also by what it cannot. The conscious machines will be constrained by Gödel’s theorems of Incompleteness and Relative Consistency; these theorems bear on whether the information requested is present within the memory of the computer. Ultimately, the goal for an autonomous "conscious" machine is that it must be able to perform the following functions:

1. Observing its physical body, recognizing the positions of its effectors, noticing the relation of its body to the environment and noticing the values of important internal variables (e.g. the state of its power supply)
2. Observing that it does or doesn’t know the value of a certain term.
3. Keeping a journal of physical and intellectual events so it can refer to its past beliefs, observations and actions.
4. Observing its goal structure and forming sentences about it.
5. The ability to observe intentions.
6. Observing how it arrived at its current beliefs.
7. Should be able to answer the following questions: "Why do I believe in p?" or alternatively "Why do I not believe in p?"
8. Regard its entire mental state up to the present as an object (i.e. context).
9. Knowing what goals it can currently achieve and what its choices are for action.
10. Knowing general facts about mental processes so that it can plan its intellectual life. (10)
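Several of these requirements (3, 6, and 7 in particular) are directly mechanizable. The sketch below is purely illustrative, with all names invented for this example: an agent that keeps a time-stamped journal, tags each belief with the observations or inferences that produced it, and answers "Why do I believe p?" from that record.

```python
import time


class IntrospectiveAgent:
    """Toy sketch of requirements 3, 6 and 7: a journal of events,
    beliefs tagged with their provenance, and answers to the
    question 'Why do I (not) believe p?'."""

    def __init__(self):
        self.journal = []   # requirement 3: record of physical/intellectual events
        self.beliefs = {}   # requirement 6: belief -> list of supporting reasons

    def record(self, event):
        self.journal.append((time.time(), event))

    def adopt(self, belief, because):
        # Remember not just the belief but how it was arrived at.
        self.beliefs.setdefault(belief, []).append(because)
        self.record(f"adopted belief {belief!r} because {because!r}")

    def why(self, belief):
        # Requirement 7: "Why do I believe in p?" / "Why do I not believe in p?"
        if belief in self.beliefs:
            return f"I believe {belief!r} because: " + "; ".join(self.beliefs[belief])
        return f"I do not believe {belief!r}: no observation or inference supports it."


agent = IntrospectiveAgent()
agent.adopt("battery is low", because="voltage sensor read 10.9 V")
print(agent.why("battery is low"))
print(agent.why("door is open"))
```

This is of course bookkeeping rather than consciousness; the point is only that the checklist items are concrete enough to be implemented and tested one by one.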

In addition, the robot will need to exhibit two closely related phenomena: understanding and awareness. The understanding that a logical robot will need requires it to use appropriate sentences about the matter being understood; this involves both deriving the sentences from observation and inference and using them appropriately to decide what to do. Awareness is similar: it is a process whereby appropriate sentences about the world and its own mental situation come into the robot's consciousness, usually without intentional action (10).

The Turing Test and Recognition of Consciousness:

In the classic Turing test, two beings are separated by all physical means and all communication between them must pass through a screen. The screen could be email or anything else that hides the identity of each of the participants. If one of the participants cannot identify whether or not the other is a computer at least 50% of the time, the computer is said to have passed the Turing test. The test was originally devised in the early 1950s as a method of determining intelligence, and it has been the hallmark of intelligence among computers, one of the most fundamental bases for the "consciousness" of computers. However, with LISP programming, recent computers can challenge the Turing test. Nevertheless, are they any more conscious than their predecessors? Because changing the language of the program changes the result, it would appear that "consciousness" is a product of the software of the computer rather than the hardware associated with it. The implications of software being the seat of consciousness are rather dramatic. It is easily imagined that an artificial neuron, which exhibits all of the properties of the organic neuron, can be designed and constructed. From this, sets of these neurons could then be used to construct an artificial mind, possessing all of the characteristics of the natural one. Would this machine then be considered conscious? From a reductionist perspective, yes. After all, the neuromush that composes the human brain is nothing more than neurons and support cells. Taking the engineering one step further, this implies that if the artificial mind can be constructed out of artificial neurons, then the artificial mind can first be modeled in software. Ultimately, consciousness could be generated in a program, and would therefore not be a property exclusive to the biological mind. If this is so, consciousness could be transferred from one entity to another!

The Turing test has been seriously challenged in the last few years. This has been accomplished by having computer programs ask repetitive questions or even generate new sentences in response to queries. Software agents "living" and acting in a real-world software environment, such as an operating system, a network, or a database system, now carry out many tasks for humans. The apparent key to this new rise in computer architecture is the use of metacognition. Metacognition includes metacognitive knowledge, metacognitive monitoring, and metacognitive regulation. Metacognition is very important for humans: it guides people to select, evaluate, revise, and abandon cognitive tasks, goals, and strategies. Thus, metacognition plays an important role in human-like software agents. Conscious Mattie (CMattie), "living" in a Unix machine, automatically reads and understands email (in natural language) and composes and distributes weekly seminar schedule announcements. CMattie implements Baars's global workspace theory of consciousness, drawing on both cognitive science (cognitive modeling) and computer science (intelligent software) (8).

The recognition of consciousness within another entity will be the crux of the problem. While the Turing test is a useful tool, it does not provide all of the necessary information. The conscious machine must also be able to gather information and act independently. That is, the machine must be able to "decide" as to its course of action, without prompting by other external conscious beings. As described earlier, the conscious machines must be able to not only make decisions, but also be able to communicate via a common language to observers. While these descriptions may make it appear that many of the current robots have their foot in the door on consciousness, it is the capacity for introspection and metacognition that place consciousness beyond current capabilities.

The Rise of the Yammy Robots:

There are several avenues for studying the generation of robot consciousness. Most current machines are aptly labeled "idiot savants" in regard to their capabilities: their computational abilities are unmatched in many arenas, but they fail miserably at what we would consider simple tasks. One line of research holds that conscious robots should not be designed for a particular purpose, but rather for a general overall awareness. Recently, researchers in New Zealand have developed criteria for conscious robots. The robots are given the name "yammy" rather than conscious to emphasize that they are only candidates for consciousness.

Yammy-1: A robot will be described as yammy-1 when its behavior is most efficiently predicted in terms of its own intentions (= plans) and its knowledge (= information) of the intentions of other purposeful (= having plans) entities.

Yammy-2: A robot will be described as yammy-2 when its behavior is most efficiently predicted in terms of its intentions, its knowledge of the intentions of other purposeful entities, and its knowledge of their knowledge of its intentions.

Yammy-3: A robot will be described as yammy-3 when it satisfies the definition of yammy-2 and is also able to exist in an open environment and be irreversible.

Yammy-4: In addition to requiring that the robot be irreversible and exist in an open environment, this level adds the robot’s ability to predict its own and other entities’ future intentions. A robot that knows what it is doing should know what to do if a cooperating robot had to change plans.

Yammy-5: A robot will be described as yammy-5 when its behavior is most efficiently predicted in terms of the answers it gives to its own questions about its own intentions.

If a conscious robot is going to convince us that it is reliable and properly motivated, it will need to be able to answer questions about its behavior. At first sight, an explanatory expert system would suffice, because it can explain to a user what it has been doing and what it would do in various situations. To date, researchers have been able to create machines that fulfill the criteria up through yammy-2.

Deep Blue

In 1996, Deep Blue made history as the first computer to defeat a reigning world chess champion, Garry Kasparov, in a game played under standard tournament time controls. The original Deep Blue concept arose out of IBM's research laboratory as a method of exploring how to use parallel processing to solve complex problems. Specifically, Deep Blue is a 32-node IBM PowerParallel SP2 high-performance computer. Each node of the SP2 employs a single microchannel card containing 8 dedicated VLSI chess processors, for a total of 256 processors working in concert. The net result is a parallel system capable of calculating 50-100 billion moves within three minutes, the amount of time allotted to each player’s move in classical chess. In addition, to give Deep Blue even greater resources from which to draw, the Deep Blue team collected an opening database that provides the system with grandmaster games played over the last 100 years. Although Deep Blue is a very powerful machine that never becomes tired or distracted, has it achieved consciousness? Kasparov's loss to Deep Blue struck a resounding note among both artificial intelligence enthusiasts and critics. The enthusiasts point to the "brilliant move in Game Two when it unexpectedly offered an exchange of pawns instead of simply advancing its queen to an apparently overwhelming position" (2). This move jarred Kasparov, who later described it as brilliantly subtle. For its creators and many of its fans, Deep Blue had, for a moment, used its incredible processing power and the accumulated knowledge of computer scientists and chess champions to engage in something resembling "thought" (1). Critics, on the other hand, have compared this step in artificial intelligence to pitting a man against a motorcycle in a footrace. Nevertheless, the differences are more profound than that. Chess is more than the application of brute force, hence the decades it took to produce a computer able to match wits with a grandmaster.
What was demonstrated in this classic match was that a computer is now able to master applications that were once thought exclusive to human minds.

Computer chess proved to be much more difficult for a computer to master than the early intuitive estimates had suggested. The way that humans represent a chess situation in their minds is far more complex than just knowing which piece is on which square, coupled with knowledge of the rules of chess. It involves perceiving configurations of several related pieces, as well as knowledge of heuristics, or rules of thumb. Even though heuristic rules are not rigorous in the way that the official rules are, they provide shortcut insights into what is happening on the board which knowledge of the official rules does not. This much was recognized from the start; what was underestimated was how large a role the intuitive understanding of the chess world plays in human skill. It was predicted that a program having some basic heuristics, coupled with the blinding speed and accuracy of a computer to look ahead in the game and analyze each possible move, would easily beat top-flight human players, a prediction that has taken over forty years to realize (12).
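The marriage of heuristics and lookahead described above is the minimax idea with alpha-beta pruning that underlies chess programs generally (Deep Blue's actual search is vastly more elaborate and partly in hardware). A minimal sketch, with the game abstracted to a tree whose leaves are heuristic scores:

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning over a game tree.
    Interior nodes are lists of child positions; leaves are heuristic
    scores, i.e. the rule-of-thumb evaluation applied at the point
    where the lookahead must stop."""
    if depth == 0 or not isinstance(node, list):
        return node  # heuristic value of the position
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # opponent would never allow this line: prune it
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value


# A two-ply toy tree: the mover picks a branch, the opponent replies.
tree = [[3, 5], [2, 9], [1, 4]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # → 3
```

Pruning is what makes lookahead feasible: whole subtrees are skipped once it is clear the opponent would steer play elsewhere, which is why speed alone, without good heuristic evaluation at the leaves, took so long to reach grandmaster strength.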

Debates have raged over the implications of Kasparov's loss to Deep Blue. Commentator Dr. William Calvin has analyzed the match in this manner: "Some animals have gotten to be so fancy that they simulate a course of action before taking even a tentative first step. The chess master who looks a half-dozen moves ahead is a prime example, as is the army general or poker player who thinks through a bluff and counterbluff before acting. These are extreme examples of how to make and compare alternative plans" (15). The approach Deep Blue used to analyze chess positions is similar to that of PLANNER, a developing AI language that uses the principle of problem reduction to determine solutions. The problem is broken down into trees, subtrees, and so forth until the base questions being addressed are reached. If one path in the tree fails to achieve the desired goal, then the PLANNER program will backtrack and try another route (12).
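The goal-reduction-with-backtracking strategy just described can be sketched in a few lines. The rule base below is invented purely for illustration: a goal succeeds if it is a known fact or if some rule reduces it to subgoals that all succeed, and a failed reduction triggers backtracking to the next rule.

```python
def solve(goal, rules, facts):
    """PLANNER-style problem reduction: try each rule that reduces
    the goal to subgoals; if a subtree fails, backtrack and try the
    next rule for the same goal."""
    if goal in facts:
        return True
    for subgoals in rules.get(goal, []):
        if all(solve(g, rules, facts) for g in subgoals):
            return True
        # this reduction failed: backtrack and try the next rule
    return False


# Hypothetical rule base: two ways to reach 'win material'; the first
# fails (its subgoals are unachievable), forcing a backtrack to the second.
rules = {
    "win material": [["pin the knight", "capture the knight"],
                     ["offer pawn trade", "open the file"]],
    "open the file": [["offer pawn trade"]],
}
facts = {"offer pawn trade"}
print(solve("win material", rules, facts))  # → True
```

The chess-flavored goal names are only labels; the same skeleton applies to any problem that decomposes into subproblems.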

The theory that a planning mechanism is the basis of consciousness is attractive. There are several proponents of the evolution of consciousness as a function of planning. The base definition is that consciousness is the operation of the plan-executing mechanism, enabling behavior to be driven by plans rather than immediate environmental contingencies (16). Here a plan is defined broadly to mean a scheme that can control a sequence of actions to achieve a goal (16). Thus the concept is that consciousness is not a separate neural module, but rather the result of the planning process taking control of behavior and gaining access to memory and sensory input. In this context it is meaningless to look for the box labeled "consciousness" in the brain model or to try to localize it in the brain's anatomy. The operations that make us conscious occur in the context of controlling behavior from a plan, and consciousness has no separate existence of its own. Because it is an effect, not a cause, there is no sense in looking for its functions (16). In defining consciousness in this manner, one has to wonder whether that is all there is. Planning seems necessary to evoke consciousness, but it is not sufficient. Some very routine plans, even quite complex ones such as driving home along an accustomed route, seem to take place without awareness, or at least without a subsequent episodic memory of the events. The episodic memory seems so confounded with earlier experiences of the same activity that it fails to be recorded as a separate experience. The activity fails to pass the memory test of awareness (16). In defining tasks that require deliberate attentional resources, Norman and Shallice (17) have argued that such tasks:

1. Involve planning or decision-making
2. Involve components of trouble shooting
3. Are ill-learned or contain novel sequences of actions
4. Are judged to be dangerous or technically difficult
5. Require overcoming a strong habitual response or resisting temptation (17).

Executing a very routine plan that does not meet these criteria seems to leave the planning mechanism free to engage in other activities. How does this relate to the problems with chess computers? There are billions of possible moves in each game. Each game has the potential to be unlike any before it; therefore, it commands the attention of each of the players. The trouble with assigning a term such as "consciousness" to Deep Blue, even though it exhibits many of the qualities shared by the various definitions of the term, is that there is no way to communicate with the computer on that level. Many of the definitions of consciousness rely on several important assumptions. One of the main ones is that the being observed to exhibit consciousness must be able to communicate with independent observers. Deep Blue does not have the capability to explain how it “decided” to make the pivotal move in Game #2.

The match between Deep Blue and Kasparov has elevated the status of machine intelligence to the next level. While detractors would argue that Deep Blue functions only by analyzing each and every possible move, i.e. the brute-force method, Game #2 would indicate that this might not be so. By trading the pawn instead of attacking with the queen, Deep Blue gave hints that it may no longer be relying solely on its memory database but may be creating new patterns, in effect “thinking”. This realization was enough to distract Kasparov, ultimately forcing him to resign the game. At the conclusion of the match, Kasparov admitted that Deep Blue played significantly differently than its predecessor, Deep Thought. Kasparov described Deep Blue as a “presence” rather than a machine.

This last game showed little of the relative strengths and weaknesses of Deep Blue, but it did serve to remind the world that only humans are capable of truly blowing a chess game. In any future match (and the World Champion immediately intimated that there would be one), Kasparov will have to take his own susceptibility to gross blunders into account. He will have to outplay the computer consistently enough to leave slack for such lapses (13). Kasparov, who on several occasions expressed unhappiness with the ground rules of the six-game rematch, challenged Deep Blue to a showdown under regular tournament conditions. "The match was lost by the world champion," he said, "but there are very good and very profound reasons for this. I think the competition has just started. This is just the beginning"(14).

The Ultimate Conscious Robot: Person Building Project

Both the Yammy robots and Deep Blue represent one type of envisaged conscious machine: the autonomous, self-contained machine. They are able to act in relatively independent states: to sense, to make sense of what is sensed, to act on it, and to display or report on the act and its results (21). The projected machines will have a highly integrated neural network with a vastly distributed set of processing subsystems which sense (feed forward) and reflect and control (feed back) each other, keeping the machine in touch with itself and its world (21). Unfortunately, Gödel’s Theorem of Incompleteness dictates the impossibility of ever describing the world and consciousness in a systematically complete formal or logical system. This will prevent researchers and engineers from "pre-programming" the conscious machine with everything that it needs to know. Instead, the engineers will rely on the machine learning and assimilating new knowledge in a manner similar to that of children. The technological challenges for this are daunting. A shift away from traditional computer architecture and towards a more biological model may be required.

Much speculation has been made as to where the first "true conscious" entity will arise. Many researchers predict that it will come from a distributed network of machines that would act more as a society, but might be able to act as a single combined entity. Such a net would need enough layers of linked subsystems, in some sort of hierarchical as well as horizontal structure, with enough of an organized basis to be able to distribute the array of tasks necessary for conscious behavior over an array of appropriately inter-linked subsystems of computer-embodied sub-nets (21). From this perspective, it is thought that the Internet may provide the first forum for such an entity.

The ultimate goal of the artificial intelligence community has been the creation of an artificial person. Currently there are seven projects funded: three in Japan, one in England, two in the United States, and one in Korea. These projects have produced robots that are able to walk and to react to stimuli (e.g. bright lights and sounds) in manners similar to those of humans. But these projects have not been without controversy. Underlying the debate as to whether or not a robot will be able to function in society like a person have been the writings of Selmer Bringsjord, a professor at Rensselaer Polytechnic Institute. His latest book, What Robots Can and Can't Be, is driven by the overarching argument that the Person Building Project will fail. His argument goes as follows:

(a) Persons aren't automata
(b) If artificial intelligence’s Person Building Project succeeds, then persons are automata
(c) Therefore, artificial intelligence's Person Building Project will fail.

While the logic behind his argument may appear insurmountable, there are a few chinks in his armor. The claim that persons aren't automata is still the subject of much dispute. Few would claim that people lack free will and freedom of action and thought, yet these may well be processes that arise from the interactions of neurons. There is not enough information as to what will arise from the development of computer programs and hardware able to completely mimic the actions of neurons.

Why Build a Conscious Robot?

The debate over conscious machines then turns to: why build a conscious robot? What advantages does this medium present? A conscious robot holds great promise in many areas of human exploration and expansion. Robots can be designed to survive a wide variety of environments and locations that would be inaccessible to humans. There are no metabolic requirements and little to no emotional attachment. NASA alone has devoted a great amount of resources to this goal. Current projects include AERCam, a free-flying robotic camera for use near the Space Shuttle and the Space Station; Dante II, a robot for volcano exploration; and Rocky 7, the development of a very long-lived science rover for Mars (30). Similarly, the government is also developing robots for military applications. Robots in these areas can enter potentially hazardous locations and then report back the relevant information they discover. By using a conscious robot, scientists will have the option of sending robots on deep-space missions and to other planets for exploration, with full knowledge that the robots will be self-sufficient and will explore areas that are programmed into their memory as "interesting". Other attractions of consciousness are that the robot will be able to perform self-maintenance and maintain independent action.

Implications for Conscious Machines:

Artificial intelligence and conscious machines will quickly create a social and hierarchical problem within the human population. By creating machines that are conscious of their own existence, we may in essence be creating a "new life form". By exploiting them for our own uses, we will quickly relegate these new creations to subservient status in society. It must be stressed that endowing the machines with human-like emotions will make them targets for human sympathy or disdain. Visible emotions in robots will lead society, starting with children, to react to them as though they were persons.

There are several very important safeguards that must be put into place prior to the widespread introduction of conscious machines. The first is that machines must never be created with "feelings" and emotions. By avoiding this aspect of humanity, we can hope to avoid emotional attachments to the machines and remember them for what they are: silicon and wires. It would be irresponsible to bring consciousness to robots in an uncontrolled manner; they will need to be programmed according to a hierarchical structure of instructions detailing their relations with their human masters. If robots are to serve our purposes by exploring potentially dangerous situations, equipping them with mechanisms of fear and panic may be counterproductive; indeed, such emotions could even be dangerous if they were to conflict with a machine's mission. The second safeguard will be the creation of an artificial culture in which the machines may survive. The purpose of that society will be the propagation of cooperative structures that gather information and promote thought and culture. The traditions of the institution form the skein of ideas and information, which the network uses to inform and shape the embodiment of the ideas that make up that institution (21).
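The hierarchical structure of instructions described above can be sketched in a few lines of code. This is only a toy illustration, not a proposal from the sources cited here; the rule names and priority numbers are hypothetical, invented purely to show how a priority ordering could resolve conflicts between directives.

```python
# A minimal sketch of a hierarchical instruction structure for a robot.
# All rule names and priority values are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass(order=True)
class Instruction:
    priority: int                      # lower number = higher precedence
    name: str = field(compare=False)   # name does not affect ordering

class RobotController:
    def __init__(self, instructions):
        # Keep instructions sorted so higher-priority rules are checked first.
        self.instructions = sorted(instructions)

    def resolve(self, *candidates):
        """Given the names of conflicting instructions, return the one
        that wins under the hierarchy."""
        for inst in self.instructions:
            if inst.name in candidates:
                return inst.name
        raise ValueError("no matching instruction")

controller = RobotController([
    Instruction(0, "obey human master"),
    Instruction(1, "complete mission"),
    Instruction(2, "perform self-maintenance"),
])

# A mission directive conflicts with a direct human order:
# under this hierarchy, the human order takes precedence.
winner = controller.resolve("complete mission", "obey human master")
```

The point of the sketch is that a fixed, inspectable ordering of directives, rather than anything resembling fear or panic, decides every conflict.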


What will happen with the advent of conscious machines? The hypothesis that the first conscious machine will arise from the World Wide Web, forming a "global brain" out of all of its hypertext linkages, provides a theoretical model for a neural network. But the implications of the creation of truly autonomous machines are staggering. With ourselves as the model of consciousness, we can assume that a robot, too, will begin to:

1. Organize itself and its relations (with other machines as well as humans), maintain that organization, and name its social situation

2. Inquire into some of the "imponderables" of its existence, such as how it came to be and what humans are (21)
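The "global brain" hypothesis above treats web pages as neurons and hyperlinks as the connections of a neural network. A toy sketch of that analogy, with page names and link weights invented purely for illustration, might look like:

```python
# Toy illustration of the "global brain" analogy: web pages as neurons,
# hyperlinks as weighted connections, and one step of spreading activation.
# Page names and link weights are hypothetical, chosen only for illustration.

# links[source] = list of (target, weight) pairs
links = {
    "page_a": [("page_b", 0.5), ("page_c", 0.5)],
    "page_b": [("page_c", 1.0)],
    "page_c": [],
}

def spread(activation, links):
    """Propagate one step of activation along hyperlinks,
    the way a simple neural network propagates signals."""
    new_activation = {page: 0.0 for page in links}
    for page, level in activation.items():
        for target, weight in links.get(page, []):
            new_activation[target] += level * weight
    return new_activation

# Activate one page and let the signal flow across its links.
activation = {"page_a": 1.0, "page_b": 0.0, "page_c": 0.0}
step1 = spread(activation, links)
```

However crude, the sketch shows why a densely linked hypertext can be read as a network that passes and accumulates signals, which is all the hypothesis requires as a starting point.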

When robots begin to question their existence, will they be considered living? Descartes' cogito ergo sum describes the self that is always present within our thoughts, and it has long been considered the hallmark of humanity. Where will these questions place robots in our culture? "Low-grade" conscious machines designed for space and planetary exploration will predictably occupy little to no place within society. Conversely, machines designed to care for humans, especially the elderly and infants, may quickly come to be considered part of the family. If so, moral issues quickly rise to the forefront. Consciousness and the creation of artificial life carry a burden that society will have to shoulder but for which it may not be prepared.

1. Deep Blue Technology

2. Deep Blue up close and personal

3. On The Evolution of Consciousness and Language

4. David Chalmers: The Hard Problem

5. Francis Crick: The New Materialism

6. Daniel Dennett: The New Epiphenomenalism

7. Roger Penrose: Quantum Consciousness

8. Colin McGinn: Lacking the Machinery

9. Robots Need to Be Conscious

10. On the Evolution of Consciousness and Language

11. Making Robots Conscious of their Mental States

12. Gödel, Escher, Bach

13. Deep Blue Wins Match

14. Deep Blue Wins

15. The Chess Mentality

16. On the Evolution of Consciousness and Language

17. Norman, D.A., and Shallice, T., "Attention to Action: Willed and Automatic Control of Behavior", University of California, San Diego, Center for Human Information Processing Technical Report 8006, 1980

18. Metacognition in Software Agents Using Classifier Systems

19. What Might a Conscious Computing System Be?

20. Consciousness Reframed '97

21. An Introduction to the Physiology of Ordinary Consciousness

22. 'Learning' Brain-like Webs

23. Mindless Thought Experiments (A Critique of Machine Intelligence)

24. Computational Architecture and the Creation of Consciousness

25. Conscious Machines

26. Minds or Machines

27. A Systems Approach to Consciousness

28. Global Workspace Theory

29. I'm Alive! Fundamental Strategies for Solving the Puzzle of Human Consciousness

30. NASA Space Telerobotics Program

31. ROBOTICS at Space and Naval Warfare Systems Center


© by Serendip 1994- - Last Modified: Wednesday, 02-May-2018 11:47:57 CDT

This paper reflects the research and thoughts of a student at the time the paper was written for a course at Bryn Mawr College. Like other materials on Serendip, it is not intended to be "authoritative" but rather to help others further develop their own explorations. Web links were active as of the time the paper was posted but are not updated.
