
Disconnection Between Brain and Computer Connections


 “The Brain—is wider than the Sky—

For—put them side by side—

The one the other will contain

With ease—and You—beside—”

—Emily Dickinson

 

A few weeks ago, I learned about basic digital logic in a physics lab. The activity got me thinking about connections. On the breadboard, one input led to one output. Are computers just complex breadboards? Does the brain work this way too? Modern scientists and philosophers debate whether or not the brain is like a computer. Indeed, computers are already capable of performing many humanlike behaviors, and many philosophers of mind and computer scientists believe that computers may one day be able to recreate human consciousness. In this brief overview, I will explain why, ultimately, a brain cannot be a computer and a computer cannot be a brain.
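To make the breadboard picture concrete, here is a minimal sketch (in Python, which the lab did not use; the choice of an AND gate is mine) of a logic gate as a fixed mapping: the same inputs always yield the same output.

```python
# A minimal sketch of the breadboard idea: a logic gate is a fixed
# mapping, so the same inputs always produce the same output.

def and_gate(a: bool, b: bool) -> bool:
    """Two inputs in, one output out; deterministic by construction."""
    return a and b

# Enumerate the full truth table; every run prints the same four lines.
for a in (False, True):
    for b in (False, True):
        print(f"{int(a)} AND {int(b)} = {int(and_gate(a, b))}")
```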

A computer is a device in which, given the same state, one specific input will always lead to the same output. We can look at the brain on a small scale and at a very specific event, the action potential, to see how the brain can resemble a computer. An action potential is an all-or-nothing disturbance of the axon (the brain’s “cable”) that propagates down the axon and leaves the axon unchanged; all “cables” transmit this same signal. “All or nothing” means that an action potential requires a specific input (in this case, a change in membrane voltage above the threshold) in order to generate its output (in this case, propagation). As membrane permeability changes in one region and sodium ions flow down their concentration gradient, the sodium channels farther along the membrane, and thus the next region of membrane, become depolarized. Once the first region of the axon reaches the minimum level of depolarization (the threshold), this propagation can occur. Thus, an action potential has the computer-like quality that, given the same state, one specific input (depolarization to threshold) always leads to the same output (a propagating action potential).
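As a rough illustration of the all-or-nothing rule just described, here is a toy Python sketch; the voltage values are illustrative assumptions, not measurements, and the model ignores everything about real membranes except the threshold.

```python
# Toy model of the all-or-nothing rule: a spike propagates at full size
# if depolarization reaches the threshold, and not at all otherwise.
# Voltage values are illustrative assumptions, not physiological data.

RESTING_POTENTIAL_MV = -70.0   # assumed resting membrane potential
THRESHOLD_MV = -55.0           # assumed firing threshold

def action_potential(membrane_potential_mv: float) -> bool:
    """Return True (a full spike) only if the potential reaches threshold."""
    return membrane_potential_mv >= THRESHOLD_MV

# Same state, same input, same output: the mapping below never varies.
for depolarization_mv in (5.0, 10.0, 20.0, 40.0):
    v = RESTING_POTENTIAL_MV + depolarization_mv
    print(f"membrane at {v:.0f} mV -> spike: {action_potential(v)}")
```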

What changes when we step back and look at the brain on a bigger scale? We do not have to move far from a single neuron to see where the brain differs from a computer. Compared to a computer as a single processor, the brain is a “massively parallel information processing system”. The brain comprises a network of about 10 billion brain cells, or neurons; each neuron in turn connects to about 10,000 synapses of other neurons. [1] Therefore, given one specific input into a neuron, there are as many as 10,000 different outputs to the synapses of other neurons. Each possible new input from those 10,000 synapses can in turn give rise to 10,000 different outputs as well. While an action potential within a specific “cable” is an all-or-nothing event, any given “cable” connects to a number of other cables, and so one specific input does not always lead to the same output, as it does in a computer. Instead, we might say that if a single neuron is a computer, then a brain is a network of billions of interconnected computers, as sketched below.
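The fan-out described above can be sketched in a few lines of Python; the weights, threshold, and firing rule here are invented for illustration, not taken from any real anatomy, but they show how one presynaptic spike spreads to thousands of downstream targets that each respond differently.

```python
# Toy sketch of fan-out: one firing neuron passes its signal to many
# downstream synapses, each with its own (invented) weight, so a single
# input no longer maps to a single fixed output.

import random

random.seed(0)
FAN_OUT = 10_000  # roughly the per-neuron synapse count cited above
THRESHOLD = 0.5   # illustrative firing threshold for downstream neurons

# Each downstream synapse gets its own illustrative weight.
weights = [random.uniform(-1.0, 1.0) for _ in range(FAN_OUT)]

def propagate(spike: bool) -> list[bool]:
    """Fan one presynaptic spike out to FAN_OUT postsynaptic responses."""
    if not spike:
        return [False] * FAN_OUT
    return [w > THRESHOLD for w in weights]

responses = propagate(True)
print(f"{sum(responses)} of {FAN_OUT} downstream neurons fired")
```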

Another way to see the difference between brains and computers is to look, both at the molecular level and at the neuronal level, at randomness and unpredictability. The Harvard Law of Animal Behavior states, “under carefully controlled experimental circumstances, an animal will behave as it damned well pleases”. This law refers not only to behavior but also to the unpredictability of the information processing that stands behind behavior; in other words, “there is some intrinsic variability in the intrinsic properties of the nervous system as a whole which influences its input/output relationships”. [2] For example, action potentials themselves depend on the random movement of particles. Without the continual random motion of ions, there could be no concentration gradient of charged particles and no changes in membrane permeability, and thus no action potentials. Similarly, an action potential can often be triggered at random, because all neurons have areas of “leaky membrane” that constantly produce a source of passive current flow (the movement of charged particles in response to a change in potential difference, as in the propagation of an action potential) into membrane regions with voltage-gated channels. Because action potentials, and thus patterns of neuronal connections, do not always rely on input from the outside world, our thoughts can be random and go beyond anything ever sensed. Unpredictability in the determinants of behavior is both inevitable (because of the random movement of particles) and desirable (because without unpredictability we would rely solely on external inputs and could not be creative).
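A small Python sketch can show how the “leaky membrane” idea above plays out: adding random noise to the membrane potential lets the toy neuron from before fire occasionally with no external input at all. The noise level is an assumption chosen only to make the effect visible.

```python
# Toy sketch of spontaneous firing: random membrane noise occasionally
# pushes the potential past threshold with zero external input.
# The noise standard deviation is an illustrative assumption.

import random

random.seed(1)
RESTING_MV = -70.0
THRESHOLD_MV = -55.0
NOISE_SD_MV = 8.0   # assumed spread of random membrane fluctuations

trials = 10_000
spontaneous_spikes = sum(
    1 for _ in range(trials)
    if RESTING_MV + random.gauss(0.0, NOISE_SD_MV) >= THRESHOLD_MV
)

print(f"{spontaneous_spikes} spontaneous spikes in {trials} trials "
      f"with no external input")
```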

The brain, because of its unpredictability, is not limited in its capabilities (as Emily Dickinson notes in her poem “The Brain – is wider than the Sky”, quoted at the beginning of this paper). A computer, on the other hand, is a deterministic system, limited in its capabilities. Unpredictability is something a computer designer would not tolerate; for a computer, given the same state, an input should always generate the same output. In other words, a computer is designed to be consistent. According to the mathematician and logician Kurt Gödel’s incompleteness theorem, any consistent formal system (something operating deterministically by previously agreed-upon axioms and rules of procedure for deriving statements from them) cannot be complete, meaning it fails to produce some true statements. Thus, a computer designer constructs a consistent computer at the price of completeness. Many philosophers of mind and mathematicians since Gödel have considered his theorem in the context of the brain. In his book The Emperor’s New Mind, Roger Penrose uses Gödel’s theorem, among other topics and examples from physics, mathematics, and philosophy, to argue that “Artificial Intelligence through computers, as presently constructed, cannot in principle duplicate the workings of the human brain” (as the title implies: the emperor’s new clothes, if you recall, were no clothes at all; so the emperor’s new mind, a computer, is no mind at all). [3] Penrose doubted that the brain was “incomplete”, and so he held that brains are not computers and that Gödel’s theorem does not apply to them. Indeed, we have already decided that, in terms of unpredictability, a brain cannot be a formal system, because the brain does not operate deterministically. Most likely, a brain is not incomplete; but is a brain more complete because of its inconsistency? Inconsistency leads to unpredictability and randomness, which lead to “thoughts beyond sensation”, creativity, and thus “more-completeness”, more possibilities. Inconsistent and complete: would this, in fact, make the brain a formal system? Questions like this one, and other ideas about formal systems and neuronal connectivity, continue to provoke the brain-computer debate.

 

Computer scientist Kwabena Boahen of Stanford University has developed a “neuron-like” computer chip called Neurogrid, made of millions of silicon neurons and requiring an amount of power close to a human brain’s actual energy consumption. The chip “trades the extreme precision of digital transistors for the brain’s chaos of many neurons firing, with misfires 30 percent to 90 percent of the time”. [4] Can a computer chip replace a neuron, or a group of neurons? It seems likely, because the computer chip can perform the same function that a neuron or a group of neurons can. Can a computer replace a brain? Even if the computer adopted the brain’s chaos, could that chaos have meaning and produce consciousness? John Searle argues against strong Artificial Intelligence in his essay “Minds, Brains, and Programs” with “the famous Chinese box thought experiment which undermines the idea that a computer which recreates human-like responses can be said to ‘understand’”. [5] The Chinese box example also applies to other facets of consciousness. For example, a computer can help us visualize information, but a computer itself cannot visualize information, just as a person does not understand Chinese simply because they have been told how to respond to different “Chinese inputs”. Visualizing information and understanding Chinese require an “observer”. In our class, we call this part of the brain the “I-function”, the “story teller”, or the part of you that IS you, the part that is conscious and aware of your self. In order for a computer chip’s chaos to mimic human consciousness, it would have to be, in some respect, aware of itself. Until then, a computer cannot replace a brain.
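The Chinese box argument can itself be caricatured in a few lines of Python: a program that returns sensible-looking Chinese replies purely by looking them up in a rule book. The phrase pairs are invented for illustration; the point is that nothing in the program understands anything.

```python
# A caricature of the Chinese box: replies come from rule-following
# alone. The phrase pairs are invented for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather?" -> "It is nice today."
}

def chinese_room(symbols: str) -> str:
    """Look the input symbols up in the rule book; no understanding involved."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I do not understand."

print(chinese_room("你好吗？"))              # a fluent-looking reply
print(chinese_room("你最喜欢的诗是什么？"))   # falls back to the default
```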


Works cited:

[1] William. Neuron-Like Computer Chips Could Simulate Human Brain. Artificial Intelligence: A Blog Dedicated to Artificial Intelligence Technology & News [Internet]. [cited 2010 May 10]. Available from: http://whatisartificialintelligence.com/576/neuron-like-computer-chips-could-simulate-human-brain/

[2] Grobstein, Paul. Variability in Brain Function and Behavior. Published in The Encyclopedia of Human Behavior, Volume 4 (V.S. Ramachandran, editor), Academic Press, 1994 (pp 447-458). Serendip [Internet]. [2004 Dec 13; cited 2010 May 10]. Available from: /bb/EncyHumBehav.html

[3] Ross, Kelley L. The Emperor’s New Mind, by Roger Penrose, Oxford University Press (A Book Review) [Internet]. [2002; cited 2010 May 10]. Available from: http://www.friesian.com/penrose.htm

[4] Neuron-Like Computer Chips Could Portably Digitize Human Brain. Brainicane: Brainstorming on a Higher Level [Internet]. [2009 Nov 6; cited 2010 May 10]. Available from: http://www.brainicane.com/2009/11/06/neuron-like-computer-chips-could-portably-digitize-human-brain/

[5] Jordan, Andrew. Computers, Artificial Intelligence, the Brain, and Behavior. Serendip [Internet]. [2000; cited 2010 May 10]. Available from: /bb/neuro/neuro00/web1/Jordan.html