
Bill Huber

The unconscious etc.

Your question about the unconscious, Alice, brings forward an important distinction among levels of description of the behavior of a Turing Machine. A TM as we know it acts on individual cells of a tape filled with letters of an abstract alphabet. The analog in humans might be a description of the brain as an assemblage of molecules and ions interacting according to chemical and physical laws. The "unconscious," however, is a description based on observing general patterns exhibited on the scale of an entire system comprising a septillion or so pieces. To make progress with this question, which is fundamental, we might want to begin by clarifying what we mean by the "unconscious" so that it becomes a concept that can be discussed and researched unambiguously. The challenge then is to show how the defining markers of unconscious behavior can emerge from the (theoretically) simple interactions of the ions and molecules. It's a daunting task, one we can hope to carry out by studying the system at several natural levels, ranging through larger molecules, neurons and ganglia, assemblies of neurons, and so on up through the major structures of the brain. This analytical approach, though, risks missing elements of system behavior that don't exist "in" any single level but are solely properties of the whole big interacting mess.
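
For concreteness, here is a minimal sketch (in Python) of the low-level picture just described: a machine that sees only one cell of the tape at a time and reacts according to a fixed transition table. The alphabet, the states, and the rules below are made up purely for illustration; they are not part of anything discussed above.

```python
# A toy Turing-Machine step loop: read one cell, consult the transition
# table, write a symbol, move the head, and (possibly) change state.
# All states, symbols, and rules here are illustrative assumptions.
from collections import defaultdict

# Transition table: (state, symbol_read) -> (new_state, symbol_to_write, move)
RULES = {
    ("scan", "a"): ("scan", "a", +1),   # pass over 'a's unchanged
    ("scan", "b"): ("flip", "A", +1),   # mark the first 'b' and change state
    ("flip", "b"): ("flip", "B", +1),   # keep marking subsequent 'b's
}

def run(tape, state="scan", head=0, max_steps=100):
    """Run the machine until no rule applies or max_steps is reached."""
    tape = defaultdict(lambda: "_", enumerate(tape))  # blank cells read as "_"
    for _ in range(max_steps):
        key = (state, tape[head])
        if key not in RULES:            # halt when no rule matches
            break
        state, tape[head], move = RULES[key]
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(run("aabbb"))  # -> "aaABB"
```

Nothing in this loop "knows" anything about the tape as a whole; the interesting behavior, if any, shows up only when you step back and look at what the rules do to long stretches of input.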

The easier part of your question is about change over time. One way to use a Turing Machine as a (simplified) model of human thinking is to let a very long sequence of symbols on the tape represent the constant stream of sensory inputs a person receives during their lifetime. Recall that one part of the machine's reaction to these symbols is to (optionally) change its internal state. It can be helpful to think of this "state" a little less abstractly: it can consist of a huge database of information, for example, which the machine "updates" (that's just a change of state) in response to what it "sees" on the tape. I have in mind a caricature of the absent-minded professor who writes everything down in a notebook: to find out whether she likes chicken soup for breakfast, for example, she would look it up in the notebook. The caricature becomes less obvious when the professor is able to look things up quickly and surreptitiously and to record new observations just as easily. At that point, would an observer still recognize the professor's decisions as absent-minded? Or even as purely mechanical?
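
To make the caricature concrete, here is a small sketch in which the entire "internal state" is a dictionary standing in for the professor's notebook: every observation is a state update, and every decision is a lookup. The class and the sample observations are invented for illustration.

```python
# A sketch of the "notebook as machine state" caricature. The names and the
# sample entries are assumptions made for this example, nothing more.
class NotebookProfessor:
    def __init__(self):
        self.notebook = {}          # the machine's entire internal "state"

    def observe(self, key, value):
        """Reading a symbol from the tape and (optionally) changing state."""
        self.notebook[key] = value  # the state change is just an update

    def decide(self, question, default="no opinion yet"):
        """Every decision is a lookup in the notebook, nothing more."""
        return self.notebook.get(question, default)

prof = NotebookProfessor()
prof.observe("likes chicken soup for breakfast", "yes")
print(prof.decide("likes chicken soup for breakfast"))  # -> yes
print(prof.decide("likes oatmeal"))                     # -> no opinion yet
```

If the lookups and updates happen fast enough, an outside observer sees only fluent answers, which is the point of the caricature: the mechanism doesn't advertise itself.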
