Biology 202
1998 First Web Reports
On Serendip

Mechanisms of Originality: Comparing Language Systems to Neural Systems

Kristin Chimes Bresnan

This paper reflects the research and thoughts of a student at the time the paper was written for a course at Bryn Mawr College. Like other materials on Serendip, it is not intended to be "authoritative" but rather to help others further develop their own explorations. Web links were active as of the time the paper was posted but are not updated.


"When I was a boy I felt that the role of rhyme in poetry was to compel one to find the unobvious because of the necessity of finding a word which rhymes. This forces novel assocations and almost guarantees deviations from routine chains or trains of thought. It becomes paradoxically a sort of automatic mechanism of orginality ..." ---- Stan Ulam, Adventures of a Mathematician

In a previous paper, I began exploring a comparison between language and DNA based on their function as information systems. In this paper, I would like to consider some of these issues further, as well as extend the comparison to the nervous system. The earlier conversation was structured around the five "essential characteristics" of DNA: stability, variation, reproducibility, the ability to store information, and the ability for that information to be read. For this paper, I'd like to focus just on the criterion of stability by looking at what some researchers are saying now about the structure of language and the structure of the nervous system.

One complication which is intrinsic to any discussion like this is that the parallel lines one tries to pursue are only parallel in places; eventually they do overlap, and often they are indistinguishably tangled. The most obvious and forbidding example is that language is itself a product of neural function; thus, when one gets to the root of how sentences are understood and generated, the comparison to neural activity becomes moot, because in fact it IS neural activity (highly specialized and probably not easily generalized neural activity at that). Similarly, any discussion about the origins of language is also by definition a discussion of the evolution of the brain. I mention this only because I think that while the risk of chasing one's own tail is very real, the observations which arise from a consideration of the places where the two structures parallel one another (in an extremely basic way) are sufficiently interesting to warrant the attention.

The simplest way to think about structure is in terms of building blocks or discrete units. With language, the most basic units are either letters or phonemes (9); the next level of organization is words; following words are series of words (which in Western languages are usually sentences). Interestingly, meaning is not acquired until letters have made the leap to words. That is to say that a single letter, "A" for example, has no intrinsic meaning; it only acquires significance in relationship with other letters (9). Furthermore, there are very specific rules governing the relationships between letters, such that "peanut" is acceptable and has meaning while "lsrigegejg" does not. Words depend on precedent as well: while an infinite number of words is theoretically possible, a distinctly finite number of words exists at any one time.
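The arithmetic behind this constraint can be made concrete with a small sketch (the four-word lexicon below is an invented stand-in, not any real dictionary): even short strings drawn from 26 letters come in hundreds of millions of possible forms, yet only the few combinations sanctioned by the lexicon carry meaning.

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    # An invented stand-in lexicon; a real one would hold tens of
    # thousands of entries, but it would still be finite.
    LEXICON = {"peanut", "pea", "nut", "ant"}

    def is_word(s):
        # A string only carries meaning if the lexicon sanctions it.
        return s in LEXICON

    print(len(ALPHABET) ** 6)   # 308915776 possible six-letter strings
    print(is_word("peanut"))    # True: an acceptable combination
    print(is_word("lsrige"))    # False: letters alone mean nothing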

The leap to the next level of organization is in many ways the most pleasing, both because it is simple and because it generates an infinite number of possible outputs. It is made through syntax: the application of a few reasonably simple rules which govern the relationships between words. It is the many ways of juxtaposing those words which allow for the infinity of meaning; it is the simplicity of the rules which allows those meanings to be conveyed and understood - that is, for the meanings to exist.
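The generative character of syntax can be sketched with a toy grammar. The rules and vocabulary below are illustrative inventions, not a model of English; the point is only that a handful of rules, one of them recursive, yields an unbounded set of distinct sentences from a finite stock of words.

    import random

    # A toy grammar: each symbol expands to one of a few alternatives.
    # The rule for NP is recursive (a noun phrase may contain a verb
    # phrase), which is what opens the door to infinitely many outputs.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"], ["the", "N", "that", "VP"]],
        "VP": [["V", "NP"], ["V"]],
        "N":  [["dog"], ["neuron"], ["word"]],
        "V":  [["sees"], ["chases"], ["names"]],
    }

    def generate(symbol="S"):
        # Anything without a rule is a terminal, i.e. an actual word.
        if symbol not in GRAMMAR:
            return [symbol]
        expansion = random.choice(GRAMMAR[symbol])
        return [word for part in expansion for word in generate(part)]

    for _ in range(3):
        print(" ".join(generate()))
    # e.g. "the word that sees the dog chases the neuron"

Five rules and six words suffice; it is the recursion, not the size of the lexicon, that makes the set of possible sentences infinite.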

What bearing does this have on the nervous system? If we build the analogy up from the same place, we begin with neurons as the basic units (2). Whether neurons correspond to letters or words as building blocks is not nearly as important as the idea that they gain new abilities or characteristics with each level of organization. At what level of organization is the system catapulted into the realm of infinitely variable outputs? Is there a mechanism at work in the nervous system analogous to syntax, a few simple rules of relationship between neurons which allow (indeed insist upon) novel and complex behaviors? Is such a mechanism one of the properties of "complex systems"?

In "Variability in Brain Function and Behavior", Paul Grobstein makes the point that a complex system is an "'open' and 'self-organizing' system. Its form depends on a continuous flow of matter and energy through it, and emerges entirely from the interaction of a large number of elements" (2);. What strikes me as curious is that the rules of syntax or grammar are not in any way implicit to the nature of words; that is to say that syntax does not emerge from the interaction of words based on their intrinsic properties. The rules of grammar depend on the form and nature of words, but there is something new thrown into the mix when words make the organizational leap to sentences. The two traits which most distinguish words in a sentence from words in a random string are intent (in that a sentence shows evidence of the intent of someone else) and the fact that each word is highly dependent on context. I will leave the issue of intent out of the discussion for now because it is one of those places where the functions of language and the brain are indistinguishable, and just consider the nature of context dependence.

In "Language as a Dynamical System", Jeffrey Elman argues that language may more usefully be considered a dynamical system rather than a representational system; that is, "representations are not abstract symbols but rather regions of state space. Rules are not operations on symbols but rather embedded in the dynamics of the system." (1);. He suggests getting away from the architectural, bricks - and - mortar model of language which poses rules as the operators and the lexicon as the operand. In a dynamical system, which he explores with computer modeling, "words are not the objects of processing as much as they are inputs which drive the processor... inputs operate on the networks internal state and move it to another position in state space." (1);. This description bears a strong resemblance to what was described in class as the bidirectionality of the nervous system: outputs (words) drive inputs as much as the other way around. Thus, the words have a relationship amongst themselves in a sentence which is mediated but not determined by syntax; they are active, not passive, participants.

Where does this leave us in terms of neural functioning? We already know that neurons are active, and we know that they depend on anatomical specificity and therefore are "context dependent". Elman's research supports Paul Grobstein's point that "variability is by no means the residuum left over when everything else is understood, but may instead in many cases be part of the very essence of what is to be explained" (2). If syntax is the mechanism by which a finite number of words produces an infinite variety of meanings, perhaps the mechanism of variability in behavior can be found by looking for the relationships that characterize the finite number of neurons which produce an infinite variety of actions.

WWW Sources

1. Language as a dynamical system, by Jeffrey Elman, University of California, San Diego

2. Information about Variability in brain function and behavior, by Paul Grobstein, Bryn Mawr College

3. From the head to the heart: some thoughts on similarities between brain function and morphogenesis ..., by Paul Grobstein, Bryn Mawr College

4. Symbol- using systems at two levels of organization, by Martin Sereno, University of California, San Diego

5. The emergence of intelligence, from Scientific American, by William Calvin, University of Washington

6. The great climate flip-flop, from The Atlantic magazine, by William Calvin, University of Washington

7. The unitary hypothesis: a common neural circuitry for novel manipulations, language, plan-ahead, and throwing?, by William Calvin, University of Washington

8. A stone's throw and its launch window: timing precision and its implications for language and hominid brains, from Journal for Theoretical Biology, by William Calvin, University of Washington

9. True language: animal language abilities and what the linguists study, by William Calvin, University of Washington

10. Viruses of the mind, by Richard Dawkins

11. Oliver Sacks's Awakenings: reshaping clinical discourse, by Ann Hunsaker, Pennsylvania State College
