

The Linguist and the Neurobiologist:
A Conversation About "Incomplete or Mistaken" Constructs and What to Do About Them

Eric Raimy and Paul Grobstein

Most recent: 27 January 2006

Eric and Paul share an inclination to try to understand how people think about problems, what trouble they get into as a result, and how they might get out of that trouble. Over several years of roughly weekly conversations, they have developed a sense that there may be parallels between the troubles people run into in linguistics and in brain research, and perhaps in the ways to get out of them. Several years ago, Paul offered Eric a 1951 paper by Karl Lashley, "The Problem of Serial Order in Behavior," as a possible instance of parallels between issues in linguistics and neurobiology. In that paper, Lashley wrote:
"If there exist in human cerebral action, processes which seem fundamentally different or inexplicable in terms of our present construct of the elementary physiology of integration, then it is probable that that construct is incomplete or mistaken, even for the levels of behavior to which it is applied."

Having finally gotten around to reading the paper, Eric agreed with Paul that it might provide a good take-off point for some public conversation about the relationship between linguistics and neurobiology. The following, which will be updated regularly, is intended to encourage ongoing exploration of this interface, as well as of the broader scientific and cultural issues to which it relates. Join the conversation yourself in the associated on-line forum, or email us with longer contributions.


Following lunch conversation 18 January 2006

ER to PG - 27 Jan 2006

"Our common meeting ground is the faith to which we all subscribe, I believe, that the phenomena of behavior and mind are ultimately describable in the concepts of mathematical and physical sciences."

"The study of comparative grammar is not the most direct approach to the physiology of the cerebral cortex, yet Fournié (1887) has written, 'Speech is the only window through which the physiologist can view the cerebral life'"

... Karl Lashley, The Problem of Serial Order in Behavior

I wanted to start this conversation with these two quotes from Lashley's paper because they illustrate for me how far our understanding of language and the brain has advanced, and also highlight how we are still only able to formulate basic (and probably still ill-formed) questions about some fundamental issues concerning the brain.

As a linguist, I can attest to the advances in our understanding of human language since 1951. Simply put, Chomskyan linguistics had not yet arrived in 1951, and now, whether you agree or disagree with it, it is fairly obvious that the questions linguists are asking at present have advanced in their well-formedness and focus. Regardless of these better questions about language, however, little to no progress has been made on some of the fundamental issues that Lashley raised in his paper. In fact, some of those opposed to Chomskyan linguistics probably feel that the field has moved farther away from addressing Lashley's issues because of its more complicated and abstract claims about the nature of human language. I'm on the optimistic side, so I'd like to talk about a few ideas about language that might help us form better questions about the issues Lashley raised.

One important idea we need to consider is the one we stumbled across during lunch: that the 'binding problem' in neurobiology is the 'streaming problem' in auditory analysis. A simple statement of the 'streaming problem' in audition is the question of how the brain decides to group, or not to group, different components of an auditory signal. A concrete example of this phenomenon (see Bregman's Auditory Scene Analysis) is that if the three notes of a musical chord are presented in a temporally disjointed manner, they are perceived as three distinct notes. But if the three notes are presented cotemporally, the perception is of a chord and not of three distinct notes. This is an oversimplified version of the 'streaming problem,' because manipulations of temporal sequencing, of the loudness of the notes, and of their frequencies all interact in determining whether an auditory stimulus is 'decomposable' into its component parts.
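The contrast can be sketched in code (a minimal illustration with synthesized pure tones, not Bregman's actual stimuli; the sample rate, duration, and note choices here are arbitrary). Physically, the chord and the sequence contain the very same frequency components; only their temporal arrangement differs, and that is what the 'streaming' decision operates on.

```python
import numpy as np

SR = 8000          # sample rate (Hz), arbitrary for this sketch
DUR = 0.5          # duration of each note in seconds

def tone(freq, dur=DUR, sr=SR):
    """A pure sine tone at the given frequency."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)

# Three notes of a C-major chord (approximate fundamental frequencies).
notes = [261.63, 329.63, 392.00]   # C4, E4, G4

# Presented cotemporally: the three waveforms sum into one signal,
# and listeners tend to hear a single chord.
chord = sum(tone(f) for f in notes)

# Presented sequentially: the same notes one after another,
# and listeners tend to hear three distinct notes.
sequence = np.concatenate([tone(f) for f in notes])

print(chord.shape, sequence.shape)   # (4000,) (12000,)
```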

The 'streaming problem' struck me as the same basic problem as the 'binding problem' that you described to me at lunch. A simple description of the 'binding problem' in neurobiology is that we are unable to determine where or how a perceptual event occurs in the brain. A convenient, oversimplified example is the question of which neurons are involved in the representation of a visual input. Apparently this is a big question for neurobiologists, and I'll take your word for it.

The similarity between the two problems, to me, is that both ask how to decompose a complex signal into component parts. This strikes me as an assumption crucial to the binding problem, in that what we are trying to do in investigating it is to identify which component of the overall activation of the brain corresponds to a particular perceptual event. I don't know whether this helps with the binding problem, but I think the streaming problem can probably be understood through how the ear is actually constructed. The cochlea contains many small hair cells with different resonances, which likely provide a coarse Fourier transform of the incoming auditory signal. The number and distribution of hair cells in the cochlea help determine which frequencies of sound we can hear, and if the brain can access the individual activation or output of each hair cell, then the 'auditory signal' fed into the brain is already broken down into component frequencies. This would, one hopes, provide a level of analysis that allows the brain to 'stream' things appropriately. The activation of different hair cells at a specific point in time, together with the tracking of changes in that activation, would presumably be sufficient to allow 'streaming' or 'grouping' to occur.
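A minimal sketch of that last idea, assuming nothing about actual cochlear mechanics: a discrete Fourier transform of a composite signal recovers its component frequencies, standing in very crudely for the hair cells' frequency decomposition (each frequency bin playing the role of one resonance).

```python
import numpy as np

SR = 8000                                   # sample rate (Hz), arbitrary
t = np.arange(SR) / SR                      # one second of samples

# A composite signal: two simultaneous pure tones at different amplitudes.
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# The magnitude spectrum plays the role of the cochlea's coarse frequency
# analysis: each bin is like the output of one differently tuned hair cell.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / SR)

# The two largest peaks recover the component frequencies.
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))    # → [440.0, 1000.0]
```

With the signal decomposed this way, a downstream 'grouping' process need only track which bins are active together over time.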

We should not become too enthusiastic about this endeavor, though, because I think it is a plausible solution only in the simplest cases. When we consider more complicated cases of audition that are actually still 'trivial,' we can see where our current knowledge runs out. The kind of case I have in mind is the human ability to distinguish different musical instruments playing the same note. If we look at the spectrograms of two instruments (say a violin and a trumpet), we can easily pick out differences in their acoustic 'fingerprints.' Simplified, different instruments have different spectra, where a spectrum is the distribution of amplitudes across frequencies. The important question, though, is how the brain keeps track of (or even produces and processes) this kind of information. It seems to me that the brain somehow creates a representation of the spectrum of a violin and stores it somewhere. Note that the representation has to be abstract, in that the actual spectrum produced by the violin changes when different notes are played. Thus, the brain has to create some sort of abstract 'spectral fingerprint' for a violin that can handle the variation in the actual spectra of specific notes and specific violins. The brain must also have a distinct spectral fingerprint for a trumpet, so that it can distinguish between the two.
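One way to make the 'spectral fingerprint' idea concrete (with two invented harmonic profiles standing in for a violin and a trumpet; the numbers are illustrative, not measured spectra): describe each instrument by the relative amplitudes of its harmonics, a vector that stays fixed while the fundamental frequency changes from note to note.

```python
import numpy as np

SR = 8000   # sample rate (Hz), arbitrary for this sketch

# Invented harmonic-amplitude profiles standing in for two timbres.
TIMBRES = {
    "violin-like":  [1.0, 0.7, 0.5, 0.3],
    "trumpet-like": [1.0, 0.3, 0.8, 0.6],
}

def play(timbre, fundamental, dur=1.0, sr=SR):
    """Synthesize one note: harmonics of the fundamental, weighted by the timbre."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * fundamental * (k + 1) * t)
               for k, a in enumerate(TIMBRES[timbre]))

def fingerprint(note, sr=SR, n_harmonics=4):
    """An abstract 'spectral fingerprint': harmonic amplitudes normalized to
    the strongest, so the same timbre yields the same vector at any pitch."""
    spectrum = np.abs(np.fft.rfft(note))
    freqs = np.fft.rfftfreq(len(note), d=1 / sr)
    f0 = freqs[np.argmax(spectrum)]   # assume the fundamental is loudest (true here)
    amps = [spectrum[np.argmin(np.abs(freqs - f0 * (k + 1)))]
            for k in range(n_harmonics)]
    return np.round(np.array(amps) / max(amps), 2)

# The fingerprint is stable across different notes of the same instrument...
v_c4 = fingerprint(play("violin-like", 262.0))   # roughly C4
v_g4 = fingerprint(play("violin-like", 392.0))   # roughly G4
# ...but distinguishes two instruments playing the same note.
t_c4 = fingerprint(play("trumpet-like", 262.0))
print(v_c4, v_g4, t_c4)
```

The normalization by the loudest harmonic is what makes the representation abstract in the sense above: it discards the absolute spectrum of any particular note in favor of a pitch-invariant ratio pattern.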

These observations relate to the 'binding problem' in that the brain is somehow able to keep track of these two spectral fingerprints within an acoustic signal even when the instruments overlap under some circumstances. The idea is that if we can figure out how the brain keeps these two distinct sources separate in a single acoustic signal, this might provide insight into how the brain keeps track of the different processes we believe are going on within it. That, to my mind, is the crux of the 'binding problem': how are distinct processes kept separate in the overall 'activity state' of the brain?

Going back to Lashley's question and theoretical linguistics... from what I can tell, the questions of temporal relations, spatial relations, and abstract theoretical representations all fall under the umbrella of the 'binding problem.' Because we do not have a good understanding of possible solutions to the binding problem, we are unable to distinguish between different claims about the nature of time, space, and representations in the brain. My hope is that if we get a grasp on the binding problem, we could then get a grasp on how to investigate which representations are plausible for the brain. Right now, the overall brain state appears as a big mush, so who really knows what exactly is going on in there...

PG to ER - 27 Jan 2006

Interesting conversation, as always. And glad to find the old Lashley paper, "The Problem of Serial Order in Behavior," in the vicinity of the center of what we seem to be circling around in our conversations past and present. I do think the Lashley paper relates to issues that are still relevant in both neurobiology and linguistics, as well as to larger issues for which both are in turn relevant.

Among the intriguing things that came out of our last conversation is the idea that the "binding problem", usually conceived in neurobiology as a contemporary issue related to higher order brain processes such as perceptual recognition, is in fact a "lower" order problem in auditory processing and phonological analysis. The argument here is that not only the speech stream but any auditory input is basically decomposed in the periphery into a set of amplitude and phase values for different frequencies. This presents the nervous system with the challenge of how to associate sets of information about frequencies (how to attribute particular bits of frequency information to one or another of several simultaneously active speakers, to one or another of several simultaneously active musical instruments, to one or another of several potential phonological elements). Hence, the nervous system must be solving the "binding problem" in at least some cases at quite early processing steps (well before the kinds of distributed neocortical representations that have made the problem prominent in recent years).

Your intuition, if I'm understanding you correctly, is that the "abstract representational approach" provides a way to understand how the nervous system is doing this, one involving the use of "feature detectors" which, in essence, are tuned to and hence respond only to particular spatio-temporal patterns of activity (looking, presumably, across several different sets of frequency information). And this, in linguistics, contrasts with a different potential set of explanations that relies instead on some kind of timing mechanism or synchronization process to link together relevant bits of information distributed in different places. I think this is well worth exploring further, since the notion of solving the binding problem by synchronization has been a live issue in neurobiology (and I'm not sure alternatives have been seriously considered).

I'm also intrigued by the issue of whether similar problems exist and have been ignored early in the visual pathway. The one relevant situation that comes to mind has to do with stereopsis. In this case it is likely that the binding problem is solved by a "best guess" process, with multiple analyses trying out different associations under some kind of collective best-fit constraint. Perhaps that is relevant in other contexts, including the phonological and the perceptual (the brain solves its own binding problem by some kind of distributed negotiation process)?

Beyond the immediate issue, there is here also the more general problem of the temporal "flatness" of the nervous system and the implications of that for a whole host of problems, linguistic and otherwise, such as the origins and meaning of concepts of time. Here, I'm intrigued by your notion of "sequence" being generally represented atemporally, by a series of differently located elements and an arrangement of directed associations among them. Such an architecture resonates with some of the thinking I've been doing about the nature of representations in the unconscious, as well as with a generalization that the unconscious lacks a sense of time. An interesting question that follows from this, of course, is how a sense of time is generated by the "story telling" part of the bipartite brain, and how the two regions, one atemporal and the other temporal, communicate with one another.

There clearly are "filters" in the nervous system but, as we talked about briefly at the end of this conversation, I'm a little skeptical about whether they should be called "feature detectors". The issue here is the likelihood that the filtering circuits one discovers related to particular tasks probably didn't actually evolve to serve those tasks, and that even in their current state they are likely to be only one of several filters simultaneously involved in any particular analysis. Your sense that several human speech "feature detectors" actually exist in chinchillas and bats fits with the first of these notions, and your sense that a particular human speech filter characteristic is modified by other simultaneously active filters fits with the second. Maybe neurobiology and linguistics really do have useful things to say to one another?

To be continued ...
