Colin Phillips
Department of Linguistics
University of Maryland
In a series of studies using EEG and MEG recordings in an adapted mismatch paradigm, we have investigated the grouping of sounds into phonological categories, and the grouping of phonological categories into feature-based natural classes. The results of these studies indicate that discrete phonological category representations are available to human auditory cortex, but they provide no insight into how these representations are instantiated.
Sentence structures pose a problem of a rather different nature. Due to the iterative and recursive properties of natural language syntax, speakers are able to draw on a finite number of words to create an infinite number of sentences (the 'discrete infinity' property of language), and any sentence may be understood within a very brief period of time. Therefore, the brain must provide mechanisms that support representations that are both highly structured and able to be created within a few hundred milliseconds. In order to bridge the gap between our understanding of sentence structure at the linguistic level and at the brain level, a number of steps are required. I will review some work by our group toward this goal. First, we have begun to develop a dynamic model of human syntactic knowledge, one that can be deployed in real-time syntactic processing. Second, fragments of this approach have been implemented in a working computational model. Third, we have run a number of experimental tests of the model, using reading-time paradigms. Finally, we have begun to explore the representation of sentence structures using ERP brain recordings. In this area, top-down efforts toward unification are more promising.