
Facial Expression Recognition: When the amygdala intercepts the I-function

BeccaB-C

The human nervous system’s propensity to see and recognize faces is a remarkable evolutionary tool, providing our species with the ability to engage in enhanced social, protective and communal behavior. The cross-cultural universality of facial expressions, as demonstrated by Ekman, brings to light six distinct facial expressions that have been exhaustively researched in the social psychology community (McCormick, 2007). Neuroscience research in this area has been forthcoming as well, and the role played by the fusiform gyrus as a face-specific recognition area in the brain can partially explain our tendency to recognize facial configurations and to assemble information from them. It does not, however, elucidate the mechanism behind our detection of emotional information from these configurations. Supported by evidence from behavioral and neural research, this paper will argue that emotional priming and social, environmental and qualitative factors influence and intercept the recognition of human emotion in facial expressions, moving this recognition process out of the I-function and into the subconscious, implicit areas of brain function.

In most exercises of recognition, it is helpful to have cues that lead to accuracy. For example, in recognizing a particular molecule on an organic chemistry test, it helps to notice cues like the various functional groups and elements from the periodic table that make up the molecule. With such cues to recognition, it is often much easier to make an accurate judgment. In a study by Righart and de Gelder (2008), participants were presented with images of faces depicting happy, fearful and neutral expressions, paired with varying background environments that fit one of the same three categories. The study supported the idea that environmental context and visual cues can enhance recognition of facial expressions: expressions were recognized more quickly in a congruent emotional context, that is, against a background image whose tone matched the emotional expression presented in the face (Righart & de Gelder, 2008).

This and other studies have built on the cues for accurate recognition of facial expressions, suggesting that some expressions are recognized more quickly based on such cues, or more fully based on the type of emotional response they elicit (Righart & de Gelder, 2008; Williams, McGlone, Abbott, & Mattingley, 2008). Researchers have posited that facial expressions of fear might be recognized more expeditiously, or might elicit a stronger neural response in observers, than other expressions (Williams et al., 2008). Williams et al.’s research has supported the possibility that certain expressions are more readily recognized than others. In their study, participants were instructed to report the presence of a particular facial expression, either happy or fearful, in a group of many faces, some of which were distractors of the other expression and most of which were neutral. The behavioral portion of the study demonstrated that happy faces were detected significantly better by participants (Williams et al., 2008).

Neuroimaging results in this and other studies further suggest the roles played by structures of the nervous system in facial expression recognition and, once again, the reach of emotional influences (Williams et al., 2008; Righart & de Gelder, 2008; Bleich-Cohen et al., 2009). While the hypothesized speedy recognition of the fear expression was not supported by the behavioral portion of Williams et al.’s (2008) research, fMRI BOLD (blood oxygen-level dependent) signals, recorded while participants searched groups of faces for pre-determined facial expressions (some looking for happy faces, some for fearful ones), produced different results. In this part of the study, “displays containing a fearful face” in fact yielded higher activation in the amygdala, the part of the brain often associated with emotional expression and regulation (Williams et al., 2008). Williams et al., as well as Righart and de Gelder (2008), have hypothesized that this increased activation in the amygdala might represent the brain’s propensity to attend preferentially to facial information that communicates possible danger or threat, as a self-protecting mechanism.

The neuroscience portion of the study by Righart and de Gelder (2008) supported the role of neural structures in the recognition of emotional facial expressions. A point roughly 170 ms after the presentation of a facial expression has been identified as one at which much of the process of recognition takes place, and so was examined in this study. The amplitudes of event-related potentials (ERPs) were recorded from occipito-temporal electrodes, near the amygdala in the temporal lobes and quite close to the left fusiform gyrus. At this particular stage in facial expression recognition, ERPs showed significant change in response to happy facial expressions (Righart & de Gelder, 2008).

Further, these data support the behavioral pattern observed previously, in which contextual cues accelerated the recognition process. In Righart and de Gelder’s (2008) study, amplitudes at 170 ms were more negative when faces were presented against a fearful contextual background, and the most pronounced negative amplitudes occurred in response to happy facial expressions in fearful scenes, suggesting a response to the unexpected pairing (Righart & de Gelder, 2008).

The implication of the amygdala in facial expression recognition fits with the field of neuroscience’s understanding of this and other brain structures. The amygdala’s well-supported role as an emotion center in the brain suggests that emotional facial expressions would naturally be processed by a part of the brain that is highly sensitive to emotion (Williams et al., 2008). Further, extensive support exists for various “anatomical connections between the amygdala and the fusiform gyrus,” the area of the brain best known for highly selective sensitivity to face configurations (Righart & de Gelder, 2008). Past research discussed by Righart and de Gelder (2008) suggested that emotional information from the amygdala enhances responses in the fusiform gyrus, producing an additive effect between the ability to recognize faces and the ability to recognize emotional expression in faces. The study by Williams et al. (2008) demonstrated increased activity in the fusiform gyrus when fearful faces were presented and correctly identified by participants, seen in conjunction with the amygdala responses reported earlier, further supporting the connection between the amygdala and the fusiform gyrus.

Studies of neural deficits can greatly increase our knowledge of the inner workings and differential functionality of the brain by examining how a missing or damaged portion affects behavioral ability. In the study of facial expression perception, two known disorders involving loss of brain mass greatly impair one’s ability to perceive certain parts of facial stimuli. Schizophrenia is known to produce a marked decrease in patients’ ability to perceive social, emotional, expressive cues from people’s faces, though patients remain able to perceive and recognize the faces themselves. The disorder is also known to involve a decrease in the volume of the fusiform gyrus (Bleich-Cohen et al., 2009). This single dissociation supports the role of the fusiform gyrus in some level of emotional facial processing and, by close anatomical association, may implicate the amygdala. Conversely, in prosopagnosia resulting from damage to the fusiform gyrus, many patients are able to perceive emotional expression on faces without being able to perceive the faces themselves (Tsunoda, Yoshino, Furusawa, Miyazaki, Takahashi, & Nomura, 2008). This may indicate unconscious, subcortical processing of emotional expression, separate from facial pattern, through the tecto-pulvinar visual pathway, a part of the visual system that passes through the superior colliculus and never reaches the cortex, remaining unconscious (Tsunoda et al., 2008).

Though no neurological evidence from social anxiety disorder or generalized anxiety disorder yet implicates new areas of the brain in facial expression processing, these deficits contribute to our knowledge as well. Research on people with high anxiety, as measured by an established scale, revealed less attentional control in emotion processing in this group, implying that in anxiety, emotional information in facial expressions, especially fear, is processed in a more automatic, less top-down manner (Fox, Russo, & Georgiou, 2005).

The neurological and behavioral evidence provided so far implicates various emotion-regulating areas of the brain, and various behavioral processes, as highly involved in facial expression processing. Our ability to perceive emotional information from visual cues in a face might seem naturally to derive from conscious efforts to empathize and provide aid to those in emotional need. On the contrary, evidence from the neural and behavioral studies presented earlier, as well as research on priming and unconscious perception, implies that facial expression perception is deeply rooted in unconscious behaviors, cutting the I-function almost completely out of the process. Research has indicated that images of fearful facial expressions flashed more briefly than the known threshold for visual perception, and reported as unseen, can elicit the same ERP and skin conductance response changes as consciously perceived fearful facial expressions (Tamietto & de Gelder, 2008). In research by Li, Zinbarg, Boehm, and Paller (2008), priming with fearful or happy facial expressions led to varied perceptual judgments of the emotionally ambiguous surprised facial expressions used as targets.

It is clear that much of what determines the human tendency to recognize and empathize with fellow human faces and facial expressions lies outside the realm of conscious perception. The I-function, as a conscious processing mechanism, appears to hold responsibility for the majority of human action, interaction and perception beyond intuitive knowledge. The processing of emotional expressions, however, provides fodder for questioning the role of the I-function and the true level of consciousness dictating our social and emotional behavior. The evidence presented in this paper suggests that much of what we recognize as conscious perception of facial expressions is, in fact, dictated, primed, and otherwise determined by psychosocial, emotional and anatomical factors before it reaches the I-function. Emotional cues prime or enhance recognition, and threat-posing fear expressions are perceived through different pathways and with different urgency.

The amygdala is clearly deeply intertwined with facial expression recognition, but to what extent is our I-function directed by these cues, and to what extent can our I-function direct these unconscious processes in turn? When we use our I-function to consider a particularly scary story, or a memory of a particularly unhappy or fearful facial expression, can we effectively prime or direct our amygdala and facial expression recognition mechanisms such that the I-function elicits the same sorts of changes in perception as unconscious primes do? The evidence suggests that the amygdala intercepts facial perception before it reaches the I-function, directing it based on stored or immediately received emotional cues or primes. The next step is to examine whether the I-function can, in turn, intercept amygdala processing and direct the unconscious perception of emotional information, especially as it pertains to the perception of facial expressions. In a sense, this would be a consciously implemented afferent loop, by which our own input leads to a change in the mechanisms of an unconscious output. The fact that the process is hidden from the I-function complicates the potential for such a loop, but it is a conceptual puzzle that warrants further exploration in future neuroimaging research.

 

References

Bleich-Cohen, M., et al. (2009). Diminished neural sensitivity to irregular facial expression in first-episode schizophrenia. Human Brain Mapping, advance online publication.

Fox, E., et al. (2005). Anxiety modulates the degree of attentive resources required to process emotional faces. Cognitive, Affective, & Behavioral Neuroscience, 5(4), 396-404.

Li, W., et al. (2008). Neural and behavioral evidence for affective priming from unconsciously perceived emotional facial expressions and the influence of trait anxiety. Journal of Cognitive Neuroscience, 20(1), 95-107.

McCormick, R. (2007). Facial Expression Test. Retrieved April 13, 2009, from Serendip’s Exchange: Brain and Behavior Web site: /exchange/node/766

Righart, R., et al. (2008). Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. Social Cognitive and Affective Neuroscience, 3(3), 270-278.

Tamietto, M., et al. (2008). Affective blindsight in the intact brain: neural interhemispheric summation for unseen fearful expressions. Neuropsychologia, 46, 820-828.

Tsunoda, T., et al. (2008). Social anxiety predicts unconsciously provoked emotional responses to facial expression. Physiology and Behavior, 93, 172-176.

Williams, M.A., et al. (2008). Stimulus-driven and strategic neural responses to fearful and happy facial expressions in humans. European Journal of Neuroscience, 27, 3074-3082.

Comments

Paul Grobstein

distinguishing facial expressions based on feeling?

"much of what we recognize as conscious perception of facial expressions is, in fact, dictated, primed, and otherwise determined by psychosocial, emotional and anatomical factors before it reaches the I-function"

In short, we distinguish between a smile and a frown at least in large part because of how they make us feel, without our knowing why they make us feel that way?  There's a very interesting general pattern of this kind, and it does indeed motivate the kinds of questions you suggest about possible routes by which the I-function might or might not be able to influence relevant parts of the cognitive unconscious.