A Neural and Behavioral Science Story of Morality

By Ian Morton
I. Introduction

While morality has typically been the topic of philosophical and theological discourse, the cognitive sciences are beginning to offer new and valuable insight into the nature of human moral systems. Cognitive neuroscience has begun to describe what has been labeled the “social brain,” while evolutionary and primate biology has attempted to describe the origins of human moral systems. Taking into further consideration the contributions of social psychologist Jonathan Haidt, a new picture of morality is beginning to emerge. While many are apt to describe human morality as a system imbued with divine insight and influence, morality may instead have its origins in human biology, specifically in the evolution of the human brain, and may thereby be amenable to scientific description. It was with this intention in mind that we opened the topic of morality to discussion in our senior seminar on neurobiology and behavioral science.

II. Background

Moral Dilemmas: Questioning Moral Reasoning

Moral dilemmas have been a valuable tool for studying human morality, specifically for examining the nature of moral judgments. Moral judgment is typically believed to stem from moral reasoning, a rational process of practical deliberation between pros and cons with an active consideration of social norms. Studies that make use of moral dilemmas, however, are beginning to show that morality may not be as rational as previously assumed. For example, the trolley dilemma presents subjects with a hypothetical situation in which five railroad workers are working on the tracks as a trolley approaches. One can either pull a lever to divert the trolley onto another track where a single worker is located, saving the five workers but killing the one, or do nothing. A “rational” cost-benefit analysis would dictate that one should pull the lever, as five lives outweigh one, and studies have shown that most people (9 out of 10) say they would pull the lever (7). In a second scenario, one stands on a footbridge over the same tracks, upon which five workers are making repairs as a trolley approaches. Standing next to the subject is another man, whom one can push off the bridge, stopping the train and thereby saving the five workers’ lives but killing the one man; alternatively, one can do nothing. This second scenario presents the same cost-benefit analysis, according to which a rational account of moral judgment would predict that one would choose to push the man off the bridge. However, it has been observed that most people (9 out of 10) say they would choose to do nothing in the second scenario (7).
The observations afforded by moral dilemmas such as the trolley problem reveal a general inconsistency within the moral domain. That is, moral judgments do not appear to follow a distinctly rational model. Further, when pressed to explain why they had made the decisions they did in each scenario, subjects were unable to articulate a reason for their judgments, often saying that something just felt wrong (7). This “moral dumbfounding” effect has been observed in numerous studies, suggesting that it may be a universal phenomenon. Such observations have provoked researchers such as Joshua Greene and Jonathan Haidt to question the previous understanding of morality, which placed reasoning at the foundation of moral judgment. Both Haidt and Greene propose that rationality has been overemphasized, and that affective intuitions likely play an equal if not greater role in guiding moral judgments. Haidt argues that moral reasoning plays a minimal role in the formation of moral judgments, rarely having any direct influence; instead, he argues that (affective) “moral intuitions” or gut feelings are the principal source of moral judgments (5). Greene, however, is willing to grant reasoning a more significant role in morality, but proposes that moral reasoning constitutes one of two distinct moral systems/processes in the brain (3). The second system is that of social-emotional responses or affective intuitions, and it is from a complex interplay of these two systems that moral judgments emerge (3).

The Evolution of Human Morality: A Universal Grammar?

Greene has drawn his hypothesis from fMRI observations of subjects who were presented with moral dilemmas such as the trolley problem, which appear to show that different brain regions play relatively greater roles in specific kinds of moral judgments. Greene describes these in terms of impersonal versus personal moral contexts (3), but described more simply, one system (the personal) can be related to affective responses, while the other (the impersonal) is related more to rational deliberation and the ability to articulate in language what is right and wrong (7; 10). Evolutionarily (7) and ontogenetically (1), the affective system likely precedes the rational system. Accordingly, evolutionary biology (and developmental psychology) could offer valuable descriptions of morality. Of particular interest here is the possible evolutionary path of morality.
Morality appears to play a general role in restricting selfishness and thereby promoting group cohesiveness and cooperation. Taking into consideration the Social Brain Hypothesis (SBH), such an account of morality has some coherency from an evolutionary perspective. The SBH proposes that primate brain evolution was driven by the need to meet the cognitive demands of social life, including the need to maintain group cohesiveness over time (2; 9). Group cohesion is difficult to maintain, as individuals must share resources such as food, living space, and potential mates, which often gives rise to conflicts within the group. Despite the potential for such conflicts, group life and cooperation yield valuable fitness benefits for the group and for individuals (2; 6; 9), and members of a group must therefore learn to suppress immediate selfish/personal gains in anticipation of the greater fitness benefits afforded through group life (2; 9; 10). Accordingly, a system of “morality,” one which promotes group “values” over selfishness, may have proven beneficial to survival and may therefore have been evolutionarily favored.
While moral systems appear to be largely subject to cultural relativism, an evolutionary account of morality suggests that there is a genetic component to the development and acquisition of moral systems. That is, perhaps there exists an underlying “universal grammar” of morality (11): a genetic predisposition for acquiring the moral system of one’s surroundings. Such a grammar may stem from an evolutionary selection for individuals who are able to understand social rules so as to maintain group stability over time, from which human moral systems emerged. Importantly, such a universal grammar as described would be a system for promoting in-group cohesion, thereby leaving open the possibility for out-group amoral behaviors (11).

III. Central Questions

With these observations in mind, Rebecca and I had several topics we wanted to present to the group for discussion. First, could there exist a universal moral grammar? That is, drawing a parallel to Chomsky’s notion of a universal grammar underlying one’s acquisition of a culturally relative language, could a similar foundation exist for the development of moral systems? Second, if such a universal grammar exists, what are its characteristics and implications? Such a grammar would be intrinsically tied to genetics, and therefore to the evolution of the human brain, which leads to the third question: is a biological and evolutionary approach to morality valid? From this notion arise further questions as to whether animals possess moral systems, or at least the beginnings of human morality. Finally, we wanted to ask, “what is ‘morality’?” How do we define morality? Is morality distinct from its components (e.g. empathy, altruism, etc.)?

IV. Group Discussion

From the presentation of the aforementioned observations a rich conversation emerged. Some of the major themes we touched on were animal morality and the evolution of morality, cultural relativism and subjective morality, the roles of reason and emotions, and how we each understood our own moral judgments. Surprisingly, many people in the class seemed comfortable with the suggestion that animals possess moral systems, even though animals are not typically thought to possess consciousness. Perhaps this is a function of a Haverford/Bryn Mawr classroom, as I was not expecting to encounter such openness to this idea. The predominant theme within this topic was the belief that animals may possess moral systems, but that such systems are distinct from those of humans. Many students warned against attempts to anthropomorphize animal moral systems, suggesting that we can only attempt to understand animal morality through the lens of our own experiences and cognitive predispositions, and that we therefore cannot fully understand the nature of animal moral systems.
While many students seemed to accept the possible existence of animal moral systems, others were inclined to differentiate between human “morality” and behavioral examples of altruism, empathy, and adherence to social rules. These stipulations are well founded and will be addressed later on, but briefly: perhaps animal moral behaviors (e.g. altruism) are sublayers of morality rather than behaviors completely distinct from morality. I am inclined to take an open-ended stance in support of the existence of animal morality, as I do believe that other species such as chimps may have in place “social rules” which are implemented as a means of preserving the group. However, I would add that, considering the evolutionary trajectory implied in the work of researchers such as Greene and Haidt, it seems possible that animal moral systems are largely maintained by affective responses. Importantly, affective responses need not be the same as human emotions, but could instead be body states that give experiences a quality of good or bad.
Through discussing animal morality, the conversation also turned to the evolution of morality and therefore to a possible genetic component. Morality is largely thought to be socially constructed, and hence subject to cultural relativism, and it seems that most of the class was inclined to adopt this view. Additionally, a majority of our classmates were inclined to discuss the subjective nature of morality, a view that led many to lean away from the idea that there is a genetic component of morality. Subjectivity and relativity are important considerations to keep in mind, as they point to the unlikelihood of ever uncovering a set of categorical or universal moral rules. Perhaps, however, the genetic grammar of morality does not instill in humans a set of moral principles, but rather serves as a predisposition for acquiring moral systems. While moral rules do not appear universal, the existence of moral systems can be observed across cultures around the world. This topic poses interesting implications for the understanding of morality, and I hope such a conversation will continue.

V. Remarks

Taking into consideration both the background information covered and the themes which emerged in group discussion, it seems that further biological inquiry into the nature of morality is a worthy endeavor. Specifically, it may behoove researchers to describe morality as a spectrum or as multilayered, as opposed to an all-or-nothing phenomenon (humans have consciousness, and therefore only humans have moral systems). From such an approach, one could begin to investigate moral systems in other species with an appreciation that they may possess “morality” of a different layer than human morality. One need not anthropomorphize animal moral systems (considering them in terms of rationality, introspective awareness, or human emotions), but rather appreciate a more basic system which maintains group “values” or well-being over selfishness.
Additionally, the notion of a “universal grammar” of morality warrants further inquiry. Not only would it be valuable to inquire into the existence of such a predispositional system, but it would be of additional worth to consider the developmental, ethical, philosophical, and sociological implications of such a system. How does this grammar affect the affective and rational systems? Would such a system give reason to doubt the existence of a universal moral code? Would society at large be willing to accept an evolutionary story of morality, or is religion too important a system for societal comfort and stability? It seems worthwhile for our community of inquirers to continue this conversation.


1. Cushman, F. (2007). Crime and punishment: Distinguishing the roles of causal and intentional analyses in moral judgment. Cognition, in press.
2. Dunbar, R.I.M., & Shultz, S. (2007). Evolution in the social brain. Science, 317, 1344-1347.
3. Greene, J., & Haidt, J. (2002). How (and where) does moral judgment work? Trends in Cognitive Sciences, 6(12), 517-523.
4. Gladwell, M. (2007). Blink: The Power of Thinking Without Thinking. New York: Back Bay Books.
5. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814-834.
6. Miller, G. (2007). All together now—pull! Science, 317, 1338-1340.
7. "Morality." Radiolab. WNYC, New York. 28 Apr. 2006. Accessed 2 May 2008.
8. Pinker, S. (2008). The moral instinct. The New York Times, January 13, 2008.
9. Silk, J.B. (2007). Social components of fitness in primate groups. Science, 317, 1347-1351.
10. Wade, N. (2007). Is 'Do Unto Others' written into our genes? The New York Times, September 18, 2007.
11. Wade, N. (2007). Scientist finds the beginnings of morality in primate behavior. The New York Times, March 20, 2007.