Full Name:  Sophia Louis
Username:  slouis@haverford.edu
Title:  Electricity: Good or Bad?
Date:  2005-05-04 00:15:03
Message Id:  15031
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Is electricity a harmful force to our bodies? Our brains send electricity down nerve cells, toward muscles for example, to release neurotransmitters, but how does exposure to external electricity affect the nervous system? It is evident that large doses of electrical input are harmful to the body, but what are the effects of smaller doses? Three sources of external electricity that can be inflicted upon the body are lightning, electroconvulsive therapy (ECT), and stun guns. Injuries caused by lightning tend to affect any or all parts of the nervous system, specifically the central nervous system (CNS) and the autonomic nervous system, a subdivision of the peripheral nervous system (PNS) (7). The peripheral nervous system consists of sensory neurons, which run from stimulus receptors and inform the central nervous system of stimuli, and motor neurons, which run from the central nervous system to the muscles and glands (called effectors) that take action. The peripheral nervous system is subdivided into the sensory-somatic nervous system and the autonomic nervous system. Lightning injuries tend to affect the autonomic nervous system.

The central nervous system (which includes the brain and spinal cord) is made up of two basic types of cells: neurons and glia. Glial cells are the support cells of neurons. They outnumber neurons by a substantial amount (some scientists have estimated the ratio to be as large as nine to one), but in spite of their smaller numbers, neurons are the key players in the brain (3). Everything we think, feel and do would be impossible without the work of neurons and their support cells. Neurons are information messengers. They use electrical impulses and chemical signals to transmit information between different areas of the brain, and between the brain and the rest of the nervous system. Neurons have three basic parts: a cell body and two kinds of extensions, axons and fingerlike dendrites. Within the cell body is a nucleus, which controls the cell's activities and contains the cell's genetic material. The axon looks like a long tail and transmits messages from the cell, while dendrites look like the branches of a tree and receive messages for the cell. Neurons communicate with each other by sending chemicals, called neurotransmitters, across a tiny space, called a synapse, between the axons and dendrites of adjacent neurons.

When a neuron is stimulated -- by heat, cold, touch, sound vibrations or some other message -- it begins to generate a tiny electrical pulse. This electrical and chemical change travels down the axon, the full length of the neuron. But when the pulse reaches the terminals at the end of the axon, it needs help getting across to the dendrites of the next neuron. That is where chemicals come in: the electrical pulse triggers the release of chemicals that carry the signal to the next cell. This process repeats itself continually, unless interrupted (3).
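
The stimulate-and-fire behavior described above can be sketched with a toy "leaky integrate-and-fire" model, a standard textbook abstraction rather than anything from the cited sources; all parameter values below are illustrative choices, not physiological measurements:

```python
# Toy leaky integrate-and-fire neuron: the membrane potential integrates
# input, leaks back toward rest, and "fires" when it crosses a threshold.
# All constants are illustrative, not measured physiological values.

def simulate(input_current, steps=200, dt=1.0,
             v_rest=-70.0, v_threshold=-55.0, v_reset=-75.0,
             leak=0.1, gain=1.0):
    """Return the list of time steps at which the model neuron fires."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # leak pulls v back toward rest; the input pushes it up
        v += dt * (-leak * (v - v_rest) + gain * input_current)
        if v >= v_threshold:   # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset        # reset after firing
    return spikes

# A weak stimulus never reaches threshold; a strong one fires repeatedly.
weak = simulate(input_current=0.5)
strong = simulate(input_current=3.0)
```

The point of the sketch is the qualitative behavior in the text: below some stimulation level nothing happens, while sufficient stimulation produces a train of discrete pulses.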

Serious lightning injuries affect about 1,000 to 1,500 people each year. Approximately 100 to 600 of them die, a mortality rate of roughly 25 to 32%. Of the survivors, 74% sustain permanent injuries. These statistics indicate that lightning causes more deaths than any other natural phenomenon, including floods, hurricanes and tornadoes. The current in a lightning bolt can be as high as 30,000 amperes at 1,000,000 or more volts, and it lasts only about 1-100 milliseconds. Lightning strikes in several ways. The most severe is a direct strike, either on the victim or on some object that he or she is holding, e.g. a golf club, tripod, or umbrella. A "side flash" occurs when lightning hits a nearby object and jumps to the victim. Ground currents strike the victim when lightning hits the ground nearby and the current spreads to the person (7).
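
A quick back-of-the-envelope check of the figures above is possible; taking the midpoints of the quoted ranges is an assumption of this sketch, not something the cited source states:

```python
# Rough consistency check on the quoted lightning statistics,
# using midpoints of the quoted ranges purely for illustration.
injuries = (1000 + 1500) / 2    # ~1,250 serious injuries per year
deaths = (100 + 600) / 2        # ~350 deaths per year
mortality = deaths / injuries   # 0.28, inside the quoted 25-32% range

survivors = injuries - deaths           # ~900 survivors
permanently_injured = 0.74 * survivors  # ~74% of survivors, per the text
```

With midpoint figures the implied mortality rate is 28%, consistent with the 25-32% the paper quotes.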

Injuries to the CNS as a result of lightning are common (7). Transient confusion, paralysis and amnesia are likely. Coagulation of the brain, collections of blood surrounding the brain, and bleeding within the brain are possible with direct strikes. Swelling of the brain is another outcome. Paresthesias (pins-and-needles sensations) may affect areas of the victim's body. Amnesia, movement disorders, dementia and decreased reflexes may occur. Paraplegia can occur secondary to brain or spinal cord injury from lightning strikes. There may also be neuropsychiatric complications such as depression, anxiety, memory deficits, and post-traumatic stress disorder. When the brain is affected, the person often has difficulty with short-term memory, coding new information and accessing old information, and multitasking, along with distractibility, irritability and personality change. This outside source of electricity interrupts and tampers with the electrical signals being sent throughout the body. The electricity forced into the body confuses the brain and forces the victim to respond to too many signals simultaneously (8).

Victims of lightning strikes may appear slow because it takes longer for them to process information. They become easily distracted because they cannot filter out irrelevant stimuli while attending to the relevant stimulus. They seem to lose some of their memory because while they are focusing on point A, they do not have the processing space to think about point B simultaneously. These victims seem inattentive because when the amount of information they are given exceeds their mental capacities, they cannot take it all in. Throughout the semester, we have used the example of the brain being similar to a computer. To use this analogy again: if an electric shock were sent through a computer, the outside would probably look unharmed (similar to a photo or X-ray of the person), and the computer boards on the inside would probably look fine as well, not fused or melted (like a CT scan or MRI for the person), but when you boot up the computer it would have difficulty accessing files, making calculations, printing, etc., similar to a person with a brain injury who has short-term memory problems, difficulty accessing and coding information, and difficulty organizing output. Victims of lightning strikes often enter a state of depression resulting from frustration. They become frustrated with their limited capabilities and their inability to function with ease and comfort, both physically and mentally (8).

Electroconvulsive therapy (ECT) has been used for over 60 years as a treatment for mental health disorders, most commonly depression (6). ECT is useful for patients with significant depression, particularly those who cannot take or are not responsive to antidepressants, who have severe depression, and/or who are at high risk for suicide. Research shows that ECT has been most effective in cases where antidepressant medications do not provide sufficient relief of symptoms. ECT relieves depression within 1 to 2 weeks after beginning treatments. Electroconvulsive therapy involves delivering electrical shocks to the brain. This current is used to provoke convulsions/seizures and has a beneficial impact on patients with depression. The seizures that occur as a result of the shock lead to a massive neurochemical release in the brain. Scientists still do not know exactly why ECT works -- except that it has less to do with the electrical jolts than with the seizure they induce (1). Electricity runs through our brains already; that is what causes neurons to fire and discharge neurotransmitters, which then carry the impulse across the synapse to the next cell. "Brain cells are set up in oscillating circuits that are firing regularly; what you are trying to do when you induce a seizure is get them all to fire in synchrony" (2). Many psychiatrists believe an ECT seizure increases the brain's sensitivity to the neurotransmitter serotonin, which matters because people deficient in serotonin are more prone to depression. Researchers believe that ECT raises the brain's seizure threshold, causing an anti-convulsive effect that may also involve increased serotonin levels (1).

As you can see, the sources of external electricity mentioned above have proved to be both good and bad. The source of external electricity that is most interesting is found in a stun gun. Stun guns do not emit natural electricity, nor are they used for medical procedures; they are used for protection against attackers. They are actually considered bio-effect weapons, which cause electro-muscular disruption (EMD).
There are three types of stun guns/stunning devices available on the market: Static Charge devices, which use electrical wattage with a static charge to disrupt localized muscle groups; Phase Induction devices, which use phase induction to better facilitate the delivery of a static charge; and T-Wave devices, which use electro-muscular disruption (EMD) technology. An electro-muscular disruption signal affects the central nervous system and all of its signals to the body (4).

The most powerful stun guns are the T-Wave guns, such as the Air Taser. These are the most efficient at causing electro-muscular disruption. T-Wave stun guns use 18-26 electrical watts, which completely override the central nervous system and directly take control of the skeletal muscles. This external electrical power causes an uncontrollable contraction of the muscle tissue, allowing the Taser to physically debilitate a target regardless of pain tolerance or mental focus. These stun guns are specifically classified as EMD weapons, designed to stop even the most aggressive attacker. The gun does not merely interfere with communication between the brain and muscles; Taser EMD systems directly tell the muscles what to do, and in almost all cases the target's muscles contract so much that he or she is forced to the ground. The Taser has not been proven to cause any permanent damage or long-term after-effects to muscles, nerves or other body functions. The basic idea of a stun gun is to disrupt the body's communication system. The charge carries a great deal of electrical pressure (voltage) but not much intensity (current). Since the voltage is high, the charge passes through the body rapidly, but at only about 3 milliamps, which is not intense enough to damage the attacker's body. I do contend that if these stun guns are used recklessly, for example if one is applied for an extended period of time, permanent damage may be done. A January 1987 Annals of Emergency Medicine study reported that Taser technology leaves no long-term injuries, compared with 50% long-term injuries for gunshot wounds, while still successfully causing EMD. Electro-muscular disruption occurs when the central nervous system is completely overridden: the device overrides the CNS and takes control of the skeletal muscles.

Stun guns key into the nervous system and interrupt the many electrical signals from the CNS. The external electricity interrupts the neurological impulses that travel through the body to control and direct voluntary movement (5). When an attacker's neuromuscular system is overwhelmed by the stun gun, disorientation and loss of balance occur instantly. Each of the aforementioned devices emits a different charge. Static charge devices emit charges from 80,000 to 500,000 volts at 5-20 electrical watts (4). This charge is strong enough to take down any attacker, but if used correctly it is not lethal. When the Static Charge device is used, the attacker receives the shock of his or her life. They are immediately forced to stop, are left feeling dazed for up to 15 minutes, and suffer from muscle spasms, all as a result of a brief shock of 1-2 seconds. Research shows that a "5 second shock can leave an attacker feeling as if he or she fell out of a two-story building and landed on a concrete sidewalk".
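
The quoted wattages and voltages can be related through the basic power law P = V·I. The sketch below simply plugs in the manufacturer-style figures from the text to illustrate why the delivered current stays small despite the enormous voltage; it is an arithmetic illustration, not a model of any actual device:

```python
# P = V * I  =>  I = P / V. With very high voltage and modest power,
# the implied average current is tiny. The numbers are the ranges
# quoted in the text, used only for illustration.
def implied_current_mA(power_watts, volts):
    """Average current in milliamps implied by a power/voltage pair."""
    return power_watts / volts * 1000.0   # convert amps to milliamps

low = implied_current_mA(5, 500_000)      # 0.01 mA
high = implied_current_mA(20, 80_000)     # 0.25 mA
```

Even the largest implied average current here is a fraction of a milliamp; the ~3 milliamps quoted earlier is presumably a peak pulse figure, which this average-power arithmetic does not capture.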

Merely touching someone with a stun gun immobilizes them for several minutes. What happens to the electrical signals that are being sent to the attacker's muscles? The energy stored in the stun gun is emitted into the attacker's muscles at a pulse frequency that is essentially too much to handle. This outside energy forces the muscles to do a great deal of work too rapidly, causing a state of muscular confusion. This is known as the rapid-work cycle, which instantly depletes the attacker's blood sugar by converting it to lactic acid. The attacker loses all the energy in his muscles, and his body is no longer able to function properly. Stun guns are designed to tamper with one's nervous system. The person becomes temporarily disabled, and it takes some time for the nervous system to return to equilibrium.

Electricity is one of the most essential forces in the body; it is necessary for just about everything we do. Yet we tend to think of electricity as a harmful force to our bodies. If lightning strikes you, or your blow-dryer falls into your bath, or you stick your finger in an electrical outlet, the current can permanently harm or even kill you. The notion that "everything should be done in moderation" holds true even in neuroscience. In large doses, electricity is potentially harmful, but in smaller doses it may be harmless.



1) Electroconvulsive therapy: Treatment for depression. Mayo Foundation for Medical Education and Research (MFMER). Downloaded 4-14-2004.

2) Barklage, N. and Jefferson, J.W. Electroconvulsive Therapy – A Guide. 2002. Madison Institute of Medicine, Inc., Madison, WI 53717

3) Schwartz, James and Jessell, Thomas. 1996. "Essentials of Neural Science and Behavior". McGraw-Hill, USA.

4) Protecting Yourself

5) Spray or Stun Information

6) History of Electroconvulsive Therapy

7) Brain and Nerve Injury

8) Lightning Safety Awareness

Full Name:  Kristin Giamanco
Username:  kgiamanc@brynmawr.edu
Title:  Homosexuality: Born or Made?
Date:  2005-05-04 20:46:58
Message Id:  15042
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Are we born the way we are? Or are we molded and shaped by environmental factors and our upbringing? This nature versus nurture debate has been one that we have discussed extensively in class. In order to resolve some of the questions that I have concerning this matter, I decided to dedicate this paper to analyzing nature versus nurture in terms of homosexuality. This paper will examine both schools of thought, interjecting my own criticism of the evidence and ideas set forth. Furthermore, at the end, I will decide which body of evidence is most valid and attempt to determine if homosexual individuals are born or made.

Some of the earliest studies of homosexuality were performed by Alfred Kinsey of Indiana University in the late 1930s. Kinsey wanted to determine how many adult males engaged in same-sex sexual behavior, in hopes of understanding why certain individuals were homosexual while others were straight. Through this survey, he found that 30% of males had experienced at least one orgasm during a homosexual act. From these results, the Kinsey Scale of Sexuality was born: all individuals were placed on a spectrum ranging from 100% heterosexual to 100% homosexual (1). These results hardly seem noteworthy today, because we now know that individuals may exhibit a range of sexual behaviors, but his study helped individuals place themselves on a continuum and define themselves sexually.

Evelyn Hooker created the first psychological tests designed to determine whether homosexuality was a socially created characteristic. These tests were conducted in 1957 under a grant from the National Institute of Mental Health (NIMH) and included the Rorschach, the Thematic Apperception Test (TAT), and the Make-A-Picture-Story Test (MAPS). When psychologists analyzed the results from this battery of examinations, they concluded that there was no correlation between social determinism and homosexuality (1). However, I feel that these tests were not necessarily designed to investigate the relationship between social behavior and homosexuality. Personally, I feel that these two groups of individuals most likely will not differ in their perceptions and picture analysis any more than individuals within the same group would. Therefore, how can psychologists control for variation within the heterosexual and homosexual groups? Perhaps some individuals who classified themselves as straight were gay, and vice versa. Hence, I believe these tests do not really disprove the nurture argument in any way.

As a result of these tests, the American Psychiatric Association eliminated homosexuality from its Diagnostic and Statistical Manual of Mental Disorders in 1973. Later, in 1975, the American Psychological Association (APA) announced that homosexuality was not a mental disorder, and in 1994 it acknowledged that homosexuality was neither a mental illness nor a moral depravity (1).

In 1984, a group of researchers at the State University of New York at Stony Brook corroborated a German study which indicated that male homosexuals differ from their heterosexual counterparts, as well as from heterosexual females, in their response to injections of estrogen (this will be further fleshed out later in the paper) (2). In 1990, D.F. Swaab of the Netherlands Institute for Brain Research found that homosexual men had a larger suprachiasmatic nucleus (SCN) than their counterparts. This structure is found in the hypothalamus, which has been implicated in playing a role in sexual drive and function (1), (2). Laura Allen and Roger Gorski, both scientists at the University of California at Los Angeles (UCLA) School of Medicine, determined another difference in brain structure between these two groups of individuals. The brains were examined during autopsies in these studies. Allen and Gorski found that the differences originated in the anterior commissure (AC); more specifically, they concluded that the AC was 34% larger in homosexual males than in heterosexual males (3). The SCN and AC do not have a direct role in sexual drive and function. Rather, the SCN is responsible for establishing circadian rhythms and regulates the body over the 24-hour day (4), while the AC is a structure that connects the left and right halves of the brain (3), (4). Therefore, it would be improbable that these differences arose due to sexual practices. Rather, researchers concluded that these differences were innate (1). As a budding biologist and researcher, this theory seems extremely plausible to me. These differences in brain structure would not have arisen due to sexual practice; instead, it seems that these differences were already present and then affected individuals in terms of their sexuality.

The results obtained which determined the differences in the AC region for homosexual and heterosexual men were also found by another research group. These scientists published their results in the Proceedings of the National Academy of Sciences of the United States of America (PNAS) and also reported a larger AC structure in women and homosexual men as compared to heterosexual men. The sample population included 34 homosexual men, 75 heterosexual men, and 84 heterosexual women (3). However, these results may not be as valid because the individuals under study were mostly men who died from AIDS. Since the investigators did not take this variable into consideration when making their claims, this may detract from their results.

One of the landmark studies performed on autopsied brains was done in 1991 by Simon LeVay, a neurobiologist at the Salk Institute for Biological Studies in San Diego. His test involved the examination of 41 brains: 19 from self-declared homosexual men (mean age 38.2), 16 from presumed heterosexual men (mean age 42.8), and 6 from presumed heterosexual women (mean age 41.2). LeVay discovered that, on average, a cluster of cells within the hypothalamus was half as large in the brains of the homosexual men as in the heterosexual men. This conglomeration of cells was identified as the third interstitial nucleus of the anterior hypothalamus (INAH3). As stated above, the hypothalamus is a center that directs emotions and sexual drives. Therefore, LeVay believed that this difference in hypothalamic structure was critical in distinguishing these two disparate groups of individuals, thereby providing ample support for the nature component of this hotly contested topic (2). However, what LeVay failed to account for in his analyses diminishes his results. LeVay’s tests were performed on individuals who died of AIDS-related illnesses; perhaps the noted structural changes were induced by the illnesses the individuals incurred (1). While LeVay’s results seemed promising and exciting, this bias must be taken into account before fully accepting his theories.

Research has also been done on other organisms, such as sheep. On November 5, 2002, BBC News reported that American researchers had found that the preoptic hypothalamus was twice as large in male sheep and male humans as in females. The cluster of cells was identified as the sexually dimorphic nucleus, and these structures also contained twice the number of cells in the male animals. It was also found that between 6 and 10% of rams were attracted to males rather than females, and in these rams the bundle of neurons making up the sexually dimorphic nucleus was smaller, resembling that of ewes. With this information, scientists hope to parlay these findings into studies done on humans (5). This study provided firm evidence that there is a biological component to sexuality in sheep and perhaps in other, more sexually complex systems, such as humans. In the future, it would be interesting to determine whether these results hold in humans.

Neuroendocrine studies have also been carried out, exploring the effect of hormone levels on neuronal development. In studies done with rats, it was found that the brain is basically female throughout development unless the animal is exposed to certain levels of hormones. These hormones can then cause certain structures in the male brain to grow up to 500% larger than those in the female rat brain (3). This phenomenon has been extensively studied in rats at Stanford University as well. Researchers there concluded that sexual orientation is determined primarily by early, most likely prenatal, levels of androgen acting on neural structures (1). Androgens were first discovered in 1936 and are steroid hormones responsible for the stimulation and development of masculine characteristics in vertebrates. The most well-known androgen is testosterone. As a family, these steroid hormones are accountable for the development of accessory male sex organs as well as secondary male characteristics (4). Therefore, if developing rats are highly exposed to such hormones, they will develop as males, and if they are exposed to low levels of androgens, they will develop as females. Researchers also found that if female rats were exposed to high levels of these hormones, they exhibited high levels of aggression as well as increased sexual drive toward other females, mimicking the behavior of male rats. Similarly, if male rats were exposed to deficient levels of these hormones, they were found to be sexually submissive and engaged in activities with other male rats (1). Therefore, these results, in conjunction with the aforementioned studies on brain structures, indicate to me that our sexual preferences are established during development. If this is so, it does not make sense that someone can be raised to be homosexual or heterosexual; instead, we are packaged in such a way that these characteristics are inherent.
If we were raised to be straight or gay, then I would expect hormone levels to be uniform in all individuals and to change only as the individual grows into an adult; however, this is not the case.

In accordance with the aforementioned neuroendocrine studies, researchers in the United Kingdom, the United States, and Germany have reported that prenatal exposure to low levels of testosterone increases the probability of a man becoming homosexual (3). To me, these data seem a bit simplistic, and the notion that this represents just a probability further detracts from the results. Also, if a woman is exposed to low levels of estrogen and high levels of testosterone, will she be more likely to become a lesbian? I found it quite intriguing that most of the published research on homosexuality has involved heterosexual women, frequently neglecting homosexual women. This topic interests me; I wonder if similar results would be noted if female subjects were studied as well. Do researchers think that if they find evidence on homosexual men, then the results should be similar for homosexual women? That does not seem to be the case, at least in my mind. Therefore, I think further research should be done on female subjects.

In 1987, another scientist, Lindesay, studied 94 homosexual men and 100 heterosexual men. Through this study, it was found that 14% of the homosexual men were left-handed, compared to 9% of the heterosexual men (3). Of all the experiments and results that I researched for this paper, I think this finding is the most coincidental. Statistically speaking, these results do not seem significantly different. Perhaps the population pool used in this experiment happened to contain a large number of left-handed individuals. I find it hard to believe that the hand a person writes with has any bearing on their sexuality. Nonetheless, this result has been reported elsewhere by Sandra F. Witelson of McMaster University in Hamilton, Ontario. Witelson was interested in looking at handedness in terms of development, and in her study she examined homosexual women. Through her investigations, she found that lesbians had a higher incidence of left-handedness, as did gay men (2). Therefore, this seems to be more than a trend, since the researchers were studying different pools of individuals. However, I do not think this can really be an indicator of one’s sexual preferences.
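
The intuition that 14% of 94 versus 9% of 100 is not a statistically significant difference can be checked with a standard two-proportion z-test, implemented here from scratch; the counts are rounded from the quoted percentages:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 14% of 94 gay men is ~13 left-handers; 9% of 100 straight men is 9.
z = two_proportion_z(13, 94, 9, 100)
# z is about 1.06, well below 1.96, the usual 5% significance cutoff,
# so the difference could plausibly be sampling noise.
```

This supports the reading in the text: with samples this small, a 14% vs. 9% gap does not reach conventional statistical significance on its own.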

Dr. Kenneth Zucker of the University of Toronto’s Centre for Addiction and Mental Health (CAMH) provided empirical evidence on this topic. Drawing on 50 years of data, Zucker found that lesbians have a 91% greater chance of being left-handed or ambidextrous than their heterosexual counterparts (6). Therefore, it seems that handedness cannot be used as an indicator; rather, these data highlight another set of differences between gay and straight individuals. The reasons for the difference in handedness, and its role in sexual preference, have yet to be determined; thus, this area represents another potential research topic.

Another interesting study I found while researching concerned body development in homosexual men and their counterparts. The gay individuals had less subcutaneous fat and smaller muscle and bone development. Their bones were longer in comparison to their bulk, and they had narrower shoulders in relation to pelvic width. Moreover, there was a noted diminishment of muscle strength in the homosexual men. Creatine and 17-ketogenic steroid levels were lower in these individuals (3); creatine is a naturally occurring compound, derived from amino acids, which serves to supply energy to muscle cells (4). Researchers also noticed a lower androsterone-etiocholanolone ratio in homosexual men (3). Androsterone is an androgen, a male sex hormone produced in the testes, responsible for the development of secondary sex characteristics. Etiocholanolone is a metabolite of testosterone and androstenedione, both hormones produced in mammalian species (7). There were also elevated levels of 11-keto-etiocholanolone in gay men. What I found particularly intriguing was the fact that homosexual men had lower levels of triglycerides, phospholipids, cholesterol, and beta-lipoproteins (3). Are these differences due to environmental or lifestyle factors? While these data might indicate that homosexuals are born different, one cannot ignore the possibility that these findings might be attributable to lifestyle choices or environmental factors. Therefore, I feel that these results in and of themselves cannot be used solely to confirm that homosexuals are born rather than made.

However, there has been more convincing data obtained from identical and fraternal twin studies. Fraternal twins carried in the same womb and reared in the same household have a one-in-four chance of being gay, while identical twins have a one-in-two chance of being gay. For example, if one identical twin is gay, the chance that his brother will be gay is 50% (3). These facts indicated to researchers that perhaps there was a “gay gene.” Both fraternal and identical twins were raised in the same household, but each set of twins had a different probability in becoming gay, which then demonstrated to researchers that nurture is not the primary factor at play here.

J. Michael Bailey and Richard Pillard furthered the aforementioned twin results by examining monozygotic (identical) twins, dizygotic (fraternal) twins, and non-related adopted brothers. It was found that 52% of monozygotic twin pairs both identified themselves as homosexual, while 22% of dizygotic twin pairs did so, and 5% of non-related adopted brothers both classified themselves as gay (1). Therefore, I feel these results do not solidly support either the nature or the nurture theory on its own. In the future, researchers might opt to investigate identical twins who were separated at birth and determine the percentage of pairs in which both brothers or both sisters were gay. Such a long-term study would enhance our understanding of how the nature versus nurture debate applies.
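
The pattern in Bailey and Pillard's numbers can be made explicit: concordance rises with the fraction of genes the brothers share. Pairing the quoted concordance rates with shared-genome fractions is my own framing here; the shared-genome figures are standard genetics, not from the cited study:

```python
# Concordance rates from the text, keyed by the approximate fraction
# of genes each kind of brother pair shares (standard genetics figures).
concordance = {
    1.00: 0.52,  # monozygotic twins share ~100% of their genes
    0.50: 0.22,  # dizygotic twins share ~50%
    0.00: 0.05,  # unrelated adopted brothers share ~0%
}

# Sorting by relatedness shows concordance rising monotonically with
# shared genes -- the pattern a genetic contribution would predict.
rates = [concordance[k] for k in sorted(concordance)]
```

A purely environmental account would predict roughly equal concordance for both twin types raised in the same household, which is not what the sorted rates show.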

The scientist Dean Hamer postulated that there was a homosexuality gene inherited as an X-linked trait. He examined family trees, noticed a maternal link, and then decided to collect 40 DNA samples from self-proclaimed homosexual men. Through this genetic analysis, Hamer determined that there was remarkable agreement for 5 genetic markers located on the section of the X-chromosome called Xq28. This study was then dubbed the “gay gene study.” Hamer then supplied empirical support by analyzing the statistical probability that the 5 genetic markers would be present on this portion of the X-chromosome by chance. This value was found to be 1/100,000 (1). These quantitative data firmly support Hamer's hypothesis that there exists a gay gene. Further research should be done on a wider scale in order to determine whether this region actually harbors such a gene. Such genetic analyses would lend credence to Hamer's results, and this advancement would strongly support the idea that individuals are born gay rather than becoming gay.
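
One way a figure like 1/100,000 could arise is from independence arithmetic; this is purely an illustrative assumption about the calculation, since the text does not spell it out. If each marker matched by chance in roughly 1 in 10 families, five independent matches would co-occur with probability (1/10)^5:

```python
# Hypothetical reconstruction (illustration only, not Hamer's actual
# calculation): 5 independent markers, each matching by chance with an
# assumed probability of 0.1, jointly match with probability 0.1**5.
p_single = 0.1           # assumed per-marker chance rate
p_joint = p_single ** 5  # = 1e-05, i.e. 1 in 100,000
```

The general point stands regardless of the exact per-marker rate: requiring several independent chance matches at once multiplies small probabilities into a very small joint probability.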

I was next interested in looking at the diametrically opposed argument: that individuals become homosexual as a result of environmental factors or upbringing. Same-sex relationships between males were noted as early as ancient Greece, where Aristophanes believed that men were homosexual in order to seek soul fulfillment. In New Guinea, boys ages 8-15 were frequently on the receiving end of sexual relations with young male warriors, and every adolescent boy in Crete partook in a sexual relationship with a man as a rite of passage (1).

Many of the psychological studies analyze parental and familial dynamics, such as the stereotypes imposed on male and female children. However, there is little biological evidence to support the idea that homosexual and heterosexual children are raised differently, and until this can be confirmed through more solid analyses, these results are not entirely reliable. Also, if these gender stereotypes were enforced, would one not then expect all children to be heterosexual (1)? For example, if a family instills in their daughter that she should like to play with dolls and play house, while they encourage their son to play with cars and trucks, one would then think that these children would most likely develop as heterosexuals. However, this has not proven to be true, as many boys and girls develop into homosexuals even when their parents encourage them otherwise through the enforcement of gender stereotypes.

Within the realm of psychology there exists the Parental Manipulation Theory, which states that one or both parents work to control their offspring in order to promote their own (the parents') evolutionary fitness. This control allows their genes to be passed on to the next generation, ensuring the survival of the parental genes. Psychologists believe that under this theory parents pressure their children to engage in heterosexual behavior to facilitate the passage of their genes into the next generation. The Kin-Selection Theory, by contrast, states that it does not matter how the genes are passed down to the next generation, as long as they are passed down in some form (1). If a child engages only in homosexual relations, however, the main way to raise a child would be adoption, and the original parental genes would then not be passed down. It seems to me that the Parental Manipulation Theory would therefore be the strategy parents are most likely to follow if they are concerned with the transmission of their genes.

Scientists have also pointed out that the brain changes with use over time. For example, as parts of the brain are used, cell death may be induced, as may the establishment of new connections between cells. The thickness of the connections between cells can be altered, as can pruning, the loss of interneuronal connections. Therefore, some scientists and psychologists argue that as an individual grows and develops particular sexual preferences, certain areas of the brain may change as a result of either heterosexual or homosexual behaviors (8). However, I do not believe that neuronal connections and cell death are really affected by whether someone engages in a particular relation with another individual. Rather, the changes enumerated above most likely occur due to trauma, injury, or overall brain development, not sexual activity. Thus, while these scientists and psychologists have tried to argue that being homosexual or heterosexual alters brain organization, I firmly believe that these changes are not induced by one's sexuality.

Two social theorists who have discussed homosexuality at length are David Halperin and Michel Foucault. Halperin believed in the planophysical theory, which holds that homosexuality is a freak of nature, an error. Furthermore, Halperin derived his ideas concerning same-sex behavior from Freud's analysis of the Oedipus complex; thus, he believed that men engaged in sexual relations with other men due to a failure to resolve oedipal issues. In the same vein, he concluded that a weak father and a strong mother contributed to men becoming homosexual (1). However, there is no scientific evidence to support this notion, and even if there were, there are so many exceptions that it seems an antiquated theory. Halperin also believed that homosexuality was a psychological condition, and he divided individuals into three categories: heterosexual men and women, gay men, and lesbians. Lastly, he believed that homosexuality was a symmetrical and equal relationship between two men or between two women (1).

Another theorist, Michel Foucault, came to different conclusions concerning homosexual behavior. He believed that homosexuality as a category was created only about 100 years ago and that there were only two distinctions: homosexual men and women, and heterosexual men and women. Foucault also thought that homosexuality was purely a matter of sexual preference and not a psychological condition, as Halperin had held. He concluded that homosexual relationships were always unequal because there was always a difference in race, age, or educational background (1).

Through examination of the neurobiological as well as the psychoanalytical evidence, I have tried to determine whether homosexual individuals are born or made; that is, I have addressed the nature versus nurture debate that we have spent many class discussions and forum postings exploring. Many of the psychological studies and theories seem outdated and do not provide persuasive empirical evidence, so I am more inclined to believe the nature argument. Given the marked difference in brain organization between homosexual and heterosexual individuals, this seems the most valid and plausible explanation. Brain usage alone could not have altered these structures so much; it seems more likely that the individuals were born this way. Furthermore, many of the psychological analyses were unsubstantiated, providing little or no quantitative data; instead, these studies presented weak theories and tenuous links between homosexuality and upbringing.

It seems appropriate to end this semester with a topic that focuses on one of the fundamental questions raised in the first week of classes: are we born, or made into, the individuals we are today? Within this realm, I opted to investigate homosexuality and nature versus nurture, studying both schools of thought in depth and providing evidence on both sides while assuming a critical role in hopes of determining which theory was most valid. From these examinations, as well as from my semester in Neurobiology and Behavior, I firmly conclude that individuals are born homosexual rather than raised to be. While I am not neglecting environmental factors or upbringing, I feel the main influence must stem from the biological and brain component.


1) AllPsych, a website that discussed "Homosexuality: Nature or Nurture," as published in the AllPsych Journal Online.

2) Science News, "Gay brains: brain feature linked to sexual orientation," as appeared in Science News, August 31, 1991, Vol. 140, No. 9, p. 140.

3) "Gay Men's Brains Found Different," an article which provided insight into the neural differences between homosexual and heterosexual men.

4) Wikipedia, a site used to find definitions and information on various hormones, steroids, and other compounds.

5) BBC News, an article from the BBC News, November 5, 2002, discussing research performed on sheep.

6) "Sexuality At Hand: Research Links Homosexuality to Left-Handedness," an article that discussed sexuality and handedness.

7) Etiocholanolone, a site used to determine what etiocholanolone does in the body and how it is produced.

8) "Biological Research on Homosexuality," a discussion of biological research on homosexuality.

Full Name:  Joanna Scott
Username:  jscott@brynmawr.edu
Title:  Different is Not Wrong: Learning to Listen to People With Autism
Date:  2005-05-05 21:23:01
Message Id:  15052
Paper Text:


Autism is a neurodevelopmental disorder that emerges early in life and can be detected two to four years after birth. It is characterized by deficits in social interaction, communication skills, and imagination. Prior to psychology's adoption of a biomedical model, the widely held belief was that autism resulted from bad parenting; the mother in particular was blamed for being too cold with her child. Although the scientific community has largely moved away from such a stance, the current conceptualization of the disorder is still imperfect. Autism is portrayed in a very narrow sense and defined by its divergence from 'normalcy'. The danger in labeling a pattern of behavior as abnormal is that the people themselves are labeled abnormal, a term which our culture treats with disdain. Rather than being viewed for who they are, they are viewed for who they are not; rather than understood as expressing themselves differently, they are dismissed as being incapable of expression and lacking self-awareness. These assumptions are implicit in the now-popular theory of mind deficit account and have influenced the clinical treatment of autistic individuals. I propose we look at autism in a different light, one that has respect and open-mindedness at its foundation.

Autism's classification as a neurodevelopmental disorder rests on several points. Firstly, autism is manifested similarly in children and adults; there is a continuity of symptoms across developmental stages. People with autism frequently display other symptoms or disorders with a recognized genetic or neurological basis, such as seizures, Fragile X syndrome, or tuberous sclerosis (7). The forebrain limbic system has been implicated in the etiology of the disorder. Specifically, there is interference with the prenatal development of the amygdala, the hippocampus, and the cerebellum, which becomes evident during a phase of synaptic reorganization that usually occurs in the first two years of life (1). The brain structures that appear to be affected in autism correlate with much of the symptomology: the hippocampus is responsible for short-term memory functions, while the cerebellum regulates motor activity and coordination (7). Autism has also been described as an abnormality in socio-emotional development. The amygdala is largely involved in emotional behavior and tends to show smaller neuronal size and increased cell density in autistics (1); many with autism struggle to make accurate social judgments, and this may be the result of fetal damage to the amygdala. Genetics also plays an important role in the etiology of autism: there is an increased incidence of the disorder within families, compared to the general population, and the growth dysregulation hypothesis proposes a genetic defect in brain growth factors in autism.

The term autism was introduced in 1943 by Dr. Leo Kanner. Classic Kanner's autism is today part of a spectrum of disorders known as Pervasive Developmental Disorders (PDD) or Autism Spectrum Disorders (ASD). There are currently five disorders on the spectrum, each characterized by varying degrees of impairment in socio-emotional development. I will use the term autism to refer to Kanner's early infantile autism, but much of this discussion is applicable to all moderate- to low-functioning ASD individuals. Beginning in early childhood, autistics show decreased levels of eye contact and joint attention. They are described as having an "empty gaze" and "withdrawing into their own world" (7). This so-called aloofness also applies to their play, which is less spontaneous and imaginative compared to that of their peers. Autistic children tend to have a limited range of interests that become obsessions; for example, some children may spend hours lining up toys or memorizing the TV guide. These repetitive behaviors, or stereotypies, correspond with an increased need for sameness. Threats to their routine or to their concept of order are anxiety-provoking; some autistic children will notice if one thing in a room has been moved and become very upset. Those with severe forms of the disorder may lack language completely; others show abnormalities in language, such as pronoun reversals, inappropriate use of words and facial expressions, ignorance of social cues, and abrupt changes in topic.

Another common feature of autism is perturbation in sensory perception, which manifests itself as both hyposensitivity and hypersensitivity to various stimuli. Such sensory-motor, perceptual, and autonomic differences occur throughout the autism spectrum, including in high-functioning individuals. McGeer included descriptions of such experiences from people with Asperger's Disorder (a higher-functioning ASD) (6). Here is one account from G. Gerland: "I always had had, as long as I could remember, a great fear of jewelry. If I was made to touch jewelry...my stomach turned over." T. Grandin wrote that she was "scared to death of balloons popping" and that "[her] roommate's hair dryer sounded like a jet plane taking off". The writers describe not only the intensity of the sensory experience, but also the intensity of emotion that accompanies it. Grandin further illustrates the horror these perceptions could bring:

"I hated stiff things, satiny things, scratchy things,
things that fit me too tightly. Thinking about them,
imagining them, visualizing them...goose bumps and chill
and a general sense of unease would follow. I routinely
stripped off everything I had on even if we were in a public
place...I guess I thought I could get rid of them forever!"

Everyday sounds can be so unpleasant as to make them afraid and physically unwell. Not all sensations are as unbearable, however. Grandin, for example, "loved to chew scratchy and gritty textures," like emery boards, the strip from a matchbook, and sugar packets. The descriptions of Gerland and Grandin are particularly striking in their detail and in the realness of these experiences to them. Despite their eloquence and insight, accounts from individuals with any autistic disorder are often dismissed by theory of mind (TOM) proponents as "a symptom of a distorted...self-consciousness" (3).

The theory of mind deficit account has become a hot topic of research and one of the dominant theories of autism. Theory of mind is proposed as a neuro-cognitive mechanism which allows us to attribute mental states and, from this, predict behavior. This includes our ability to understand our own mental states as well as those of the people around us. Another important role of the theory of mind ability is to distinguish between a mental representation of the physical world and a representation of beliefs about the physical world (3). This model distinguishes a first-order representation from a second-order representation: it contends we have mental representations of reality (first-order) and that these are separate and distinct from our beliefs about that reality (second-order). Proponents of theory of mind argue that people with autism have only a first-order representation, that autism is a lack of introspection about our perceptions of reality. They refer to this deficit as being "mind blind". Frith and Happé posed the question "what would a mind without introspective awareness be like" and then came up with several suggestions. They first speculated that behaviors such as inflexibility and unusual sensory responses result from this inability to reflect on environmental experiences. Most critically, the authors state that a person with autism is "without self-awareness" and "lacks self-consciousness" and is therefore "unable to distinguish between her own willed and voluntary actions" (3). Such loaded terms as consciousness and self-awareness are problematic in themselves. Although the TOM theory as it stands does not deny that autistics have mental states, it does hold that people with autism are unable to reflect at all upon those mental states. This calls into question the very abilities which are central to the idea of being human, which I believe is misleading and unfair. This view of autism is not only too general, but also laden with judgment.

We might first question how people with an ASD can write so vividly and so insightfully about their experiences if they lack self-awareness. Frith and Happé answered this by saying that expressive autistics represent a small subset of the diagnosed population and even then, "arrive at a theory of other mind by a slow and painstaking learning process" (3). The mailing list of Autism Network International provides a space for people with autism and related disorders to share opinions on a number of topics, one of which included theory of mind (2). Their comments reflect frustration at not being understood and at the assumptions made by what they call 'NTs' or neurotypical adults. One person asks, "if a lack of theory of mind equals the belief that everyone thinks the way you do" then this applies to how many NTs treat people who are different, like people with autism. This person goes on to say that "problems are caused by the fact that NTs assume [social] knowledge is universal" and that "people can be very nasty" and "automatically assume I'm wrong without bothering to understand what I did". During this discussion, a woman named Elsa came up with this hypothesis of theory of mind:

It is logical that people who think differently will have trouble
understanding other people's thoughts or actions. It seems to me
that this is a matter about whether ToM is valid in a particular
situation or not, not about whether or not someone has it. In which
case ToM works between NT's, and it works between AC's but it fails
when AC's and NT's interact together.

Such comments demonstrate that this person is not only aware of her experience, but also aware that not everyone shares this experience. Elsa identifies the problem as being a failure to recognize differences between people. This is a problem of our culture as a whole, and one that needs to be addressed in both theory and treatment plans for people with autism.

The primary modes of treatment for autism involve behavioral training to help such individuals function in society. At first this goal seems straightforward and indeed helpful, but there are hidden costs. Jasmine Lee O'Neill is a mute autistic who has written about her experiences. Self-stimulatory behaviors, like flapping and rocking, are common stereotypies of autism, and many treatments focus on controlling and eliminating them. O'Neill firmly believes it is wrong to try to take away this simple pleasure from autistics just because it is embarrassing or irritating to much of society. An accomplished individual herself, she admits to having many self-stimulatory behaviors which she loves and enjoys; they provide comfort and relaxation and therefore "must be allowed to be part of the whole person" (6). This is a conflict for therapists, who recognize the realities of the society they are trying to get their patients to function and be accepted in. One method used in some special education programs is self-determination. Self-determination is "acting as the primary causal agent in one's life" and making choices and decisions "free from undue external interference" (5). This focuses on the individual's own preferences and interests, not on what we would make them. In the bigger picture, it may be more important to foster a positive relationship with the patient and to focus on the issues that are most important, such as interpersonal relationships. While speech therapy is often an integral part of treatment, it has not been found to be effective for moderate- to low-functioning autistics (8). Therapies which foster emotional engagement and joint attention may be more effective. This requires openness to different forms of communication: through music, art, collaborative play, or imitation.
Giving people with autism a basis for expression and interaction will in turn foster cognitive abilities for understanding others and understanding themselves. How can theory of mind for others develop if the child is not given the opportunity to interact with others?

The TOM theory of autism is one example of the underlying tones of condescension in the literature. Autism has been defined as a "failure in affective development" and a "failure of social functioning" (4). Such descriptions imply that people with autism are different in a wrong and inferior way. Another source stated that "children with autism are unable to grasp the meaning of friendship" (1). In a study on the perceptions of social relationships in autistic children, Bauminger, Shulman, and Agam found that these children gave lower ratings for intimacy and companionship with their friends, but rated their overall closeness to friends similarly to neurotypical children. The authors went on to say that this likely "represents a desired rather than an actual closeness in their friendship" (1). The assumption is that the children must be wrong about their perceptions of closeness because they lack an understanding of friendship. The problem of tolerance is compounded by the relation between autism and mental retardation: approximately two-thirds of autistics have an IQ below 70, the cutoff for mental retardation (6). People with learning disabilities, or with a label of mentally handicapped, also suffer from stigma. Autistics thus fall into a group of individuals who are frequently disregarded and disrespected.

I did not realize such implications were present in the field until I started reading some of the literature; I also did not realize how angry it would make me. I have been fortunate this past semester to have had the opportunity to work with a group of autistic preschoolers. I teach in a classroom that meets three times a week, and have gotten both to interact with and to see the progress in six children who do not fit these narrow views of the disorder. Frith and Happé believed that autistics view other people as inanimate objects, as just another source of sensory stimulation; in this way, others would seem to have little to no significant meaning. But I have seen these children react to their mothers, to myself and the other teachers, and to one another. They recognize people they interact with on a regular basis and respond to us. While most of these kids are low- to non-verbal, they do interact with other people, and they have a wider range of non-verbal expression than is assumed. Their faces light up, they laugh, they clap, they sometimes pout, and all in appropriate contexts. At first, I did not know how to interact with them; I felt awkward and doubted that I could relate to them without language. I made an incorrect assumption. Yes, the children have certain limitations, and the program works on strengthening language skills as well as eye contact, turn taking, following commands, and appropriate play. But these limitations are not impossibilities or inferiorities; they are differences. I too needed to learn to be more open to their individual styles of interacting and not to make assumptions about what they can and cannot do. Another tendency in the literature is to lump autistics together in characterizing the disorder. The children I have worked with have their own personalities and differ from each other as much as I do from my classmates.
Some of the children are very demonstrative with their affection; others open more during one-on-one interactions, and so on. Like the rest of the population, each child has their own strengths and weaknesses. Emma loves to sing, Christian loves to start knock-knock jokes, and Luke likes anything related to trains, planes, and automobiles!

My experiences in this classroom have made me appreciate these differences and feel strongly that, above all, what any person needs is respect and understanding. As a culture, we strive for normality and judge anything else by that standard. Differences should be appreciated and valued for what they are. We need to start considering each person in their own right, not just as an autistic person or a person who deviates from normal in such and such a way. We need to learn to listen to who these people are as just that: people.


1) Bauminger, N., Shulman, C., & Agam, G. (2004). The link between perceptions of self and of social relationships in high-functioning children with autism. Journal of Developmental and Physical Disabilities, 16(2): 193-214. Article Online

2) Autism Network International, Theory of Mind from an Autistic Perspective

3) Frith, U. & Happé, F. (1999). Theory of mind and self-consciousness: What is it like to be autistic? Mind and Language, 14(1): 1-22.

4) Harris, J. C. (2003). Social neuroscience, empathy, brain integration, and neurodevelopmental disorders. Physiology and Behavior, 79(3): 525-531. Article Online

5) Held, M. F., Thoma, C. A., & Thomas, K. The John Jones Show: How one teacher facilitated self-determined transition planning for a young man with autism. Focus on Autism and Other Developmental Disabilities, 19(3): 177-188. Article Online

6) McGeer, V. (2004). Autistic self-awareness. Philosophy, Psychiatry, and Psychology, 11(3): 235-251. Article Online

7) Autism Spectrum Disorders, NIMH Information on ASD

8) Trevarthen, C. & Aitken, K. J. (2001). Infant intersubjectivity: Research, theory, and clinical applications. Journal of Child Psychology and Psychiatry, 42(1): 3-48. Article Online

Full Name:  Yinnette Sano
Username:  ysano@brynmawr.edu
Title:  Does Brain Washing Really Change the Brain?
Date:  2005-05-05 23:29:26
Message Id:  15053
Paper Text:


Does Brain Washing Really Change the Brain?

The term brainwashing as we know it today came about as a result of alleged scientific experiments conducted on imprisoned American soldiers by Chinese soldiers during the Korean War in the 1950s. At the time the term was used both to create anti-communist sentiment in the United States and to explain how the Chinese communist government was able not only to convert its own people to communist ideologies, but also to do the same to foreigners imprisoned within the boundaries of China, and especially to disrupt the ability of prisoners of war to effectively organize and resist their imprisonment (1). The term brainwashing is a bit dramatic, and its use can be considered erroneous or misleading: the actual brains of people who are "victims" of this type of mental manipulation are not being surgically altered, and, as far as researchers know, nothing is being directly inserted into their brains to create drastic changes in opinion and behavior. People who fall victim to "brainwashing," however, have most likely been subjected to extreme outside stimuli, both physical and psychological, that together create a "brainwashing" effect.

Terms such as coercive persuasion, mind manipulation, and thought reform are often used as equivalents of the term brainwashing. One of the major misunderstandings surrounding all of these terms is the idea that the actual brain is being stimulated, changed, or probed in hopes of erasing memories, mannerisms, and patterns of thought, after which the victims are re-taught a new set of things in hopes that they will forget the old and take the newly acquired knowledge as authentically their own. In actuality, for the most part, what occurs in all of the cases stated above is the application of intense physical and psychological torture, which creates stimuli that inevitably change not only the behavior of the victims undergoing it, but also the values they possess and how they process information. "To accomplish the goals of the exercise, many techniques came into play, including dehumanizing of individuals by keeping them in filth, sleep deprivation, psychological harassment, inculcation of guilt, group social pressure, etc." (1) In the case of the Korean War these techniques had multiple goals that went beyond simply controlling the subjects in the prison camps of North Korea. They aimed to produce "confessions" from the victims themselves so as to convince the accused that they had taken part in illegal or immoral activities; to make the war prisoners feel guilty of these "crimes" against the state and, as a result, long for a fundamental change in viewpoint toward the institutions of the new communist society; and, finally, to actually accomplish these desired changes in the recipients of the brainwashing/thought reform (1).

Two studies of Korean War victims were conducted by Robert Lifton, an American psychiatrist and author, and Edgar Schein, a professor at MIT. These studies concluded that brainwashing, or mind control, had only a temporary effect when used on prisoners of war. Lifton and Schein found that the Chinese did not engage in any systematic re-education of prisoners, but generally used their techniques of coercive persuasion to disrupt the prisoners' ability to organize, maintain their morale, and try to escape (1). The Chinese soldiers did get some of the prisoners to make anti-American statements by placing them in continuously cruel environments of deprivation and then offering them more comfortable situations. Despite these physical and psychological stresses, Lifton and Schein observed that such coercive methods did not change most people's attitudes; communist ideologies were not genuinely adopted. It was more a survival mechanism: many prisoners acted as though they believed in communist ideologies simply so that they could survive and avoid further physical and psychological punishment. There were, however, a few prisoners who after the war showed an affinity for communist ideologies, but whether this was due to successful indoctrination by the Chinese soldiers or because their pre-war personalities and beliefs coincided with communism is debatable (2).

Throughout the semester we have talked and thought about how much of an impact outside stimuli can have on certain parts of the brain, and therefore on human behavior. It is somewhat surprising that there is not more scientific evidence on the web that treats brainwashing, mind control, and the like as a serious type of manipulative phenomenon. It could very well be that it is hard to scientifically run tests that gauge how successful these types of methods are on individuals, but it could also be that, as a condition, it might not be considered important for research purposes, perhaps because of the context in which it was introduced.

1) Brain Washing, Psych Central

a name="2">2)Hypnotic world,

Full Name:  Jenna Rosania
Username:  jrosania@brynmawr.edu
Title:  Brain Damage from Lead Poisoning as a Criminal Defense
Date:  2005-05-05 23:46:57
Message Id:  15054
Paper Text:


Last summer, I worked at the California Appellate Project (CAP) in San Francisco, CA, a non-profit law firm working for indigent death row inmates who are approaching their appeals. We assist the counsel appointed to the inmates for their appeals, and we collect records that might help their case for mitigation of culpability so that their sentences might be reduced. This work depends heavily on sources of information outside the legal system that deal with the causes of behavior, such as the effects of abuse, trauma, and exposure to violence; in general, the effects environment can have on the brain. We argue that when the brain is damaged in areas that affect judgment, planning, decision-making, and other capacities that dramatically influence people's actions, the resulting crime is caused by the brain, not by the free will of the person committing it. This argument is usually accepted by the legal system as mitigating the personal culpability of offenders, and is thus validated by the most basic foundational institution organizing our society.

I also worked at Clean Water Action (CWA), located a city block away from CAP, so during my summer I became completely immersed in both criminal justice and public health issues. I learned the value of replacing naïve idealism with optimistic realism when it comes to issues of public health. I feel that this is also a useful attitude when approaching the criminal justice system, because I believe the justice system is always changing, if very slowly, and that it is our job to help it progress in the right direction.

Therefore, I am very interested in the implications lead poisoning can have for the way our society handles deviance. Looking at the issue of lead poisoning in Philadelphia is a pragmatic way to approach the anti-death penalty cause because it allows one to explore the roots of violent crime in society. If some of the underlying causes of crime, such as lead poisoning, can be eliminated from society, perhaps the purpose of, and need for, capital punishment may eventually be negated as well. It is therefore important to gather as much information that we can call "truth" as possible, so that the knowledge the scientific world can supply may be applied by institutions to more aptly manage, organize, and hold together society.

Lead poisoning has been an epidemic in the city of Philadelphia for decades (1). An estimated 2.2 percent of all preschoolers in the US, approximately 434,000 children, have lead levels elevated enough to interfere with their neurological development (2). Studies of lead exposure in Philadelphia show that 5 percent, or about 5,000, of all Philadelphia children between the ages of six months and five years have lead levels in their blood capable of causing learning and central nervous system disorders (2). Once used as an additive in house paint and plumbing alloys, lead was banned from use in homes and public institutions by the Environmental Protection Agency (EPA) in 1978, after human exposure to lead was connected to a host of physiological and neurological disorders. As I researched the well-established harmful effects of this pervasive toxin on those exposed to it, I was surprised to find that human exposure to lead continues to be a crisis of public health, criminal justice, and environmental racism. Few have been able to build enough public and federal support for its complete elimination from schools and housing, perhaps because of the expense of remedial measures and the fact that those who suffer most are poor minorities in the inner city with little political organization and influence.
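The two national figures above can be checked against each other with simple arithmetic; the following is a back-of-the-envelope sketch of my own (the implied population total is an inference, not a number from the sources):

```python
# Hypothetical sanity check: the report says 434,000 children are
# 2.2 percent of all US preschoolers. The implied total preschool
# population is then:
affected = 434_000
rate = 0.022  # 2.2 percent

implied_preschoolers = affected / rate
print(f"Implied US preschool population: {implied_preschoolers / 1e6:.1f} million")
```

The result, roughly 19.7 million, is on the order of the actual US under-five population of that era, so the two cited figures are at least mutually consistent.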

The people who are usually exposed to lead tend to live in older homes and come from poorer families who cannot afford to replace the two most common sources of childhood lead exposure in the home: lead paint and lead plumbing (3). Although lead can affect several areas of the body, the neurological damage is often quite severe for individuals exposed to even slight amounts of lead during brain development. This damage often results in behavioral problems, reasoning and attention deficits, and low IQ and mental retardation, conditions that occasionally lead to deviant behavior. The National Mental Health Information Center of the US Department of Health and Human Services states: "Many environmental factors...put young people at risk for developing mental health disorders...[including] exposure to environmental toxins, such as high levels of lead..." (4). In neighborhoods where lead exposure is common and, given the impoverished state of the inhabitants, unavoidable, the lead exposure of children who develop neurological deficits and subsequently exhibit deviant behavior must be considered when determining their degree of culpability for a crime. If the physical state and composition of the brain determine the behavior of an individual, a brain damaged by lead poisoning can be the source of socially abnormal behavior.

Dr. Herbert Needleman of the Psychiatry Department at the University of Pittsburgh has conducted several studies over the past decades on children's exposure to lead, its sources, and its social, behavioral, and neurological consequences. In 2002 he examined 194 youths convicted in the Juvenile Court of Allegheny County, PA, and 146 non-delinquent controls from high schools in Pittsburgh, PA. Bone lead, measured in the subjects' tibias using K X-ray fluorescence spectroscopy, was substantially higher in the delinquent youths, averaging 11 parts per million (ppm) compared to 1.5 ppm in the non-delinquent group (5). Dr. Needleman described this study, the first to show that lead exposure is higher in convicted delinquents than in non-delinquents, as a positive step towards connecting lead poisoning with delinquency, stating: "This study provides further evidence that delinquent behavior can be caused, in part, by childhood exposure to lead. For years parents have been telling their pediatricians that their children's behavior changed after they were lead poisoned, and the children became irritable, overactive and aggressive" (6).

Dr. Needleman's groundbreaking work on lead exposure and behavior warrants further investigation into responsibility for behavior when an affected individual commits a crime. According to these studies, lead poisoning is a disease of poverty and is in no way the fault of the person afflicted. Therefore, the effects of lead poisoning (increased aggressive behavior, low intelligence, learning disabilities, and anti-social behavior, all of which are known predictors of crime) should mitigate the culpability of offenders afflicted with lead poisoning.

The neurological damage resulting from lead exposure can produce abnormal behavior, exhibited through increased irritability and violence, learning disabilities, mental retardation, and other functional difficulties. The social consequences of these behaviors (disciplinary actions, peer isolation, falling behind in school, drug abuse, domestic abuse, and a lack of understanding about the basis of an individual's impairments) may compound the neurological damage, resulting in psychological trauma, which studies show can cause other types of brain damage (7). Additionally, lead exposure is known to cause attention problems in children (8), making academic success and effective adaptation to society difficult. All of these conditions have been known to leave an individual unable to function in society or to develop adaptive skills. Such inability often results in deviancy of various kinds, and at times the deviancy that is a symptom of an individual's neurological damage is so serious a breach of social mores that our legal system views it as criminal.

A question that arises when assessing degrees of criminal culpability is whether all crimes can be traced back to some mitigator that makes harsh penalties inappropriate under the Eighth Amendment, which forbids cruel and unusual punishment by the state. Michael S. Moore, a professor at the University of Illinois School of Law, has commented that such an argument leads to the "absurd conclusion that no one is responsible for anything" (9). The question becomes more pressing as knowledge about predictors of criminal activity increases, encompassing more and more mitigating factors that would not previously have been acceptable excuses for crime. In the case of lead poisoning, the number of children exposed to lead before it was as relatively controlled as it is today means that those children, now adults, will make up a significant share of the offenders entering the justice system, and a substantial group may be collectively excused for their crimes because of the pervasiveness of lead poisoning.

Lead poisoning has been epidemic in the city of Philadelphia for decades. Although the problem in the US, as in Philadelphia, appears to be improving and the number of children being exposed is decreasing, the children who were affected by the high doses of lead far more common in the last century are now adults, trying to function in society with the effects of their exposure. In addition, children currently being exposed to lead need a society that will better understand the implications of this environmental toxin for delinquency and execute justice more effectively when they become adults. Human exposure to lead continues to be a crisis of public health, criminal justice, and environmental racism. What is necessary for its complete elimination from schools and housing is a change in attitudes about the extent of lead poisoning, in order to gain public and federal support and to make this problem of poor inner-city minorities with little political organization and influence everyone's problem.


1) US Department of Health and Human Services, The Nature and Extent of Lead Poisoning in Children in the United States: A Report to Congress 1 (July 1988).

2) Centers for Disease Control and Prevention. 2003. Second National Report on Human Exposure to Environmental Chemicals. NCEH Pub. No. 02-0716.

3) Centers for Disease Control and Prevention. 1991. Preventing Lead Poisoning in Young Children: A Statement by the Centers for Disease Control. Atlanta, GA.

4) US Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (SAMHSA), National Mental Health Information Center, under the heading "The Causes are Complicated."

5) Needleman, Herbert L., et al. 2002. Bone Lead Levels in Adjudicated Delinquents: A Case Control Study. Neurotoxicology and Teratology, Vol. 24.

6) Lead link to youth crime. BBC News Online, 7 January 2003.

7) Rosen, J. F., Mushak, P. 2001. Primary Prevention of Childhood Lead Poisoning — The Only Solution. New England Journal of Medicine, 344.

8) Minder, Barbara, et al. 1994. Exposure to Lead and Specific Attentional Problems in Schoolchildren. Journal of Learning Disabilities, Vol. 27.

9) Moore, Michael S. 1985. Causation and the Excuses. California Law Review, Vol. 73, pp. 1118-1119.

Full Name:  Rhianon Price
Username:  rprice@brynmawr.edu
Title:  Legal Drugs and the United States
Date:  2005-05-06 09:23:28
Message Id:  15057
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

The United States is inarguably a very medicated society. From the moment we enter the world to the moment we leave it, it is normal for legal drugs to be available to us upon demand. Most Americans probably do not even think about the effects some drugs have on their bodies; they are legal, after all, therefore they must be without negative consequence. Yet many legal drugs such as caffeine, nonprescription pain medications, nicotine, and alcohol, which are used frequently in American society, do have effects that are not beneficial.

Caffeine, for example, is a drug that many Americans rely on daily in order to function, as they believe, normally. Caffeine is a stimulant that makes those who ingest it more energized, more awake, and more capable of sustaining intellectual activity; these are the effects most Americans seek when they choose to ingest caffeine. However, as one becomes more accustomed to caffeine over time or with greater consumption, he or she requires more caffeine to produce the same effects (2). Caffeine can be consumed in coffee, tea, soda, energy drinks, and chocolate, among other things (1), and takes 15-45 minutes to reach its maximum effect (2).

There are negative effects of caffeine even at a "normal" dosage of 300 mg, as it can decrease manual coordination and reaction time to visual and auditory stimuli. Furthermore, it is possible to overdose on caffeine, which may cause nausea, headache, indigestion and diarrhea, irregular heartbeat and respiration, sleep disturbances, light-headedness, dizziness, nervousness, jitteriness, and an increased need to urinate (1) and (2). In severe cases of "caffeinism," anxiety attacks, delirium, drowsiness, ringing ears, diarrhea, vomiting, light flashes, difficulty breathing, and convulsions can all occur (2). Moreover, it is possible to experience caffeine withdrawal, symptoms of which include tiredness, headache, nausea, nervousness, reduced alertness, depression, inability to focus, and confusion. Symptoms of withdrawal usually last only a few days, but can persist for up to seven (1) and (2). Nevertheless, despite these negative side effects, most Americans, driven by the United States' constant social demand for action and mental activity (at the workplace, in schools, and even at home), find that the benefits outweigh the costs.

Over-the-counter pain medications such as acetaminophen (Tylenol), ibuprofen (Advil), and aspirin are another type of drug that many Americans use daily. These drugs relieve minor aches, pains, fevers, inflammation, and muscle and joint stiffness (3), (4), and (8). However, they are not without risks or negative side effects. Acetaminophen can cause liver damage, especially in those who drink alcohol regularly, take acetaminophen while fasting, or exceed the recommended dosage and duration. In fact, acetaminophen is believed to be responsible for up to 40% of Americans' liver failures, and sends 56,000 Americans to the emergency room every year (3). Mild side effects of ibuprofen include dizziness, headache, nausea, diarrhea or constipation, depression, and fatigue, while severe ones include allergic reaction, peptic and mouth ulcers, muscle cramps, heartburn, and gastrointestinal bleeding (4) and (5). Aspirin can cause heartburn, indigestion, and ringing in the ears, as well as fever, seizures, allergic reaction, dizziness, confusion, and even hallucinations (8). Making a bad situation worse, the more pain medication one ingests, the fewer natural pain-relieving chemicals one's body produces (5). Nonetheless, because these pain medications do not require prescriptions, can be administered to children and babies, and are recommended by "experts," family, and friends to treat even minor discomforts, many Americans believe that these drugs can be taken in larger doses and for longer durations than is safe (3), and do not even consider the possibility of negative side effects.

Tobacco is a controlled substance utilized by many Americans despite their knowledge of its harmful side effects. Tobacco contains nicotine, a highly addictive drug that causes the release of epinephrine (9). Epinephrine (a.k.a. adrenaline) stimulates the heart and increases blood pressure, metabolic rate, and glucose concentration in the blood (10). This stimulation causes an almost immediate increase in energy and alertness; I, a nonregular smoker, can feel it immediately after the first drag, a unique sensation of something rushing through my bloodstream and making me giddy and light-headed. This high, however, is followed by depression and fatigue when the nicotine runs out of one's system, creating the desire for more nicotine. Withdrawal from nicotine causes anger, irritability, aggression, a decreased ability to cope with stressful situations, and motor and cognitive impairment (9); the body comes to depend on nicotine for this high.

It would probably be difficult to find an American who is not aware of the dangers of tobacco, which causes developmental problems in fetuses, stillbirth, various forms of cancer, and a host of other medical problems brought on by smoking or chewing it (9). Nonetheless, because of its powerful physical stimulation, its extreme addictiveness, and the human mentality that negative consequences do not apply to oneself, 70 to 80 million Americans use tobacco at least once a month (9).

Another controlled substance, alcohol, is one that many Americans ingest daily. Alcohol can facilitate relaxation, mood elevation, lowered inhibitions in social situations, and pain relief (6). It is quite pleasant to sit at home and drink a beer after hours in the office or classroom, or to have a few drinks with friends on the weekend or after a long day. However, because it is a controlled substance, most Americans are fully aware of the negative side effects: decrease or loss of motor skills, nausea or vomiting, increased impulsiveness, emotional volatility, increased need for urination, dizziness and confusion, blackouts and memory loss, hangover, increased likelihood of unplanned sexual encounters (including unintentional promiscuity and date rape), damage to developing fetuses, permanent brain and liver damage, coma, and death (6). Most of those who consume alcohol will experience some of these negative side effects at some point in their lives.

What drugophile Americans may not realize is that alcohol does not mix well with other drugs, which we so often have available to us. Mixed with nonprescription pain relievers, alcohol can damage the stomach lining and the liver, as well as cause ulcers and gastrointestinal bleeding. Mixing alcohol and Valium decreases alertness, impairs judgment, and can cause death. A combination of alcohol and high blood pressure medication can reduce blood pressure to dangerous lows. Alcohol and anticoagulants increase the potency of the anticoagulant and can lead to fatal bleeding (7). While Americans realize that alcohol has negative side effects, its dangers may not be fully understood.

Why are Americans addicted to drugs? It is not entirely a physical dependency. American society demands high levels of mental activity that certain drugs can make easier to accomplish or sustain, causes stress that certain drugs can alleviate, and wholeheartedly endorses the use of many of these drugs because we view them as both normal and necessary. Moreover, once they are present in our bodies, they are difficult habits to break, and they may encourage each other's usage. It is socially very normal to drink two or three cups of coffee a day to fight weariness and mental fatigue; to toss back a few nonprescription painkillers that may increase fatigue and necessitate another cup of coffee or a quick cigarette, while also causing a headache that prompts the consumption of another few painkillers; to have a soda at lunch whose caffeine adds to the coffee's and causes a mild headache, relieved by another cigarette; and then to have a beer or two upon arriving home in order to relax from the long and tiring day, which may actually cause more weariness and necessitate another few cigarettes. It is a vicious cycle, too, as all of these drugs are at least mildly addictive, and who in America's busy, busy society has time to break an addiction or to feel poorly? Besides, if one feels poorly, one can always take another Tylenol, right?


1) Too Much Caffeine, from Information for Health Professionals.

2) Caffeine Effects, by Erowid.

3) Tylenol Questions, by Tylenol Dangers.

4) Ibuprofen, by Drugs.com.

5) The Unexpected Dangers of Pain Relievers, by Leslie Pepper in Marie Claire, Dec. 2001.

6) Alcohol Effects, by Erowid.

7) Using Alcohol to Stop Pain Can Be Dangerous, by About.com.

8) Aspirin, by Drugs.com.

9) NIDA InfoFacts: Cigarettes and Other Nicotine Products, by The National Institute on Drug Abuse.

10) Epinephrine, by Dictionary.com.

Full Name:  Camilla Culler
Username:  cculler@brynmawr.edu
Title:  The Link Between Prozac and Tics
Date:  2005-05-06 12:37:47
Message Id:  15058
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Recently it has been widely publicized in the news media that Prozac, an antidepressant and SSRI also known as fluoxetine hydrochloride, is linked to disturbing incidents of suicide and violence. However, there is another side effect, harder to detect, that has not received nearly as much media attention. Involuntary tics or muscle twitches affecting the face, and specifically the eye area, can occur as a side effect of taking an SSRI, a drug that directly interferes with serotonin levels in the brain. A tic is defined as "... a problem in which a part of the body moves repeatedly, quickly, suddenly and uncontrollably" (1). A tic is a reminder that by taking drugs such as Prozac we are altering our natural neurological state.

My personal experience of watching friends and family members, adolescents and adults alike, develop eye twitches and excessive blinking seemingly overnight has prompted me to investigate the link between Prozac and facial tics. What causes them? What methods are used to treat them? And most importantly, how can our society address the seemingly ignored problem of "transient tics"?

On a neurological level, Prozac works by increasing "serotonin messages" (2). Serotonin is a chemical in the brain, dispersed throughout the nervous system, that influences mood; depressed people often have unusually low serotonin levels. Fundamentally, brain cells are engaged in sending and receiving information in the form of chemical messages. In Prozac Backlash, Dr. Glenmullen explains this process of sending and receiving, and explains that the drug increases serotonin by disallowing its "reuptake." He defines "reuptake" as the mechanism by which excess serotonin that is not utilized is taken back in by the same cell that sent the chemical message in the first place (2). To understand Glenmullen's example, it helps to picture a college student cleaning up a messy dorm room with clothes strewn everywhere: some of the clothes on the floor will be worn again, and others will be put back in the closet. Ultimately, disallowing "reuptake," or re-absorption, of the chemical allows serotonin to accumulate in the body, and the end result will hopefully be a happier mood. Trouble arises when neurological consequences such as facial tics occur. As Glenmullen explains, "Whenever the drugs step on the chemical gas pedal, the brain tries to slam on the brakes. The result is jerking, stop-and-go oscillations in brain activity that can go out of control" (2). It makes sense that the brain would resist unnatural activity; thus, when a person develops facial tics from Prozac, these are really a neurological coping mechanism.

Dr. James Robert Brasic and Brian Bronson, in their recent article "Tardive Dyskinesia" (3), cite dopamine, a chemical related to motor functioning, as the main culprit responsible for transient tics. They attribute the problem to a lack of available dopamine, since it is continuously blockaded or obstructed while the patient is taking Prozac (3). The question I was left with was how long it takes for dopamine levels to stabilize after a person has stopped the SSRI. Could that information help us know how long a Prozac-induced tic will persist, or whether it will persist even after the Prozac dosage has been significantly lowered or discontinued altogether? Might the dopamine blockades formed while the patient was medicated take a long time to dissipate, and could this lengthier period be the reason tics persist when the Prozac is no longer being ingested?

Besides simply waiting and hoping that a facial tic will go away, cognitive behaviorists and school psychologists can work together to help children and adults who suffer from this side effect. An article entitled "Habit Reversal: A Treatment Approach for Tics, Tourette's Disorder and Other Repetitive Behavior Disorders," written by Dr. Michelle Pearlman, details techniques that can help counteract a tic. Primarily, she emphasizes that developing an awareness that the tic is occurring, and developing a "counter-behavior" to replace it, can help the student (4). Interestingly, in my experience it seems that people who have the tics are not even aware of them.

My friend approached me one day visibly distressed: the night before, his mother had pointed out that he had developed an eye tic on the left side, which looked like squinting. The problem is that because stress can make tics worse, alerting someone to a tic that might not be treatable might increase stress levels, resulting in a worse, more pronounced tic. However, a person living with an obvious facial tic has the right to know about it, so that they can do everything possible to get off the medication or find some other way of treating it.

The treatments listed for tics brought on by Prozac have a rudimentary, trial-and-error quality. The Internet resource emedicine.com cites anti-Parkinson's medication as a way to keep people who decide to stay on Prozac tic-free (3). However, taking other medications can result in even more side effects. Another site, prozactruth.com, discusses the use of lecithin and vitamin E, comparing the occurrence of a tic to a short circuit in the brain: "many parts of our bodies are composed of positive and negative terminals, the same as an electrical switch. If you reverse the polarity, changing the negative to positive or changing the positive to negative, the switch will not work or will short out. The same is true within our body" (5). It seems, however, that there is no scientifically definitive way to treat or prevent these tics, except not to take Prozac in the first place and to learn from previous experiences with the drug.

In my first web paper, on epilepsy, I learned through my research that once a person has a seizure they are more likely to have another one (6); certain neurological pathways have been forged. This dependence of recurrence on previous episodes may also hold for tics caused by SSRIs. Once a patient has exhibited the involuntary muscle twitch, chances are it will happen again. It becomes a neurological habit, a prime example of brain change literally equaling behavior. Similarly, if a patient is taken off the drug and then put back on, chances are that if they exhibited the tic the first time, they will exhibit it the second time around as well, as Glenmullen's case studies show (2).

Though taking Prozac can sometimes be unavoidable, if a patient knows the risks involved before beginning a trial of the medication and knows the red flags to watch for, this might mitigate the risk of a lifelong, debilitating tic. However, Eli Lilly, the company behind Prozac, does not make this information readily accessible: on its website, prozac.com, running a search for "tics" yields no results. The probable reason for the company's avoidance of the word "tic" is the stigma society has attached to it, the notion that people who exhibit involuntary muscle twitches are different or not normal. The stigma surrounding tics is similar to the one attached to epilepsy, originating in ancient times when epileptics and others who exhibited loss of motor control were thought to be evil or possessed (6). The idea of losing control of motor functioning is frightening. The possibility that a pill that is supposed to make you feel better could potentially ruin your life, your self-esteem, or your ability to socialize and enjoy life is shocking, and definitely would not help Eli Lilly's Prozac sales.

The emotional impact of having a transient tic obviously varies. The people I know who have tics from Prozac and Zoloft live their lives normally, without any obvious disturbances. However, the fact that our society either ignores these "transient tics" or treats tics brought on by medications as though they are not serious is a dangerous problem in itself.

Society dictates that it is rude to point out a tic or to stare at someone who has one, but the other extreme of completely ignoring a tic is equally destructive. Are these Prozac-induced tics ignored and overlooked because they are easier to conceal than other side effects? Are people more reluctant to acknowledge and discuss them because of the idea that they might be evidence of neurological brain damage, as suggested in Prozac Backlash (2)? It seems that people are either so desperate to be helped by Prozac that they are willing to endure the side effect, or that our culture has simply turned its back on virtually undetectable facial tics because they are not deemed a "serious side effect". Often they do not incapacitate the patient, and therefore might be viewed by Eli Lilly as not worth listing (7).

However, certain approaches might combat this ideology, allow patients to be better informed, and draw attention to how undeniably prevalent facial tics are as a result of Prozac usage. A documentary, made with the consent of participants, would be an interesting way to examine the scope of facial tics; being confronted with an image of someone ticcing is far more powerful than reading about it on a pill bottle label. Besides drawing attention to the prevalence of facial tics, it might also be informative to conduct an informal, anonymous survey of how many Prozac users in Manhattan have either been told that they have facial tics or have noticed the tics themselves.

Books like Prozac Nation by Elizabeth Wurtzel (8) and Prozac Backlash detail the dangers of taking these medications and the frightening array of side effects that can occur. Though these books are an invaluable resource and describe in-depth accounts of patients who have suffered from facial tics, they seem to use scare tactics to keep people from taking the medications in the first place. The problem with this approach is that readers can tell themselves that this type of side effect happened to others, who probably didn't really need the medication in the first place. Readers can rationalize that they are taking Prozac in a controlled way and need it in order to function. By engaging in this type of thought process, readers distance themselves from the case studies Glenmullen writes about and from Elizabeth Wurtzel's experiences on the drug. It is therefore imperative that our society take transient facial tics seriously, as they are a sign that something is wrong in the brain, or more specifically that misfiring is occurring. Ultimately, tics related to Prozac usage need to be studied in much greater depth before we fully understand why people get them and how they can be controlled or eradicated once and for all.


1) Definition of "tic" on psychnet-uk, a website that defines what a tic is.

2)Glenmullen, Joseph. Prozac Backlash. New York: Simon and Schuster, 2000.

3) "Tardive Dyskinesia", an article by James Robert Brasic and Brian Bronson.

4) "Habit Reversal: A Treatment Approach for Tics, Tourette's Disorder and Other Repetitive Behavior Disorders", an article by Dr. Michelle Pearlman.

5)prozactruth.com, a website that describes supplements and vitamins that could combat tics as well as personal stories.

6)Serendip Home Page, the first Web Paper assignment.

7)Eli Lilly's Website, a website run by Eli Lilly.

8) Wurtzel, Elizabeth. Prozac Nation. New York: Riverhead Books, 1994.

Full Name:  Imran Siddiqui
Username:  isiddiqu@haverford.edu
Title:  The Causes of Gender Difference in Depression
Date:  2005-05-06 13:28:30
Message Id:  15059
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Everyone becomes sad at times. Emotion is a normal aspect of human existence. If one loses a loved one, fails at an important task, or even just has a bad day, it is normal to feel sadness. Normally, these feelings last from a few hours to a few days. In some cases, however, an individual's sadness persists and does not subside. This problem is called depression. If not treated, depression can endanger an individual's health and eventually develop into a life-threatening illness (1). More than fifteen percent of people with depression commit suicide, and depression has also been known to increase the probability of heart attack and stroke (1).

Depression can affect anybody and occurs across all demographic variables; it can even occur in other animals. However, one extremely intriguing aspect of depression is that its incidence is much higher in women than in men. In the United States, roughly two women are reported to have depression for every one man. In the rest of the world, that ratio increases to 3 to 1 (2). Some possible causes of this discrepancy have been suggested, but the exact causes and how they relate to one another have not been established.

Although there is still much to be learned about the gender differences of depression, the scientific community has a fairly extensive knowledge of its general causes, both biological and environmental. However, it is still unknown how these causes relate, and which causes are the inherent initiators. This paper will discuss the different causes of general depression and attempt to better understand how the causes of depression relate to one another. It will then classify the causes leading to the differences between males and females, and apply the principles of the analysis of the general causes in order to better explain why women are more prone to depression.

Generally, there are three major factors that can cause depression in individuals, and usually it is a combination of these factors that ultimately leads one to depression (3). The three factors are genetics, neurobiology, and environment. It is disputed which contributes most to causation, but there is a relative consensus that all three contribute to depression.

The evidence for genetic causation lies mostly in the fact that depression frequently runs in families. People with blood relatives who have had depression are more likely to become depressed themselves (1). These theories have been supported through studies on adopted children whose biological parents were depressed, as well as studies on twins (3). However, the exact gene or genes responsible for depression have not been found. Various regions of certain chromosomes have been targeted as locations of possible depression genes, but scientists are having difficulty pinpointing the exact gene(s). Some believe that this is because multiple genes each contribute a small portion to depression (1).

There probably isn't one depression gene, or even a few genes devoted to depression. However, studies support the idea that depression can be genetically caused. Therefore, it is most likely that there are some genes that trigger certain biological situations, which in turn can cause depression.

Biologically, there are both neural and hormonal areas linked to depression. The neural areas involve monoamine neurotransmitters such as serotonin and norepinephrine (1). Serotonin is found throughout nature, but when acting as a neurotransmitter in the human brain, it plays a large role in mood and feelings of happiness (4). Depression can occur when there is a lack of serotonin in the brain (1). This is most likely because the release of serotonin causes feelings of happiness; if serotonin is not released, then one does not feel as happy. Depression can also result when there is a lack of norepinephrine in the brain (1). Norepinephrine functions as a response to short-term stress. Most likely, a lack of it causes depression because it helps one deal with short-term stress. If there is a lack of norepinephrine, then one cannot properly deal with stress, and high stress levels have been known to cause depression.

Neurotransmitters are not the only neurobiological factors active in depression; hormones, most specifically corticotropin, also affect depression. Corticotropin plays a large role in the flight aspect of the fight-or-flight response. It causes the release of other biochemicals associated with the flight response. These biochemicals cause a suppressed appetite for food and sex (1). People with depression often have large amounts of corticotropin. Corticotropin is also stress-responsive, contributing to feelings of stress (1).

It is difficult to tell whether these biological causes of depression are initiators or only pathways from other genetic or environmental factors that initiate depression. What is known is that depression always occurs through neurobiological responses and can be treated chemically. Therefore, it can be assumed that neurobiology is the final step from initiator to depression. It is most likely that genetics or environmental factors bring about these neurobiological changes in one way or another.

The environment definitely plays a role in depression. As stated, high stress levels can cause one to become depressed. Anxiety can as well (2). Life events, such as traumatic childhood experiences, can lead to depression in adulthood (1). Romantic breakups also tend to cause depression in some individuals (3). One of the most prominent environmental factors is a lack of achievement, direction, or freedom in one's life. Many people who do not achieve the success they hope for, who do not have a plan for themselves or know where they want to be, or who are stuck in situations they cannot leave, become depressed (2). Ultimately, there are many environmental factors that can lead to depression. However, these situations do not cause depression in everyone. Therefore, I contend that these environmental factors may trigger depression, but there are other genetic or biological factors that must be involved.

A causal study of depression is a difficult one. The major factors leading to depression have been identified and thoroughly studied, but the role each factor plays in causing depression is almost totally unknown. Therefore, at present, a philosophical approach to the way in which the brain, mind, and environment work is the only means to shed light on how a person becomes depressed.

First, it must be understood that the nervous system can perform the same function in many different ways (6). Therefore, because depression always occurs in some way neurobiologically, it can be inferred that depression can be caused in many different ways. Furthermore, the neurobiological aspect of depression is the most essential, because whether it is initially caused biologically, environmentally, or genetically, depression changes the brain and can be treated with chemicals that directly affect the brain. All people who are depressed have biochemical changes (1).

In assessing the relationship between causes, it is best to start at the source of depression (neurobiology) and move outward, toward the initial cause. I contend that in many situations of depression, either genetics or environmental factors, or both, initiate the depression and cause the neurobiological changes that produce it. However, the fact that the nervous system can generate behavior without inputs suggests that the nervous system can cause depression itself. But why? I believe that everything in nature has a purpose; therefore, if the brain manipulates itself in a way that causes depression, it is done with a purpose. Maybe depression is a result of other changes the brain needs to perform; similarly, maybe the brain needs to be depressed in order to perform certain tasks. Many people who are depressed experience enhanced creativity and learn more about themselves and others (7). It is possible that the brain could put itself into depression in order to enhance specific types of learning and application.

Not only could the unconscious brain be responsible for initiating depression, but the I-function could be as well. A person could want something that causes depression. For example, a person may want to be more creative and, as a result, becomes depressed in order to do so. However, in these situations, other environmental factors would be present.

The environment, as well as genetics, can affect the nervous system to cause depression, but most likely it is a combination of the two. As stated, many different environmental factors can cause depression; however, the same environmental factors will cause depression in some people but not others. Therefore, there must be either a genetic disposition toward depression, or toward biological factors that cause depression. For example, a person might have genetically low serotonin levels; thus, when they go through traumatic experiences, they not only become unhappy, they become depressed. Genetics could also cause depression without any environmental stimulus. Little is known about the details of genetics' role in depression, but what is known is that in some people genetics plays a role in their depression, and in others it does not (1).

Ultimately, since there is no solid evidence supporting one factor inherently causing depression over the others, it must for now be left at the theory that any of the three can inherently cause depression. However, because there is evidence that neurobiological factors are always involved (1), and also evidence that brain equals behavior (6), there are four possible ways that depression can arise: 1. The nervous system by itself can cause depression. 2. Genetic predispositions can cause neurobiological changes that cause depression. 3. Environmental factors can cause neurobiological changes that cause depression. 4. Any combination of the three. What is most important, however, is that biology is the essence of the causes of depression. Environmental triggers must be accompanied by biological conditions in order to cause depression, because the same environmental input does not always lead to depression.

Hopefully, this framework can be properly related to the male and female differences in depression. As stated, women become depressed twice as often as men (8). This discrepancy is real, but because depression occurs in many different ways, there are many explanations for this gender difference.

Some explain the difference through more practical reasoning. For example, women are much more prone to diagnosing themselves as depressed, or to reaching out for help. However, this explanation cannot fully account for the discrepancy, because it has been noted that females attempt suicide much more often than males. Also, men would rather not reach out for help, dealing with their problems through substance abuse or activities, while women tend to dwell on their issues (8). Another theory is that men become depressed over different things than women, and that the things women get depressed over occur more frequently. However, this theory is unsupported, and there is evidence showing that men and women become depressed over the same issues; women simply become depressed more often (9).

Others try to account for the discrepancy in incidence through environmental factors. The most prominent theorized cause is the social status of women. Women have less freedom than men do and cannot always do as they please (2). This may cause an increased need for emotional support that, when unmet, leads to depression (10). Furthermore, women have a higher chance of depression right after childbirth. This has been explained socially by the claim that after childbirth, women realize that they cannot aspire to do anything else but care for their children (2). This makes sense, since both women and men become depressed due to failed aspirations and career problems (9).

Finally, the last group of reasons for the discrepancy is biological. Studies have indicated that during the menstrual cycle, women have a greater release of hormones in the HPA axis, which is responsible for the release of corticotropin (2). Furthermore, women do not have the same ability as men to shut off the production of stress hormones, because women's sex hormones block their ability to do so (10). Therefore, women get stressed more easily, and this can lead to more depression. This could very well account for depression during and after pregnancy, when hormonal levels increase. Also, although it has not been confirmed, it has been suspected that there is a gene on the X chromosome that causes depression (1).

Ultimately, just as there are many theorized causes of depression in general, there are many different theorized causes for the greater incidence of female depression. These causes fall into generally the same categories: biological/genetic and environmental. However, just as with the causes of depression in general, biology is the essence of the difference between male and female depression. Whether the causes are classified as biological or environmental, they are inherently biological, and all relevant causes can be biologically explained.

First, the social status of women is biological. If they are repressed, it is because they are biologically weaker than men. If they must cope with motherhood, it is biological, because women were biologically chosen as mothers and men as fathers. Furthermore, if women are more depressed because they cannot aspire to achieve what men can, it is biological, because their biological roles in human existence have not allowed them to do so. Ultimately, because brain equals behavior, and depression is a behavior, every argument pertaining to the male-female discrepancy is inherently biological. The same goes for the causes of depression in all individuals.

In conclusion, many explanations can be made about how one becomes depressed, both biological and environmental. However, reduced to its pure form, all of the causes are biological, and the environmental factors are only the result of deeper biological causes.


1)The Neurobiology of Depression

2)Gender Differences and Depression

3)Causes of Depression



6)Neurobiology 202 Lecture

7)Depression...Or Better: Thinking About Mood

8)Depression in Women

9)Men and Women Are Not Worlds Apart After All

10)The New Sex Score Card

Full Name:  Student Contributor
Title:  Evil: Natural or a Matter of Choice?
Date:  2005-05-06 16:00:09
Message Id:  15064
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

On September 13, 1848, a young construction worker by the name of Phineas Gage experienced a grave injury to the head: a tamping iron, three feet seven inches long and weighing 13.5 pounds, entered under his left cheekbone and exited through the top of his head, landing about 25-30 yards behind him (6). In an astonishing recovery, Gage retained the ability to function without severe disability and even resumed his vocation as a construction worker some months after the accident. Even more astonishing than his miraculous recovery was his sudden change in personality: before the accident, Gage was highly respected by his peers and employers, excelling at his job both as a capable construction worker and an astute businessman, but after the accident Gage was described as grossly profane, impolite, and incompetent. Many of his friends and coworkers claimed: "He was no longer Gage" (6). Gage's story would go on to function as one of the more famous examples of the still relatively mysterious relation between personality and the brain. However, even more interesting in Gage's case is his sudden propensity for cruel behavior; a propensity that, in his case it seemed, resulted from his brain injury and was well beyond his control.

There was much discussion in Neurobiology 202: Neurobiology and Behavior about the nature of depression, and a consensus was reached that individuals who suffer from some manifestation of depression are experiencing an illness that is well beyond their control. Severely depressed individuals cannot control serious bouts of hopelessness, reduced or increased sleep, a constantly irritated state of mind, etc. The notion that one can contain depression is a largely incorrect notion encouraged by society; it was agreed that it is imperative to recognize that the helplessness associated with depression prevents the afflicted from just "snapping out of it." To some measure, it might prove interesting to examine the phenomenon of 'evil' within this frame of reference. Gage's story raises a very valid question: is evil a choice, or is it, like depression, a biological illness that causes its afflicted to behave without some measure of personal control?

In a recent article in the New York Times, writer Benedict Carey explores the question of evil as it relates to murderers and serial killers. In recent years, neuroscientists have found evidence of physical differences in brain function between normal individuals and individuals who have committed serious crimes. For example, using a brain-imaging study, American and Canadian researchers discovered that psychopaths process certain words, such as power and future, differently from nonpsychopaths (3). These observations seem to suggest that it is not personal choice that drives certain individuals to commit atrocious crimes, but a matter of the brain. Of course, it is important to realize that three possible conclusions can be drawn from this observation: first, that the brain controls behavior, and therefore criminals commit crimes because their neuronal pathways force them to; second, that behavior affects the brain, and therefore criminal behavior can be documented by brain-imaging studies; and lastly, that it is a combination of the two, in which the brain exerts a certain degree of influence, but criminal behavior can also reinforce certain brain function. It is impossible to ascertain exactly what causes 'evil' behavior, but the possibility that it is induced, rather than chosen, is a concept that has intrigued scientists and philosophers for centuries.

In the nineteenth century, the philosophical movement of Natural Theology gained momentum as a leading effort to reconcile science with scripture. Its proponents argued that since God was the creator of the universe, a comprehension of divine law could be acquired by understanding the universe, its nature, and its earthly mechanisms. They argued that if nature was God's scripture, then all the mechanisms of nature were, in theory, God's work. Critics of Natural Theology argued that to believe that God was the creator of the universe, and that nature was a representation of divine law, forced the believer to accept that evil and suffering were the work of God. David Hume, in his Dialogues Concerning Natural Religion, creates a scene where three characters (Cleanthes, Philo, and Demea) are engrossed in a discussion recorded by an observer, Pamphilus. Demea and Philo point out the misery of humans and the "curious artifices of nature...embitter the life of every living being. The stronger prey upon the weaker, and keep them in perpetual terror and anxiety." Cleanthes, an advocate for Natural Theology, replies, "the only method of supporting divine benevolence (and it is what I willingly embrace) is to deny absolutely the misery and wickedness of man" (2). But the existence of misery can be proved, Philo interjects, and "not by chance surely. From some cause then. Is it then from the intention of the Deity?" (2). This controversy regarding the existence of evil was never resolved and would remain a contemporary problem for believers of religion.

Religion has played a major role in shaping contemporary society's conception of evil; evil, for the most part, is associated with punishment in the afterlife or the influence of Satan. In fact, it can be argued that religion gave birth to morality as a whole, allowing individuals to divorce good from evil and to further the development of society on the backbone of benevolence and altruism. Religion teaches us how to behave, how to respond to certain situations, and encourages us to do good with the promise of heaven in the afterlife. On the other hand, the argument that religion gave birth to morality can be countered with the fact that other organisms, which are not subject to the tenets of religion, exhibit altruism, raising the possibility that morality might have preceded religion. If this is the case, then the same can be said about evil. If the existence of evil preceded religion, then it cannot be attributed to the influence of Satan or the creation of hell, but is rather a natural component of behavior. Just as altruism has been documented in both lower and higher organisms, so too has 'evil' behavior been observed in lower and higher organisms. For example, the scientist Frans de Waal has noted that monkeys and apes exhibit kind behavior toward disabled members of their group, showing sympathy not only to kin but to unrelated members of the group as well (5). In addition to altruism, aggression has also been noted in the behavior of monkeys. A few cases have been observed where a member of the group begins to exhibit psychopathic behavior for no apparent reason, engaging in cruel behavior such as attacking and killing younger members of the group (1).

If evil really is a matter of the brain, beyond our control, then this has drastic implications for society. Not only does it throw into question the nature of free will, but it also throws into question the morality of administering punishments to criminals. Is it moral to punish a criminal if there is evidence that he is suffering from an illness, the illness being that of 'evil', as opposed to making clear judgments as to the nature of his crime? Dr. Angela Hegarty, a psychiatrist, has encountered four violent criminals in her profession who were so vicious that no word except 'evil' could describe their behavior. One man murdered his own wife and young children and showed more annoyance than regret; when a staff member at the facility where he was kept after his arrest arrived late with a video, he grew violent and abusive, insisting on punctuality (3). This man was sentenced for his crime, but his aggressive behavior did not lessen. It appeared, in his case at least, that the aggression was characteristic of his behavior; perhaps his brain was programmed in such a way that it encouraged aggressiveness and prevented feelings of compassion. Is it acceptable to punish this man, who brutally murdered his own family, if there is evidence that he could not help his actions? Is it possible to punish anyone if their brain made them do it?

This has serious implications when applied to cases of mass murder, for example the Holocaust or the recent genocide in Rwanda. Imagine Augustin Bizimana or Felicien Kabuga in front of the International Criminal Tribunal for Rwanda, with their lawyers arguing that their actions were purely biological and that they should not be held accountable for them. There have been numerous documented cases where the 'mob mentality' overrides one's own moral judgments, causing individuals in a mob to commit crimes that had never entered their minds when removed from the crowd. Because it has been shown that the 'mob mentality' can turn conscientious citizens into murderers, is it right to punish these individuals if it can be proven that their vicious actions are only biological responses to a specific environment? Would society find this interpretation acceptable? I would argue that it could have serious repercussions, because future criminals could interpret this response as a green light to commit immoral actions.

If Emily Dickinson, in her poem 'The Brain is Wider than the Sky,' is right and the brain really does equal behavior, then how can we punish an individual for their biological make-up, for the patterns that evolution has deemed appropriate, even if they encompass the behavior of serial killers and war criminals? My answer to this question is simple: we can punish these individuals because Dickinson is not right, but "less wrong." We can punish Augustin Bizimana and Felicien Kabuga because there exist men like Paul Rusesabagina, who risked his life numerous times to save other Rwandans. We could have punished men like Adolf Hitler because there existed men like Oskar Schindler, who risked his life to save Jews. Our brains do not dictate our behavior; there is room for modification, for internal processing, and for new conclusions. It is true that evil exists in nature, that it is a natural component of human behavior, but modification, too, exists in nature. Organisms are constantly changing in their physical appearance to adapt to changing environments; who is to say that we are not changing internally? When a change in the environment takes place, it does not lead to one certain modification, a single by-product, but has the potential to give rise to numerous modifications, and here is where our choices are made. Bizimana and Kabuga chose to act maliciously, to become 'evil,' but they could equally have chosen to act benevolently.

Of course, one could argue that a harsh childhood for the aforementioned individuals led to their 'evil' nature, and to some extent this may be true. There appears to be a critical period when the choice between evil and benevolence can be made equally, most likely during our development, but once evil is chosen it becomes more difficult to change. This is analogous to the development of the neural tube in chick embryos, whereby the removal of somites at stage 11-12 has drastic consequences for the nearby neural tube. However, when the same removal is done at stage 13 or later, there is no apparent change (4). Our development is impressionable; our psyche can be molded in an infinite number of ways. That is why appropriate punishments would be counseling sessions aimed at deconstructing criminals' initial notions of moral and immoral, good and evil, etc., to help re-teach the principles needed for nonviolent behavior. With this in mind, it is important to realize that criminals can change if placed in the appropriate environment. Criminals should be held accountable for their actions, but more energy and resources should be spent on ways to assist these individuals in adopting behaviors appropriate for society, as opposed to classifying them as purely 'evil' without any hope for change.


1)"Oregon Primate Rescue"

2) Callaway, David. Melville in the Age of Darwin and Paley: Science in Typee, Mardi, Moby-Dick, and Clarel. Dissertation, Graduate School of Binghamton University, State University of New York, 1999.

3)"For the Worst of Us, the Diagnosis May Be 'Evil'."

4)Developmental Biology

5)"The Basis of Morality: Scientific vs. Religious Explanations."

6)"The Phineas Gage Story."

7)"Morality's Biological Nature: Implications for the Attribution of 'Good' and 'Evil'."

8) Weikart, Richard. From Darwin to Hitler: Evolutionary Ethics, Eugenics, and Racism in Germany. New York: Palgrave Macmillan, 2004.

Full Name:  Ayumi Hosoda
Username:  ahosoda@brynmawr.edu
Title:  Where is my happiness?
Date:  2005-05-06 17:48:07
Message Id:  15066
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

"I am so stressed out and depressed. I am not happy." This is a familiar phrase we hear on campus at this time of the year. Many people, including myself, are overwhelmed by the amount of work we need to complete, and we wonder, "Why in the world did I come to Bryn Mawr?" We seek help by telling others how stressed out we are, or we load up on junk food and caffeine to keep ourselves awake. "I like what I am doing, but why can I not feel happy?" I recently realized that I have not had any moments of feeling "happy" in a while. What is happiness anyway? Happiness appears in various forms, sometimes as a momentary excitement and sometimes as a long-lasting state that generally puts you in a happy mood. What is happening in our nervous systems when we feel happy? Is there anything we can do to make or keep ourselves happy? Happiness is a type of emotion, a positive one. Much research has been done on negative types of emotion such as depression, stress, and trauma. Research on positive types of emotion, on the other hand, has not received as much study so far, but it is now getting more attention. Looking at a general overview of emotion and the research on happiness, this paper will seek a key to happiness. The part of us that feels emotion resides in our brains.

The website BrainConnection.com gives a good overview of emotion. Emotion varies depending on what our experiences are, and there are roughly three categories: primary emotions, secondary emotions, and background emotions. Primary emotions are "experienced as a byproduct of a stimulus-response chain of events," and examples include fear, anger, sadness, and joy (1). Secondary emotions and background emotions come from an "internal feedback loop" (1). It has also been suggested that emotions can overlap categories, such as primary and secondary or primary and background. The main pathways for the primary emotions take two forms, though these vary somewhat depending on the type of emotion. In one, sensory inputs are transferred from the visual cortex and move quickly along thalamic pathways to the amygdala, a part of the limbic system located in front of the hippocampus. In the second route, sensory input moves to the cortex and then comes back to the amygdala. Secondary emotions differ from primary emotions in that a secondary emotion "begins with cognition and follows a pathway that has been created by learning; images are associated with emotions and events triggering these images then trigger the pre-associated emotions" (1). Thus our secondary emotions emerge from our learning experiences. The pathway a secondary emotion takes is a simple travel of the stimulus from the frontal lobe to the amygdala. Unlike primary emotions, secondary emotions involve a chemical-release process. The frontal lobe seems essential to secondary emotions, but what about people who have damage to the frontal lobe? The answer is that those people are mostly unable to experience secondary emotions, though they can experience primary emotions. The third type of emotion, the background emotions, is the resting state of one's emotions. Since the neurological pathways described here focus on fear, a negative emotion, they might not be the pathways for a positive emotion like happiness.

However, from these three types of emotions, we can imagine three types of happy emotions. A primary happy emotion is a stimulus-response, a momentary happiness such as some kind of excitement. Secondary happy emotions come from events and experiences that trigger happy feelings. Lastly, the background emotion is the general state of happiness.

As mentioned earlier, the study of positive emotions is a fairly recent phenomenon. As of 2003, the HealthEmotions Research Institute at UW-Madison was the only institution where positive emotions such as happiness were being studied. Richard Davidson, a professor of psychology and psychiatry at UW-Madison and the leader of a research team at the HealthEmotions Research Institute, has contributed a great deal to happiness studies. One of his most famous studies looked at Buddhist monks' brains during meditation using recent technologies, including functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). fMRI shows the blood flow within the brain, and EEG shows the brain's electrical activity. From this experiment, Davidson found a much greater level of activity in the left prefrontal lobe of the monks' brains than in those of people who do not meditate (2). On the other hand, fMRI research on negative emotions showed that the right prefrontal cortex was more active when people were emotionally distressed (3). This is a significant finding, but researchers are still not sure whether it indicates that the sensation of happiness is produced in the left prefrontal area, or whether the area is active because people have the sensation of happiness. In another study, Davidson examined the brain activity of babies less than a year old when their mothers left them. He found that babies who did not cry turned out to have higher activity in the left prefrontal lobe (2). Both experiments link happiness and the left prefrontal lobe even more clearly.

What is the physical difference between a more active left prefrontal lobe, a "happier" brain, and a less active left prefrontal lobe, a less "happy" brain? Davidson believes that it may have something to do with neurotransmitters (2). Neurotransmitters are chemicals that carry signals from one neuron to another. The prefrontal cortex is filled with various kinds of neurotransmitters, such as dopamine, glutamate, and serotonin. Since some animal studies show that dopamine is very active in transferring signals associated with positive emotions between the left prefrontal area and the brain's emotional center, Davidson believes that dopamine may be one of the most important neurotransmitters. He suggests that "dopamine pathways may be especially important in aspects of happiness associated with moving toward some sort of goals," since monks always have a goal, as do people who try to change their eating or smoking habits (2).

Brian Knutson at Stanford also conducted research using fMRI, like Davidson. His interest was in people's emotions when they attained a goal. In his setup, participants received money as a reward when they won a videogame. However, the result turned out somewhat different from what Knutson expected: participants reacted dramatically right before they got the reward, and the active region was not the left prefrontal lobe but the nucleus accumbens in the subcortex (2). Knutson believes this was a reaction of happy feelings associated with excitement rather than with achieving a goal (2). Through studies like these, researchers are trying to figure out more about the mechanisms at work in our brains when we are happy.

Monks had higher activity in their left prefrontal cortex during meditation, but is it because of the meditation? Does meditation make you happy? Mindful meditation is described as "a practice designed to focus one's attention intensely on the moment, noting thoughts and feelings as they occur but refraining from judging or acting on those thoughts and feelings" (4). Research at UW-Madison continued. Participants were randomly divided into two groups: one that underwent intensive mindfulness meditation training and practice, and one that did no mindfulness meditation. The result came out as expected: participants who meditated showed an increase in left prefrontal lobe activity (4). This research confirms that meditation helps the left side of the frontal region become more active. The researchers also examined an effect of meditation on health. Both groups received a flu vaccine, and their antibody levels were checked at the fourth and eighth weeks. The meditation group showed a much greater increase in antibody levels than the control group. Given these results, it is not strange to say that meditation is very effective at making you happy and keeping you healthy (4).

Many researchers are now working on the effects of happiness on health, and many studies have found a positive impact. For example, Kubzansky studied optimism as an indicator of having a goal: happiness. She found that people who regard themselves as optimists have a much lower rate of heart-related disease than pessimists (2). Happiness seems to play a big role in our lives.

We are now closer to figuring out what happens neurologically in our brains when we have certain emotions, both positive and negative. From the numerous studies done by Richard Davidson, we are able to see the link between happiness and the left prefrontal cortex. We are still not completely sure whether that area elicits the emotion of happiness or whether happiness causes the area to be active. According to the research, meditation seems very effective at promoting happiness. But I wonder what else can make us happy. As monks have goals when they meditate, we can do other activities and set goals of our own. If we understand happiness as "working toward goals," other practices such as religion might be as effective as meditation. What about my stressful time at school? I am working toward my goals, but why am I not happy? Probably my right prefrontal cortex is as active as the left, so I do not feel the happiness. I may start meditating.

WWW Sources
1) The Emotional Brain, Brain Connection website, http://www.brainconnection.com/topics/?main=fa/emotional-brain

2) Lemonick, Michael D. "The Biology of Joy." Time, 2005.

3) New York Times, via the Bryn Mawr College Library website, http://web.lexis-nexis.com

4) Brains and Emotions Research, University of Wisconsin-Madison website, http://www.news.wisc.edu/packages/emotion/8238.html

Full Name:  Katherine Cheng
Username:  kcheng@haverford.edu
Title:  Mirror, Mirror (neurons) in the...brain?
Date:  2005-05-07 14:11:42
Message Id:  15072
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

When asked by neurologist Oliver Sacks about her personal life, Dr. Temple Grandin, a famed Associate Professor of Animal Science at Colorado State University whose more humane designs of livestock handling facilities have brought innovation to the meat-packing industry, explained that because she is autistic, she has trouble understanding other people's feelings and therefore has difficulty maintaining intimate personal relationships. She is able to function successfully in a "normal" (i.e., non-autistic) world because she has developed a mental database detailing standard scripts of behavior and their appropriate responses. For example, when she sees droplets of water seep from someone's eyes, she matches this description to a model in her head and labels it "crying." Associations of emotions that match the physical signs supplement the script so that when all the indicators are evaluated, Dr. Grandin concludes that the person is "sad." By this process of match and elimination, Dr. Grandin is able to approximate what behavior she should enact on her part to respond as social norms deem appropriate. (9) This process, of course, takes time and can become quite burdensome. The social understanding and response mechanisms intrinsic to most people become, in an autistic person, a systematic process requiring considerable cognitive effort. Recent studies on certain brain cells called "mirror neurons" may offer greater insight into why autistic people seem more emotionally inaccessible than non-autistic people.

Mirror neurons, which are located in a subsection of the monkey pre-motor cortex designated F5, first caught the attention of Italian researchers at the University of Parma in the early 1990s. (1) By scanning the brains of Rhesus macaque monkeys, the research team headed by Dr. Giacomo Rizzolatti observed that certain neurons activated during the performance of a task exhibited similar patterns of activity even when the monkey stayed still and watched someone else perform the task. (2) Though the monkeys were not performing the task themselves, they recognized the execution of these skills by other people. (1) Luciano Fadiga of the University of Ferrara speculated that the human brain possesses a similar system, and indeed found similar results in the anatomically equivalent section of the human brain, called Broca's area. (3) Researchers reasoned that because mirror neurons fire electrical signals in response to the actions of others, thus "mirroring" the viewed action, these brain cells must function in the recognition of action. Recently, scientists from England probed further into the function of mirror neurons and came up with some startling results.

While scientists from University College London used an MRI scanner to record their brain activity, Royal Ballet dancers and experts in capoeira, a Brazilian martial art that resembles a mix of shadow-boxing and break-dancing, watched videos of ballet and capoeira movements being performed. As a control, a standard against which the expert subjects were compared, non-professional volunteers also underwent brain scanning while viewing the videos. (4) The scientists, headed by Dr. Daniel Glaser, chose to study professional dancers for two reasons: first, professional dancers are skilled in certain movements that most people are not; second, ballet and capoeira feature standard movements that every professional in the art can perform. The scientists were interested in determining how viewing these different dance forms affected neuron activity in the pre-motor cortex, which functions in movement control, and in the part of the brain responsible for seeing. (5) They discovered that dancers viewing dance moves standard in their own art form showed greater activity in the pre-motor cortex than when they viewed a dance form they were not skilled in. By contrast, the non-expert brains did not show heightened activity in either case; rather, they exhibited steady neuron activity regardless of the type of dance viewed. (6)

Glaser and colleagues reasoned that the mirror neurons located in the pre-motor cortex form a "mirror system" that is specially tuned to resonate with the movements and physical skills particular to each person. The ballet dancers, for example, responded most when viewing ballet moves because they could themselves perform the skills on display. The non-experts were unskilled in both arts; therefore, their mirror systems did not resonate with either form of dance, both being similarly foreign to them. Glaser reasons that the mirror system is important because it constructs a framework through which the brain can interpret information in the world. The dancers' brains responded most to movements they themselves could perform and for which they had thus developed neural pathways. These results imply that athletes and dancers can "practice" their respective skills even while physically injured. (3) Because actual movement is not required to simulate the skill in the pre-motor cortex, mentally imagining and rehearsing the physical movement can build neural pathways that will enhance physical performance.

Interestingly, the mirror neuron system may also offer a greater understanding of how the brain affects social interactions. As mentioned earlier, distinct deficits in communication and social skills often characterize autism, a neurodevelopmental disorder. (6) Dr. Temple Grandin's anecdote provides one perspective on how the autistic brain does not perceive and recognize certain behaviors the same way a non-autistic brain does. In fact, researchers at the UCLA medical school conducted an experiment in which test subjects were shown photographs of dinner place settings. Each photograph included a human hand, but in some photos it was obvious that the hand was clearing the place setting after a messy dinner, while in other photos it was ambiguous whether the hand was setting or clearing the place setting and whether eating was just starting or ending. Brain scans of the subjects showed that mirror neuron activity increased 40 percent when the context of the image was obvious (i.e., the hand was clearly clearing the table). Researchers speculated that in addition to recognizing actions, mirror neurons appear to play a role in understanding the intentions of others. Dr. Marco Iacoboni of UCLA conducted another study in which subjects completed a psychological questionnaire while viewing images of people demonstrating various emotional states. Iacoboni observed that individuals characterized as having high levels of empathy showed greater mirror neuron activity when viewing people in various emotional states. By contrast, subjects considered less empathetic fired fewer mirror neurons when viewing the same images. (1) Thus, in the less empathetic people, seeing expressions of emotion did not resonate within their own brains; they did not "feel" as the people upon whom they gazed felt.

When put through similar exercises, autistic individuals exhibit significantly different results. Hugo Théoret of the University of Montreal and Harvard Medical School and Alvaro Pascual-Leone of Harvard Medical School studied how the brains of autistic and non-autistic individuals respond to viewing hand movements. They discovered that while the pre-motor cortices of non-autistic individuals fire more electrical signals while viewing hand movements, the mirror neurons located in the same areas of autistic individuals show no such heightened activity. These findings suggest that autistic individuals' social deficits can be explained in part by neural, specifically mirror neuron, differences. These differences reduce an autistic individual's ability to reciprocate social behaviors, which in turn can impede the normal development of empathy. (6) Researchers attribute this difference to a dysfunctional mirror neuron system. In a study conducted at the University of California, San Diego, researchers found that while viewing performed movements, the brains of autistic individuals responded only to their own movement. (8)

Dr. Grandin's anecdote about how she processes external social information offers hope that therapies and treatments can be developed to teach autistic individuals how to react to certain social behaviors. Such programs would stimulate mirror neurons, thus helping autistic individuals understand the intentions of others and empathize with their thoughts and feelings, a skill essential to social behavior. (7) However, one remaining issue that imitation treatments may not address is helping autistic individuals recognize the signs that precede a behavior. Given the role played by mirror neurons in recognizing action and intention, therapies should be developed to enhance a patient's ability to read the components of social situations both before and after a behavior is expressed. Furthermore, Dr. Grandin herself admits that she never truly "understands" because she just doesn't know what it feels like. One must therefore pose the question: is true empathy really something that can be scripted? Where is the actual feeling that substantiates the appropriate empathetic action?

Indubitably, men and women think differently, and a key distinction that repeats itself across cultures and time is that women generally tend to be more sensitive to emotion when making a decision. Considering how mirror neurons affect one's ability to understand intention and to empathize, it would be interesting to study how the activity of these neurons may differ between genders. Furthermore, much of the research on mirror neurons has fueled discussion of their evolutionary origins and functions. Some claim that it is the mirror neuron system that sets humankind apart from the apes. Similarly, much of the discussion of women and their roles as caretakers refers to females' evolutionary roles. It would be interesting to study how mirror neurons fit into, or conflict with, that theory.

WWW Sources

1) The Baltimore Sun, "A close look at mirror neurons: Are these cells key to feelings of empathy?"

2) Science Daily, "Monkey Do, Monkey See ... Pre-Human Say?"

3) The Guardian, "Aping Dr Dolittle."

4) Science Daily, "Human See, Human Do: Ballet Dancers' Brains Reveal the Art of Imitation."

5) NOVA, Transcript of interview with Dr. Daniel Glaser.

6) Science Daily, "Abnormal Brain Activity During the Observation of Others' Actions."

7) Science Daily, "UCLA Neuroscientists Pinpoint New Functions for Mirror Neurons."

8) Medical Imaging Week, "Brain disorder linked to mirror neuron dysfunction."

9) Sacks, Oliver. "First Person Account," from An Anthropologist on Mars. New York: Vintage, 1996.

Full Name:  Liz Bitler
Username:  ebitler@haverford.edu
Title:  Post-Traumatic Stress Disorder and the Interactions of the World, Brain, and Mind
Date:  2005-05-08 17:00:12
Message Id:  15081
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Every day, the brain and the I-function interact with the external world through inputs and outputs. It seems inevitable that there would be inputs and outputs that are simply too much for the I-function to handle and control in terms that enable one to function regularly. Post-traumatic stress disorder provides an example of one such instance. It also clearly exemplifies the many ways in which the external world, the neurobiology of brain structures, and the I-function interact and affect one another.

There are events in the world that are unpleasant to experience. When a person encounters such an occurrence, there are many ways in which their brain and I-function may interpret and respond to the instance. In a world filled with natural disasters, death, war, crime, and horrendous acts of both man and nature, what is it that makes a particular experience traumatic for a particular individual? Some people live through atrocities and are able to cope with the experience in a way that does not disrupt their ability to function normally in their daily lives, and move on. However, there are those who develop post-traumatic stress disorder (PTSD). These individuals become plagued with numerous symptoms, including vivid flashback memories, intense nightmares, irritability, exaggerated startle responses, and avoidance of situations or stimuli that may trigger a memory of the event.(1) These symptoms may dissipate fairly soon after the traumatic experience, or they may persist for the remainder of an individual's life.(2)

So what is it that determines which individuals will develop PTSD, and which will not, given exposure to a traumatic event? That is to say, when an individual gets a traumatic input from the external world, what is it that determines the resulting behavior? The answer to this question is quite complicated, and it involves aspects of both the brain, on a neurobiological level, and the I-function. Neither of these alone can account for whether or not an individual will develop PTSD, but together they can serve to make one more or less susceptible to the disorder.

On the brain's end of things, there is strong evidence implicating genetics in the ability to recover after enduring stressful situations. It has been shown that soldiers who have higher quantities of Neuropeptide Y in their prefrontal cortex are better able to perform under stressful conditions, as well as to return to normal brain functioning after the stressful conditions have ended.(3) Also linked to neurobiology is the role of the gene that codes for the production of BDNF. It has been found in animal studies that animals lacking BDNF are more susceptible to PTSD. Similarly, animals that were genetically engineered to overproduce BDNF were resistant to PTSD (which has positive implications for potential treatments).(4)

Another biological factor in the likelihood of being affected by PTSD arises from individual differences in the hippocampus.(1) Studies have shown that individuals with a smaller hippocampus are more likely to develop PTSD. The hippocampus is linked to memory processing, and it has also been implicated in the re-experiencing of traumatic events common in those with symptoms of PTSD.(5) It is possible that the smaller an individual's hippocampus, the more susceptible that person is to having particular memories stuck in the replay position, which would in turn cause them to become symptomatic and then develop PTSD.

One's personality, which directly results from the I-function, also plays a role in one's susceptibility to PTSD after a traumatic experience. People with personalities described as compulsive or asthenic (weak) are more prone to develop PTSD. People who have dealt with depression or alcoholism in the past also seem more likely to develop PTSD.(2) Not surprisingly, there is also a correlation between PTSD and low self-esteem. It seems possible that those who have other difficulties dealing with aspects of their life, particularly a feeling of a loss of control or low self-worth, are more disposed to develop PTSD. Is it that the I-function can only handle so much negativity at a time before reaching an overload that results in an overactive mind (i.e., nightmares, flashbacks, and avoidance of thoughts that may trigger them)?

Age also seems important when assessing one's susceptibility to PTSD. While the symptoms manifest themselves slightly differently in children, it has been shown that they, too, are vulnerable to the effects of PTSD after a traumatic experience. Like adults, children may develop PTSD after the loss of a family member, abuse, molestation, natural disasters, or any other upsetting situation.(6) Particularly interesting is that among war veterans, the younger soldiers who served in combat were more likely to develop PTSD than the older soldiers.(7) It seems to me that this could arise from either biological factors or the I-function. Because brain structures are related to PTSD, and the brain changes as people age, it is possible that this structural change with age decreases one's likelihood of developing PTSD. It also seems possible that because older people have had more life experiences, their I-function is stronger; that is, they have a stronger sense of themselves. If someone has a better understanding of who they were before the traumatizing experience, they are more likely to be able to hold on to that personality without making the experience as much a part of themselves as someone with a lesser sense of self would.

There are also aspects of any particular event that make a person more or less likely to develop PTSD. For example, among the Vietnam veterans questioned, those who had served in high war zones were more likely to develop PTSD than those who did "theater" work. This means that the intensity of the actual event correlates with the intensity of those experiences in the minds of the veterans, and the more intense events were more likely to result in PTSD. The length of war-zone exposure was another predictor of the likelihood of PTSD development.(7) For individuals who were physically attacked or abused, the severity, duration, and number of instances of attack were likewise predictors of the likelihood of recovering from the trauma.(1)

What I find particularly interesting are the effects that a traumatic situation may have on the neurobiology of an individual. That is, the changes that occur in the brain extend beyond the microscopic level of individual neurons and synapses. The changes are so extensive that they are observable at a macroscopic level. It is very interesting that the environment, something outside the brain and the mind that is completely uncontrollable, has the ability to so greatly impact the functioning of entire structures in the human brain.

One of the most notable changes occurs in the prefrontal cortex (PFC) of individuals with PTSD. There is a general decrease in the activity of the PFC. In particular, there is a decrease in the amount of benzodiazepine binding in the medial prefrontal cortex (mPFC). This results in increased activity in the amygdala, which plays a key role in assessing situations to determine whether or not a fear response is warranted.(8) The change in the activity of the amygdala could account for the heightened sense of fear and hypervigilance that occurs as a symptom of PTSD. Another change in brain activity resulting from the changes to the PFC is the loss of the PFC's ability to serve as an inhibitor. The mPFC acts to inhibit the dorsal raphe nucleus, the region of the brainstem primarily accountable for causing alarm in a situation in which it is necessary. The mPFC has the capability of executively deciding when a situation requires alarm; more specifically, it determines whether the situation is controllable or not. By losing the ability to inhibit the brainstem, the mPFC loses the ability to calm the individual by removing the sense of alarm, which could also contribute to the symptoms of PTSD.(9)

Something that I found particularly interesting is that when an individual experiences something horrendous and develops PTSD, their hippocampus actually gets smaller.(5) This means not only that individuals with small hippocampi are more likely to acquire PTSD, but also that PTSD can cause the hippocampus to shrink. Clearly there is a strong link between the two, and I think more research on the specific interactions between the hippocampus and traumatic experiences ought to be conducted.

Other neurobiological alterations resulting from traumatic experiences in individuals with PTSD involve hormones and neurotransmitters, which have been shown in a variety of instances to alter behavior. Levels of cortisol, a hormone, are especially low in individuals with PTSD due to abnormal activity of the HPA axis.(4, 5) This tends to have important health implications, such as increased susceptibility to physical ailments (including gastrointestinal problems and circulatory-related diseases).(10) Levels of epinephrine and norepinephrine are unusually high in individuals with PTSD and have been linked to changes in emotional state.(4, 5) As a result, the environmentally caused changes to the brain result in changes to the I-function.

There are other ways in which the I-function may become altered as a result of PTSD or the traumatic situation itself. During times of extreme crisis, a person may act in ways that they do not consider to be consistent with the idea that they hold of themselves. They may experience cognitive dissonance and have an altered sense of self because of their actions in one particularly extreme situation, one which is not a part of their daily lives or normal existence.(11) The I-function can also be greatly impacted in those individuals that experience a chronic condition of PTSD. Because the condition can persist for months, years, or even a lifetime,(2) an individual may experience a subsequent personality change as a result of the continued symptoms so that the sense of self includes the altered state.

Because the mind, brain, and body are all affected in an individual with PTSD, it is important to take each into consideration when treating a patient with this disorder. The two main approaches to treatment currently consist of a variety of drug treatments as well as various cognitive-behavioral therapy techniques. Cognitive-behavioral therapy (CBT) can encourage one to think reflectively about one's traumatic experiences and the resulting behaviors and emotions. Some specific therapy methods include teaching relaxation techniques or examining one's morals and values. Therapy may be one-on-one, family therapy, or group therapy. These enable one, respectively, to focus on one's individual situation, on one's interactions with family members as a result of PTSD, or, by working with others, on the realization that one is not alone in one's circumstances.(12) Notably, CBT is thought to improve function in the PFC(8), thus altering the brain in a way that counteracts the effects of PTSD on the PFC. Prescribed medications have been found to alleviate particular symptoms in some individuals. Most of them work by increasing or decreasing levels of particular neurotransmitters in the brain (e.g., SSRIs). There are medications that are especially effective for treating symptoms of depression, anxiety, and sleep disorders.(12) While I'm not sure of any studies with humans, it has also been noted that exercise in mice increases levels of neurotransmitters and relieves some of the symptoms of PTSD as well.(4) The National Center for PTSD has suggested that cognitive-behavioral therapy appears to be more effective than drug therapy(5), but it seems to me that a combination of both may be more effective. Medications can ease the troubles of the brain while therapy eases the troubles of the mind (and going for a run may help, too).

Although we know that the mind, brain, and external (traumatic) experiences interact in many ways, there is still much to learn about the details of those interactions. We know that there are genes that influence whether a mind interprets an experience as too traumatic to handle. But what exactly is it about a given mind (or even a given gene) that makes the mind work that way? Once we have more insight into the workings of the mind and the brain, that is, once we know more than just the mechanics of structures, we will have more insight into the causes and implications of post-traumatic stress disorder.


1)National Mental Health Association, A site with basic information and statistics about PTSD
2)Internet Mental Health, The European description of PTSD for diagnosis
3)Department of Veteran Affairs, Information about specific research on the development of PTSD among military trainees
4)National Center for Post Traumatic Stress Disorder, Information about research on the effects of PTSD on brain structures and chemistry
5)National Center for Post Traumatic Stress Disorder, General information about what PTSD is, how it develops, and the implications that it has for individuals
6)American Academy of Child & Adolescent Psychiatry, Information about PTSD in children and adolescents
7)Department of Veteran Affairs, Information about the findings from the National Vietnam Veterans' Readjustment Study
8)National Center for Post Traumatic Stress Disorder, Findings of brain imaging in individuals with PTSD
9)National Institutes of Health, An article about the role of the mPFC: Rat Brain's Executive Hub Quells Alarm Center if Stress is Controllable
10)National Center for Post Traumatic Stress Disorder, Information about the results of PTSD on physical health
11)National Center for Post Traumatic Stress Disorder, Information about the effects of traumatic experiences, particularly the symptoms of PTSD
12)American Psychiatric Association, A site with facts, symptoms, and treatment options for those dealing with PTSD

Full Name:  Patrick Wetherille
Username:  pwetheri@haverford.edu
Title:  Deconstructing Alzheimer's Disease
Date:  2005-05-08 19:55:54
Message Id:  15084
Paper Text:
Biology 202, Spring 2005
Third Web Papers
On Serendip

As human beings, we have learned to take better and better care of ourselves, such that we have been able to live longer and longer. However, despite the progress we have made, we have discovered a barrier to the long life of some: Alzheimer's disease (AD), an ailment that affects people in old age. Is AD simply an extreme form of the normal aging of the brain, or is it different? As we continue to understand more about it, what are we doing to combat it? What can be done to help those with the disease cope with it? How the disease works, what approaches have been taken to prevent and cure it, and the implications those measures might have for society as a whole are all issues we shall take up in this discussion.

AD was first described in 1907 by German neurologist Alois Alzheimer (1). AD afflicts its patients with a dementia that worsens over time: the longer a patient has AD, the worse the dementia becomes. Dementia results from the loss of neurons in the brain that support intellectual activity. The loss of neurons specifically affects the hippocampus, a central region for memory operation, and the cerebral cortex. The cerebral cortex is involved in memory functions as well, but it also supports reasoning and language functions (1).

In asking what causes AD, we must really ask two questions: first, what factors determine a person's increased likelihood of getting AD, and second, what are the specific chemical and neurobiological processes that cause these neurons to deteriorate? In answering the first question, we find that there is no one factor that causes a person to have AD. Age seems to be the across-the-board prerequisite for developing AD, but it is not the only factor. Family genetics seems to be an important factor in some cases (2). While the studies vary, it seems that somewhere between 50% and 75% of an individual's propensity to develop the disease arises from family genetics (3). In fact, there appears to be a split in the kinds of AD one can get. Sporadic AD occurs among people who have no pattern of it in their family, while Familial AD is caused by inherited gene mutations. Unlike many inherited diseases such as Huntington's disease, which involves a single chromosome, AD involves gene mutations on multiple chromosomes: numbers 1, 14, and 21, to be exact. If any of these mutated genes is inherited from either the mother or the father, the child has a 50% chance of developing Familial AD (4). This high probability of receiving the AD genes is due to the autosomal dominant inheritance pattern of the gene.
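That 50% figure follows directly from Mendel's rules: an affected parent carries one mutated and one normal copy of the gene and passes one of the two at random, and for an autosomal dominant condition the single mutated copy is enough. A short simulation makes this concrete (the code is only an illustrative sketch of the inheritance pattern, not a model of any particular AD gene):

```python
import random

def child_inherits_mutation(parent_has_mutation):
    """One affected parent carries one mutated and one normal copy of the
    gene; the child receives one of the two copies at random.  For an
    autosomal dominant condition, inheriting the mutated copy is enough."""
    if not parent_has_mutation:
        return False
    return random.random() < 0.5  # each copy is passed on with equal chance

# Estimate the inheritance rate over many simulated children.
trials = 100_000
affected = sum(child_inherits_mutation(True) for _ in range(trials))
print(f"Fraction of children inheriting the mutation: {affected / trials:.3f}")
# The fraction converges to 0.5, matching the 50% risk cited above.
```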

Unlike Familial AD, there is no agreed-upon factor that causes Sporadic AD among the general population. Exercise (5), eating a good diet, and keeping one's mind stimulated (6) have all been said to reduce the risk of developing AD. However, age is the overriding factor that seems to enhance the likelihood of developing either Familial or Sporadic AD. Unfortunately, researchers still do not know what specifically causes Sporadic AD; for now, they believe it to be a combination of the environmental and genetic factors listed above. Now let us turn to the chemical and neurobiological side of AD.

While no one is certain what causes AD, there is one working theory that seems to get it less wrong than the others. A big difference between a normal brain and a brain afflicted with AD is the presence of protein clusters inside and between neurons. The clusters inside neurons are known as neurofibrillary tangles and consist of a protein named tau (1). Another protein, beta-amyloid, forms the plaques that exist between neurons. While the amount of tau seems to be proportionate to the degree of dementia experienced by the patient, indicating a possible connection to the cause of AD, tau tangles are not unique to AD the way beta-amyloid plaques are in their characteristic concentrations. The beta-amyloid proteins that cluster between neurons are accompanied by the immune system's microglia, reactive inflammatory cells that are thought to remove already damaged neurons and/or the amyloid plaques themselves (1). Beta-amyloid peptides originate from the beta-amyloid precursor protein (bAPP) when a bAPP fragment 99 amino acids long is cut by gamma-secretase (1). Some of the resulting strings (about 10%) have two more amino acids than the rest. It is this longer form of the beta-amyloid string that has a harmful effect on neurons, ranging from calcium deregulation to damage of mitochondria to inflammation of the cell, which could, in conjunction with other damage, create an ongoing cycle of damage that can result in the deterioration of neurons (1). While amyloid plaques themselves are present in most old people, their high concentration in the hippocampus and cerebral cortex of AD patients suggests a role in the neurodegenerative process. While researchers still do not know whether the beta-amyloid plaques are responsible for, or simply the result of, AD's degenerative process, their presence is a fundamental component to understanding how AD works (7).

Is AD just a part of getting old? For a long time, many held the belief that AD was simply the aging process at the very far end of the spectrum. That is, that aging inherently involves the loss of neurons and brain capacity and that AD is simply a more extreme end of a range of brain loss. However, that view has been refuted in recent years. Scientists have no reason to believe that the brain will deteriorate simply from aging. While mild cognitive impairment (MCI) has been identified as a "borderline" state between AD and normal brain functioning (3), it is more an early stage of AD than a middle range in a spectrum of aging-related dementia. In fact, a study that looked at the loss of neurons during aging in both AD and control subjects found no link between aging and loss of neurons (8). Using statistically unbiased methods, researchers approximated neuron counts in the entorhinal cortex and the superior temporal sulcus, two regions known to be affected by AD. They found that, while severe cases of AD resulted in 66% and 52% losses in those regions, respectively, there were no significant losses in those regions in the non-AD control group.
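Loss percentages like these are simply the reduction in estimated neuron count relative to the control group. The following sketch shows the arithmetic; the counts used here are invented for illustration, chosen only so the resulting percentages match the 66% and 52% figures for severe AD, and are not the study's actual data:

```python
def percent_loss(control_count, ad_count):
    """Percent reduction in neuron count relative to the control group."""
    return 100.0 * (control_count - ad_count) / control_count

# Hypothetical stereological counts (millions of neurons), chosen only to
# reproduce the reported magnitudes for severe AD; not the study's raw data.
regions = {
    "entorhinal cortex":        {"control": 6.5,  "severe_AD": 2.21},
    "superior temporal sulcus": {"control": 32.0, "severe_AD": 15.36},
}

for region, counts in regions.items():
    loss = percent_loss(counts["control"], counts["severe_AD"])
    print(f"{region}: {loss:.0f}% neuron loss in severe AD")
```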

What is being done to combat AD? While several drugs have been developed to slow the process, there is no cure for AD. Researchers have been using the beta-amyloid plaque theory we discussed earlier in their search for one. Inhibiting the gamma-secretase enzyme that cuts the bAPP protein is an approach drug companies have tried; however, the side effects in animal trials have been undesirable, as gamma-secretase does more for the body than just snip bAPP proteins (3). Anti-inflammatory drugs seem more promising, as trials in animals have been successful. The idea is to reduce the inflammation caused by the beta-amyloid proteins in neurons, to prevent the brain's immune system from attacking and eliminating neurons that contain those proteins (3). However, the effects of this treatment do not amount to a cure, though it does have the ability to slow the process. Another approach that could be taken is the introduction of stem cells to the affected regions of the brain. Transplanting healthy cells into a region that is plagued with dying cells could potentially reverse some of the effects of AD. The idea is that harnessing one's own stem cells could have a better effect on the patient than the introduction of someone else's cells (3). The effects of this procedure are not yet known, as the use of stem cells in research is very limited due to its intersection with the broader ethical debate on reproductive issues.

While the curative approach is certainly crucial to combating the effects of AD, one avenue we might consider is a focus on supplemental measures. The development of new technologies to help AD patients cope with the loss of mental function might be appropriate, given the nature of the ailment. Developments in information technology could offer assistance to AD patients in ways that supplement the loss of biological function with mechanical functions. For example, a computer could be used to keep records of family members to help remind the patient about his or her past. While a desktop PC seems somewhat impractical for this, a computer small enough to fit into someone's eyeglasses, coupled with voice and image recognition technology, could provide AD patients with the kinds of information they need to continue to function. This, along with drugs to at least slow the process, could provide a treatment that restores a quality of life to the patient in a way that is currently unavailable.

This kind of merger between IT and AD has already begun. In 2003, Intel entered into a consortium with the Alzheimer's Association, granting $1 million in IT research to be directed towards AD patients (9). Technology such as sensor networks is being used to study the habits of AD patient behavior in hopes of finding ways to learn more about AD and to make it more livable. This is just one example of how IT can work for AD patients.

A more extreme example of the merger of IT and AD can be found in some radical proposals. Scientist Hans Moravec has suggested that someday, entire human brains and the consciousnesses they hold will be able to be downloaded into a computer (10). This would certainly obviate the problem of neuronal deterioration, but it's possible that by the time we have the technology to move minds into machines, we'll know enough about AD to make it a livable or curable illness.

While scientists have learned a lot about AD, there is still a long way to go. I began studying AD thinking that it was merely an advanced form of the inevitable loss of neurons and intellectual capacities that comes with the aging process. In seeing the mechanics of the proteins that are believed to play a role in AD, I could begin to see how AD is different from aging and how aging itself does not cause the kind of loss of neurons associated with dementia. In thinking about ways of dealing with AD, the development of drugs to lessen the effects of the disease and to cure it seems like an important avenue to pursue. However, I could not help but think about the ways AD affects a person's consciousness. If the I-function draws upon input resources to make sense of the world, why can those inputs not be supplemented through technology? It seems to me that the pursuit of advanced technologies that link the brain and the computer may yield ways of dealing with AD that might help people afflicted with the disease. This might be worth pursuing if technology advances faster than a treatment or cure.


1) St. Georges-Hyslop, Peter H. "Piecing Together Alzheimer's." Scientific American, December 2000, Vol. 283 Issue 6, p76-83.

2) National Institute on Aging: Causes of Alzheimers, Information on what can cause AD.

3) Schmiedeskamp, Mia. "Preventing Good Brains from Going Bad." Vol. 14 Issue 3, p84-91.

4) National Institute on Aging: AD Genetics Factsheet, a page about the genetic factors in AD.

5) CNN story, 4/28/98, how exercise influences the chances of getting AD.

6) Alzheimer's Association: Risk Factors, more information on what can increase risk of getting AD.

7) National Institute on Aging: Unraveling the Mystery, a source on the current working theory of the neurobiology of AD.

8) Petersen, Ronald C. Mild Cognitive Impairment. New York: Oxford University Press, 2003.

9) Intel Press Release, July 24, 2003, how IT is working to fight AD.

10) Walter, Chip. Scientific American, January 2005, Vol. 292 Issue 1, p36-38.

11) Alzheimer's Information website, a good source for current AD issues.

12) PBS info page on AD, a general source for the effects of AD.

Full Name:  Alfredo Sklar
Username:  asklar@haverford.edu
Title:  The Functional Role of Dreaming
Date:  2005-05-09 12:06:10
Message Id:  15089
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Each night we lie in our beds after a hard day's work, eager to get some much-needed rest and to let our brains take over as we become an audience for our forebrain. We usually wake up the next morning either marveling at or terrified of the bizarre and uncanny, yet intriguing, dream we just experienced. We wonder how our brains could have produced such complex storylines without any conscious effort on our part and are infatuated with unlocking their true meaning. It then becomes easy to understand why so many people, dating all the way back to Aristotle, have devoted so much time and energy to understanding the mechanisms of how dreams are generated and what purpose they serve. As a result, there is an astounding range in the variety of theories on this topic, especially since the discovery of REM sleep and the other sleep stages in the 1950s. This paper will survey a few of the major theories about the function of dreams in a more or less chronological fashion, starting with the psychoanalytic perspective presented by Sigmund Freud.

Freud's wish fulfillment theory of dreams is closely tied to his other work on the subconscious and the intrapersonal dynamics among the id, ego, and superego. He believed that the wish impetus (sexual desires, deeply rooted emotional feelings, etc.), which is controlled by the subconscious, is manifested during our sleep through dreams (1). The subconscious and its wishes are suppressed throughout the day, as they are seen as socially unacceptable by our superego and the rest of our conscious mind, and they use dreams as an escape from the stranglehold of the conscious, which is not at work during sleep (1). Based on this notion, Freud believed that through a careful interpretation of a dream one can uncover the deeply repressed thoughts and feelings that are causing distress and even mania in an individual.

Freud carried out his analysis by dividing the dream into two parts: its latent content and its manifest content. In this model of the dream, the manifest content can be thought of as the plot or storyline of the dream (2). Freud suggested that this aspect of the dream is used merely as a disguise for the true, underlying meaning or wish that the dream is trying to convey and is therefore relatively unimportant to the psychoanalysis of the patient (2). In contrast, he believed the latent content is where the wish impetus lies and is the aspect of the dream that can lead to one's coming to terms with one's id (the subconscious) and its desires. Freud called the process by which the latent content is transformed and disguised by the manifest content the "dream work" and believed that it occurred in four different ways: condensation (the combining of two ideas present in the latent content), displacement (of the desires of the latent content from their intended target to a random object or person), symbolism (the use of images in the dream to represent repressed thoughts and feelings), and secondary revision (the process by which we try to make sense of the dream, usually the morning after, and put it into an everyday context) (2). Through the dream work and its processes, Freud was able to explain why dreams seem so bizarre and disjointed.

The neurobiological perspective on dreaming rose to dominance after the discovery of REM sleep by Aserinsky and Kleitman in 1953 (3). REM is also known as paradoxical sleep because, although our bodies are momentarily paralyzed and in a state of deep sleep, our brain shows activity similar to that of waking consciousness. The brain of someone in the REM phase of sleep displays random patterns of cortical activity, and single-phase waves (PGO waves) can be observed in the visual cortex (3). Dreamers experiencing REM also exhibit regular patterns of rapid eye movements (from which this sleep phase gets its name). Building on these findings, Hobson and McCarley proposed a theory which they thought would render Freud's theories completely useless. In their activation-synthesis hypothesis they contend that the PGO waves observed during REM stimulate higher forebrain cortical areas, causing rapid eye movements and activating parts of the association cortex (3). The forebrain, in an attempt to make sense of the random waves of brain activity, is then believed to produce an equally random storyline to go along with them. Therefore, Hobson and McCarley believe that there is no deeper meaning or symbolism to a dream, as it is a product of random brain activity and not of any cognitive function. As a result, clinical analysis and introspection should be done only on the patient's reaction to the dream and how they deal with the ideas presented by it (3).

The activation-synthesis hypothesis, however, did not completely disprove Freud's work as Hobson and McCarley had thought, and it was met with quite a bit of criticism and questioning. Probably the major blow to the theory was that dreams can occur during non-REM sleep and that visual activation is still experienced even without the random bursts of PGO waves (4). Most of the other controversy stirred up by the activation-synthesis hypothesis came from other dream researchers who believed that the theory did not give enough of a role to higher brain functions (4). They believe that the fact that dreams can sometimes be so coherent and provoke such strong and interesting personal thoughts proves that the cortex and higher brain areas must play a more serious role in the generation of dreams. As an example, opponents of this theory often point to lucid dreaming, in which the individual can exhibit some control over the dream, to refute the idea that dreams are created randomly and cannot be used to reveal deeply repressed emotions and desires (4).

Recently there has been a rise in the cognitive perspective on dreams, which concentrates on the reason and purpose for dreams and REM sleep rather than on what they can tell us or how they occur. Researchers in this field view dreams as an integral part of the memory-formation and learning processes (5). This idea has been supported by various studies on the role of REM sleep in memory and learning. For example, Smith has shown that the learning of procedural information is significantly impaired by a decrease in REM sleep. Along the same lines, LaBerge showed that the learning of difficult tasks requiring concentration, often involving newly acquired skills, is followed by an increase in the number of dreams experienced during sleep and in the duration of the REM phase of the sleep cycle (5).

One possible mechanism by which our brains process the day's information through REM sleep and dreams was proposed by Evans in his book "Landscapes of the Night: How and Why We Dream" (6). In this book Evans presented his view of the brain as a computer. He believes that during REM sleep the brain is taken "offline", blocked from all external stimuli, while it scans over the experiences we had throughout the day, making any necessary modifications (6). This process is thought to aid the transfer of information from short-term to long-term memory (5). During this process the brain may briefly come back "online", allowing the conscious mind to view clips of the stream of consciousness being replayed (6). This is what Evans believes a dream to be.

The last theory on the function of dreams that we will review is the one presented by many developmental psychologists interested in dream research. It has been shown that for newborn babies, REM sleep makes up a much larger proportion of the sleep cycle than for adults (about 50% of a newborn's sleep is REM) (7). The question then becomes: why do infants require so much more REM sleep than adults? The answer developmental psychologists offer is that dreams and REM sleep play an important part in the development of the infant brain (7). Newborns, although born with a complete nervous system, have not had the time to develop proper neural connections or mature synapses. The neural activity generated by REM sleep and necessary for the production of dreams helps to develop these neural pathways (7). This sort of development is not necessary for adults, as a lifetime of consciousness and experience has taken care of it.

As we can see from the above examples, there is still much disagreement and controversy over what the true function of dreams is. However, it is unlikely that there is one absolutely correct theory of dreams (or one that is the least wrong). The best way to think about the role and causes of dreams is through a synthesis of a few of the best theories. For example, from the experiments done by Hobson and McCarley we know that the random discharge of brain waves from the brain stem probably plays a big part in the generation of dreams, rather than the wishes of our subconscious. However, it is unreasonable to think that our internal state (thoughts and emotions) does not influence and, to an extent, dictate what happens in our dreams. I believe that the future of dream research lies in this type of cooperative effort among researchers in the different fields of study involved.









Full Name:  Leslie Bentz
Username:  lbentz@brynmawr.edu
Title:  The Neurobiology of Gender Bending
Date:  2005-05-10 03:31:13
Message Id:  15096
Paper Text:
Biology 202, Spring 2005
Third Web Papers
On Serendip

In the scientific community as well as the world at large, the concept that there exist physical differences between biological male and female brains is not revolutionary. Extensive research has been done in this field, and books such as Men Are from Mars, Women Are from Venus are evidence of this research boom. Yet surprisingly, still not a lot is known about neurobiological differences between genders. While sex and gender seem to be constantly intertwined and are occasionally used interchangeably, the two concepts are not synonymous. Whereas sex is concerned with physical anatomy, gender is concerned with "the behavioral, cultural, or psychological traits typically associated with one sex" ((1)). Gender differentiations are commonly rooted in social constructs, but now it is possible to argue that some gender roles and identities are not purely the result of the environment and society but rather neurobiologically constructed, such as transsexual identities.

Social Constructs of Gender

Traditionally, a rigid dichotomy has been recognized that differentiates between socially acceptable gender identities and perverse identities in Western mainstream society. The stereotypes that plague notions of "women's work" and "men's jobs" have been perpetuated for centuries, leaving women excluded from the most prestigious jobs and men not seen as adequate caregivers to children. Anthropologist Sherry Ortner summarizes this distinction between socially acceptable gender roles when she argues that men have been equated with culture and the workforce, and women with nature and childrearing ((2)). According to Ortner's linear continuum from nature to culture, it is virtually impossible to transcend the borders of each category. The individuals who do find it increasingly difficult to find a nonjudgmental space in which to identify themselves outside of these dominant realms.

In some instances cultures have recognized the presence of a third gender category, which specifically allows these individuals to define their own sense of self. North American Indians are one of several groups to recognize a tri-gender system, in which the term "berdache" was used to differentiate the intermediate gender role ((3)). Traditionally, berdaches were known to exhibit characteristic behaviors of both genders but predominantly assumed aspects of the opposite sex's chores and lifestyle from childhood. Male berdaches, meaning biological males gendered female, were most prevalent in these societies and were recognized as sacred and honorable members of their respective groups. In the case of berdaches, however, no biological basis was used in assessing who could rightfully be considered a berdache.

Typically, the berdache status was assumed by individuals who from childhood on through adulthood exhibited behaviors of the opposite sex or who had undergone "supernatural validation, usually in the form of a vision" ((3)). In either case, the identification of berdache was granted based on the feelings of the individual and his/her outward behaviors. No justification beyond these specifications was necessary in recognizing the uniqueness berdaches had to offer their communities.

In most Western or European societies this third sex differentiation is not so easily bestowed upon a person, and in many instances straying outside of the gender dichotomy becomes socially unacceptable. Ironically, it was Western influence and Carolus Linnaeus' work that encouraged a categorization of life; yet classification has only led to marginalization for those outside of the heteronormative realm ((4)). Nowadays, individuals who do not feel at ease in their given bodies and who believe their true beings to be of the opposite sex are categorized as transsexuals.

Transsexuality as a distinct category first appeared in Germany in 1923, when Magnus Hirschfeld created the term "transsexualism" ((5)). At this point Hirschfeld "considered TS/TG (transsexual/transgender) persons to be a form of intersex" ((5)). According to Hirschfeld, transsexuality was equivalent or relatable to the condition of individuals who were "born with genitalia and/or secondary sexual characteristics of indeterminate sex, or which combine features of both sexes" ((6)). Thus, transsexuality was viewed as having a biological component but was wrongly grouped with hermaphroditism, whereas transsexuality is now viewed as a condition affecting only anatomically typical individuals.

During this period of scientific exploration, transsexuals were ostracized as a group for not molding to the dominant gender binary of the era. As recently as the 1960s, transsexuals were subjected to electro-shock therapies in an attempt to "cure" them ((7)). This prejudice against transsexual identities originates from decades-old misunderstandings and confusion surrounding non-mainstream, supposedly perverse gender identities. These misconceptions are probably based on stereotypes in which individuals believe transsexuals are merely performing a skewed gender identity rather than acting upon an innate identity.

Today, transsexuals are commonly assessed by mental health specialists as experiencing gender identity disorder (GID). Classified as a psychological ailment, its manifestations in childhood include a "desire to be the opposite sex, persistent fantasies of being the other sex, a preference for cross sex roles, and/or an intense desire to participate in stereotypical games and pastimes of the other sex" ((8)). Feelings must be intense and must hinder everyday life. The criteria for GID are primarily based on social behaviors and interactions rather than on genes or neurobiological components, which leads to many questions about the origin or cause of GID.

The Research that Links Neurobiology and Transsexuality

Although the area has not been researched extensively, a small project was undertaken by scientists from the Netherlands that could place a neurobiological explanation behind the feelings of disorientation that transsexuals have about their bodies. The group published its work nearly 10 years ago, but since then similar research has not continued, and the investigation into the links between neurobiology and transsexuality has been neglected ((9)). Thus, the ideas which the scientists explored remain controversial within the scientific community.

This particular study examined the bed nucleus of the stria terminalis (BSTc) in the brain. Part of the amygdala region, the BSTc is a component of the stria terminalis' "extrinsic subcortical fiber system" ((10)). The test group studied consisted of six male-to-female transsexuals, along with homosexual men, heterosexual men, and heterosexual women. The main goal of the research was to measure and compare the volume of the BSTc in the transsexuals against that of the control groups, who exhibited dominant gender role behaviors. Within the control groups it was determined that biological males possessed a BSTc 44% larger than that of their female counterparts: a significant value for such a minuscule portion of the brain ((9)). Next, the size of the bed nucleus of the six male-to-female transsexuals was compared to those of the other male and female participants. Interestingly, "A female-sized BSTc was found in the male-to-female transsexuals," establishing a link between gender identities and biological sex that goes beyond social constructions ((9)).
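The kind of group comparison the team made can be sketched in a few lines. The volumes below are invented for illustration (real BSTc measurements differ), chosen only to mimic the reported pattern: a roughly 44% male-female difference in mean volume, with the male-to-female transsexuals falling in the female range:

```python
from statistics import mean

# Hypothetical BSTc volumes (mm^3); invented values, not the study's data.
male_controls    = [2.4, 2.6, 2.5, 2.7, 2.3, 2.5]
female_controls  = [1.7, 1.8, 1.6, 1.9, 1.7, 1.7]
mtf_transsexuals = [1.8, 1.6, 1.7, 1.9, 1.7, 1.8]

def pct_larger(a, b):
    """How much larger group a's mean volume is than group b's, in percent."""
    return 100.0 * (mean(a) - mean(b)) / mean(b)

# Male controls come out ~44% larger than female controls, while the
# male-to-female group differs from the female controls by only ~1%.
print(f"Male vs. female controls: {pct_larger(male_controls, female_controls):+.0f}%")
print(f"MtF transsexuals vs. female controls: "
      f"{pct_larger(mtf_transsexuals, female_controls):+.0f}%")
```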

The group's findings further validate the notion that transsexuals are aware from early childhood that their internal thoughts do not coincide with their external appearance. The researchers were also careful to address those who might challenge the idea that BSTc volumes are unalterable. Some critics proposed that the size of the BSTc could be influenced by elevated testosterone levels or by sexual orientation. The research discredited the first notion by showing that the volume of the BSTc was not altered in males who had been given testosterone supplements ((9)). Sexual orientation was not a contributing factor to BSTc size either, since heterosexual and homosexual men had statistically similar BSTc volumes ((9)). Since transsexuality is commonly associated with homosexuality, it is significant that the findings show that, regardless of sexual orientation, the BSTc of the transsexuals was within the range of their opposite sex group. The BSTc thus does not appear alterable among the transsexuals or the control groups, establishing the possibility of a neurobiological innateness to transsexuality.

Although the research has encouraged a new understanding of gender identities, the experimental process is inaccessible for everyday application. The only way in which the volume of the BSTc can be measured is postmortem, which is the primary reason why the group's research took 11 years to complete ((9)). So currently, the methods used by the Netherlands team will not become mainstream in assessing transsexuality in living patients.

Although a rather obscure study, the data collected by the team was significant in further advancing GID research. With more exposure it may help to dispel myths surrounding the origins of transsexual feelings of otherness. Since the term "transsexuality" was established, the scientific community appeared relatively comfortable with its assessment of transsexuality as a psychological disorder. This study, although not extensive, has rattled the foundations on which transsexuality has traditionally been defined.

The connection between BSTc volume and transsexuality could refute our in-class hypothesis that transsexuality is due to discrepancies between the I-function, or decision-making part of the brain, and the inputs arriving in the central nervous system. Thus, according to the Netherlands team's findings, the hypothesis may have to be reworked to articulate a discrepancy between the inputs into the nervous system and the outputs being generated by the brain, and more specifically by the bed nucleus of the stria terminalis. Or instead, more probably, transsexuality may be the result of a combined interaction between the I-function and the bed nucleus. At this point in the research, it is too early to conclude that the BSTc is the sole contributing factor in the discombobulating feelings transsexuals experience when trying to make sense of their minds and bodies.

Ultimately, the I-function gives humans the ability to doubt reality and to conceive of the world differently than others do. Essentially, the I-function allows for an immense realm of variation in perceptions, which is essential in conceptualizing gender identities. For transsexuals, their perception of reality forces them to believe that their bodies are not in line with their brains. It is the mixed signals between the physical body and the brain that lead to such confusion. So being able to identify a link between the brain and gender identities would give transsexuals validation for their very existence. For decades these individuals have been living with the reality that their confusion was psychological. Primarily, it has been psychiatrists who have worked with this group. An enormous burden needs to be lifted from these people's souls and minds so that they will not continue to believe they did something to prompt their GID. BSTc size is unchangeable. Transsexual gender identities are fixed within the brain and cannot be altered. Regardless of their efforts to normalize within the dominant heteronormative discourse, the fact remains that their feelings are normal for how their particular brain is composed.

In this instance it appears as though the old truism "mind over matter" must be reversed. Transsexuals are not merely performing a perverse gender identity. These feelings are not a choice and should not be conceived of as perverse or abnormal. It is only with such rigid categorization that transsexuality has come to fall outside of the dominant discourse on socially acceptable gender identities.

Personal Effects

The possibility of innate gender, not primarily learned from experience and social constructs, has prompted me to reevaluate how I perceive people and the world at large. After taking several classes centered on gender identities and social norms, I came to realize that in all of these classes gender was defined as socially constructed, devoid of biological ramifications. I never questioned the distinction between biology and culture, since a biological basis for understanding these differences was never explored. We wrestled with Judith Butler's notion of gender as a performance of self and with how gender becomes a characteristic influenced primarily by nurture, rather than nature, and by dominant social institutions. The work that has been done toward proving the innateness of perverse gender identities forces me to rethink my assumptions. Primarily, I must reconsider the generalizations that were made concerning the absoluteness of the gender binary prevalent throughout most of the Western world. When we examined instances of difference, we saw them as anomalies within a normative, homogeneous social structure; we were never given the idea that a biological third sex/gender grouping could exist. It distresses me that this possibility was never raised and that I was left to assume the universal distinctions to be unquestioningly true.

Granted, the study I examined is still the only one of its kind and could be considered controversial, but the possibility exists that its findings will someday be considered universal for all transsexual experiences. The research that has been published on the links between transsexuality and neurobiology has forced me to reevaluate many of my most ingrained assumptions about gender roles and identity disorders, and about what causes them to be so prevalent in some individuals and nonexistent in others. Even though I may not completely understand the neurobiology behind the research that has been done to date, it has been invaluable in forcing me to realize that what may in fact be the dominant discourse may not always be the true reality of the situation.


1) Dictionary reference site

2) Ortner, Sherry B. "Is Female to Male as Nature is to Culture?" Women, Society and Culture. Stanford, CA: Stanford University Press, 1974.

3) Callendar, C. and L. Kochems. "The North American Berdache." Culture and Human Sexuality. Pacific Grove, CA: Brooks/Cole Publishing, 1993.

4) History of Classification

5) Magnus Hirschfeld

6) Intersex article on Wikipedia

7) Transsexuality article on Warwick Pride website

8) Gender Identity Disorders, Emedicine

9) A Sex Difference in the Human Brain and its Relation to Transsexuality , J.-N. Zhou, M.A. Hofman, L.J. Gooren and D.F. Swaab

10) Stria Terminalis

Full Name:  Xuan-Shi, Lim
Username:  xlim@brynmawr.edu
Title:  A Closer Look at the Familiar: What Do We Know About Trust?
Date:  2005-05-10 22:48:23
Message Id:  15105
Paper Text:
Biology 202, Spring 2005
Third Web Papers
On Serendip

What does it mean to trust someone? Trust is often felt as an intuition or feeling that one has about others. We may trust someone consciously or unconsciously and may decide by assessing the way a person looks, talks, and behaves. Past experiences of betrayal may influence one's willingness to trust others in future interactions. I have attempted to construct a layman's understanding of trust that is related to social and psychological factors. In the age of neuroscience, several scientists have asked: what is going on in our brains as we allow ourselves to trust someone? The role our brain plays in the process of building trust has recently been investigated by King-Casas et al (2005), who reported that the development of trust between two individuals may be mapped onto a pattern of activity at the caudate nucleus (1). This study raises several interesting questions for me: What is trust and is it possible to understand trust from a neurological standpoint? Can trust be rebuilt after betrayal? This paper examines two scientific studies on trust in hopes of answering these questions.

In their experiment, King-Casas et al used event-related hyperscan-fMRI (2) to simultaneously examine the brain activity of two subjects as they played the trust game for ten rounds (1). The basic design of the game is as follows: one player is designated the investor and the other, the trustee. At the start of the game, the investor receives $20 and decides how much of it to transfer to the trustee, keeping in mind that the amount of his or her investment increases threefold under the trustee's charge (1). Upon receiving the tripled amount, the trustee decides how much of it he or she wishes to repay the investor (1). After a player makes a decision, both parties are presented with the details of that transaction; their individual screens display the amount, in bar graphs and numbers, that the decision-maker has kept and has chosen to give away (1). A total of 48 pairs of subjects played the trust game, and the members of each pair were unaware of each other's identity (1). King-Casas et al claimed that they were studying trust at its core because their experimental design removed "the confounding elements of context and communication known to influence trust" (1). This study is the first to simultaneously monitor the brain activity of two individuals as they interact with each other (1).
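The exchange rules described above are simple enough to sketch in code. The following is a minimal illustration of a single round's payoffs, assuming the $20 endowment and threefold multiplier reported for the study; the function name and interface are invented for illustration.

```python
def play_round(endowment: float, invested: float, repaid_fraction: float):
    """Simulate one investor/trustee exchange in the trust game.

    invested: amount the investor transfers (0 <= invested <= endowment)
    repaid_fraction: share of the tripled pot the trustee returns (0..1)
    """
    assert 0 <= invested <= endowment
    assert 0 <= repaid_fraction <= 1
    pot = invested * 3                 # investment triples in the trustee's hands
    repayment = pot * repaid_fraction  # trustee chooses how much to give back
    investor_total = endowment - invested + repayment
    trustee_total = pot - repayment
    return investor_total, trustee_total

# Example: the investor transfers half of $20; the trustee returns half the pot.
investor, trustee = play_round(20, 10, 0.5)
print(investor, trustee)  # 25.0 15.0
```

Even this toy version makes the study's tension visible: the trustee always profits by repaying nothing in a single round, so any repayment at all reflects something like trust or reciprocity.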

King-Casas et al identified the caudate nucleus as the brain region that functions to build trust between two individuals. The brain scans of trustees revealed greater activity at the head of the caudate nucleus when the investor increased the sum of his or her previous investment in response to the trustee's decision to reduce the sum of repayment (1). There was decreased activity at the trustee's caudate nucleus when the investor made a smaller investment in response to the trustee's decision to increase the sum of repayment (1). In other words, activity at the trustee's caudate nucleus seems to be altered by both generosity and stinginess. It was also discovered that in later rounds of the trust game, the "intention to trust" signal appeared 14 seconds earlier at the trustee's caudate nucleus, when compared to early rounds when the signal only appeared after the investor's decision was revealed (1).

Given these findings, King-Casas et al suggested that the head of the caudate nucleus evaluates the fairness of the investor's decision and reveals whether the trustee intends to repay the investor's decision with trust (1). As the caudate nucleus receives dopamine inputs from other brain regions and dopamine has been implicated in the prediction of rewards, King-Casas et al speculated that the trustee is more willing to trust after his or her brain learns (and is thus able to predict) that the investor's response is likely to benefit the trustee (1).

I was slightly surprised to learn that the caudate nucleus may be the seat of trust because this region is usually associated with movement; it is part of the basal ganglia (3). In both Parkinson's and Huntington's diseases, the caudate nucleus is affected (3), thus resulting in the loss of voluntary control over movements. Symptoms of obsessive compulsive disorder are also associated with abnormal activity at the caudate nucleus (4). The nucleus accumbens, rather than the caudate nucleus, is usually associated with reward. Food, sex, nicotine, and cocaine, for instance, are rewarding because they increase dopamine action at the nucleus accumbens (4). Nevertheless, it is probable that the caudate nucleus is involved in reward prediction. I was somewhat bothered by the suggestion that our decision to trust someone may arise from a perhaps unconscious calculation of the probability of reward in that interaction. Am I mistaken to think that trust generally involves goodwill or a magnanimous faith in others?

While King-Casas et al have created a well-designed study and provided a reasonable interpretation of their findings, I wonder if they were investigating genuine trust. Firstly, the subjects were playing a monetary game that involved making decisions about whether to trust the other player. This study's findings may therefore apply only to trust that relates to financial decisions and may not be generalizable to other types of social interactions that involve trust, such as friendship and romantic relationships. Secondly, this is after all an experiment conducted in two laboratories; the investor and trustee were far apart, one in Texas and the other in California (1). I feel that the "social interaction" between subjects was unnatural. By removing the factors that would influence trust, the interaction may be reduced to a low-stakes gambling game, albeit one in which the players have some control over their own winnings. That said, the players may be more willing to "trust" precisely because the stakes are low. Interestingly, the subjects were not asked to report their subjective experiences.

Zak et al (2003) offered some physiological evidence to suggest that interactions between players in the trust game resemble real-life interactions that involve trust. In their experiment, subjects were recruited to play a version of the trust game similar to that used by King-Casas et al; the differences were that investors started with $10 and each pair of subjects played only one round (5). Immediately after a player decided whether or not to invest in (or repay) his or her partner, researchers took a blood sample from the player to test for eight hormones that might be related to feelings of trust (5). Zak et al found that the level of oxytocin, which is both a hormone and a neurotransmitter, was higher in trustees who received a greater sum of investment; the level of oxytocin correlated positively with the sum of investment received (5). Trustees with higher oxytocin levels reciprocated their investors' gesture with a greater sum of repayment, when compared to trustees with lower oxytocin levels (5). Increased oxytocin levels were not found in investors, regardless of the amount of money they transferred to their trustees (5). According to Zak et al, these findings are significant because oxytocin is associated with affiliative behaviors in animal studies (5). In humans, oxytocin is commonly associated with uterine contractions during delivery and with lactation in female mammals (4). Oxytocin levels are also elevated during sex (6).
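The reported relationship between investment received and oxytocin level is a simple positive correlation. As a sketch of what "correlates positively" means quantitatively, the snippet below computes a Pearson correlation coefficient; the numbers are invented for illustration and are not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: investment received ($) vs. oxytocin level (arbitrary units).
investments = [2, 4, 6, 8, 10]
oxytocin = [1.1, 1.8, 2.4, 3.1, 3.9]
print(round(pearson_r(investments, oxytocin), 3))  # close to 1, i.e. strongly positive
```

A coefficient near +1, as in this made-up example, is what a finding like Zak et al's would look like numerically; a coefficient near 0 would mean oxytocin level carries no linear information about the sum received.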

Could an elevated oxytocin level influence one's decision to trust others? As the trust game was played for only one round, it seems to me that the trustees would have to assume that their investors "trusted" them out of goodwill and did not transfer the money because they were taking a gamble or were greedy. Presuming that the trustees' opinion of their investors did not matter, could the release of oxytocin merely influence the trustees to act cooperatively and fairly because they have profited from that transaction, regardless of whether they "trust" the investor? One may ask, how are we defining trust? According to Zak et al, we unconsciously trust others (5). That is, we may trust someone even if we have not consciously decided whether to do so. While I agree with this conception of trust, I am somewhat skeptical of the claim that oxytocin may be the "trust hormone." I am curious to find out whether the trustees in the study conducted by King-Casas et al would have higher levels of oxytocin after the trust game, or whether there would be any changes in their oxytocin levels from the early to later rounds of the game; unfortunately, the researchers did not draw blood from their subjects to test for oxytocin. It is also unclear whether trustees who played ten rounds (versus one) were consciously or unconsciously deciding to trust the investor for most of the trust game.

From an evolutionary perspective, it seems plausible that trust would be associated with reward circuits in the brain or even with "feel-good" hormones such as oxytocin (6). Humans are social beings, and our chances of survival and reproduction are enhanced if we work together to protect or advance our shared interests. For us to be willing to rely on and cooperate with one another, we need to trust others. In this sense, the studies by King-Casas et al and Zak et al may help us to understand how the ability to trust may be encoded in our nervous systems. The issues of whether we consciously or unconsciously decide to trust others in most social interactions, or under what kinds of situations we may unconsciously trust others, may better help us to understand our own behavior. In my attempts to articulate what trust is, I have found a useful theoretical conception of trust for myself.

Solomon and Flores (2001) suggested that there are different forms of trust (7), and I find the distinction between simple trust and authentic trust to be helpful. According to these authors, simple trust "demands no reflection, no conscious choice, no scrutiny, no justification" and may be "viewed as a focused optimism" (7). In comparison, one exhibits authentic trust when one "is well aware of the risks, dangers, and liabilities of trust, but maintains the self-confidence to trust nevertheless" (7). It seems that when we unconsciously trust someone, we may be showing simple trust. This may possibly be related to physiological changes, such as increased oxytocin in our bodies in response to the benevolent actions of others. When we consciously decide to trust someone after having assessed whether he or she may be trustworthy, we may be showing authentic trust. This involves cognitive processes and may (or may not) be related to physiological changes. Authentic trust, I think, involves magnanimous faith; it may not necessarily be guided by an unconscious or conscious prediction of reward in one's interaction or relationship with others. Learning about the different forms of trust has helped me to make sense of my experiences involving trust.

Is it possible to understand trust from a neurological standpoint? I doubt anyone knows the answer, although neuroscientists are searching for ways to study trust. While King-Casas et al (2005) have created a new procedure for studying brain activity during social interactions and their findings appear to be promising, one should keep in mind that these researchers are speculating about the role of the caudate nucleus in the trust-building process. Unless scientists are able to implant some kind of device in the human brain that allows them to collect data on brain activity as subjects go about their daily lives, it seems almost impossible, I think, to fully understand trust from a neurological standpoint. With regard to whether trust may be rebuilt after betrayal, the findings of King-Casas et al (if their interpretations are valid) may suggest that the betrayed party is likely to trust again if, over time, the traitor proves himself or herself to be trustworthy. The development of trust may be contingent on the consistency of positive reinforcement the betrayed party receives in future interactions with the traitor. Logically, whether trust may be rebuilt also depends on the nature of the betrayal and the severity of its consequences.

Presently, scientific research on the neurobiology of trust is in its infancy. More research is needed to answer the three questions I have explored in this paper. In the immediate future, there are several issues that may need to be addressed. Firstly, although the trust game is a useful paradigm, are researchers studying "trust" or a specific kind of trust that involves monetary rewards? Could the pattern of activity at the caudate nucleus be observed only when people play the trust game? Secondly, do player interactions in the trust game reliably mirror social interactions that involve trust? How would higher stakes influence the process of trust building? Thirdly, is there any relation between activity at the caudate nucleus and the release of oxytocin? Lastly, in order to understand trust and the process of building trust, it may be necessary to study the effects of betrayal on the human brain. How may the experience of a breach of trust influence activity in brain areas that are speculated to be the neural correlates of trust?

While we are all familiar with the experience of trust, it can be difficult to articulate what trust is. It is even more challenging for neuroscientists to study trust under laboratory conditions. This does not mean, however, that research on the neurobiology of trust is futile although it is important to recognize its limitations. While some people may share the concern that such research may be exploited and could result in dire social consequences (8), it is probable that the new knowledge we acquire about the neural correlates of trust and of the process of building trust would someday help individuals, who are unable or afraid to trust, build richer and more connected relationships with others.


1) Human Neuroimaging Lab, "Getting to Know You: Reputation and Trust in a Two-Person Economic Exchange"

2) Human Neuroimaging Lab, What is hyperscanning?

3) Neuropsychology, Medical Psychology, and Psychology Resources, Information about the caudate nucleus

4) Meyer, Jerrold, and Quenzer, Linda. Psychopharmacology: Drugs, the Brain, and Behavior. Sunderland, Massachusetts: Sinauer Associates, 2005.

5) The New York Times, "Looking into the Brains of the Stingy" by Virginia Postrel

6) The Financial Times, "A Trust Fund that Gives a Fillip to the Feel-Good Factor" by Alison Beard

7) Solomon, Robert, and Flores, Fernando. Building Trust in Business, Politics, Relationships, and Life. New York: Oxford University Press, 2001.

8) The Seattle Times, "Trust — But Verify with a Brain Scan" by Rick Weiss

Full Name:  Sonya Safro
Username:  ssafro@brynmawr.edu
Title:  Nature versus Nurture: Homosexuality's Link to Biology and Society
Date:  2005-05-11 00:20:35
Message Id:  15106
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

The New York Aquarium in Coney Island, Brooklyn, has recently confirmed two of its penguins to be gay. Wendell and Cass, who refused to separate from one another when zookeepers attempted to increase breeding by force, are now known as the best couple among all the penguins at the aquarium. Because penguins do not have external sex organs, the zookeepers did not know for a long time that Wendell and Cass were both male, until they finally took a look. Another pair of gay penguins, Silo and Roy, have even adopted an egg together; in fact, they did a great job raising the chick. Homosexuality has been documented in more than 450 animal species: flamingos, owls, bears, monkeys, and even fish, to name a few. The world has proven to be full of homosexual creatures (1).
I first came across this article when a friend of mine and I were having a long conversation about homosexuality and biology. I had believed that there was an evolutionary objection to any genetic theories of homosexuality: animals are programmed to reproduce and that is their natural motive for having sex. However, my friend informed me about the penguins, and I began to think more about whether homosexuality is a trait that is inherited at birth, or whether sexual orientation is dependent on the environment one is brought up in. I wish to explore the "gay gene" theory, the debate over nature versus nurture, and the emotional, political, and social controversies about homosexuality.
In 1993 Dean Hamer, a scientist who works at the National Cancer Institute, wrote an article in the journal Science called "A Linkage Between DNA Markers on the X Chromosome and Male Sexual Orientation" (2). This controversial article explored homosexuality from a scientific perspective: it was not a choice, or a "form of psychological deviance," but a matter of biology and genetics. Politics hit the roof with this article. Supporters of a biological cause for homosexuality are linked with being pro-gay, just as those who claim science has nothing to do with it may be targeted as homophobic. It is important to wonder: could researchers be taking a stance based on their own sexual orientation?
Hamer claimed to have found evidence for an unidentified gene on the X chromosome that was passed from mother to son and had an influence on the child becoming gay. Based on research with gay brothers, Hamer and his colleagues asserted that a genetic skew on the mothers' side correlated with having gay sons. Hamer is trying to locate the gene on the X chromosome and find out how it controls the sexual orientation trait. This claim, however, has been challenged by other scientists who believe that Hamer's evidence and research are too weak and broad. It appears as though many other factors may be influencing each of the mothers' sons, and that more varied and replicated research is needed.
This unidentified gene has also been found in fruit flies. Despite the leap from flies to humans, the finding has led many biologists and researchers to claim that homosexuality is resolutely embedded in genetics and biology. In other words, even if we cannot fully identify the gene at this moment, or completely understand how it works, biology is the main factor. Finding same-sex behavior and the unidentified gene in flies, as well as in mice, sheep, and other animals, is important because humans share the bulk of their DNA with these other species (3).
A gay sheep study backs up the notion that biology has a lot to do with homosexuality. The hypothalamus, which is known to control sex hormone release as well as sexual behavior, has been found to differ distinctively in rams. The hypothalamus is twice as large in rams as in ewes; in gay rams, however, it was the same size as in the "straight" females. These size differences in the hypothalamus are almost identical to the differences found in neuroscientist Simon LeVay's study of the brains of gay men. Sheep and humans are the only animals in which males are known to express exclusively homosexual preferences. Also, the hypothalamus contains an enzyme called aromatase, which converts testosterone into estrogen. This has a lot to do with the hormones present during a fetus's development.
A very interesting debate regarding homosexuality is the "nature vs. nurture" argument. Thomas Schmidt, in "Straight & Narrow? Compassion and Clarity in the Homosexual Debate," takes a religious stance and looks at homosexuality through something he calls the "multiple-variant model" (4). This model is Schmidt's attempt to dispel the biological theory that homosexuals were born that way, which he believes is not correct. The multiple-variant model associates many external, environmental factors with a person's sexual orientation. Social constructivism, early childhood environment, early trauma, and individual choice are the most critical factors Schmidt connects to homosexuality.
Schmidt argues that the way a parent raises his or her child influences the child's eventual sexual preference. He gives many examples of external factors, such as a son who does not receive love from his father and thus always searches to fill this emptiness by engaging in relationships with other men. If a child, more specifically a boy, is teased throughout his childhood by his peers and considered a "sissy," then he will relate more to the girls in his age group and become like one of them, ultimately even becoming attracted to males.
Schmidt also talks about choices. Feminism, he claims, is a big influence on a female becoming attracted to someone of the same sex, as is identification with countercultural groups. Basically, Schmidt believes that homosexuality is a choice, and furthermore a morally wrong one that is only perpetuated by people's decisions to act upon it. He ends his argument by suggesting Christian therapy to help stop homosexuality.
This argument gives off a religious-fanatic vibe. While Schmidt makes some useful points about the need to replicate biological research and the evidence of its link to homosexuality, and while I do believe sexual orientation has a lot to do with environmental factors, his message is biased and conservative: based on his religious upbringing, he has learned that homosexuality is a sin, and that gays choose to commit this "sin." Schmidt claims that no relevant or important research has been done to actually prove a genetic association to homosexuality, and thus he dispels all notion of the likelihood. However, in truly understanding homosexuality and dealing with this highly controversial and emotional topic, I believe that people should be open to all possibilities.
A friend of mine at Bryn Mawr College, who identifies herself as a lesbian, told me that she can't help being gay: "I knew since I was 10 that I liked girls. I tried to keep it a secret but I just couldn't hide it or change the way I was. It's not fun being gay, I know lots of people who wish they weren't." As I contemplated my friend's statement, I thought about how society is so devoted to and affected by the "gay gene" debate and the issue of homosexuality as either genetically influenced or environmentally caused.
Why indeed are researchers attempting to find a gene that proves once and for all that homosexuality is in fact not a choice but a hereditary feature? Could this lead to homosexuality being considered a flaw and something that may be "fixed"? Why can't we live in a world where it's simply okay for a person to be gay, or straight? So many theories have been offered concerning love and the brain, such as love being simply a chemical reaction in the brain that causes someone to feel truly in love with another. However, the world is still full of romantics who will always believe that love is an amazing, unexplainable thing. Homosexuality, unlike love, is not regarded as natural or even unexplainable by many people.
So is homosexuality an instinct people are born with, or is it learned? Or both? I have come to learn more about biology and homosexuality, the current research that has been going on and that has been debated. I think research is important in understanding why we are who we are. It is pressing, however, to see that homosexuality is such a heated topic. It is scary to think that our last presidential election results actually had so much to do with gay rights. People are set in their beliefs, whether cultural or religious, and while environment of course plays a huge role in people becoming who they are, perhaps we need to look outside of just society's influence.
If we accept that homosexuality is indeed biologically inherited and not an option, then it appears that many people will have to accept it as an unalterable and unchangeable part of that person. Homosexuals would have to be accepted, because then it would be clearly wrong to discriminate, as it is for race and gender (5). Still, there may be repercussions to the acceptance of the "gay gene" theory, such as continued discrimination by those who would now view homosexuality as a defect. Our society is a complex one, where science and biology have so much to do with facts, reality, and our further development as intelligent human beings in a civilized world. Yet we continue to hold prejudices and set beliefs and views in a culture-driven humankind that still has a lot of learning, and a lot of "getting it less wrong," to do.

1) http://www.jrn.columbia.edu/studentwork/cns/2002-06-10/591.asp Columbia News Science
2) http://www.ps.org/wgbh/pages/frontline/shows/assault/genetics/ The "Gay Gene" Debate
3) http://www.well.com/user/queerjhd/sxthegenedebate.htm The Gene Debate
4) http://home.messiah.edu/~chase/h/articles/schmidt/ "Straight & Narrow? Compassion and Clarity in the Homosexual Debate"
5) http://www.innerself.com/Relationships/accept_homosexuality.htm Accepting Homosexuality


Full Name:  Catherine Barie
Username:  cbarie@brynmawr.edu
Title:  Medical Mind Reading: How Advances in fMRI Technology
Date:  2005-05-11 03:06:23
Message Id:  15107
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

The term "mind reading" encompasses the ability to attribute mental states to others, to make sense of one another's behavior and actions in addition to predicting thoughts, feelings, and future actions. Individuals exercise this technique on a constant basis, often without realizing it. This subconscious assessment, or "mind reading," is essential for operating in a complex social environment, as it is necessary for "normal function" in everyday person-to-person interaction. While this capability is valued, its accuracy is limited simply because the assessment is basically a guess based upon cues and inferences; the thoughts of a given person are still clandestine and personal, effectively maintaining them as the sole property of the individual. However, as modern technologies become increasingly complex, this barrier between "public knowledge" and actual thought and perception begins to dissolve. Studies involving neuronal activity and brain imaging are encroaching on the inner workings of the mind. While this does appear to be beneficial and a sign of progress, especially in the realm of understanding neurological disorders, there are drawbacks: where is the line and what are the implications of these medical developments?

Even though brain-imaging technology existed prior to the 1980s, the development of Functional Magnetic Resonance Imaging (fMRI) revolutionized the field. fMRI is minimally invasive, and unlike some imaging methods previously used, it does not require or result in exposure to radiation. fMRI machines create pictures of the brain by using the magnetic properties of blood to monitor blood flow to different regions of the brain. This enables physicians and scientists to see changes in blood flow as they happen. Regions with increased blood flow are active (necessary for whichever process is occurring). Thus, this "dynamic brain mapping" makes it possible for scientists to determine when certain brain regions become active and the duration of said activity (6). However, there are some drawbacks to fMRI technology, namely that scans require the patient to remain completely still for periods of up to twenty minutes, and that the results can be vague: "It is difficult to determine if the subject was thinking about something that caused certain parts of the brain to activate, if the scanner picked up real data or noise, and so on" (6). Despite these shortcomings, fMRI remains the most effective brain-imaging technology currently available, especially when used in conjunction with an electroencephalogram.

Since fMRI allows scientists to pinpoint activated regions of the brain with remarkable accuracy, it is often used to scan the brain as a subject is performing a particular activity. For instance, the subject may have been asked to tap a finger repeatedly. The regions responsible for this continual action would show a marked increase in blood flow, thereby indicating the activated region of the brain. This sort of research seemed to reach a plateau in that numerous studies merely confirmed that different regions would be activated while subjects performed various tasks without offering new insights. However, two recent studies introduced a new approach to fMRI research, the implications of which are still reverberating in scientific circles as well as the news media.

The first study, conducted by a team of Japanese scientists, used fMRI to scan the brains of subjects while they viewed stimuli positioned in different orientations. The stimuli were "lines" (slits); they could appear at eight different angles. They found that the subjects tended to "favor" certain orientations over others, meaning that some orientations elicited a stronger response than others. They also found that they could predict, with reasonable accuracy, which orientation the subject was viewing solely by looking at which brain regions were activated (3). After establishing the connection between the orientation of the object and the corresponding activated regions of the brain, the scientists turned to the issue of mind reading, or using information about brain state to determine mental state (3). They assessed the connection between brain state and mental state by presenting the subject with two sets of overlapping lines (in different orientations) and requesting that the subject choose one of the two to focus upon. As a result, it was "...shown that fMRI activity patterns in the human visual cortex contain reliable orientation information that allows for detailed prediction of perceptual and mental states...Moreover, activity patterns evoked by unambiguous stimuli could reliably predict which of two competing orientations was the focus of a subject's attention. These results demonstrate a tight coupling between brain states and subjective mental states" (3). Thus, the authors are suggesting that a link does exist between brain state and mental state, so that the analysis of fMRI data may, at some point, come to include interpretation of mental state (mind reading).
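The prediction step described above (inferring which orientation was viewed from the pattern of activation) can be illustrated with a toy nearest-centroid classifier: average the activity patterns recorded for each orientation, then label a new pattern with the orientation whose average it most resembles. The voxel values and the classifier itself are illustrative stand-ins, not the study's actual method or data.

```python
def centroid(patterns):
    """Average a list of equal-length activity patterns element-wise."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def decode(pattern, centroids):
    """Return the orientation whose centroid is closest (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda ori: dist(pattern, centroids[ori]))

# Hypothetical training patterns (voxel responses) for two orientations (degrees).
training = {
    45:  [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    135: [[0.1, 0.9, 1.0], [0.2, 1.0, 0.8]],
}
centroids = {ori: centroid(ps) for ori, ps in training.items()}

# A new, unlabeled pattern is decoded to the orientation it most resembles.
print(decode([0.95, 0.25, 0.15], centroids))  # 45
```

The real study worked with eight orientations and many more voxels, but the underlying idea is the same: stable, orientation-specific activity patterns make the viewed stimulus predictable from brain data alone.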

Researchers in England conducted a similar experiment, using fMRI scans to monitor blood flow to different regions of the brain. In this experiment, the subject saw two images in rapid succession (the first image was shown so briefly that the subject was not conscious of it). Even though the subject could recall seeing only one image, there were clearly two separate activation patterns, meaning the brain had registered the other image. The evidence thus indicated that both images received some cortical processing: "This enabled us to infer, from single fMRI measurements of V1 activity, which of two oriented stimuli was being viewed by our participants. We were able to make such predictions even when the stimuli were completely invisible to the participants" (2). Since the scientists were able to predict which image orientation the subject viewed even when the image was "invisible," some cortical processing clearly occurred. This is a sort of subconscious memory: the individual saw the image but has no conscious memory of it. The only evidence of the "invisible image" is the activation of different regions of the brain. Fundamentally, this is an example of mind reading, even though the individual himself is unaware of the "invisible image." The scientists nonetheless concluded that cortical activity in V1 does not necessarily imply conscious visual perception (2). The implication here is that analysis of fMRI scans can include interpretation of subconscious perception, which is an instance of mind reading. As one of the researchers commented: "'This is the first basic step to reading somebody's mind. If our approach could be expanded upon, it might be possible to predict what someone was thinking or seeing from their brain activity alone'" (1).
Thus, this researcher believes that fMRI technology may soon be used to detect thoughts by analyzing which brain regions are activated.

Although these studies have clearly brought forth new knowledge and understanding of how the human brain works, the implications of this research also introduce new issues. Specifically, where is the dividing line between the advancement of science and privacy (with relation to an individual's thoughts)? This presents an ethical dilemma: it can be done, but should it be? This technology allows researchers to "see" the thoughts of another; even though this is at an early stage, the direction this research may potentially take in the future is startling. As Dr. Adrian Burgess, a member of Imperial College London's department of cognitive neuropsychology, said: "'It could potentially be used to find out people's latent attitudes and beliefs that they are not aware of...You could use it to detect people's prejudices, intuition and things that are hidden and influence our behaviour'" (1). This basically invades the privacy usually afforded to each individual; there is a conscious choice about what is said or unsaid, and this technology could possibly obliterate this choice.

The idea of "mind reading" is disturbing to many people, so much so that when research planned by NASA several years ago had implications similar to the two aforementioned studies, the reaction was overwhelmingly negative. NASA then issued a press release stating that it "'does not have the capability to read minds, nor are we suggesting that would be done...We have not approved any research in this area and because of the sensitivity of such research, we will seek independent review before we do'" (5). Clearly, the idea of "mind reading" had elicited a negative reaction; the prospect that one's thoughts may become public knowledge is disturbing, albeit far off. (The aforementioned studies, however, did make advances in this direction.) Yet NASA did pursue some research in this area. Last year, it published the findings of a "mind reading study" it conducted. The researchers developed a computer program that detected nerve signals and patterns sent from the brain to the vocal cords. When a subject thought about speaking, they noticed that signals were sent to the vocal cords even if the subject said nothing aloud. These signals were interpreted by the program and then translated into specific words (4). The program could only recognize approximately sixteen words (ten of which were numbers), but this is a step toward interpreting internal monologue (mind reading).

In conclusion, modern technologies have become increasingly complex, allowing scientists to see which regions of the brain are active during a given activity. This essentially erodes the barrier between perception and actual thought, as studies involving neuronal activity and brain imaging encroach on the inner workings of the human mind. Although these studies have clearly brought forth new knowledge and understanding of how the human brain works, the implications of this research also introduce new ethical dilemmas: it can be done, but should it be? Specifically, where is the dividing line between what should be private and public knowledge? What effects will these medical developments have within society and day-to-day human interaction? These developments illuminate and embody the ethical issues that accompany advancements in science. As this research continues to evolve and become increasingly complex, the intrusion into private thought will probably become much more noticeable, forcing the issue of individual privacy to the forefront. Ethics and personal rights now seem to be at odds with science and technology, forcing society to adapt in order to accommodate everyone. Ironically, it appears that these advances are forcing people to think about ethics and morality to define what is or is not right, which may account for the recent "return to religion" in American society. Basically, medicine and morality are "evolving" at different rates, and the result is a conflict between ethics and medicine. It will be interesting to see how this research progresses, the direction it takes, and the effects it will have on society.


1)Brain scan "sees hidden thoughts"

2)Predicting the orientation of invisible stimuli from activity in human primary visual cortex

3)Decoding the visual and subjective contents of the human brain

4)NASA develops 'mind-reading' system

5)NASA Rejects Claims it Plans Mind Reading Capability

6)Brain imaging

Full Name:  MK McGovern
Username:  m2mcgove@brynmawr.edu
Title:  Empathy
Date:  2005-05-11 13:24:22
Message Id:  15114
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Empathy is the ability to understand the internal experiences of another (11). It's the underlying mechanism that makes the outcome of sports events, movies, or video games interesting. It is the basic human characteristic that connects individuals, allowing for successful interactions and communication. When empathy is lacking, conditions such as autism can create a sense of separation in the individual, isolating her/him from others (2). Even minor differences in empathy levels can create very different approaches to communication. The stereotypical "collaborative" approach of women versus the "competitive" approach of men could be attributed to different levels of empathy between the sexes (16). As a fundamental component of communication, empathy allows people to understand each other, learn from each other, and live with each other. This paper examines the physiological basis for empathy, its adaptive benefits, and its connection to gender differences.

While the definition of empathy might be generally agreed upon, the underlying mechanism of empathy is currently a hot topic in scientific research. One theory, creatively named the "theory theory," proposes an analytical, evidential basis for empathy. People understand others' current actions using analysis based on previous experience of similar actions. Simulation theory, on the other hand, explains empathy by the modeling of others' experiences in one's own mind, i.e. putting one's self in another's shoes (8).

The recent discovery of "mirror neurons" in several areas of the brain has lent support to the simulation theory of empathy. Mirror neurons are so named because they fire not only when one is performing an action, but also when one is observing another perform that action. Essentially, mirror neurons allow one to experience what another is experiencing without going through the motions oneself. One's familiarity with the observed action impacts the activity of the mirror neurons, e.g. ballet dancers will have more mirror neuron activity when watching ballet than when watching another type of dance (2). Nevertheless, mirror neurons provide a powerful mechanism for understanding the actions of others, and thus a possible physiological basis for empathy.

In order to further clarify the role of mirror neurons in empathy, scientists have measured their activity in several contexts. One aspect of empathy is the ability to understand another's intentions. Mirror neurons are only activated when a goal-oriented action is observed, and only when that action is performed by a body part (rather than a tool, for example). This suggests a connection between mirror neurons and the ability to infer the potential movements of others (3). In addition, mirror neurons are more active when a goal-oriented action is viewed in a context that gives additional clues to an intention. For example, greater mirror neuron activity is present when a hand is observed picking up a cup in a dirty, disorganized environment than in a neutral environment, indicating a further inference, e.g. the person intends to clean up (5).

Another aspect of empathy, the ability to feel what another is feeling, also reflects the importance of mirror neurons. Brain areas activated when one is in pain are also activated when one observes another's pain. The anterior insula and anterior cingulate are activated in both cases, and higher empathy scores are correlated with greater activation (12). Interestingly, the anterior insula and anterior cingulate do not make up the entire "pain matrix," so more areas of the brain are activated when one is in pain than when one is observing another in pain (13). Perhaps it is not necessary or possible to fully experience another's physical pain. The effect of empathy is achieved without the distraction that additional activation of the pain matrix might create, where one's concern for one's own welfare might take precedence over attention to another's. The anterior insula and the anterior cingulate are also activated both when an emotion is observed and when it is felt, e.g. disgust, indicating that observation of an emotion creates a mental model of that emotion in the observer (14). The differences in brain activity between actual experience and observation could reflect the purpose of empathy, i.e. to understand another's experience. This understanding can be achieved without full immersion in another's feelings or experience, hence mirror neurons create a "reflection" of another's experience rather than completely recreating that experience.

This ability to understand one another is one of the main benefits empathy provides. It allows the social world to be translated and interpreted, and this knowledge can then be used to construct a neural representation of one's social environment (2). From an evolutionary standpoint, it could provide an especially efficient method for changing phenotypic behavior. Fitness is not just passed on genetically, but by means of communication and imitation. For example, if one person has some genetic difference that enables her/him to figure out how to catch a fish, that knowledge can be shared with others, and learned by others, even if they don't have the same genetic difference. Empathy provides an advantage in learning by facilitating communication, understanding, and imitation (2). Each individual is not required to solve every problem, but can observe others' experiences. Seeing another person in pain could lead one to avoid a dangerous activity without having to experience it personally to understand the risk (6). In fact, empathy may be the precursor to language. Mirror neurons are found in Broca's area, which is traditionally associated with language. This location may indicate a mirror neuron role in making humans "language-ready" by providing an initial method of understanding each other or communicating (4).

The importance of empathy is particularly apparent in disorders such as autism, where the ability to form social relationships and communicate with others is impaired. A recent study has found a link between mirror neurons and autism. In non-autistic people, EEG observation shows the suppression of mu waves both when performing an action and when observing an action (4). In autistic children, mu waves are only suppressed when performing an action, not when observing one. Since mu wave suppression is correlated with the mirror neuron system, the lack of suppression in autistic children implies a dysfunctional mirror neuron system (9). This suggests that incorporating mirror neuron stimulation into therapeutic programs for autism may lead to improvement.

Interestingly, autism is significantly more prevalent in males than females. In fact, some scientists suggest that the capacity for empathy is a critical cognitive difference between men and women. In general, women tend to score higher on empathy tests, and men score higher on system-oriented tests. High levels of testosterone in utero have been associated with anti-social behavior similar to that found in autistic children. Possibly autism is just an extreme version of the "male" brain, but then further explanation is required to understand the existence of autistic females (6). In addition, if empathy is beneficial and advantageous to the species, then high levels would be selected for, regardless of gender, and one would expect to see equal levels in men and women.

The disparity in empathy levels between men and women may be related to the evolutionary disparity in their offspring investment. Females produce fewer gametes than males, they provide energy to their offspring during gestation, and they have a certainty of genetic relation. These factors combined could make it especially advantageous for women to have a high level of empathy in order to protect and care for their genetic investment, i.e. their offspring (11). Structurally, female brains have several variations that could be correlated with higher empathy levels. Parts of the limbic cortex (which regulates emotional responses) are larger in women, a possible indication of relative importance. Women also have a greater density of neurons in parts of the brain associated with language processing and comprehension (18). Since mirror neurons have been found in areas of the brain associated with language, it would be interesting to see if the greater levels of empathy in women are associated with a greater number of mirror neurons in Broca's area.

With all the benefits that empathy provides, it seems that women would have an advantage over men due to their higher levels of empathy. However, throughout history, men have generally held the positions of power in society. Perhaps too much empathy is actually a disadvantage. If one must inflict pain to achieve power, then empathy could act as an obstacle. A certain amount of empathy is necessary to understand an opponent's intentions, and it is necessary to have an understanding of pain in order to realize its strategic usefulness. Perhaps it is in the application of this knowledge that the sexes differ. The lower levels of empathy in a male could lead him to destroy or dominate, while the higher levels of empathy in a female might lead her to please or appease. In general, men and women desire different outcomes in their interactions. Men seek to keep or attain the upper hand, while women seek to achieve consensus and closeness (16). It's possible that achieving a position of power actually costs women more than men, since women would feel the pain of those who lost power more acutely. However, it may be that the mechanism underlying empathy is slightly different in each gender, and it is not just the levels of empathy, but also the processing of empathetic experiences, that differs between genders.

It has already been discovered that men and women solve problems differently, respond to stress differently, and process emotional memory differently (18). Future research on mirror neurons and empathy should include comparisons between male and female brains in terms of mirror neuron location, activity level, and density. In addition, it would be interesting to see if higher levels of stress are correlated with higher levels of empathy, and if there is any difference in the way this stress is processed between men and women.

As the mechanistic underpinnings of empathy are unraveled, a greater understanding of human interactions will be achieved. Understanding how people communicate can illuminate ways to improve that communication, making interactions more productive. Gender differences in communication can be explored and understood in terms of empathetic processing, paving the way for future discussions on disparate gender representations in various environments.


Note that starred (*) sources are accessible only to Bryn Mawr, Haverford, and Swarthmore students through Tripod and double-starred (**) sources are informational, but not directly cited resources

1) McKinney, Merritt. (2003). "Brain Hard-Wired for Empathy." PreventDisease.com, online.**

2) "Mirror Neurons." (2005). NOVA, online.

3) Sylwester, Robert. "Mirror Neurons." (2002). BrainConnection.com, online.

4) Motluck, Allison. (2001). "Read My Mind." New Scientist, online.

5) Iacoboni, Marco, et al. "Grasping the Intentions of Others with One's Own Mirror Neuron System." (2005). PLOS Biology, online.

6) Bradshaw, J.L. and Mattingley, J.B. (2001). "Allodynia: a sensory analogue of motor mirror neurons in a hyperaesthetic patient reporting instantaneous discomfort to another's perceived sudden minor injury?" J Neurol Neurosurg Psychiatry, online.

7) Carr, Laurie, et al. "Neural Mechanisms of Empathy in Humans: A Relay from Neural Systems for Imitation to Limbic Areas." (2003). PNAS, online.**

8) Than, Kerr. "Scientists Say Everyone Can Read Minds." (2005). LiveScience.com, online.

9) "Autism Linked to Mirror Neuron Dysfunction." (2005). RxPG News, online.

10) Cohen, David. "Men, Empathy, and Autism." (2004). The Chronicle of Higher Education, online.

11) Hojat, Mohammadreza. "Development of Prosocial Behavior and Empathy In the Hand that Rocks the Cradle." (2004). WorldCongress.org, online.

12) Singer, Tania, et al. (2004). "Empathy for Pain Involves the Affective but not Sensory Components of Pain." Science, 303, 1157-1162.*

13) Jackson, Philip L., et al. (2004). "How Do We Perceive the Pain of Others? A Window into the Neural Processes Involved in Empathy." NeuroImage, 24, 771-779.*

14) Wicker, Bruce, et al. (2003). "Both of Us Disgusted in My Insula: The Common Neural Basis of Seeing and Feeling Disgust." Neuron, 40, 655-664.*

15) "Neuron Neuron on the Wall." PlayCube.org, online.**

16) McKenna, Eddie, et al. "Competitive vs. Collaborative: Game Theory and Communication Games." UPenn.edu, online.

17) Shen, Andrea. (2000). "Seminar: Stereotypes Persist about Women in Academia." Harvard University Gazette, online.**

18) Cahill, Larry. (2005). "His Brain, Her Brain." Scientific American, online.

Full Name:  Lauren Dockery
Username:  ldockery@brynmawr.edu
Title:  Facets of Addiction
Date:  2005-05-11 14:54:35
Message Id:  15117
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Often the general public perceives addiction as a psychological disorder and a form of weakness. Many pass judgment on the character of individuals addicted to substances such as opiates (narcotics) or other drugs of abuse and stereotype them as "bad people" lacking the self-restraint to avoid drug use and abuse. While addiction frequently does represent a psychological dependence, an individual's addiction can also represent a physical dependence, or even a complex mixture of the two. Proof of a physical rather than psychological addiction can be seen in controlled medical settings, especially in the prenatal exposure of neonates to controlled substances through drug abuse by the mother. Another example of physical addiction independent of psychological dependence can be seen in patients given narcotics for acute pain without giving consent; during weaning, these patients often experience severe symptoms of withdrawal. Addiction to drugs of abuse cannot merely be thought of as a psychological weakness of the abuser. As evidenced by patients with little to no psychological connection to controlled substances, recovery from substance abuse requires overcoming a strong physical dependence completely separate from the psychological need for drugs.

Most current definitions of drug addiction focus primarily upon behavioral abnormalities. Changes such as loss of control over the intake of a drug and continuing to take drugs without regard to the detrimental side effects typically define the behavior of an addict (7). Unfortunately for individuals suffering from addiction, society often regards behavioral issues as weaknesses that should be corrected by the sufferer, without regard to the chemical changes perpetuating the addiction. These problems often fall into the same category as depression, where people simply do not discuss the psychological "weakness," expecting the sufferer to 'snap out of it.' Current research focuses on the molecular and structural drug-induced neural plasticity responsible for the behavioral changes of addicts. In the view of researchers studying the molecular basis of behavioral modification, repeated exposure to controlled substances can alter the amounts and types of genes expressed in different areas of the brain. These changes in gene expression occur through perturbations in the regulation of gene transcription. The altered expression of these genes affects the functioning of individual neurons and the circuits of which they are a part (7). Other researchers focus on structural changes of the brain to explain the behavioral changes caused by addiction. Research in this area provides evidence that repeated exposure to drugs of abuse creates structural changes involving reorganization and branching of neurons. These changes mimic those seen as a result of learning and memory (8). The reorganization and structural changes of the neurons affect the functioning of routine neuronal impulses, thus modifying normal behavioral patterns. This view, however, relies upon a behavioral or psychological impetus to produce substance abuse and provides no physiological explanation for the beginnings of abuse.
The structural changes merely display a physical representation of the behavioral changes associated with addiction.

Scientists do not yet fully understand the mechanisms behind addiction or the brain's response to stimulation and rewards. However, the effects of drugs compare to those seen with electrical stimulation of the hypothalamus and surrounding structures. Ordinarily, rewards are only effective when the brain of an individual is in a drive state. For example, food represents a reward only if the individual receiving it feels hungry. The rewards of electrical stimulation, however, work independently of the drive state of the brain. Electrical stimulation goes as far as to create a drive state even when no underlying need exists, while using the neural circuits normally activated by the stimulus of a reward. Trained to self-administer intracranial electrical stimulation, laboratory rats often choose activation of the electrical pulse over the normal physical rewards of food or sex. Stimuli activate dopaminergic neurons in the hypothalamus, causing increased output of dopamine at the synapses between these neurons in the mesolimbic dopamine system. Currently, most researchers believe the mesolimbic dopamine system regulates the signals responsible for controlling biological drives and motivation (4). For example, this system regulates the sensation of hunger, produces the drive necessary to induce eating, and mediates the reinforcement gained by eating. While electrical stimulation can create a larger than normal release of dopamine, addictive drugs increase the rewarding effects caused by electrical stimulation. Drugs such as cocaine, amphetamines, narcotics, and nicotine all function as positive reinforcers in the mesolimbic dopamine system by facilitating the transmission of dopamine (4).

Psychoactive drugs increase the level of dopamine released at the synapses of dopaminergic neurons by blocking the dopamine transporters, thereby causing the dopamine to remain in the synaptic cleft for extended periods of time. The longer the dopamine remains in the synaptic cleft, the greater its effect. After repeated exposure to drugs of abuse, the body's reward center becomes accustomed to the effects. The tolerance developed as a result of adaptation to excessive exposure requires increased dosages in subsequent uses to achieve the same level of euphoria. However, after prolonged drug use, addiction can no longer be described solely as the desire for the increased positive reinforcement of the drug and the anticipation of the euphoria it produces. Instead, in the face of prolonged use, an actual physical dependence occurs, thereby changing the level of addiction. Once physically dependent, the drug abuser faces not only a psychological addiction impeding cessation of the drug; he or she also cannot simply quit because of the negative and often severe physical effects of withdrawal (4). Withdrawal from drugs such as opiates can include symptoms such as fever, night sweats, nausea, vomiting, headaches, leg cramps, abdominal pain, and visual changes, among other debilitating complaints (3). The body of the addict adapts its normal functions to include the addition of these drugs.
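The reuptake-blocking mechanism described above can be sketched with a deliberately simplified model (an illustration only, not a pharmacological simulation; all rates and values are invented): dopamine in the cleft is cleared at a rate set by the transporters, and blocking the transporters lowers that rate, so the same release produces a longer-lasting signal.

```python
# Simplified illustration: dopamine concentration remaining in the
# synaptic cleft after a single release event, cleared exponentially
# by transporters. A reuptake blocker is modeled as a lower clearance
# rate constant; all numbers here are arbitrary for illustration.

def cleft_concentration(initial, clearance_rate, t, dt=0.01):
    """Euler integration of dC/dt = -clearance_rate * C over time t."""
    c = initial
    for _ in range(int(t / dt)):
        c -= clearance_rate * c * dt
    return c

normal = cleft_concentration(initial=1.0, clearance_rate=2.0, t=1.0)
blocked = cleft_concentration(initial=1.0, clearance_rate=0.5, t=1.0)

# With transporters blocked, far more dopamine remains after the same
# interval, prolonging receptor stimulation.
print(f"normal: {normal:.3f}, blocked: {blocked:.3f}")
```

Real synaptic dynamics involve diffusion, receptor binding, and autoregulation, but this single-rate decay captures why slower reuptake means a stronger, longer-lasting dopamine effect.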

The same adaptations occur in infants exposed to drugs of abuse in utero, proving addiction can occur completely separately from psychological aspects. These babies, born addicted to the drugs their mothers expose them to, play no role in asking for the substances or perpetuating drug abuse by seeking additional doses. In the case of neonates, behaviors or behavioral adaptations to drug seeking cannot occur prior to birth, proving a completely separate physical facet of addiction. Neonatal abstinence syndrome, the neonatal form of adult withdrawal, presents in a high proportion of infants exposed to opiates or other drugs of abuse during a mother's pregnancy (5). A second example of a purely physical addiction can be seen in pediatric intensive care patients or even adults receiving opiates for pain management or sedation. Following surgery, adults often receive potent narcotics to manage postoperative pain, and sometimes require prolonged administration due to surgical complications. Many patients in these situations develop a physical dependence on the narcotics yet do not display the drug seeking and drug abuse often stereotyped as addiction.

Children in pediatric intensive care units receive sedation while on a ventilator to avoid frightening or stressing such a young patient. Doctors administer a combination of drugs, including benzodiazepines and narcotics, to maintain a sufficient level of sedation. Both of these drugs possess high potential for causing physical dependence following repeated exposure. Children spending extensive amounts of time on a ventilator typically require a slow and controlled weaning plan to prevent the adverse effects of withdrawal. Patients suffering from complex congenital heart disease and/or respiratory failure most often receive the combination of these drugs for sedation and pain management, thereby making them very susceptible to opioid and benzodiazepine withdrawal. In recent studies researchers tested the use of standardized weaning plans for dealing with this type of withdrawal through slow tapers. Interestingly, this study shows that patients given fentanyl, a type of narcotic, display a higher occurrence of withdrawal symptoms (2). Their withdrawal does represent a form of addiction; however, these patients receive the drugs without consent and therefore suffer only a physical dependence due to their bodies' adaptations to function with the narcotics and benzodiazepines in their systems.

The number of pregnancies in drug-addicted women, particularly those addicted to opioids, has risen over the past 30 years (5), thereby raising the number of infants addicted to drugs and suffering from neonatal abstinence syndrome at birth. Nationwide, women of childbearing age (ages 15-39) make up 22% of individuals who abuse drugs, and 91% of these women abuse heroin or other opiates (6). Opiate abuse causes the most severe fetal effects as well as producing the strongest cases of neonatal abstinence syndrome (1). Not only does drug abuse while pregnant produce drug-addicted babies, it can also cause adverse effects such as intrauterine growth retardation, reduced birth weight and head circumference, prematurity, and an increased rate of neonatal death or sudden infant death syndrome (SIDS) (6). Once born, the babies' bodies do not know how to function without the chemical effects of the drugs, producing profound neurological disturbances. This withdrawal manifests as central nervous system and gastrointestinal abnormalities, causing high-pitched crying, poor sleeping, tremors, increased muscle tone, vomiting, diarrhea, poor weight gain (5), fever, apnea, frantic sucking, and poor feeding in the addicted neonate (1). Recent studies, showing that administration of therapeutic doses of opiates such as tincture of opium or morphine drops reduces the severity of neonatal abstinence syndrome and shortens the hospitalization required for severely addicted neonates, attribute the previously mentioned symptoms to the effects of withdrawal from drugs of abuse (5).

Babies born addicted to narcotics or other drugs of abuse provide an excellent example of a purely physical addiction. The same proof can be seen in patients receiving sedation in pediatric intensive care units, because the children receive narcotics without any drug-seeking behavior and require weaning after repeated exposure. In both instances addiction manifests physically rather than psychologically, proving that an individual need not be the stereotypical "addict," with behaviors of drug seeking and abuse, to be considered addicted to drugs. In some individuals addicted to controlled substances, the physical changes in the brain due to unregulated physical addiction can lead to observable behavioral changes, thereby perpetuating dependence through drug abuse and drug-seeking behavior. Based upon this evidence, someone suffering from drug addiction does not necessarily represent an individual with a psychological "weakness" that should simply be overcome. Instead, a complex mixture of physical and psychological dependence must be dealt with to overcome an addiction.


1)eMedicine, Belik, Jaques MD. "Neonatal Abstinence Syndrome."

2)Franck, LS., Naughton, I, and Winter, I. "Opioid and benzodiazepine withdrawal symptoms in paediatric intensive care patients." Intensive Critical Care Nursing 20 (2004): 344-51.

3)eMedicine, Jain, Ashok MD. "Withdrawal Syndrome."

4)Kandel, Eric R., Schwartz, James H. and Thomas M. Jessell, ed. Principles of Neural Science. New York: McGraw-Hill, 2000.

5)Langenfeld, Stefan et al. "Therapy of the neonatal abstinence syndrome with tincture of opium or morphine drops." Drug and Alcohol Dependence 77 (2005): 31-36. Science Direct. 6 May 2005.

6)McElhatton, P.R. "Fetal Effects of Substances of Abuse." Journal of Toxicology 38 (2000): 194. InfoTrac OneFile Plus. 6 May 2005.

7)Nestler, Eric J. "Molecular mechanisms of drug addiction." Neuropharmacology 47 (2004): 24-32. Science Direct. 6 May 2005.

8)Robinson, Terry E., and Kolb, Bryan. "Structural plasticity associated with exposure to drugs of abuse." Neuropharmacology 47 (2004): 33-46.

Full Name:  Carly Frintner
Username:  cfrintne@brynmawr.edu
Title:  Nutrition and the Brain: Do Certain Foods Aggravate ADD?
Date:  2005-05-11 16:50:32
Message Id:  15119
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

By Carly Frintner
Paper #2 for Neurobiology and Behavior, Spring, 2005
Professor Paul Grobstein

I was diagnosed with ADD 2 years ago at age 19, and immediately, the doctors I was seeing recommended prescription drugs to help alleviate my symptoms. Since the time of my diagnosis, I have tried varying dosages of at least 4 different drugs to treat ADD: Strattera, Concerta, Ritalin, and methylphenidate (the generic form of Ritalin). I finally settled on methylphenidate because it does not cause noticeable side effects when I take it. (I experienced headaches, irritability, anxiety, and other side effects when I tried the others.)

It was not until this year, when I moved into Batten House, however, that I noticed how much the food I eat influences not only my mood and energy level, but my concentration. All meals cooked communally at Batten are prepared only with vegan ingredients. That means there are no animal products (meat, fish, eggs, dairy, gelatin, honey, etc.) in any of these meals. Additionally, we use organic ingredients in many of our recipes. Over time, I did not consume certain foods with as much regularity as I had before I lived in Batten, including milk, sugar, greasy foods, caffeine, alcohol, and foods with preservatives and artificial colors and flavors. I began to notice that when I did consume these things, I often felt much more sluggish, irritable, tired, and introverted. I found it significantly more difficult to start or stick to tasks, especially academic tasks. I began to wonder how some of my ADD symptoms might be affected by the nutrients I get (or don't get) from the foods I eat, and whether I might experience better results by altering my consumption patterns rather than by taking symptom-suppressing medications.

How does food affect my emotions, my energy level, my ability to think critically? Has ADD always been as prevalent as it is today? To what extent do external factors affect attention? Is it possible to use our "I-function" to override the factors that affect attention? Is it more difficult for people with ADD to do this than for those who do not have ADD? How frequently is ADD the result of genes vs. environmental factors? Can I significantly limit the degree to which ADD affects me by changing my eating habits? Could the effects of such changes be significant enough that I might not need to use prescription drugs to treat my ADD anymore? The research I found suggests that it is definitely worth a try.

To start on a very basic level: the brain needs glucose to function. It is "dependent on a second-by-second delivery of glucose from the bloodstream, as neurons can only store about a 2-minute supply of glucose (as glycogen) at any given time." (1). Blood sugar and blood insulin levels greatly affect the brain's ability to absorb glucose from the blood. Ideally, blood should have a normal blood sugar level and low blood insulin. Blood sugar levels rise rapidly when refined sugars and starches are consumed, causing blood insulin levels to increase "10-fold within minutes, and keep on increasing insulin to even higher levels for 2-3 hours." (1). The result is "a rapid glucose uptake by almost all body tissues, leaving far less than optimal supplies for the brain." (1).

After a large meal, or after eating heavy or rich foods, I often feel sleepy. Sometimes I feel downright exhausted. I know this is a common phenomenon: the Thanksgiving Dinner Effect is something I think most people have experienced. Many people have come to believe that the tryptophan in turkey and red wine causes the drowsiness we experience after a Thanksgiving feast. Actually, it is simply the large amount of food the body has to digest that draws blood away from other parts of the body, most significantly the brain. It wouldn't matter whether a person ate turkey and drank wine; the effect would occur either way, as long as he or she had consumed a large amount of food, and the more fat and sugar present in those foods, the stronger the effect (6). For someone with ADD, however, the effects may be significantly more debilitating.

When I recently began researching certain chemicals and nutrients in foods and the very specific ways in which they affect a person's body and brain, I found that I often do get enough of the vitamins and minerals that I need for my body and brain to be healthy and functional. However, I hadn't realized that I may be consuming other foods and chemicals that cancel out their benefits. I also found that stress can affect the way nutrients pass through the body.

Vitamins B, C, and E, magnesium, and zinc are only a few of the vitamins and minerals that are essential to brain function. Even when they are consumed in sufficient amounts, other types of foods can limit their intestinal absorption, rendering them useless in our bodies before they even reach the bloodstream. In the case of magnesium, "high intake of phosphate (common in meat, soft drinks and baked goods), calcium, fat, phytate (found in unleavened bread and wheat bran), lactose (milk sugar), oxalate (found in spinach, rhubarb, chocolate), and alcohol... all inhibit intestinal Mg absorption.... [Also,] urinary loss of Magnesium is caused by the stress hormones adrenaline and cortisol, diuretics (including caffeine), some antibiotics, digoxin, alcohol, high sodium, calcium and sugar intake...and birth control pills." (1).

"The cell membranes and synaptic endings of neurons in our brains and nervous systems are composed of DHA, an omega-3 essential fatty acid. These membranes go rancid unless protected with antioxidants [such as vitamins C and E.] Since most people don't get enough DHA, other types of fats are incorporated into the brain, but they do not function as well because they are the wrong shape." Also, "the all-important neurotransmitters are manufactured by the body from dietary sources." (2). In order for these neurotransmitters to function well, the B vitamins, magnesium, zinc, and Vitamin C must all be present in sufficient amounts. "Some studies have shown a relationship between fatty acid deficiencies and ADD, learning disorders, and behavior problems." (2). (For more information on Magnesium and the brain, as well as how deficiencies in B-vitamins, vitamin C, and Zinc affect the brain, go here (1). or here (2).)

Dr. Allen Buresz believes that ADD and ADHD symptoms may often be caused in part or in full by "allergies to one or more foods (usually milk, cane sugar, chocolate, American cheese, or wheat (with sugar, additives, and cow's milk being the most frequent problems.)" (2). He notes that the majority of patients he has treated have shown significant behavioral improvement upon removing these foods from their diet (especially after 6 weeks of maintaining the altered diet). I personally have noticed an improvement in my physical health since cutting out dairy products. When I occasionally consume dairy products now, I do sometimes notice that my attention falters. However, as I usually consume dairy in the form of ice cream or chocolate, it is hard to tell whether that effect is a result of the milk or the sugar.

Another topic worth examining is the way in which food additives affect mood and behavior. Many sources I reviewed for this paper mention the work of Dr. Benjamin Feingold. The Feingold Hypothesis asserts that food additives are responsible for learning and behavior disorders, or more specifically, that "food additives cause hyperactivity." (3). Feingold tested this hypothesis on children in 1973 with 1,200 cases, examining over 3,000 different food additives. "About half of them reported greatly improved behavior and another 25% reported some improvement in the clinical trials." (4).

Additional studies have found evidence to both support and refute Feingold's claims. The evidence I have seen, as well as my own experiences, are limited and very informal, yet for me, they do lend credibility to Dr. Feingold's case.

The most hyperactive boy I ever knew, a close friend from elementary school, had a diet that daily included Kool-Aid, soda, and fruit roll-ups, among other things. Likewise, two boys I babysat for one summer ate almost no natural foods. The foods they ate included (among other things) white bread, Ritz crackers, pepperoni with red dyes, American cheese, colorful and/or sugary cereals, potato chips, and cheese curls. One had severe ADHD, and the other was almost always irritable, restless, disobedient, and aggressive, frequently throwing temper tantrums. In contrast, another boy I babysat, whose parents were more conscious about limiting the amount of sugary and non-natural foods they gave him, was quiet, obedient, and calm whenever I saw him.

Within my own family, I have noticed that, as children, my sister and I were generally calmer, more attentive, less argumentative, and more obedient than my cousins, both male and female. I'm starting to think this may be due in part to the fact that my parents raised us almost exclusively on all-natural foods, with an emphasis on fruits, vegetables, whole grains, and alternative proteins (tofu, nuts, and beans for example) while my aunt and uncle allowed their children to eat a wider variety of junk food more frequently.

The following excerpt from a study by KL Harding, RD Judah, and C. Gant indicates several factors that contribute to ADHD (with implications for ADD, as many symptoms are the same):

"Numerous studies suggest that biochemical [causes] for AD/HD cluster around at least eight risk factors: food and additive allergies, heavy metal toxicity and other environmental toxins, low-protein/high-carbohydrate diets, mineral imbalances, essential fatty acid and phospholipid deficiencies, amino acid deficiencies, thyroid disorders, and B-vitamin deficiencies. The dietary supplements used [in the study] were a mix of vitamins, minerals, phytonutrients, amino acids, essential fatty acids, phospholipids, and probiotics that attempted to address the AD/HD biochemical risk factors. These findings support the effectiveness of food supplement treatment in improving attention and self-control in children with AD/HD and suggest food supplement treatment of AD/HD may be of equal efficacy to Ritalin treatment." (5).

For me, the results of this and other studies are not surprising—I know I feel better and think better when I've eaten natural, healthy foods, when I've taken my vitamins in the morning, and when I haven't eaten dairy products. I notice a big difference in my energy level and ability to focus based on what I eat. I know, for instance, that eating a vegetable sandwich with hummus keeps me alert and energized for hours, whereas having a can of soda and a few cookies gives me a boost of energy and attentiveness for a while, but then causes me to crash after about 30 minutes. I now understand much more about the neurobiological reasons I have those respective responses to those foods.

Maintaining a healthy diet is important for everyone, whether or not physical or mental differences or disabilities affect one's life. The evidence of the physical and emotional benefits that come with a healthy lifestyle is widespread. In the specific cases of individuals who have ADD or ADHD, however, it seems very clear that there are several foods and vitamin/mineral supplements that can help improve brain function in such a way that many of the negative patterns and symptoms that show up in individuals' moods and behaviors can be limited or even ended completely. For me, methylphenidate has been effective. However, I am newly encouraged and inspired by what I have found in my research to try to reduce the effects of ADD in other, healthier, more natural ways that will not wear off as the medication does.


1) Beating Attention Deficit Disorder, or Nutrition and Nootropics for Focus & Attention. James South, MA.

2) Attention Deficit Disorder and Hyperactivity Success (What are the true facts?). Dr. Allen Buresz, Natural Health and Longevity Resource Center.

3) Issues: ADHD and Food Additives (a review of Feingold's findings). Family Links International.

4) Food-Induced Attention-Deficit Hyperactivity Disorder: The Research. Shula S. Edelkind, 1998.

5) Study: Outcome-based comparison of Ritalin versus food-supplement treated children with AD/HD. KL Harding, RD Judah, and C. Gant (McLean Hospital, Belmont, MA, USA). PubMed, National Library of Medicine. From "Alternative Medicine Review," August 8, 2003.

6) Will Eating Turkey Make You Sleepy? The Facts About the L-Tryptophan Effect. Environment, Health and Safety Online.

7) More information on how vitamins and minerals affect your body and your brain

Full Name:  Kara Gillich
Username:  kgillich@brynmawr.edu
Title:  The Criminal Inside All of Us. Are We all Bad Apples?
Date:  2005-05-11 18:09:11
Message Id:  15121
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

"Whether it's shark attacks, wars, school shootings or child abductions, something in human nature gives people a sick thrill in such horrific voyeurism. That's what drives the information industry we like to call the nightly news. In the Civil War, spectators went out to watch the battle. Until fairly recently, watching public executions was regular entertainment for the masses. Few have the guts to admit it publicly, but we're all monsters." Michael Middleton ((1))

Is Michael right, are we all monsters? Does everyone have these feelings? What stops us from going on killing sprees, and when other people do, why do we enjoy hearing about it? Humans are the only known species that kills within its own species for fun. Other animals have an inhibition mechanism that keeps them from killing members of their own species without reason. Something must have happened in our evolutionary history that would explain why we developed differently. Some say it is the development of complex emotions and civilization that makes up for the lack of inhibitory mechanisms against killing within our species. However, humans still injure and hurt each other for no apparent reason, so our normal ways of inhibiting ourselves from hurting each other are not working ((2)). Some say these violent crimes against each other can be attributed to an inability to use logic and reason to assess a situation before making a decision. Until recently, people believed it was mostly environment that determined whether violent behavior was to be expected. However, new research says otherwise: genetics and nutrition could play up to a 50% role in violent behavior ((3)).
The desire to "pin down" the nature of evil, violence, or aggression in human beings has had a long history, as far back as the 17th century. The concept of human nature has been through many phases starting with the belief that humans were neutral creatures- neither inherently good nor inherently bad. We are shaped entirely by our circumstances and environment. Through time the belief has fluxed between thinking humans are inherently good or we are inherently bad. Charles Darwin was the first to see humans as having not only shared/similar characteristics with animals, but that human beings are bound to the same laws of competition, survival and natural selection, as animals are. The idea that human behavior could in any way be dependant on instinct was a radial conclusion, and Darwin's theories still stir debate today. However, the infamous Sigmund Freud, father of psychoanalysis, was the first to really define instinct as a drive inside of you that you cannot control. Animal behavior is completely controlled by instinct, and human behavior is at least partly controlled by instinct. Freud' theory says that the basics drives for humans are to reproduce and also to destroy ((2)).
Is Freud just a sick man with a pessimistic view of the human mind and its capabilities? Freud's theories have been widely accepted, though controversial, and if nothing else serve as a good starting place for exploring the "nature versus nurture" debate. If Freud is correct, then there must be something inside of us that controls our emotions, particularly aggression. Combining the theories of Darwin and Freud would mean that there is some reason why humans have adapted to being aggressive. Being aggressive toward other animals takes energy, and natural selection would not preserve a trait that wastes energy. In other words, if aggressive behavior has persisted, it must have been selected for over non-aggressive traits. If aggressive behavior can be and has been selected for, then there must be something genetically controlling it. There must be something more than environment affecting violent and destructive behavior in humans.
Waller, in his book Becoming Evil, gives an explanation as to why humans have the potential to be so destructive. He says that we are less than 10,000 generations away from being hunter-gatherers. Our ancestors needed to be aggressive in order to eat and survive; however, the evolution of traits has not had enough time to catch up with our development. Modern lifestyles do not require us to hunt for our food or fight for our survival. Waller comments, "there [can] often be a lag in time- very substantial in the case of human evolution- between a new adaptive problem and the evolution of the mechanism designed to solve it" ((2)). The human brain is still highly adapted to a hunter-gatherer lifestyle, even though new developments have allowed us to surpass our ancestors.
Combining modern technologies such as guns, bombs, and other advanced weaponry with the hunter-gatherer mindset can be a deadly mix. With violent crime rates on the rise, the search for a genetic explanation for aggression has never been more needed. The latest research has shown that a smaller prefrontal cortex can lead to aggressive and very violent behavior. The prefrontal cortex is found right behind the eyes and is believed to play a critical role in self-restraint and the rationalization of emotions. It is also a key component in feeling guilt and remorse and in social awareness. Using brain-mapping technology, scientists found that even an 11% reduction in prefrontal cortex nerve tissue leads to antisocial and violent behavior ((4)).
One factor believed to at least reduce aggressive behavior in small children is proper nutrition. Good nutritional habits promote brain growth and function in developing humans, so eating better, even during pregnancy, can potentially help curb aggressive behavior. A diet rich in omega-3 fish oil also showed a reduction in antisocial behavior, even in adults ((3)). The fact that food, or the nutritious elements found in our food, can have an impact on the brain and therefore on behavior is further evidence that aggressive behavior has a large biological component.
An equally interesting study dealt with the idea of revenge. "Tagged" water was followed throughout the bodies and brains of men playing a card game. When another player cheated, the men could decide whether they wanted to get revenge on that person by taking away some of his points. However, when the stakes were too high or too risky, the players who could get revenge usually decided against it. So even though a player wanted revenge, he decided against it. The results showed that the area of the brain most activated by revenge was the dorsal striatum; however, it is interesting to note that the prefrontal cortex was also highly active when a player was weighing the risks of punishing, or getting revenge on, another player against the possibility that the risks were too high. A researcher comments, "In fact, I believe that our evidence shows that people deal quite rationally with their emotions" ((5)). This implies that the prefrontal cortex does seem to control which emotions we choose to act upon, and that without a fully functional prefrontal cortex people are less likely to be, or perhaps incapable of being, rational decision makers.
Other researchers are studying chromosomes specifically, looking for any inherent differences in our genetic makeup itself that would lead to abnormal, violent behavior. Males born with the chromosome combination XYY are believed to be more inclined to commit dangerous and violent crimes. Known to be very tall and to have low IQ and low fertility, XYY men could be genetically predisposed from birth to be aggressive. Tests done in mental hospitals in the United Kingdom found that 7 men, all of whom had committed violent crimes, had XYY chromosomes ((6)). One flaw in jumping to the conclusion that a man who is XYY is therefore a threat to society is that only men already convicted of crimes were tested. There could be plenty of XYY men living perfectly normal lives without any record of abnormal behavior because they have done nothing wrong, but they do not factor into the study results because only previously convicted men were tested. Therefore it is hard to say that all XYY men are predisposed to abnormal behavior.
If all humans, seeing as we all came from the same ancestors, have the potential to be destructive, why aren't we? Culture and the rise of civilization have something to do with it. We learn to control our impulses or instincts as young children. We learn quickly that we will be punished if we do something wrong. If we do not fear punishment from other people, we fear punishment from ourselves in the form of guilt. People who lack this ability to learn "right from wrong" and to rationalize their actions are the ones who actually lash out and do violent things. Whether the size of the prefrontal cortex or an extra Y chromosome is the genetic cause of aggression is still unclear.
It would be interesting to see more research on the nutrition end of the discussion and how it could affect the size of the prefrontal cortex. An analysis of the evolution of XYY would also be worthwhile, to see whether it was more prevalent 10,000 years ago or whether the number of XYY men is on the rise. Maybe there are so few known XYY males because natural selection is beginning to select against this genetic combination, since aggressive behavior is no longer needed in our society. One other major flaw in all current research is the lack of study of violent women. There have been no recorded links between genetics and female aggressiveness.
If all the newest research is correct, then we could one day be able to determine, based on genetics or brain size, whether or not someone will be violent. Does this mean we have the right to punish them? Is it their fault that they lack the mental abilities to adjust to our society rather than to that of our hunter-gatherer ancestors? I would say no, because no genetic explanation for aggression or violence can completely explain our behavior. Even serial killers turn themselves in because they know they are doing something wrong, or that something is not right with their actions, even if they cannot stop themselves. Clearly environment and circumstances affect human behavior. The debate is only about figuring out the ratio between the factors and how the genetic factors can be minimized.

Works cited
1) Quotation website
2) Waller, James. Becoming Evil. New York: Oxford University Press, 2002.
3) Raine, Adrian. Biological Key to Unlocking Crime
4) Size of Brain Linked to Violence
5) Roach, John. Brain study Shows Why Revenge is So Sweet
6) Blame It On Your Genes
Works consulted
1. Levin, Jack. Mass Murder: America's Growing Menace. New York: Plenum Press, 1985.
2. Strean, Dr. Herbert. Our Wish to Kill. New York: St. Martin's Press, 1991.
3. Tithecott, Richard. Of Men and Monsters. Madison: The University of Wisconsin Press, 1997.

Full Name:  Shu-Zhen Kuang
Username:  skuang@brynmawr.edu
Title:  Compulsive Gambling: Why Gamble?
Date:  2005-05-11 23:05:46
Message Id:  15127
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

A game of tiles called mahjong was first introduced to me at the age of ten. My aunts and uncles seemed very fond of the game, in which small amounts of money were either won or lost among the four players. I always considered gambling a form of entertainment played occasionally with family and friends. As with other forms of entertainment, such as going to the movies or watching TV, gambling takes your mind off of daily life and involves you in a game of strategy and chance. Although a card game can be fun by itself, the excitement is amplified when money or any type of wager is involved. The thrill from gambling may be one of the reasons why gambling has taken over some people's lives. It has been estimated that 1% of American adults have compulsive gambling problems (1). Since people with this disorder seem to be unable to control their own actions, there may be a biological component to it. However, social background could also influence the likelihood of developing this disorder. What roles do biology and society play in the development of compulsive gambling among individuals?

Compulsive or pathological gamblers are characterized by their inability to control their gambling urges to the point that all they think about is gambling and ways to support this habit. Most compulsive gamblers ruin their own lives as well as the lives of family members (2). There are two types of compulsive gamblers: action and escape. Action gamblers typically think of themselves as highly skilled individuals. Therefore, they prefer to take a lot of risks in order to demonstrate their talents. As the name suggests, escape gamblers gamble to forget about unresolved problems and issues in their own lives. In this case, gambling is used as a form of relief from the depressed or anxious state the person is currently experiencing (1).

From a biological perspective, compulsive gambling is an addiction similar to drug and substance abuse, even though it is categorized as an impulse control disorder (1). The neurotransmitter dopamine seems to be involved in reward, pleasure, and addiction in the brain. Some doctors have noticed a link between Parkinson's disease and compulsive gambling. Since dopamine also controls muscle movement, patients with Parkinson's disease are given medication that stimulates the dopamine receptors in the brain. When higher dosages of this medication were given, some patients reported gambling problems as a side effect. When the dosage was lowered, patients reported decreased urges to gamble. Therefore, it is possible that dopamine is linked to compulsive gambling problems (3).

Another study reported that dopamine levels change in the brain when a person gambles. Subjects were given three different scenarios: one where the monetary reward was unpredictable, one where it was predictable, and one where no reward was expected or given (the control). The unexpected monetary reward, which most resembles gambling, increased dopamine levels in certain regions of the brain while decreasing them in neighboring regions. The predictable monetary situation showed no significant changes in dopamine levels. Since dopamine levels both increase and decrease in different regions of the brain, there seem to be more components to compulsive gambling than the dopamine reward system alone (4).

Other research has shown that the brain's prefrontal cortex, which is linked to judgment and decision-making, may be damaged in compulsive gamblers. People with and without gambling addiction took mental tests for this study. The results demonstrated that compulsive gamblers made more mistakes and bad choices in tasks involving decision-making, attention, and inhibitory control. An abnormality in the prefrontal cortex may be a contributing factor in compulsive gambling, or compulsive gambling may be damaging the prefrontal cortex; it is unknown which of these factors is causing the other (5).

Genetics is another biological factor in compulsive gambling. Studies have shown that some people are predisposed to develop gambling problems due to their genes, which could account for as much as 35% of the gambling problems found in individuals (1). If damage to the prefrontal cortex is causing compulsive gambling, genetics could be the origin of this abnormality. Since there is genetic variety among individuals, dopamine may not have the same effect in everyone's brain. Some individuals predisposed to gambling problems may have an entirely different experience of dopamine, one similar to that seen in substance addictions. As mentioned before, dopamine can be released when a person gets a thrill from gambling; in order to recapture the high levels of dopamine, the person must continue to gamble (3).

Other psychiatric disorders such as depression, mania, and substance and drug abuse are also present in many compulsive gamblers, and these disorders seem to contribute to compulsive gambling. For a person with depression, gambling provides some form of relief, possibly by causing the release of high concentrations of dopamine in the brain. In order to make the depression go away momentarily, the person continues to gamble and slowly develops a compulsive gambling problem. In the reverse situation, a compulsive gambler can develop depression from gambling because he or she goes bankrupt and loses his or her family, and then continues to gamble because of the depression. In both scenarios, compulsive gambling perpetuates itself no matter which disorder it originated from (6).

Although there are many biological aspects to compulsive gambling, the societal and environmental impacts cannot be ignored. The benefits from gambling institutions are numerous. Gambling is a $40 billion business in America, and every state except two has legalized some form of gambling. From an economic point of view, gambling is great for state governments since it is an easy way to generate tax revenue, and gambling institutions such as casinos attract many tourists and provide job opportunities in the area (7). Although many people gamble only for recreation, some develop gambling problems due to the availability of casinos, horse tracks, and other gambling sites commonly found in many places. Previous studies have shown that people living within 50 miles of a casino have a higher rate of compulsive gambling problems (1). Even though gambling institutions do not create economic expansion in the areas where they operate, gambling continues to be a growing industry. The American population will only become more vulnerable to gambling problems as more casinos and other gambling institutions are built (7).

Social background is another contributing factor to compulsive gambling. Higher rates of compulsive gambling are seen in African Americans compared to Caucasians. People with low income and limited education are at higher risk of developing a gambling problem because they are more likely to visit casinos and buy lottery tickets for that chance to win big. The temptation of winning an extraordinary amount of money may slowly develop into a gambling problem and eventually compulsive gambling (1).

Certain demographics are more at risk of developing gambling problems than others. Because gambling is becoming more accessible online, teenagers are more likely to develop gambling problems than adults; the addiction is starting at a younger age because teenagers are learning how to gamble before they can legally participate in real gambling situations (8). The elderly demographic (65 and older) may be more vulnerable to gambling problems as well. Seniors often visit casinos for entertainment, and most gamble only for fun. However, some seniors are at risk of becoming compulsive gamblers, which is especially problematic since many live on a restricted income. Seniors' mental status is also more likely to be impaired by age, which may affect their ability to gamble reasonably (9). Although seniors and teenagers are more at risk for different reasons, gambling problems are present in every age demographic.

In the past, women were not allowed to participate in gambling games due to social restrictions. However, as women gained equal rights with men, more women began to gamble as well (2). Although the majority of compulsive gamblers are men, women are also developing gambling addictions (1). As the number of women with gambling problems increased, it was thought that women exhibited a different type of gambling problem than men. However, this was not the case. In Arizona, 95% of women who called for help belonged to the category of escape gamblers, but 63% of men were identified in the same category; the remaining portion of both men and women were considered action gamblers. Gender did not seem to determine the type of compulsive gambling (10), (11).

Treatments for compulsive gambling have been similar to those used for other addictions. Psychodynamic therapy, 12-step programs, motivational interviewing, and cognitive behavioral therapy are among the treatments used. However, medications are used only to treat the depression or other psychiatric disorders that come along with compulsive gambling; their effectiveness against compulsive gambling itself is unknown because so few studies have been done in this area. As a result, drugs are not widely used to treat this disorder. Treatments are typically used in combination, but the effectiveness of each individual treatment is unknown. Due to the low success rates of 12-step programs and the short-term effectiveness of other treatments, many people who accept treatment for this disorder slip back into old habits (1).

With the continual growth of the gambling industry, compulsive gambling will become a larger problem than it is now. It is uncertain which types of treatment have a greater effect on compulsive gambling, if any effect at all. Without this information, it will be increasingly difficult to treat the growing number of compulsive gamblers seeking help in the future. Clearly, the amount of research money provided for this area of study is inadequate. The government and the gambling industry should therefore provide a larger budget for research into understanding current treatments and developing new ones. Medicines used to treat other addictions should be tested for their effectiveness against compulsive gambling, and more research could examine whether action and escape gamblers require different treatments. Although learning more about the effectiveness of treatments, individually or in combination, is necessary, preventive measures should also be taken. To reduce the number of compulsive gamblers in the future, schools should inform students about the development of this disorder and its consequences, and the government and gambling industry should consider hiring gambling addiction experts to identify and counsel people showing signs of gambling problems at casinos and horse race tracks (1).

Compulsive gambling is a complex disorder shaped by a combination of social and biological components. Although more research is needed on how differences in dopamine levels and the prefrontal cortex are connected with compulsive gambling, the social aspects of the disorder should not be ignored. Clearly, it is the interaction between the biological and the social components that determines the likelihood of becoming a compulsive gambler.


1)Problem Gambling

2)Differences in Pathological Gamblers in Arizona

3)The Dopamine Connection

4)Gambling increases level of brain chemical

5)Pathological gambling associated with brain impairments

6)Gambling Addiction

7)Gambling Facts & Stats

8)Problem Gamblers Show Brain Impairment

9)Senior Gamblers Play Dangerous Odds

10)Women Gamblers

11)Men Gamblers

Full Name:  Carly Frintner
Username:  cfrintne@brynmawr.edu
Title:  Nutrition and the Brain: Do Certain Foods Aggravate ADD?
Date:  2005-05-12 10:48:32
Message Id:  15128
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

By Carly Frintner
Paper #2 (v2) for Neurobiology and Behavior, Spring, 2005
Professor Paul Grobstein

I was diagnosed with ADD two years ago at age 19, and the doctors I was seeing immediately recommended prescription drugs to help alleviate my symptoms. Since my diagnosis, I have tried varying dosages of at least four different drugs to treat ADD: Strattera, Concerta, Ritalin, and methylphenidate (the generic form of Ritalin). I finally settled on methylphenidate because it does not cause noticeable side effects when I take it. (I experienced headaches, irritability, anxiety, and other side effects when I tried the others.)

It was not until this year, when I moved into Batten House, however, that I noticed how much the food I eat influences not only my mood and energy level, but my concentration. All meals cooked communally at Batten are prepared only with vegan ingredients, meaning there are no animal products (meat, fish, eggs, dairy, gelatin, honey, etc.) in any of these meals. Additionally, we use organic ingredients in many of our recipes. Over time, I did not consume certain foods with as much regularity as I had before I lived in Batten, including milk, sugar, greasy foods, caffeine, alcohol, and foods with preservatives and artificial colors and flavors. I began to notice that when I did consume these things, I often felt much more sluggish, irritable, tired, and introverted, and I found it significantly more difficult to start or stick to tasks, especially academic ones. I began to wonder how some of my ADD symptoms might be affected by the nutrients I get (or don't get) from the foods I eat, and whether I might experience better results by altering my consumption patterns rather than taking symptom-suppressing medications.

How does food affect my emotions, my energy level, my ability to think critically? Has ADD always been as prevalent as it is today? To what extent do external factors affect attention? Is it possible to use our "I-function" to override the factors that affect attention? Is it more difficult for people with ADD to do this than for those who do not have ADD? How frequently is ADD the result of genes vs. environmental factors? Can I significantly limit the degree to which ADD affects me by changing my eating habits? Could the effects of such changes be significant enough that I might not need to use prescription drugs to treat my ADD anymore? The research I found suggests that it is definitely worth a try.

To start on a very basic level: The brain needs glucose to function. It is "dependent on a second-by-second delivery of glucose from the bloodstream, as neurons can only store about a 2-minute supply of glucose (as glycogen) at any given time." (1). Blood sugar and blood insulin levels greatly affect the brain's ability to absorb glucose from the blood. Ideally, blood should have a normal blood sugar level and low blood insulin. Blood sugar levels rise rapidly when refined sugars and starches are consumed, causing blood insulin levels to increase "10-fold within minutes, and keep on increasing insulin to even higher levels for 2-3 hours." (1). The result is "a rapid glucose uptake by almost all body tissues, leaving far less than optimal supplies for the brain." (1).

After a large meal, or after eating heavy or rich foods, I often feel sleepy, sometimes downright exhausted. I know this is a common phenomenon; the Thanksgiving Dinner Effect is something I think most people have experienced. Many people have come to believe that the tryptophan in turkey and red wine causes the drowsiness we experience after a Thanksgiving feast. Actually, it is simply the large amount of food the body has to digest, which draws blood away from other parts of the body, most significantly the brain. It wouldn't matter whether a person ate turkey or drank wine; the effect would occur either way as long as he or she had consumed a large amount of food, and the more fat and sugar present in those foods, the stronger the effect (6). For someone with ADD, however, the effects may be significantly more debilitating.

When I recently began researching certain chemicals and nutrients in foods and the very specific ways in which they affect a person's body and brain, I found that I often do get enough of the vitamins and minerals that I need for my body and brain to be healthy and functional. However, I hadn't realized that I may be consuming other foods and chemicals that cancel out their benefits. I also found that stress can affect the way nutrients pass through the body.

Vitamins B, C and E, Magnesium and Zinc are only a few vitamins and minerals that are essential to brain function. Even when consumed in sufficient amounts, consumption of other types of foods can limit intestinal absorption, rendering them useless in our bodies before they even reach the bloodstream. In the case of Magnesium, "high intake of phosphate (common in meat, soft drinks and baked goods) calcium, fat, phytate (found in unleavened bread and wheat bran), lactose (milk sugar), oxalate (found in spinach, rhubarb, chocolate), and alcohol... all inhibit intestinal Mg absorption.... [Also,] urinary loss of Magnesium is caused by the stress hormones adrenaline and cortisol, diuretics (including caffeine), some antibiotics, digoxin, alcohol, high sodium, calcium and sugar intake...and birth control pills." (1).

"The cell membranes and synaptic endings of neurons in our brains and nervous systems are composed of DHA, an omega-3 essential fatty acid. These membranes go rancid unless protected with antioxidants [such as vitamins C and E.] Since most people don't get enough DHA, other types of fats are incorporated into the brain, but they do not function as well because they are the wrong shape." Also, "the all-important neurotransmitters are manufactured by the body from dietary sources." (2). In order for these neurotransmitters to function well, the B vitamins, magnesium, zinc, and Vitamin C must all be present in sufficient amounts. "Some studies have shown a relationship between fatty acid deficiencies and ADD, learning disorders, and behavior problems." (2). (For more information on Magnesium and the brain, as well as how deficiencies in B-vitamins, vitamin C, and Zinc affect the brain, go here (1). or here (2).)

Dr. Allen Buresz believes that ADD and ADHD symptoms may often be caused in part or in full by "allergies to one or more foods (usually milk, cane sugar, chocolate, American cheese, or wheat, with sugar, additives, and cow's milk being the most frequent problems)" (2). He notes that the majority of patients he has treated have shown significant behavioral improvement upon removing these foods from their diet, especially after six weeks of maintaining the altered diet. I personally have noticed an improvement in my physical health since cutting out dairy products. When I occasionally consume dairy now, I do sometimes notice that my attention falters. However, since I usually consume dairy in the form of ice cream or chocolate, it is hard to tell whether that effect is a result of the milk or the sugar.

Another topic worth examining is the way in which food additives affect mood and behavior. Many sources I reviewed for this paper mention the work of Dr. Benjamin Feingold. The Feingold Hypothesis asserts that food additives are responsible for learning and behavior disorders, or more specifically, that "food additives cause hyperactivity." (3). Feingold tested this hypothesis on children in 1973 with 1,200 cases, examining over 3,000 different food additives. "About half of them reported greatly improved behavior and another 25% reported some improvement in the clinical trials." (4).

Additional studies have found evidence to both support and refute Feingold's claims. The evidence I have seen, as well as my own experiences, are limited and very informal, yet for me, they do lend credibility to Dr. Feingold's case.

The most hyperactive boy I ever knew, a close friend from elementary school, had a diet that daily included Kool-Aid, soda, and fruit roll-ups, among other things. Likewise, two boys I babysat for one summer ate almost no natural foods. The foods they ate included (though not exclusively) white bread, Ritz crackers, pepperoni with red dyes, American cheese, colorful and/or sugary cereals, potato chips, and cheese curls. One had severe ADHD, and the other was almost always irritable, restless, disobedient, aggressive, and prone to temper tantrums. In contrast, another boy I babysat, whose parents were more conscious about limiting the amount of sugary and non-natural foods they gave him, was quiet, obedient, and calm whenever I saw him.

Within my own family I have noticed that, as children, my sister and I were generally calmer, more attentive, less argumentative, and more obedient than my cousins, both male and female. I'm starting to think this may be due in part to the fact that my parents raised us almost exclusively on all-natural foods, with an emphasis on fruits, vegetables, whole grains, and alternative proteins (tofu, nuts, and beans for example) while my aunt and uncle allowed their children to eat a wider variety of junk food more frequently.

The following excerpt from a study by KL Harding, RD Judah and C. Gant indicates several factors that contribute to ADHD (with indications for ADD, as many symptoms are the same.)

"Numerous studies suggest that biochemical [causes] for AD/HD cluster around at least eight risk factors: food and additive allergies, heavy metal toxicity and other environmental toxins, low-protein/high-carbohydrate diets, mineral imbalances, essential fatty acid and phospholipid deficiencies, amino acid deficiencies, thyroid disorders, and B-vitamin deficiencies. The dietary supplements used [in the study] were a mix of vitamins, minerals, phytonutrients, amino acids, essential fatty acids, phospholipids, and probiotics that attempted to address the AD/HD biochemical risk factors. These findings support the effectiveness of food supplement treatment in improving attention and self-control in children with AD/HD and suggest food supplement treatment of AD/HD may be of equal efficacy to Ritalin treatment." (5).

For me, the results of this and other studies are not surprising—I know I feel better and think better when I've eaten natural, healthy foods, when I've taken my vitamins in the morning, and when I haven't eaten dairy products. I notice a big difference in my energy level and ability to focus based on what I eat. I know, for instance, that eating a vegetable sandwich with hummus keeps me alert and energized for hours, whereas having a can of soda and a few cookies gives me a boost of energy and attentiveness for a while, but then causes me to crash after about 30 minutes. I now understand much more about the neurobiological reasons for my responses to those foods.

Maintaining a healthy diet is important for everyone, whether or not physical or mental differences or disabilities affect one's life. The evidence of physical and emotional benefits that come with a healthy lifestyle is widespread. However, in the specific cases of individuals who have ADD or ADHD, it seems very clear that there are several foods, or vitamin/mineral supplements that can help improve brain function in such a way that many negative patterns and symptoms that show up in individuals' moods and behaviors can be limited or even ended completely. For me, methylphenidate has been effective. However, I am newly encouraged and inspired by what I have found in my research to try to reduce the effects of ADD in other, healthier, more natural ways that will not wear off as the medication does.


1) Beating Attention Deficit Disorder or Nutrition and Nootropics for Focus & Attention., James South, MA.

2) Attention Deficit Disorder and Hyperactivity Success. (What are the true facts?), Dr. Allen Buresz, (Natural Health and Longevity Resource Center)

3) Issues: ADHD and Food Additives. (A review of Feingold's findings.), Family Links International.

4) Food-Induced Attention-Deficit Hyperactivity Disorder: The Research., Shula S. Edelkind, 1998.

5) Study: Outcome-based comparison of Ritalin versus food-supplement treated children with AD/HD.
KL Harding, RD Judah, and C. Gant. (McLean Hospital, Belmont, MA, USA.)
PubMed, National Library of Medicine.
, From "Alternative Medicine Review." August 8, 2003.

6) Will Eating Turkey Make You Sleepy? The Facts About the L-Tryptophan Effect., Environment, Health and Safety Online

7) More information on how vitamins and minerals affect your body and your brain

Full Name:  Lavinia Fiamma
Username:  lfiamma@brynmawr.edu
Title:  Alzheimer's disease
Date:  2005-05-12 12:22:42
Message Id:  15131
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Alzheimer's disease
Lavinia Fiamma
Neuro Biology 2005

Alzheimer's disease is an irreversible brain disorder with no known cause or cure. It slowly steals the minds of its victims, causing memory loss, confusion, impaired judgment, personality changes, disorientation, and loss of language skills. It is a fatal disease and can be described as an irreversible dementia.
Some 360,000 cases of Alzheimer's disease are diagnosed each year, and studies project that by 2050, 14 million Americans will have the disease. Alzheimer's disease is becoming more common every year. Although its cause is still a mystery, there are two "pathological features that appear in the brains of Alzheimer's victims": amyloid plaques and neurofibrillary tangles. Typical of Alzheimer's disease is the buildup of amyloid plaques between the neurons in the brain. Amyloid is a term used to describe protein fragments that the body produces naturally. Beta-amyloid is a piece of protein cut from a larger protein called amyloid precursor protein (APP) (1). In a normal, healthy brain, these protein fragments are broken down and removed; in a brain with Alzheimer's disease, the fragments instead accumulate to form hard, insoluble plaques.
Neurofibrillary tangles are insoluble twisted fibers found inside brain cells. They consist of a protein called tau, which forms part of a structure called a microtubule. Microtubules help the body transport important substances, such as nutrients, from one part of the nerve cell to another. In someone with Alzheimer's disease, this transport fails because the tau protein is abnormal and the microtubule structures collapse.
Alzheimer's causes a general reduction of brain tissue. The grooves in the brain, called sulci, are greatly widened, and there is a shrinking of the gyri, the folds on the brain's outer layer. The chambers within the brain that hold cerebrospinal fluid are enlarged. In the early stages, short-term memory begins to decline as the cells in the hippocampus (part of the limbic system) deteriorate. The patient becomes unable to perform routine tasks, and as the disease spreads through the cerebral cortex, judgment also declines, along with the ability to use language. As the disease progresses, more nerve cells die, causing behavioral changes such as agitation. In the final stages, the ability to recognize once-familiar faces is totally lost, along with all forms of communication. Patients are unable to control their bowel or bladder, meaning they need constant care. This state of complete dependency can last for years until the patient dies. Once diagnosed with the disease, a patient is expected to live four to eight years.

There is no known cure for this disease; however, many research programs are desperately attempting to find one. For example:

"Alzheimer's Disease Research (ADR), a program of the American Health Assistance Foundation, was established in 1985 to fund research on and educate the public about Alzheimer's disease. Since the program's inception, ADR has awarded more than $43.3 million to support promising research in fields ranging from molecular biology to epidemiology. AHAF is honored to have played a role in Dr. Stanley Prusiner's Nobel Prize in Medicine in 1997 for his landmark research on prions. More than $1.2 million in research grants has been awarded to Dr. Prusiner through its ADR program to develop his prion theory as a model for Alzheimer's disease." (2) .
There are medications currently available to help control symptoms such as agitation, depression, or delusions, all of which occur as the disease progresses. Five FDA-approved drugs can help slow the development of AD: Cognex, Aricept, Exelon, Reminyl, and Namenda. The first four slow the metabolic breakdown of acetylcholine (sufferers have low levels of this chemical), making more of this brain chemical available for communication between cells. Namenda is used to treat moderate to severe AD, the first drug approved for that purpose. It appears to protect the nerve cells in the brain against large amounts of glutamate, a chemical released in large quantities by brain cells damaged by the pathological processes associated with AD.

Medications used to control depression and anxiety can help patients in the middle stages of AD; even though they are not designed for AD, doctors can include them in a treatment plan. High doses of vitamin E (alpha-tocopherol) and selegiline are able to slow the progression of the disease to its most severe stage. They work by slowing the loss of brain cells that occurs with the disease; selegiline, which is also used for patients with Parkinson's, actually increases the supply of a brain chemical that diminishes during the disease.

Older women taking combination hormone therapy (estrogen and progestin) had twice the rate of dementia and AD compared to those who did not take it, according to research by the Women's Health Initiative (WHI). The researchers concluded that combination hormone therapy should not be used in older postmenopausal women for any reason. Estrogen alone is also said to increase the risk of dementia if it is not taken with progestin.

Other suggested treatments include medications such as Advil or Motrin, easily accessible over-the-counter drugs that may lower the risk of developing AD or slow its progression. Another treatment is Ginkgo biloba, which is made from the leaves of the ginkgo tree. Its properties are anti-inflammatory, antioxidant, and anticoagulant, and it also increases blood flow to the brain. Some studies have shown it to be beneficial in treating some of the symptoms of AD, although the Journal of the American Medical Association reported that it was not helpful in treating patients with the disease. It may cause bleeding if combined with aspirin, and its long-term effects are thus far unknown.
Studies have shown that physical activity appears to slow Alzheimer's. The research confirmed that long-term physical activity improved the learning capability of mice and diminished the level of "plaque-forming beta-amyloid protein fragments" in their brains. Lifestyle interventions may therefore help to slow the progression of AD. Scientists are still unsure whether physically or cognitively stimulating activity might delay the progression of Alzheimer's disease in humans, but this study has shown, in an animal model system, that a behavioral intervention such as exercise could delay or even prevent the development of AD (3).
There have also been interesting studies regarding who develops the disease. Latinos living in the US with Alzheimer's develop their first symptoms at a significantly younger age than non-Latinos, according to a report in the May Archives of Neurology.
"These findings clearly point to the need for extensive, systematic epidemiological studies targeting the U.S. Latino population, our largest and fastest growing minority group," says Maria Carrillo, Ph.D., Alzheimer's Association director, medical and scientific affairs. "The data also suggests that Alzheimer's Disease Centers across the country need to commit time and resources to meet the needs of this population, a huge undertaking but clearly required based on the projected numbers."
One possibility as to why the disease appears earlier in Latinos is the great stress involved in moving to a new country: living as an immigrant may make associated symptoms such as anxiety and depression, both common in people with the illness, more noticeable in Latinos (4).
In conclusion, this disease in particular is still waiting for a complete cure. Although many different methods of slowing the disease have been discovered, scientists are still searching for a cure that could potentially save over four million Americans.
1)Alzheimer's Disease

2)Alzheimer's disease

3)Alzheimer's disease

4)Alzheimer's disease

Full Name:  Nadine Exie Huntington
Username:  ExieH@aol.com
Title:  How Comparable are Clones? Do We Have the Power to Change What is Already Written for Us?
Date:  2005-05-12 13:24:56
Message Id:  15132
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

A recent study on water-fleas shows that genetically identical crustaceans can adopt completely different physical make-ups depending on the environment in which they were raised. These animals glide over water and can be easy targets for fish and other predators. Some, however, have developed a "helmet," a hard shell around the head that makes it more difficult for fish to eat them. While one could simply attribute this to survival of the fittest, those with the helmet and those without are exact DNA clones of one another, making that conclusion false.

This variation within the same species suggests that environment plays just as strong a role as genetics. While these fleas come from the same parents, those raised in an aquarium with no predators did not develop the helmet armor. When the smell of fish was introduced into a separate aquarium, the fleas grew the helmet. This emphasizes the role of nurture, not simply nature, as "it can elicit markedly different traits from the same DNA" (1).

The argument in favor of nurture is by no means limited to water-fleas, and has surfaced in several other recent studies. Another example is found in the oak-tree caterpillar. Depending on what time of year the caterpillar hatches, it takes on a different physical appearance: those that hatch in the spring eat the flowers of the tree and come to resemble oak blossoms, while those born in the summer eat leaves and resemble twigs. This physical distinction is a result of their environment and diet, not of a genetic disparity (2). Does this mean that if given a third or fourth environment they would change again? This distinction does not evolve over hundreds of generations, but arises within a single lifetime. Eric Turkheimer explains that "Once a new environment comes along it can change everything, so what you thought was a fixed effect of a gene isn't" (3).

In a study examining neuroticism in mice, similar results were found. Scientists had assumed that neuroticism was a result of DNA because a gene had been located for it. Nevertheless, the behavior patterns associated with neuroticism (anxiety, fear, and agitation) were all reversible if the mice were raised in a more loving environment. Regardless of which gene variant the mice possessed, their level of neurosis was determined by the amount of maternal care (grooming, physical contact, etc.) they received as infants (4).

These findings are not reserved for mice alone, but have been examined in human beings as well. Certain traits that used to be considered innate are now being attributed to life experiences. The gene MAOA, for monoamine oxidase-A (5), has frequently been associated with violence, especially in men (as it is sex-linked). Those who produce inadequate amounts of MAOA were thought to be more violent than those who produce normal amounts. In a study tracking 442 men, researchers found that whether or not there was a scarcity of MAOA in the brain did not matter, so long as the men had been raised in loving and supportive families. Those who lacked the MAOA enzyme and had been raised in unstable, violent households were twice as likely to commit crimes, be antisocial, and be physically abusive (6). Again one sees that genetically comparable individuals can display polar traits when raised in contradictory environments.

What then of the countless stories of identical twins raised apart from each other who display similar traits later in life? They are genetic clones of one another and may display certain similarities regardless of their respective environments. Perhaps their environments were analogous in certain key factors. If one had been raised in an abusive household while the other was exposed to a nurturing and loving family, conceivably they would no longer exhibit identical behavior. Small personal habits, such as nail biting or reading a magazine from end to beginning, may be associated with identical twins, but larger life patterns like violence versus passivity are shown to be a result of environment, in spite of genetic make-up. Science historian Evelyn Fox Keller states that "what our biology really gives us is our plasticity, our ability to respond to our experiences. That's what's innate" (7). If we are all born with the potential to be one of two, or three, or possibly more individuals, regardless of our genetic make-up, how do we know who we really are?

Does this mean that I can really be whoever I want to be? Is my life's destiny truly in my hands? Yes, I have been dealt a set hand of genetic cards with which to play, but perhaps each of those cards holds multiple outcomes depending upon how I elect to use them. I can no longer change whether or not I received affection and love as a child, so perhaps that strain of my personality is already determined, but there must be other roads still open to interpretation. Or is there an age at which one's personality is, for the most part, fixed? The oak caterpillars do not have the ability to change from looking like blossoms to looking like twigs after they have chosen one form, so how much room for change do I have this late in life? Do we stop developing as individuals at a certain point and simply act in accordance with the way our genes have been shaped thus far? If so, when? Early childhood is when we are most impressionable, but we definitely change tremendously in all aspects of ourselves at puberty, and many people continue to pursue their academic education long after puberty ends. If I change my environment now, will I be a different person, or is it too late? My DNA will remain the same, but will my personality be affected?

This then poses the question of who I am outside of my genetics. I am a person who goes to Bryn Mawr College, and I would still be a physical clone of that person if I had attended school in another country. If those two people were to meet, however, no one could claim that they were the same, after having had such differing experiences. If I do not like who I currently am, have I been given the choice to change myself, if not genetically then on another level? On a less philosophical level, I have the capability to change some aspects of a supposedly predetermined fate. If I have a gene for high cholesterol, I can resist its effects by controlling my dietary fat intake (8). Am I like the water-fleas? If I am thrown into a hazardous environment will I find previously untapped resources within myself to help me cope with those new challenges? Who knows what I am capable of, if necessary? I think I know myself, but I have no idea, and can have no idea, of my full potential until it is tested.

The bottom line seems to be that "different environments can produce different [traits] from the same genotype" (9). This makes me believe that I only know a very small part of myself, as I have really only been exposed to one life, and one series of events. To better understand every aspect of me and every facet of my personality I would have to experience a multitude of environments. To discover every side of my being I would have to try everything in every single combination in the world. This is, of course, impossible, as there are an infinite number of possible ways in which to live one's life, experiences to have, places to see, people to meet, and ideas to explore. If there are an infinite number of ways to live, then there must also be an infinite number of people I could become, despite my one set of genes. My DNA is fixed, and has been since I was conceived, but my life is flexible, and those genes could lead me down many different roads.

If water-fleas can change their physical bodies, and mice and humans can have different levels of anxiety and aggression depending on how they were nurtured, I would hope that I can alter some of my less favorable qualities. I do believe, however, that some of our traits become less malleable after a certain age. I wonder what this age is, but I imagine that it must vary from person to person and depend on the trait in question. For example, convicted sex offenders are fairly likely to commit a sex crime again, but are significantly less likely to do so if they receive treatment. This shows that they have a predisposition to assault, but with an environmental intervention, they have the potential to change (10). Is there a point at which one can't teach an old dog new tricks? Is the learning dependent upon the trick or the age? Are these alterations possible due to many open gateways in a defined number of genes?

After learning about the possibility for change, in spite of my predetermined DNA, I feel as though I must examine more closely my own actions and the effects they have on my life and those around me. This same question has plagued the minds of people for centuries. In mythology there is the recurring theme of whether we are masters of our own destiny or merely playthings of the Gods. If we consider our genes to be akin to the Gods of that time, in that we are, to a certain extent, at their mercy, we are still asking ourselves this same question. Now that we have scientific evidence to support our potential for mastery of our fates we must, as a society, make a more concentrated effort to use this ability to our advantage. We cannot alter the way in which we ourselves were raised, but we can try harder to raise future generations in a certain manner. If creatures as simple as water-fleas can change due to their environment, so too can people. This is a huge discovery, and therefore an immense burden and responsibility for us. We can be masters, to a degree, of our brains, and those of others, and as such we must assume this power with seriousness and care in order to get things "less wrong" (11).


1) Water-Fleas Case Shows that Ability to Adapt is What's Really Innate, The Wall Street Journal, Sharon Begley. April 22, 2005

2) Professor Eric Turkheimer, University of Virginia

3) Professor Eric Turkheimer, University of Virginia

4) Michael Meaney, McGill University

5) Genes Explain Why Some Kids Grow Up to be Violent, Abusive. The Wall Street Journal, Sharon Begley. September 20, 2002

6) Water-Fleas Case Shows that Ability to Adapt is What's Really Innate, The Wall Street Journal, Sharon Begley. April 22, 2005

7) Evelyn Fox Keller, Massachusetts Institute of Technology

8) Terrie Moffitt, University of Wisconsin, Madison and King's College, London.

9) Terrie Moffitt, University of Wisconsin, Madison and King's College, London.

10) The Kid Safe Network

11) Professor Grobstein

Full Name:  Jasmine Shafagh
Username:  Jshafagh@brynmawr.edu
Title:  Understanding Obsessive Compulsive Disorder
Date:  2005-05-12 14:02:14
Message Id:  15133
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Obsessive Compulsive Disorder, also known as OCD, is a very complex and fascinating disorder, for its symptoms force doctors and scientific researchers to question the existence of free will in the brain. Because OCD sufferers experience unsettling thoughts (obsessions) and carry out actions to alleviate their distress and anxiety (compulsions) even though they know both are irrational, I believe that OCD is caused by irregularities in the brain, and as a result, that the brain must equal behavior. Furthermore, I believe that there must be an unknown part of the brain controlling free will, for OCD patients can minimize the symptoms of the disorder through behavioral therapy (i.e., they can control their symptoms through recognition of the disorder and through their own free will), which is known to be as effective as medication in alleviating OCD symptoms and behaviors. Thus, from my research, I believe that the brain ultimately equals behavior, for OCD symptoms and behaviors are controlled by irregularities in the brain, and that free will is also a component of the brain, for OCD patients use their free will to control obsessive and compulsive behaviors just as medicine does.

OCD is the fourth most common psychiatric disorder in the United States and affects one in every fifty people (1). It is a disease of obsessions (fears and unsettling thoughts that occur over and over again) and compulsions (repetitive behaviors and rituals that sufferers perform to alleviate their intense anxiety) (1). Avoiding the fearful thoughts and performing the rituals fortifies the fear and maintains the OCD cycle (3). OCD is fairly easy to diagnose because of its signature symptoms. The most common sign of OCD is when a person spends an inordinate amount of time washing, cleaning, checking things, repeating certain actions, or making sure that all the objects in his/her home are orderly and symmetrical. OCD sufferers may also have hoarding rituals, undoing rituals (such as repeating certain phrases or counting backward from one thousand to ward off disaster), superstitions, pure obsessions, and obsessional slowness (1). In a sense, they may be parting from the spectrum of normal behavior. The variety and intensity of OCD symptoms change over the years, and symptoms vary from person to person. Generally, OCD obsessions and compulsions worsen during stressful times, and at times, people may not even notice that they have the disorder (1).

So what causes OCD? Unfortunately, the definitive cause of OCD still remains a mystery to doctors and psychotherapists today. The root cause is believed to be hereditary (1). However, the biological explanation is more complex and requires more time and attention to comprehend. To describe what is going wrong in the OCD patient's brain, we must first understand how "normal" brains work. In healthy, normally functioning brains, information is transmitted through the nervous system via a network of electrical and chemical signals that travel from one neuron to the next. Serotonin and other neurotransmitters travel from nerve cell to nerve cell across the fluid-filled gaps called synapses, passing along messages about moods and emotions by attaching themselves to receptors on neighboring nerve cells (neurons). Special receptor chemicals on the next neuron in the chain of communication are prepared to receive this message from a neurotransmitter, and that neuron will pass it to the next one, and so on (1).

One theory on the cause of OCD involves problems with certain neurotransmitters in the brain; experts believe OCD is caused by a chemical imbalance of the neurotransmitter serotonin, which can throw off the sequence of communication in the brain (1). This communication failure occurs when serotonin is reabsorbed, or sucked back into nerve cells, instead of crossing the synapse. As vital chemical messages are lost (because of the serotonin reuptake), OCD symptoms develop. Therefore, treatment for OCD, according to this theory, comes from drugs such as SSRIs, or selective serotonin-reuptake inhibitors (drugs such as Prozac, Anafranil, Zoloft, Luvox and Paxil), which work by selectively blocking the reabsorption of serotonin in the area of the neurotransmitter's receptors (1). Both the treatment for the faulty serotonin neurotransmission and this specific theory behind the cause of OCD strengthen my belief that the brain equals behavior.

Another theory for the cause of the disease is proposed by Dr. Jeffrey Schwartz, an expert on OCD and author of the book entitled Brain Lock. He believes that the tight and hyperactive linkage among the orbital cortex, the caudate nucleus, the cingulate gyrus, and the thalamus causes a "brain lock" situation, leading to repetitive and intrusive thoughts (1). According to his theory, brain lock occurs when the orbital cortex alerts the brain to a potential problem by unnecessarily becoming hyperactive and sending out the equivalent of false alarms. "When a false signal reaches other parts of the brain – notably the caudate nucleus (which helps in switching gears from one thought to another), the cingulate gyrus (which makes your stomach churn and your heart beat faster), and the thalamus (which processes signals from the cortex and other areas) – anxiety results. All four brain areas (which are a part of the basal ganglia) lock into a hyperactive overdrive, frantically zinging inaccurate distress signals back and forth to one another" (1). This brain lock results in the obsessions and compulsions of OCD. Dr. Schwartz also views OCD as a "shake in the mind," similar to the tremors seen in Parkinson's patients, because both disorders involve irregularities in the basal ganglia (1). The basal ganglia serve to combine the converging information from the different regions of the brain mentioned above (3). According to Dr. Judith Rapoport of the National Institute of Mental Health (NIMH), the basal ganglia are incorrectly stimulated in OCD patients, which causes the unwanted or excessive obsessions and compulsions (3). According to this brain lock theory, the notion that brain equals behavior is again strengthened, for abnormalities in the brains of OCD patients cause abnormal behaviors. Also, according to this theory, behavioral therapy may be implemented to treat the disease.

Behavior Therapy, or BT, can identify and change negative patterns of behavior in OCD patients. The goals of the program are to suppress their urges, calm the anxiety coming from obsessions, and reduce or totally eliminate rituals (3). The two best methods for this approach are exposure (openly confronting a situation or object that you fear) and response prevention (resisting urges to perform a certain ritual). In Dr. Schwartz's research, as mentioned above, 12 out of 18 patients responded well to the behavioral therapy (2). Because behavioral therapy (and one's free will) is able to minimize the OCD symptoms and behaviors caused by defects in the basal ganglia, with an effect equal to that of drugs (1), I not only believe that brain equals behavior, but am also convinced that the component of free will must be located in the brain.

Other theories explaining the cause and neurobiological basis for OCD come from studying the spectrum disorders that are genetically related to OCD, such as Tourette's syndrome, trichotillomania, body dysmorphic disorder, depression and autism (however, there is not yet enough data or information relating these to OCD). Another striking disease related to OCD is strep throat, a link discovered by a team of scientists led by Dr. Susan Swedo at the National Institute of Mental Health (NIMH). These doctors have found a link between group A beta-hemolytic streptococci and OCD. In susceptible children, a strep throat can trigger an autoimmune response affecting the basal ganglia and leading to the symptoms of OCD and tic disorders (1). In those children, strep infections, caused by the group A streptococcus bacterium, cause antibodies to attack the basal ganglia in the brain, leading to fears of contamination and OCD behaviors. This theory supports the previous one concerning the link between OCD and problems in the basal ganglia. Thus, again, I am convinced that brain equals behavior and that free will must be located in a certain unknown region of the brain.

In conclusion, based on the evidence and research I have done on OCD, to me it does seem that the brain equals behavior; for its structure and function control the patients' behaviors, and because free-will (behavioral therapy) can also change the behavior just as well as medication does. Thus, I also now believe that free-will must be an aspect that is a structural component of the brain. However, this concept complicates my argument and creates even more unanswered questions; for, if this is the case, then does this learned behavior (behavioral therapy) actually change our brains, making "behavior equals brain?" In my view, this can just be seen as a play on words, and in most extreme neurobiological cases, the notion that brain equals behavior actually prevails.

Overall, this final web paper, among other papers and discussions in the course, has solidified my knowledge of the brain and nervous system. Through each topic that I now research, I am able to integrate the same techniques and questions to come to a more solid understanding behind the cause of the specific disease or problem. Because of this course, I now realize that every abnormality, wiring, pathway, or circuitry in the brain and nervous system affects our behavior in many different ways. And by studying certain diseases such as OCD, I am able to understand what went wrong in the brain, what should happen to keep the body and brain functioning in a "normal" manner, and to what extent medicine/doctors or the patients themselves can play a role in fixing the problem.

I also now comprehend that the brains of different people are always going to be different, no matter what. There should be no ideal for attaining the "perfect" brain, and nobody should strive to be a certain way, for each individual needs to find peace and happiness within himself, and because each individual views life and his/her surroundings in a different way. Thus, the human race is a conglomerate of talent, uniqueness, individuality, and difference, and by learning from these differences or "flaws," we learn more about each other and ourselves. As Emily Dickinson put it, "The brain is wider than the sky, for put them side by side, the one the other will contain with ease – and you beside."


1) Schackman, Dr. Lynn., and Shelagh Masline. Why Does Everything Have To Be Perfect?: Understanding Obsessive Compulsive Disorder. New York: Dell Publishing, 1999.

2) OCD Hope , article containing information about the chemistry of obsession.

3) Psych Central, article about the causes of OCD.

Full Name:  Sae RomLily Yoon
Username:  syoon@brynmawr.edu
Title:  Williams Syndrome: A Blessing and a Curse
Date:  2005-05-12 20:58:12
Message Id:  15138
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Williams Syndrome: A Blessing and a Curse

You are at the park, and a child comes running up to you. You have never met her before, but she displays a familiar friendliness as if she's known you all her life. She's only three years old, but she has an amazing way with words that adds to her remarkable sociability skills and engaging personality. It's not obvious at first, but when you take a closer look at her, you notice distinctive physical traits. When you look into her eyes, she has a star-shaped iris with full periorbital surroundings. Aside from her eyes, she has a full nasal tip and flattened nasal bridge, a wide mouth with full lips and cheeks, a long indentation in the midline of her upper lip, and a small jaw. This child has Williams Syndrome.

People with Williams Syndrome (WMS) function in the mild range of mental retardation, possessing IQs averaging about 60 (1). They have a distinctive neuropsychological profile that includes strengths in face perception, affective attunement, short-term auditory memory and select aspects of language, countered by weaknesses in visuospatial, motor, visuomotor integration, and arithmetic skills (1). It is hypothesized that variations in the diverse brain regions that account for different functions within larger networks provide the physiological bases for the specific strengths and weaknesses found in Williams Syndrome. Although the overall cognitive abilities of individuals with WMS are typically in the mild-to-moderate range of mental retardation, the peaks and valleys within different cognitive domains make this syndrome especially fascinating to study. This uneven cognitive profile of strengths and weaknesses, together with the atypical development of specific areas of the brain, is one of the reasons the study of Williams Syndrome is so interesting to me.

Williams Syndrome is caused by a small genetic deletion on the long arm of chromosome 7, encompassing approximately 25 genes (2). The deleted region includes four genes that are highly expressed in the brain: FZD9, STX1A, LIMK1, and CYLN2. In addition, the whole brain volume is about fifteen percent smaller than normal, but the superior temporal gyrus, an area that encompasses primary auditory cortex, is of approximately normal volume (1). Preliminary structural MRI evidence suggests an exaggerated leftward asymmetry of the planum temporale (1).

As mentioned briefly before, people with Williams Syndrome possess a heightened interest in music and preserved language abilities. The language of individuals with Williams Syndrome sometimes seems precocious in its use of unusual words and conversational flourishes (3). These strengths may be due to neurological traits of their brain structure. Although the brain is smaller as a whole, a small study using auditory event-related potentials found increased amplitude of early endogenous components, suggesting hyperexcitability of the primary auditory cortex (1). Alterations of function in this area of the brain may explain the rates of hyperacusis and the language/music perceptual processes. Also, the planum temporale has been linked to hemispheric dominance for language. In musicians with perfect pitch, there appears to be an even more pronounced leftward asymmetry of this region, like that of a person with Williams Syndrome (1).

The distinctive cognitive profile of individuals with WMS includes relative strengths in language and facial processing and profound impairment in spatial cognition. Studies have shown that subjects with WMS fail to attain the level of conceptual restructuring that most normal children achieve by age 10 or 11, which suggests that they possess limited biological knowledge and understanding (4). Despite this handicap, they have relatively spared expressive language. Their use of noticeably complex language shows that language is a major strength in the WMS cognitive profile. Across the different stages of language development, the behavioral phenotype of WMS undergoes dramatic change over the first years of life, starting out with extreme delay at all developmental milestones, including language (4). WMS children are extremely behind at the initial stages compared to normally developing children, but they begin to display a strong advantage for grammar later in development, and they are able to say many words they do not understand. As grammar emerges, children with WMS in general improve dramatically, and they show relative strength in their facility and ease in using sentences with complex syntax, not generally characteristic of other mentally retarded groups (4). Given their level of other cognitive abilities, WMS individuals use sophisticated language, particularly in vocabulary. Sometimes these words are used correctly, but other times they are partially, though not completely, appropriate to the accompanying context. They often use words in the right semantic field but fail to convey the semantic nuances appropriately. They also show more of an inclination to use unusual words, which was shown through an experiment in which subjects were asked to name all the animals they could in a minute: WMS subjects not only gave significantly more responses than mental-age-matched normal controls, but they also gave many more uncommon animal names (4). Although WMS individuals are able to produce a variety of grammatically complex forms with sophisticated vocabulary, they make occasional morphosyntactic and systematic errors, especially when the language concerns spatial relations.

Despite these strengths, they also have profound visuospatial weaknesses. They have difficulty visualizing the spatial relationships between objects, their distances and overall configuration. They have particular weaknesses in dealing with numerical concepts, spatial cognition and abstract reasoning (3). In the mid-1990s, these deficits in visuospatial abilities were linked to the deletion of LIMK1, one of the four brain-expressed genes. However, further studies have shown that some individuals with deletions involving this gene still display intact spatial ability (1). This goes to show that there may not be one easy answer to explain a certain deficit or even a strength; it might come down to specific combinations of genes.

When WMS subjects were asked to draw an illustration, the product typically had poor cohesion and overall organization within the images. WMS subjects' spatial deficits tend to plateau and remain basically at the level of normally developing 5-year-olds (4). Drawings of a car by WMS individuals may show a door, a tire and headlights, but the parts might be depicted in an unrecognizable relationship to each other. It seems as though WMS subjects typically produce primarily the local forms and are impaired at producing the global forms (4). This suggests that people with WMS have a bias toward fractionation and an overattention to detail at the expense of the whole.

The functional segregation of visual processes in the brain is split by visual domain into a dorsal stream that connects the occipital cortices and the parietal lobe and a ventral stream of information flow from the occipital to the temporal cortices (1). Although this disorder causes spatial deficits, it seems to play a positive role in face perception and recognition. The fusiform gyrus, a region on the underside of the temporal lobes, seems to have a specific role in face perception. Specifically, the anatomical connection between the fusiform gyrus and limbic areas of the brain may be responsible for many of the emotional processes that serve face perception (1). Subjects with WMS demonstrate a remarkable ability to recognize, discriminate, and remember unfamiliar and familiar faces. While there are gross deficits in general cognitive abilities, subjects with WMS typically exhibit a distinctive pattern of peaks and valleys in visuospatial cognition: an emphasis on local over global processing and extreme fractionation in drawing, yet an island of sparing for processing, recognizing, and remembering faces (4).

There may also be a connection between keen face perception and social-cognitive skills. It is believed that being able to perceive faces keenly and to understand the emotional states of others through facial cues may be closely tied to social-cognitive skills and the ability to form and maintain social relationships (1). This hypothesis is supported by a study of autistic people, who display a lack of interest in social relationships and seem to fail to engage the fusiform gyrus during face perception tasks (1). While there is no activation of this region in autistic people, individuals with Williams Syndrome seem to have normal use of the fusiform gyrus for face perception. In other words, levels of fusiform gyrus activation can be related to levels of social relatedness. If this is true, can we, in normal brains, attribute personality differences in sociability to the way our brains, specifically the fusiform gyrus, react? And can we generalize this hypothesis to say that people who are keen at face perception and recognition have higher sociability skills?

This concept of connecting the fusiform gyrus to social relatedness may not work for normal brains and personality traits. In terms of brain studies, it was found that face recognition in healthy adults was associated with scalp voltage waveforms predominantly localized to the right hemisphere. In contrast, ERPs in adolescents and adults with Williams Syndrome were found to be distributed across both hemispheres and did not distinguish between human faces, monkey faces, and cars (3). In normal brains, the cortical specialization for face processing observed in adults is achieved through gradual experience-driven specialization of an initially more general-purpose visuospatial processing system. In Williams Syndrome patients, however, genetic effects during brain development generate initial cortical structures with different neurocomputational biases, which yield processing that is poorer overall but circuits with greater potential to process isolated features than configurations (3).

Individuals with WMS were significantly slower in development early in their lives, but then improved in language with increasing age. In contrast, in the visuospatial domain, subjects with WMS showed extreme deficit and limited change in ability across the age range studied (4). Although they performed at a higher level in face recognition, other areas of spatial understanding were extremely lacking. We can then ask whether there exists a relation between language and other aspects of cognition. Can language be separated from other general cognitive abilities? Does Williams Syndrome represent a dissociation between language and general cognitive function?

By continuing the study of individuals with WMS, we are able to learn about the separability of cognitive domains that normally develop together. Hopefully, others can learn that although some people with handicaps may score low on IQ tests, this does not completely dismiss their whole being as impaired. There are valleys and peaks within the test that show that, although they might lack in certain areas, they might also surpass a normally developed person and possess relative strength in other areas. There have been numerous tests to better understand the ways in which individuals with Williams Syndrome display certain weaknesses and strengths, especially in recognizing faces. There seems to be increasing evidence, however, that the ways in which people with this disorder process incoming stimuli may be atypical compared to those of a healthy adult. Does that mean that it is possible to be good at certain tasks while using abnormal processing mechanisms? This first peek through the window of the mind and brain progression of an individual with Williams Syndrome has revealed that nothing is simple or direct.

(1) http://info.med.yale.edu/chldstdy/plomdevelop/genetics/01aprgen.htm
(2) http://spnl.stanford.edu/disorders/williams.htm
(3) http://www.psyc.bbk.ac.uk/people/academic/thomas_m/cortex_semel_rosner_cortex_
(4) Bellugi, U., Lichtenberger, L., Jones, W., Lai, Z. (2000). The Neurocognitive Profile of Williams Syndrome: A Complex Pattern of Strengths and Weaknesses. Journal of Cognitive Neuroscience, pp. 7-29. San Diego: Massachusetts Institute of Technology.

Full Name:  Beverly Burgess
Username:  bburgess@brynmawr.edu
Title:  Heroin Addiction, the I-function and Free Will
Date:  2005-05-12 22:15:46
Message Id:  15139
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

The I-function is believed to be a collection of neurons within the brain that houses an individual’s sense of consciousness and acts as a reference point for self-identity. It is thought to be a separate “box” contained within the nervous system that maintains a high level of autonomy from the ever-present myriad of chemical and electrical inputs and outputs occurring in the outlying regions of the system. As manifested in day-to-day behavior, the I-function can be seen in the voluntary actions of an individual as he/she encounters the world: any choice that is made can be related back to the I-function, and the execution of a choice under the influence of the I-function can be referred to as free will. How does this concept of free will hold up in the case of heroin addiction? Based on the fact that heroin mimics a naturally occurring chemical in the brain that controls mood, it would appear that the I-function is at all times under the influence of such chemicals via the nervous system. From this it can be concluded that the concept of free will is difficult to support.

From Poppy Seed to Heroin

It is a picturesque sunny day in April in the small village of Essazai Kili, Afghanistan. Two small children carelessly wander about in a large, brilliant swatch of color created by a crop of pink, purple and white flowers. In a few weeks, the lovely petals of these flowers will fall to the ground leaving behind poppy capsules and the harvesting of the crop will begin. A farmer will repeatedly score the capsules to release an opaque, milky substance which he will allow to dry before scraping it off. The process will be repeated until the capsules no longer secrete the fluid, otherwise known as raw opium (6),(8). To this impoverished village farmer, and many others like him throughout the Middle East, South East Asia and Central/South America, the tedious and unusual harvesting of poppy fields is comparable to harvesting fields of gold.

Half a world away, somewhere in North America, a young man sits alone and engages in a dangerous ritual. He mixes a white powdered substance called heroin with water and heats it in a spoon over an open candle flame. His eyes glisten as the powder dissolves and he transfers the now clear liquid into a hypodermic needle. He searches his needle-pricked forearms for a viable vein, injects the fluid, and an instant euphoria, or rush, courses through his body. The rush is short-lived, but it is soon replaced with a warm, drowsy sensation that can be likened to a sense of well being (5). To his heroin-laden nervous system, like the nervous systems of many heroin users around the world, it seems that all is well in the world.

The Evolution of Opium

The euphoric and narcotic effects of opium have been well known throughout history, dating as far back as 3400 BC in Mesopotamia. Until the 1800s the drug was primarily ingested via smoking, eating, drinking or in pill form for both medicinal and recreational purposes (8). In 1803, a young German chemist by the name of Friedrich Serturner managed to successfully isolate the active component in opium: morphine was born. Eventually marketed by Georg Merck, morphine became a very popular drug for the relief of pain during the American Civil War and the Franco-Prussian War. By this time the administration of the drug had been enhanced by the invention of the hypodermic needle, which allowed for more accurate dosage and more expedient delivery, as the drug could now be released directly into the bloodstream (2).

In 1895 diacetylmorphine, or heroin, was synthesized from morphine by Heinrich Dreser, a chemist at The Bayer Company, Germany. The drug was touted as a cough syrup in the wake of a rash of pneumonia and tuberculosis infections, while also serving as an alternative to morphine due to its efficacy in the rapid alleviation of pain. Heroin soon became the drug of choice over morphine as it was believed to diminish morphine addiction, which by the 1900s had become an obvious issue, especially in the United States (2). Eventually both morphine and heroin were found to be highly addictive and became controlled substances, with heroin’s status falling into the category of an illicit narcotic.

Despite its now illegal status, worldwide heroin demand is far from disappearing. Today, the farmers in Afghanistan with their vibrant poppy fields are more than willing to meet that demand as approximately half of Afghanistan’s gross domestic product is tied to their production of opium and heroin (3). The U.S. Department of State recently reported: “In [2004], the amount of land [in Afghanistan] planted in poppies skyrocketed 239 percent to 509,035 acres. Production of opium gum…was 17 times greater than in the next-largest producing country, Burma. Afghanistan's illicit opium/heroin production can be viewed…as the rough equivalent of world illicit heroin production (1).”

Heroin in the Nervous System

Although heroin use is mainly associated with the homeless or people of a lower social status, it is widely observed across all walks of life. Media sources around the world recognize the commonplace use of heroin as demonstrated here by an excerpt from one of England’s local newspapers:

“EVERY day at 11am 'David' joins the smokers gathered outside his city centre workplace as they meet for their midmorning fag break. Lighting up a Marlboro he has been given by a colleague because he never has his own, David fits in perfectly with all the other smokers as they puff away while discussing the latest office gossip. But, as his work-mates return to their desks, David heads towards a car parked in a nearby side street. The car window rolls down and he is handed a small package in exchange for a 10 pound note. 'At this point I'm buzzing,' admits David, 'because I know I've got my hands on the thing that will get me through the day.' The 'thing' David has got his hands on is a wrap of heroin. His smart business suit and executive job may not be what most people expect of a heroin addict but that is exactly what David is. He says: 'People look at me and all they see is a high-flyer. I've got a well paid job, a high profile in the business community and no-one at work has got a clue what I get up to. But if they saw the state of my arms they would know straight away that I am an addict (2).'”

What is it about heroin that makes it possible for David to get through the day? Exactly how does this chemical substance produce the sensation that most of us would automatically correlate with success: a feeling of relaxation, security and wellbeing?

Heroin belongs to a class of drugs known as opiates. Its magical effects are felt upon entry into the bloodstream as it depresses the central nervous system via the brain. The morphine component of the drug is recognized by receptor sites in the brain as being chemically similar to a naturally existing class of chemicals known as endorphins. Endorphins are our natural pain killers and mood lifters that normally come into play when we experience pain or participate in exercise. In the case of pain management, they act in an analgesic fashion to inhibit neurons from firing. A side effect of the pain relief induced by endorphins is a feeling of euphoria and well being (8). The introduction of heroin into the nervous system allows for immediate access to these sensations. According to Karl Sporer, MD, “Heroin is more soluble [than morphine and other opiates] in the fat cells so it crosses the blood-brain barrier within 15-20 seconds, rapidly achieving a high level in the brain and central nervous system, which accounts for both the 'rush' experienced by users and the toxicity (9). ”

The toxicity that Dr. Sporer refers to can be observed during the initial introduction of heroin into the nervous system of a first-time user, as it produces many undesirable side effects such as nausea, vomiting and diarrhea. With repeated use, these symptoms lessen significantly, only to return when the nervous system detects a drop in the level of the drug, a phenomenon known as withdrawal. Additional symptoms of withdrawal include watery eyes, stomach and leg cramps, vomiting and a general malaise, somewhat similar to the discomforts experienced with influenza. At this point the user is considered to be addicted, and in order to avoid withdrawal symptoms he/she must administer the drug every 4–6 hours depending on the purity of the heroin and other physiological factors. Deprivation of the drug beyond this point typically results in a marked increase in the severity and duration of withdrawal symptoms (5), (7). An addicted user will make considerable sacrifices to obtain the drug, often resulting in a lifestyle that is solely dedicated to its pursuit, regardless of the risks or consequences.

Some of the risks are associated with the level of heroin purity. Current refining techniques are capable of producing a product that is 99% pure, but in an effort to increase profits, drug dealers may significantly compromise the purity to as low as 3% by adding fillers such as caffeine, powdered milk, and quinine (7). Although some fillers are relatively harmless, quinine can cause severe vascular damage, respiratory complications, coma and death. In an effort to obtain the most desirable result, heroin users may increase the amount of product that they use, either to overcome the dilution factor or to surmount the tolerance barrier created in the nervous system as a result of prolonged use. In the event that a user actually obtains a highly pure batch of heroin, the risk of overdose becomes a strong possibility. The result is often coma or death. As the addiction becomes more pervasive, the user’s approach to obtaining and administering the drug may become more reckless and include behaviors such as sharing needles indiscriminately, thereby increasing the risk of contracting diseases such as hepatitis and HIV.

Heroin addiction, the I-function and free will

A heroin addict feels that he is in control of his I-function during his heroin high, yet it becomes brutally obvious that his nervous system manages his I-function when he craves his next high or enters withdrawal. It may seem that a user’s I-function is initially in control when he decides to start using heroin, but it can never be fully known what or how all of the chemical interactions within the addict’s nervous system contributed to his first use of the drug. What can be observed is the subsequent control that heroin takes over a user’s I-function by its effect on his nervous system during his high and low periods. With all of the dangers associated with heroin use, one would assume that a user’s I-function would wake up, assess the risks, and cease to permit use of the drug if for no other reason than the sake of self preservation.

In the early days of the drug’s licit use, patients consuming heroin were unaware of the drug’s addictive effects, and if one had questioned a user at that time about their habit, it is likely they would emphatically have declared that they were taking the drug of their own free will. Even David in the example above seems to have a sense of free choice with regard to his habit. Yet it is apparent from his continued use of this drug, despite its associated hazards, that his I-function has been hijacked by a chemical invader.

The idea that free will does not exist produces a great amount of anxiety within cultures that have created laws in an effort to control behavior. One might argue that if there is no free will, then people cannot be held accountable for their actions. This is a poignant issue in societies that create moral philosophies about good versus evil and strive to hold their citizens accountable for their behavior through the establishment of laws. But how can we redefine behavior outside of these cultural boundaries? In the absence of the behavioral options created by laws, it would appear that free will fails to exist and we are only left with meaningless actions as they occur to us naturally via the nervous system. In the face of laws, we are presented with further influence upon our behavior reinforced by cultural expectations. But in choosing to abide by laws are we exercising our free will?

When we choose to deny an initial impulse, for which we cannot know the origin, in favor of an alternative behavior for which we may be rewarded, can we say that we are behaving freely? Can we say for certain why we prefer the reward? Society’s many systems of behavioral influence appear to deny our free will while simultaneously providing a sense of freedom: the freedom to deny the impulses created by our nervous system in favor of behaviors created by society in order that we may be accepted into a larger collective. But what exactly is it that makes us want to be a part of the collective in the first place? Those who modify their behavior to the societal norms are rewarded with acceptance by members of that society. Perhaps the external influence of acceptance triggers a biochemical release of some kind in our nervous system that makes us feel good. In keeping with that feeling we are likely to continue behaviors favored by society in an addictive fashion. From this it would appear that the cultural conformer, just like the heroin addict, knows how to get his ‘fix’. It appears that the I-function is unable to distinguish between constructive versus destructive behavior if the outcome of the actions is the same. For heroin users like David, perhaps his dependence on the natural high that comes from social acceptance was no longer sufficient to achieve a sense of well being and, given the opportunity, his nervous system influenced his I-function to find a solution in heroin.

Returning to the poppy fields in Afghanistan, we may look upon the flowers as inanimate life forms devoid of an I-function. From their place in the sun, they are merely slaves to the cycles of the seasons and the profit-seeking will of the farmer. But like any good parasite, they silently wait for an opportunity to invade an unsuspecting host where they may assume residence and take control. These simple yet vibrant plants make their way through the manipulative hands of the farmers and chemists until they finally are able to infiltrate the nervous system and hijack the I-function and free will of millions of heroin users worldwide. Those of us without heroin to rob us of control over our I-function are still under the strong influence of our nervous system and its chemical components as well as innumerable external factors; therefore it appears that the I-function’s conveyance of free will ceases to truly exist.


1) Katel, P. “Exporting democracy.” The CQ Researcher Online 1 April 2005: 15, 269-292, http://80-library.cqpress.com.proxy.brynmawr.edu/cqresearcher/cqresrre2005040100

2) Scott, Ian. “A hundred-year habit. (Centenary of Bayer's chemical medicinal - heroin)” History Today June 1998: v48 n6 p6(3)

3) Starr, Frederick S. “Silk Road to Success.” The National Interest Winter 2004-05

4) Barrett, Tony. “No-one knows I’m an Addict…” Liverpool Daily Echo 25 April 2005, first edition, features: pg. 12, 13

5) Heroin, University of Washington website

6) From Flowers to Heroin, CIA website

7) Drugs in Sports: Recreational and Street Drugs – Heroin, NCAA website

8) The Opium Kings, PBS Frontline website

9) U.S. In The Midst Of A Heroin Epidemic, But Many Overdose Deaths Can Be Prevented, Science News Daily website

Full Name:  Carly Frintner
Username:  cfrintne@brynmawr.edu
Title:  Lonely Madness: The Effects of Solitary Confinement and Social Isolation on Mental and Emotional Health
Date:  2005-05-12 23:21:36
Message Id:  15142
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

By Carly Frintner
Paper #3 for Neurobiology and Behavior, Spring 2005
Professor Paul Grobstein

I began to research the effects of solitary confinement on prisoners' behavior while thinking about the ways in which we isolate ourselves from others, or are isolated by others in our daily lives. I cherish and am very protective of my own chosen moments of solitude, but I also know that long periods of time alone can send me into a depressive state, or make me feel like I'm going crazy. More specifically, a kind of panic sets in when I realize I'm alone with my thoughts with no one to affirm or deny the validity of what I'm thinking. When I'm by myself for too long, I start to question my own understanding of reality—of who I really am and what the world is really like. I need interactions with other people because they are such a significant part of how I understand and enjoy my life and my reason for living. All people seem to depend on varying amounts and intensities of social interaction to keep them happy, stable, and sane. This is not surprising given that human beings are social animals by nature.

Human beings are also naturally curious. Drastically reducing the amount of "normal social interaction, of reasonable mental stimulus, of exposure to the natural world, of almost everything that makes life human and bearable, is emotionally, physically, and psychologically destructive" (2) because it denies us the ability to ask questions and seek reasons and information to form explanations that allow us to understand ourselves as well as our world and our place and purpose in the world. It is logical that we feel less stable and secure overall when the things that our brain and body rely on to connect to and understand our surroundings are taken away from us.

In class, we have occasionally discussed how we check in with other people to get an understanding of ourselves. In one extreme example, we recalled a final scene of the movie "A Beautiful Mind" in which Professor John Nash asked a student to verify that there was a man standing there talking to him. Because Nash's schizophrenia often caused him to hallucinate, he relied on other people to assure him that what he was seeing was not just his own reality, but the reality of the world (including other people). We all do this to a certain degree, though probably to check much less subtle information than whether a person is or is not actually a hallucination.

More than 20,000 prisoners in the United States, about 2% of the prison population, are currently living in super maximum security ("supermax") facilities or units. "Prisoners in these facilities typically spend their waking and sleeping hours locked in small, sometimes windowless, cells sealed with solid steel doors. A few times a week they are let out for showers and solitary exercise in a small, enclosed space. Supermax prisoners have almost no access to educational or recreational activities or other sources of mental stimulation and are usually handcuffed, shackled and escorted by two or three correctional officers every time they leave their cells. Assignment to supermax housing is usually for an indefinite period that may continue for years." (2)

I have sometimes gone for hours and even days with very minimal human contact. As a result, I experienced anxiety, depression, and a feeling of being disconnected from the world around me, even though I had complete freedom to go wherever I wanted. Prisoners who are isolated for prolonged periods of time have been known to experience "depression, despair, anxiety, rage, claustrophobia, hallucinations, problems with impulse control, and/or an impaired ability to think, concentrate, or remember." (2) Studies have also shown that isolation can cause "impaired vision and hearing... tinnitus [(ringing in the ears)], weakening of the immune system, amenorrhea [(absence of menstrual periods in women)], premature menopause... and aggressive behavior in prisoners, volunteers and animals." (1)

Previously healthy prisoners have "develop[ed] clinical symptoms usually associated with psychosis or severe affective disorders" (2) including "all types of psychiatric morbidity." (4) Many have committed suicide.

Individuals do vary in how well they can deal with living in isolation, however. (4) For prisoners with pre-existing mental or emotional disorders, living without normal human interaction, physical and mental activity and stimulation can aggravate their symptoms to levels equivalent to torture. (2), (3) In one complaint filed against the Connecticut Department of Correction in August 2003, social isolation and sensory deprivation drove some prisoners to "lash out by swallowing razors, smashing their heads into walls or cutting their flesh." (3)

It is difficult if not impossible to pinpoint the exact reasons why social isolation and sensory deprivation in solitary confinement situations causes mental and emotional breakdown in prisoners. However, in addition to the stimuli and interactions they are denied, we might also consider how people's minds are affected by others controlling every aspect of their lives, from where they are and how long they will be there to how much food they get and when, to light and noise levels, to what possessions they are allowed to have, to when or if their clothes, bedding and rooms are cleaned, to when and if they get to have fresh air.

How does the absolute denial of freedom, the denial of any kind of personal power or influence over one's life, affect the way a person thinks, feels and acts? Certainly the impact is different for each person. But are there patterns across cultures and time in how slaves, prisoners, people living under a dictator, and children grounded by their parents react similarly to the denial of freedom? Are the patterns in reactions solely human, or do they extend to other animals, for instance, animals that are caged or otherwise restricted in pet stores, zoos or circuses? Do all animals, including human beings, feel and understand injustice on some level and therefore react to it similarly? Or are humans reduced to more stereotypically animalistic behavior when they are trapped and controlled? "In some states, the conditions are so extreme (e.g., lack of windows, denial of reading material, a maximum of three hours a week out-of-cell time, lack of outdoor recreation) that they can only be explained as reflecting an unwillingness to acknowledge the inmates' basic humanity." (2) Can people retain their humanity without the constant affirmation of their humanity through positive contact with other human beings? How do human beings' behaviors and thought processes shift when the human beings around them refuse to accept their shared humanity?

I am thinking more about the brain's needs based on my research on this particular topic. The physical, mental and emotional effects of living in solitary confinement seem to be beyond the control of the person experiencing them. It seems that the brain needs a certain quantity, quality, or type of stimuli to help regulate, direct and prioritize thought processes and other brain functions properly. It could mean that without certain (or enough) stimuli, the level of random activity in the nervous system increases—such as brain activity that causes hallucinations.

Even when inputs are all coming from the same place, different parts of the unconscious experience the same inputted information differently because each is interpreting the information with its own randomness. This randomness helps us make connections between sets of inputted information and our own prior knowledge to ultimately create a story that explains our situation and surroundings. This story informs the "I"-function, which allows us to experience and understand the situation and surroundings personally. (5)

In an environment with very minimal stimulation, such as a prison cell, the randomness with which the unconscious explores the environment continues, although it is unclear whether randomization increases when fewer stimuli are reaching the brain. Perhaps the brain attempts to compensate for stimuli it is missing by creating stimuli of its own, that is, by increasing random activity. Either way, when the brain is not receiving much input from the environment, there is little information based in reality that the unconscious can focus on or try to interpret. The story reported back to the "I"-function is more likely informed by more random connections than real facts about reality because reality is not offering enough stimuli to make a coherent story. This helps explain why people often experience mental and emotional breakdowns and psychotic episodes when in solitary confinement for extended periods of time.


1) F-Type Isolation Prisons in Turkey

2) Supermax Prisons: An Overview

3) Lawsuits Attack Isolated Prison Conditions for Mentally Ill, Mental Health Law Weekly; Prison Health. April 2, 2005.

4) Isolation and Mental Health, NHS National Electronic Library for Health.

5) The Brain's Images: Co-Constructing Reality and Self, Paul Grobstein. May 2002. (And conversation May 2005.)

6) Isolation, Breakdowns and Mysterious Injections, Vikram Dodd, Richard Norton-Taylor and Rosie Cowan. January 26, 2005. From The Guardian (UK), via Common Dreams News Center.

7) Mental Issues in Long-Term Solitary and "Supermax" Confinement, Craig Haney.

Full Name:  Erin Deterding
Username:  edeterdi@brynmawr.edu
Title:  Can Jane, June, and Jessie exist in one person? A Critical look at Dissociative Identity Disorder
Date:  2005-05-12 23:35:18
Message Id:  15143
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Imagine you and your best friend are at a coffee shop. One minute, you're chatting about your weekend plans together as she casually takes an elastic out of her purse and puts her hair in a ponytail. The next minute, your friend seems angry and upset; she's not acting like herself. You wonder what's wrong with her and why her mood seemed to switch so suddenly.

Now imagine your friend confides in you that her behavior can be attributed to Dissociative Identity Disorder (DID), formerly known as Multiple Personality Disorder (MPD), a psychological disorder that affects memory and identity (1). Would you believe that it is possible to have more than one identity? Whether you said yes or no, or you were unsure, the diagnosis of DID as a real disorder is highly contested among professionals.

To understand this debate, one must first understand what Dissociative Identity Disorder is and how it is diagnosed. DID is thought to occur as a result of serious abuse as a child, whether physical, sexual, or psychological. Because children are unable to cope with such emotional trauma, they dissociate themselves from the actual event. As a result, children often create a new identity, one that can deal with the emotions they are incapable of handling (1).

One of the major characteristics of a DID diagnosis is having two or more personalities, each of which can become the dominant personality at any given time (1). Even though the beginnings of DID occur during childhood, symptoms of the disorder do not emerge until adulthood. These symptoms can include confusion or disorientation, hallucinations, feelings that one's body is transforming, experiencing a daze or trance-like state, depression, and anxiety attacks (1). In addition, it is thought that those who have the disorder become so skilled in dissociating their feelings that even mild stressors occurring in everyday life can trigger an episode of changing identity (5). It is estimated that DID affects about 1% of the population (2), and five times more women than men are diagnosed with the disorder (3).

The main therapy for treating Dissociative Identity Disorder involves hypnosis. During this process, the therapist attempts to have each "self" relive the traumatic events (1). It is thought that once each "self" is able to confront the past and deal with the pain, integration of all the selves into one overarching self can be achieved (1). The goal for this kind of therapy is not only integration of the selves, but also acceptance of the memory and history of the trauma. Once this is achieved, dissociation from the feelings that are associated with the trauma is no longer necessary (1).

While Dissociative Identity Disorder is listed in the DSM-IV as a disorder, there is debate among professionals about its actual cause. Paul McHugh, a former professor of Psychiatry at Johns Hopkins Medical Institutions, believes that patients acquire DID from suggestions by their therapists (4). This construction of DID occurs when patients in therapy are especially sensitive to suggestion and hypnosis. Throughout the course of therapy, patients are told they have Dissociative Identity Disorder and therefore feel compelled to act in a way that is consistent with the criteria for the disorder (4). One possible explanation for this comes from research on expectancies: one's belief that something will happen often makes it so. For example, lab rats assigned randomly to the categories of "smart" or "dumb" are found to perform consistently with their label because that is what the researcher believes to be true (6). These expectancies yield more powerful results in close social settings such as therapy (6). McHugh suggests that by not dramatizing therapy, patients can work through their personal issues by focusing on the real problem at hand (4). While McHugh seems to acknowledge that patients who are diagnosed with DID do have emotional problems, he believes that therapists enable their patients and the symptoms by focusing so heavily on them (4).

Similarly, there are people who feel as though DID is the ailment of the moment due to popularization of the disorder by the media in the books The Three Faces of Eve and Sybil, the latter of which was later made into a movie (6). Before such publications, DID was a rarely diagnosed disorder, with about two diagnoses a year prior to The Three Faces of Eve, rising to as many as 50 a year following the release of the book. The number of diagnosed cases of DID increased again, this time to 2,000 cases a year, after Sybil. It is thought that these media productions somehow influenced people by "teaching" them what DID is supposed to look like (6).

In addition to being critical of the etiology of Dissociative Identity Disorder, others question whether DID even exists at all. Since the DSM-IV changed the name of the disorder from Multiple Personality Disorder to Dissociative Identity Disorder, the semantic debate seems to have decreased. However, the philosophical basis behind this debate still exists. Because the diagnosis of DID relies on the objective criterion of having more than one identity, it is questionable whether this can actually occur. If personality or identity is defined as the brain's combination of reactions to external and internal inputs at any given time, then it could be possible for someone to switch between feelings and behaviors many times in one day. Perhaps someone wakes up on the wrong side of the bed, but then receiving a letter from a friend in the mail makes them feel happier. Suppose later in the day, running out of gas while driving to the grocery store spoils their mood. They become grumpy, and as a result, their behavior changes to reflect their mood. The question then becomes: at what point is this switching of feelings and behaviors seen as "abnormal" relative to what the person usually experiences? One could argue that changing behaviors and moods several times in the day is a relatively common experience. At what point, however, is switching behaviors indicative of alter personalities (5)?

Because the definition of an alter personality is not clearly established in the literature by those who believe that DID is real, skeptics such as August Piper, Jr., a psychiatrist, question the idea that alter personalities can actually exist (5). For example, Piper cites the lack of a solid definition of an alter personality as responsible for instances in which people have been diagnosed with numerous personalities, in one case as many as 4,500 different personalities in one patient (5). With numbers such as this, it becomes easy to question the credibility of the disorder.

In addition to looking critically at the diagnosis and existence of DID, one must consider the implications if such a disorder is legitimate. For instance, how should people who are diagnosed with DID be treated with regard to criminal acts? Should people with supposed different identities be held accountable for acts committed by those identities? Cases have been documented in which defendants diagnosed with DID had sentences overturned due to mental illness (7). The underlying question in cases such as these is the question of free will. Are people with multiple identities really in control of what they are doing? Or are they hosts to activities they cannot control? While the different identities seem to be very different in their behaviors and personalities, most literature stresses the importance of recognizing that there are not necessarily several people living in one body. In fact, the fragmentation into identities is seen as a manifestation of the same person in many different forms (2). From this standpoint, it would seem that while people with DID may not necessarily realize when they have switched into one of their alters, because each alter is a manifestation of who they are, they should be held accountable for their actions, no matter which alter actually commits the act.

Looking at all the evidence and counterevidence for the legitimacy of Dissociative Identity Disorder as a spontaneous, organic occurrence, one must still ask: is DID real? Traumatic events such as extreme abuse, especially during childhood, could plausibly lead to psychological problems such as dissociation and denial of feelings. Even so, it seems questionable that this dissociation could lead to many different forms of one person, as DID suggests. It makes sense that a traumatic event could shape the person one becomes, and the way she might behave and react in certain situations, but there seems to be a difference between an event influencing the way one lives her life and one's having separate identities to deal with separate situations. Given the inconsistencies about the etiology and reality of the disease, it is important to remain cautious in diagnosing a person with DID. On the other hand, one should not be so overly cautious as to deny those with legitimate problems the treatment they need to recover. All in all, diagnosing and treating Dissociative Identity Disorder seems to be a delicate balance: enough belief in the patient's problems to provide the help people need, and enough skepticism to keep from enabling patients to continue disordered behavior.


1)Understanding Dissociative Identity Disorder Canadian Mental Health Association

2)Dissociative Disorders Sidran Institute

3)Dissociative Disorders Health webpage

4)Multiple Personality Disorder by Paul McHugh

5)Multiple Personality Disorder: Witchcraft Survives in the Twentieth Century by August Piper, Jr.

6)Reasons for Caution about Diagnosis of DID/MPD by Russ Dewey

7)A Case for the Insanity Defense Court TV's Criminal Library

Full Name:  Bridget Dolphin
Username:  bdolphin@brynmawr.edu
Title:  Breaking into The Zone
Date:  2005-05-13 02:15:08
Message Id:  15150
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

The division of psychology which focuses on athletes is fairly new, with the majority of the most important discoveries made in the mid-to-late 20th century. Pioneers include Rainer Martens, Terry Orlick and Daniel Gould, all of whom are still active professors, researchers and authors in the field of sports psychology. When they began their studies, athletes were quiet about their relationships with the psychologists and reluctant to admit they were receiving help to better their mental game. Today almost all professional squads carry a team psychologist and spend a great deal of time improving the mental aspect of the athletes' game. Many athletes who participate in individual sports also hire a sports psychologist or a coach dedicated to building mental as well as physical strength and skill. The psychologists use the athlete's character and the sport in which he participates, along with neurological discoveries, to develop a plan to put the athlete in the optimal state of mind for every game (4).

Neurologists have discovered that brain activity can be measured in waves which vary from 0.5 cycles per second (cps) to 28 cps. The levels of brain rhythm have been divided into four phases which coincide with the levels of consciousness. The Delta phase (0.5-4 cps) refers to a comatose or unconscious mind in which the conscious has no control. The Theta phase (4-7 cps) is the "inspirational artistic imaginative phase." The Alpha phase (7-14 cps) is a relaxed phase which allows access to the subconscious and the ability to focus all attention on just one item or task. The Beta phase (14-28 cps) is the phase we are in during most of our time spent awake and allows seven or eight thoughts to be circulating at the same time. The Alpha phase is what athletes identify as "in the zone" and is that which they strive to enter for the maximum level of concentration and ability (8). It is an elite state of mind in which each athlete performs best, and with the help of a psychologist, each hopes to discover the best way to get there.
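The four phases above amount to a simple lookup from measured frequency to mental state. As a minimal sketch of that mapping (with the caveat that the text's ranges overlap at their boundaries, so treating each range as half-open at its upper bound is my own assumption):

```python
# Toy classifier for the four brain-rhythm phases described above.
# The cps boundaries (0.5, 4, 7, 14, 28) come from the text; the
# half-open intervals are an assumption to resolve the overlaps.

def brain_phase(cps: float) -> str:
    """Return the phase name for a brain rhythm measured in cycles per second."""
    if 0.5 <= cps < 4:
        return "Delta"   # comatose or unconscious mind
    if 4 <= cps < 7:
        return "Theta"   # inspirational, artistic, imaginative
    if 7 <= cps < 14:
        return "Alpha"   # relaxed, focused -- what athletes call "the zone"
    if 14 <= cps <= 28:
        return "Beta"    # ordinary waking thought, several ideas at once
    return "outside the described range"

print(brain_phase(10))   # Alpha
print(brain_phase(20))   # Beta
```

An athlete striving for "the zone" is, in these terms, trying to slow a Beta-range rhythm down into the Alpha range.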
The phrase "in the zone" is used most often in reference to athletics, but every person is capable of using his or her subconscious to perform at an elevated degree. Business men or women may enter "the zone" when presenting a new idea to superiors; actors may enter during their scenes; artists often enter while working on a piece. Being in the zone means that every skill you need, already stored in your subconscious, is utilized based on decisions made by your unconscious (7). It's allowing your subconscious mind to "go on auto pilot" (2). Frequently when someone is in the zone, time goes by swiftly; all of a sudden the game or meeting is over and he or she doesn't know where the time went. He or she may not remember exactly what happened while the subconscious was in control (2). A person may have a similar experience by doing something as simple as replacing a credit card in her wallet after making a purchase and then later thinking she left it at the store, because her subconscious was in control when she put it away.

I was interested to learn more about the athlete's subconscious because I have spent much of my life completely dedicated to soccer, and I always wondered why some days my game was "on" and some days I wasn't as consistent with my play. I didn't know psychologists had actually defined "the zone." I realized a lot of the time I was relying on my subconscious, but I wasn't sure how I switched out of my conscious mind, or how much other athletes used theirs. As with any activity that is eventually controlled by the subconscious, the actions demonstrated in each game need to be repeated a great deal until the brain can send signals to the body without the conscious mind being aware of carrying out the activity. In high school I began to notice I couldn't really remember certain moments of each game. These seemed to take place especially right before cheering and applause from fans and coaches. I would try to recall my most successful actions, but could only remember times I was resting on the sideline, or when one of my teammates had the ball. In college I was more formally introduced to the notion of the conscious and subconscious regions of the brain. I paid more attention to how my brain worked during games.

I don't score a lot because I have a defensive frame of mind and I also often choke under the pressure of having the ball in front of the goal, but I have a strong shot and have been one of the primary players to take penalty shots (a free kick 12 yards from the goal line in an 18-yard box occupied only by the shooter and goalkeeper) on my teams since I was pretty young. My freshman year of college I played my first two intercollegiate games at a tournament in Maryland. We won on Saturday and got to play in the championship on Sunday, where we were tied with the other team through regulation time and two overtimes. As often happens in soccer tournaments, when the teams are tied after an extended period, each team chooses five players to take penalty kicks against the opposing team and whoever puts more of them into the goal wins the game. I was pretty anxious to be playing at all; college level soccer is very different from what I remember high school being like, and I was scared to death when my coach chose me to shoot second. I was also excited because I had never missed a penalty kick when it counted, so this was going to be my time to shine. The shooters sat together in the circle at the center of the field and watched as each took her turn. I remember watching the first two shots from the other team and the first shot from our team, and I remember how nervous I was. I don't remember standing up, or walking toward the goal, or placing the ball, or shooting, or walking back to the center of the field. I do remember when I got back to the circle everyone on my team was cheering and hugging me and, as my coach says, I "cracked the first smile she'd seen since I arrived for pre-season."

My sophomore year I felt a lot of pressure going into the season. We had a new coach, and I was upset at having to prove myself again and seriously doubted my ability and my chances of reclaiming my starting position with all the new recruits. Our first games were again in a tournament, this time in Waynesburg, PA, and we won our first match, tied the championship game, and entered another round of penalty kicks. I was shooting second again. I remember the first three shots and being nervous. I remember walking into the penalty box. I remember placing the ball in the correct location 12 yards from the goal. I remember backing up and thinking "this is gonna suck." I remember my foot striking the ball, and I remember watching it ricochet off the post.

The previous two scenarios are prime examples of being "in" and "out" of "the zone." Obviously, being "in the zone" proves a great deal more productive. The question now is how to enter the zone. Ideally, an athlete would enter "the zone" at the start of each game, match, race, etc. but clearly that doesn't always happen. The greatest athletes are able to use their subconscious more than the average competitor, which, combined with their superior physical talent, makes them the best.

I am definitely not professional soccer material. I have a good defensive mind, as I previously stated, and have acquired decent ball skills in my 13 years of playing. I am comfortable with my defense, though, and not as sure of myself when the ball is at my feet rather than at the feet of my opponent. This became even more apparent to me during a game my sophomore year in college. I saw a girl dribbling toward me and went into "the zone." I don't know how I got the ball, but I left "the zone" and "woke up" moments later with the ball at my feet and people cheering excitedly. I was able to distribute the ball without any problems, but not until after a bit of pondering, nearly panicking, by my conscious mind.

I was able to experience a sort of maturing into being able to enter "the zone" after I became aware of it. I started playing lacrosse officially in February of my freshman year in college. My only previous experience was passing to myself against a wall. My subconscious wasn't able to direct my movements because even my conscious mind had no idea what was going on, and therefore I hadn't repeated any action enough for it to be stored there. I couldn't catch and I didn't throw very accurate passes, but fortunately I earned some playing time because there were several injuries and because my defensive mind allowed me to grasp that facet of lacrosse rather speedily.

I believe that once an athlete uses his subconscious for one sport, it becomes somewhat athletically inclined and can pick up other sports quickly. I say this because after only a few practices, I found myself entering "the zone" during certain drills. There is a certain technique to playing legal defense in women's lacrosse. The defender's hand is allowed to make contact with the player she is marking, but her stick cannot be horizontal across her body; the head must be raised so it is not parallel with the ground. She also cannot have her whole body right against her opponent's; there must be some space between them. I would often forget, so my coach spent a lot of time reminding me of all this. I would stand in line repeating to myself "at an angle, hug a tree... at an angle, hug a tree," and I always intended to keep repeating it throughout the drill, but I don't think I ever was able to. I would stop at some point without being aware of it, and then after I was done I would realize it wasn't going through my head anymore. I wouldn't remember when I stopped saying it, or whether I had in fact held my body position and kept my stick at the correct angle.

Using the subconscious in sports helps eliminate the possibility that the athlete will make a mistake by overthinking or by making a poor decision in the conscious mind. It also speeds up action and reaction. Research has shown that it takes a minimum of 100 milliseconds for the brain to issue a direction for action, and longer (closer to 200 milliseconds) if a complex decision has to be made (5). More recent research suggests that becoming conscious of the decision takes far longer still, adding up to half a second (1). Hence, an athlete's reaction to a 100 mile-per-hour pitch or serve seems superhuman to spectators. A baseball player has a very limited window of time in which to decide whether to swing at a pitch. Because conscious processing would take longer than the time available, the subconscious must be responsible for making such a decision (5).
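The baseball claim can be checked with a quick back-of-the-envelope calculation. The pitch speed and decision latencies come from the paragraph above; the 60.5-foot mound-to-plate distance is a standard figure I am assuming, since the paper does not state it:

```python
# Rough timing check for a 100 mph pitch versus decision latency.
# Speed and latencies are from the text; the 60.5 ft distance is assumed.

MPH_TO_FPS = 5280 / 3600          # feet per second in one mile per hour

def flight_time(speed_mph: float, distance_ft: float = 60.5) -> float:
    """Seconds the ball takes to travel from mound to plate."""
    return distance_ft / (speed_mph * MPH_TO_FPS)

t = flight_time(100)              # ~0.41 s for a 100 mph pitch
complex_decision = 0.200          # ~200 ms for a complex decision (5)
conscious_extra = 0.500           # up to half a second more if conscious (1)

print(f"flight time: {t:.2f} s")
print(f"left after a 200 ms subconscious decision: {t - complex_decision:.2f} s")
print(f"conscious deliberation alone would need: {complex_decision + conscious_extra:.2f} s")
```

A 100 mph pitch arrives in roughly 0.41 seconds, so a 200 ms subconscious decision still leaves time to swing, while conscious deliberation alone could consume about 0.7 seconds, longer than the ball's entire flight.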

During my first or second lacrosse game, I remember I was covering the woman who had the ball and I ran with her for some length of a sideline of the field, right near my team's bench. That was the first time I felt like I was playing real defense in lacrosse. I knew I was doing well because my coach was running right along with us yelling about what a great job I was doing. I was able to hear what she was saying, process it and think "yes, I am with her, my stick is right, my arms are ok..." I felt like I was running with her for 30 seconds or a minute, and I have to admit I was very pleased with myself. Later that week we watched a tape of that game and when it reached that portion of the game I saw that I was running next to the girl for two, maybe three seconds. My subconscious was working a great deal more quickly than my conscious mind ever had, but it seemed to me as though my subconscious had actually slowed time inside my brain.

How to get an athlete to enter "the zone" depends a lot on the athlete herself. Because each competition and each individual athlete is so different, and also because "the zone" is a moderately recent notion, there is not a lot of literature available on how to enter it. Some basic instructions are available from multiple sources. They involve positive self-talk, visualization, excitation, relaxation, confidence and physical ability or physiology. Each breathing regimen or visualization exercise is tailored to the individual athlete or team to best improve their particular game (2), (3), (6). The objective is to get the athlete as focused as possible, yet relaxed enough for the conscious mind to yield control to the unconscious. Some athletes feel they must be "psyched up" to enter "the zone"; others feel they must be "psyched down." Psyching up consists of increasing the amount of mental and physical arousal and activation before participating in an athletic event (4). Former world champion track and field athlete Steve Backley used breathing techniques because he was "under aroused and needed perking up" (4). Psyching down means relaxing mind and body in preparation for the competition so nervousness doesn't hinder an athlete's performance. World record holder and Gold Medal winning hurdler David Hemery spent time relaxing and slowing his pulse before each race (4). The best methods depend on what is most effective for the athlete, and also on what kind of focus is ideal for the athlete to achieve. An athlete who participates in an individual or closed-skill sport, such as track or golf, works to focus on the event itself; the athlete can center all his attention on each race or stroke. An athlete who participates in a team or open-skill sport such as soccer or basketball must divide her attention between auditory and visual stimuli. These include verbal communication from teammates and the positions of each teammate and opponent.
The concept of "focused attention" refers to the filtering of stimuli and deciding which should be acknowledged and which should be ignored. Several different models suggest how this process might operate. All consist of a bottleneck filter which allows the subconscious to swiftly process one stimulus at a time sequentially, thus avoiding the congestion that might thwart an athlete's decision-making and action time (4).

The concept of "the zone" seems very complicated and, coupled with the variability of each athlete's unique mentality, almost impossible to achieve. It is one of those things, like your car keys or true love, that you can never find unless you stop looking. Thinking too hard about "the zone" or putting too much pressure on yourself almost always has destructive results. It is the precise balance between being totally focused and completely relaxed, and it allows the body to accomplish amazing tasks. Also, when athletes learn how to break into "the zone" for athletic competitions, they gain a better understanding of their own subconscious, which may make them more comfortable with what is happening in the part of the brain they can't consciously control.


1)In a Zone: Psychologist Smith suggests in book that visualizing positive can change your game

2)3 Keys to Enter the Zone Everyday!

3)In the Zone: The Zen of Sports

4) Hardy, Lew, Graham Jones, Daniel Gould. Understanding Psychological Preparation for Sport. Chichester: John Wiley & Sons, 1996:114, 121, 176.

5)Reading the game

6)How to Reach Your Achievement Zone

7)An Introduction to Mental Training

8)The Right Wave Length

Full Name:  Samantha Thomson
Username:  sthomson@haverford.edu
Title:  Pulling Teeth to Survive: A Discussion on Various Treatments for Depression
Date:  2005-05-13 05:02:17
Message Id:  15152
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

A dark cloud hovers over, the walls close in and the room goes black to uncover the hidden depths of depression. It affects approximately 18.8 million Americans over the age of eighteen, yet those who suffer from depression never feel as if they are one of millions. Until the brain was better understood, discovering treatments for depression proved to be a challenge for psychiatrists, doctors and scientists alike. Reports of tooth removal, colon removal and even shaking therapies circulated through research circles as possible treatments for depressive disorders. Until the spike in pharmacological treatments, patients suffering from depression were provided few options to remedy their crippling disorders. Even today, in a world where an assortment of treatments is available to people suffering from depression, this disorder still haunts an overwhelming portion of the population. Kay Redfield Jamison expresses in her book on living with bipolar disorder, An Unquiet Mind, "Depressed, I have crawled on my hands and knees in order to get across a room and have done it for month after month. But normal or manic I have run faster, thought faster, and loved faster than most I know." (1) With mentors like her for the millions suffering from the various types of depression, hope for an improved lifestyle for those affected seems attainable.

The most popular present-day medium of treatment for depression is chemical intervention. The brain relies on many neurotransmitters, chemical messengers that relay signals between cells of the nervous system. Specific neurotransmitters, such as norepinephrine, dopamine and serotonin, are closely related to depression; individuals who suffer from depression tend to have abnormally low levels of one or all of these chemicals available at their synapses. Various drugs have been synthesized to target receptors and neurotransmitters in the brain in hopes of returning affected individuals to a less depressed state (2).

The first types of drugs to go on the market, such as tricyclics and monoamine oxidase inhibitors, affected areas all over the brain. Individuals exposed to these drugs experienced many side effects, and treatments often worked for only short periods before individuals regressed to highly depressed states; scientists learned that even though these drugs were in fact influencing levels of norepinephrine, dopamine and serotonin, they were also interfering with receptor sites and neurotransmitters all over the brain not specifically involved with emotion (2).

Continued research led scientists to discover serotonin receptor sites in an overwhelming number of the brain areas that control emotion. As serotonin is passed from one nerve cell to another, some of the chemical attaches to the adjacent cell, while the rest is reabsorbed into the pre-synaptic cell; this process of reabsorption is referred to as "reuptake" (3). A balance usually exists between the amount of reuptake and the amount of attachment to adjacent cells; individuals experiencing depression, however, tend to have a higher amount of reuptake, which disrupts the chemical balance within their brains. Capitalizing on this feature of serotonin receptor sites, scientists created a type of drug called a "Selective Serotonin Reuptake Inhibitor" (SSRI): by blocking the reuptake of serotonin into the pre-synaptic cell, the chemical balance can be restored. Popular drugs such as Prozac and Zoloft are examples of SSRIs. These, along with many other SSRIs, have rapidly grown in popularity due to their highly selective nature; since they target only specific areas of the brain, patients experience fewer undesirable side effects. Though chemical treatments are becoming more selective, roughly 30% of individuals seeking pharmacological treatment remain resistant to trial medications (4).
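The reuptake balance described above can be caricatured with a toy calculation. This is my own illustrative simplification, not a model from the paper; the release amount and reuptake fractions are arbitrary example numbers:

```python
# Toy illustration of how blocking reuptake raises the serotonin left
# in the synapse. All quantities are made-up example numbers.

def available_serotonin(released: float, reuptake_fraction: float) -> float:
    """Serotonin remaining in the synapse to attach to the adjacent cell."""
    return released * (1 - reuptake_fraction)

released = 100.0
# Elevated reuptake, as described for depressed individuals:
depressed = available_serotonin(released, reuptake_fraction=0.8)
# Reuptake partly blocked by an SSRI:
with_ssri = available_serotonin(released, reuptake_fraction=0.5)

print(round(depressed), round(with_ssri))  # roughly 20 vs. 50 units left
```

Blocking part of the reuptake leaves more serotonin available to attach to the adjacent cell, which is the imbalance an SSRI is meant to correct.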

Other treatments for depression have been used both in the absence of and in conjunction with pharmacological remedies. One such method is "Electroconvulsive Therapy" (ECT). In this treatment patients are given an electrical shock targeted at a specific area of the brain, triggering a brief seizure within that portion of the brain (5). The seizure causes the release of chemicals that eventually reach areas involved in controlling emotion. A course of ECT usually lasts about a month or less, as patients receive a total of six to twelve treatments at a frequency of approximately three per week (6).

This type of treatment tends to be much more invasive than the alternatives, yet some individuals provide remarkable testimony about its liberating effects. It is hypothesized that since ECT, in conjunction with some psychotropic drugs, has been found to stimulate neurogenesis, the resulting physical change in the structure of the brain (the creation of new cells) brings about some of the therapeutic effects. The neurogenesis involved specifically stimulates growth of the glial cells of the nervous system. Stephan Heckers and Dost Ongur state that "since glial cells are essential for proper neuronal function, treatments that alter glial function would have significant effects on the brain." (7) Concerns persist, however, about the possible side effects of ECT. Because treatment exposes the patient to signals that trigger epileptic activity, there have been cases in which seizures persisted after exposure ended.

A treatment similar to ECT is "Transcranial Magnetic Stimulation" (TMS), in which a handheld magnetic coil held against the scalp induces a small electrical current in the underlying brain tissue; the current stimulates nerve cells with much higher precision and much lower force than ECT (4). In an experiment involving 12 people receiving repetitive TMS (rTMS), psychiatrist Martin Szuba of the University of Pennsylvania found that the benefits lasted about one month before patients regressed to a heightened level of depression (8).

Extensive research continues on this experimental therapy because of its low risk of side effects coupled with its high precision. As one comparison with ECT puts it: "consider instead how easily a magnet under a wooden tabletop can move a pin on the surface-magnetic fields move almost unaffected through insulators, including the skull." (9)

Preliminary experiments have examined the effectiveness of rTMS for people with moderate to severe depression. A. Hausmann and his colleagues performed a test in which 41 patients were split into four groups receiving rTMS at different locations in the brain: the right dorsolateral prefrontal cortex, the left dorsolateral prefrontal cortex, or both, with a sham-stimulation control. The scientists coupled the rTMS treatments with antidepressant medicine to determine whether patients experienced an accelerated rate of recovery with rTMS. After processing the results, they determined that "rTMS as an 'add on' does not exert an additional antidepressant effect." (10) At the Medical University of South Carolina, researchers are currently investigating the effectiveness of rTMS as a clinical therapy for patients experiencing depression. They also work to understand its cellular and physiological consequences through animal studies, and use in vitro approaches to better understand the interactions between rTMS and cortical activity (11).

Vagus Nerve Stimulation (VNS) is yet another treatment practiced for depression. In this therapy, a pacemaker-like device is implanted and attached to the vagus nerve in the neck. The procedure was originally performed with the intention that the device's pulses would stop seizures; however, once it was implanted, patients reported feelings of bliss as the pulses resonated through the nerve. Upon further research, scientists determined that the pulses did indeed alter levels of serotonin and norepinephrine, thereby lowering the severity of depression experienced by the individual (4).

Studies have also been performed to test the effect of VNS on sleeping patterns. The test was performed 10-12 weeks after implantation and results showed that the overall sleep architecture was improved after implantation of the pacemaker. Overall patients demonstrated a decreased awake time, decreased duration of Stage 1 sleep and an increase in duration of Stage 2 sleep (12). Patients who experienced an increased quality of sleep also displayed lower levels of depression.

The FDA has recently approved VNS as an appropriate treatment for individuals eighteen years and older with major depressive episodes who have not responded well to at least four other antidepressant treatments (13).

Other highly effective methods for all types of depression come through alternative treatments. Such methods integrate mind, body and spirit to improve mood and release tension within oneself. From self-help groups to yoga and other types of meditation, affected individuals find comfort within themselves while lending support and first-hand experience to others. Diet is another medium that people choose to help guide them toward recovery; it is even thought that milk and wheat can trigger depression in autistic and schizophrenic individuals. Animal-assisted programs help affected individuals build trust with beings other than themselves as they work to improve socialization skills in a seemingly dark world. Creative arts, such as art, dance/sports and music, tend to heighten self-awareness while one gains comfort with his or her body (4).

As we go through the day-to-day and commit ourselves to becoming a particular person, it is important to keep this disorder in mind. It grabs hold of you and seems only to let go long enough to ask for help before the walls close in again. During those moments of light, when the world seems clear again, it is our responsibility as onlookers, sufferers and survivors of depression to begin the search for an appropriate treatment. The days of pulling teeth are over.


1)Serendip Home Page, a resource from Bryn Mawr College

2)Web MD

3)Medicine Net, a pharmacological source.



6)American Psychiatric Association

7) Ongur, Dost; Heckers, Stephan. "A Role for Glia in the Action of Electroconvulsive
Therapy". Harvard Review of Psychiatry. Sept-Oct 2004. v12 i5 p253(10).

8) Williams, Stephen. "Can Magnets Ease Severe Depression?". Newsweek. Sept 21, 1998.
v132 n12 p107(1).

9)Medical University of South Carolina, research site.

10) Hausmann, A. et al. "No Benefit Derived From Repetitive Transcranial Magnetic
Stimulation in Depression: A Prospective, Single Centre, Randomized, Double Blind,
Sham Controlled "Add On" Trial". Journal of Neurology, Neurosurgery and Psychiatry.
Feb 2004. v75 i2 p320(3).

11)Medical University of South Carolina, research site 2.

12) Armitage, R. et al. "The Effects of Vagus Nerve Stimulation on Sleep EEG in Depression
A Preliminary Report". Journal of Psychosomatic Research. May 2003. v54 i5 p475(8).

13) "VNS is Approvable for Depression Use". The BBI Newsletter. March 2005. v28 i3 p35(1).

Full Name:  Elizabeth Diamond
Username:  ediamond@brynmawr.edu
Title:  A Failure of Will—A new way of thinking about Depression
Date:  2005-05-13 10:35:43
Message Id:  15156
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

A Failure of Will—A new way of thinking about Depression

Depression, at least from personal experience, is not so much a perpetual state of sadness as one of helplessness; a feeling of descending into a situation over which you have little or no control whatsoever. Looking back at my experience, I remember feeling as though there was something I should have been doing to alleviate the unexplained sadness and general anxiety, yet at the moment it felt as though there was nothing I could do; nothing could have helped me. Leo Tolstoy probably described these emotions best when he wrote in "A Confession," after a year of depression: "Why should I live, why wish for anything, or do anything... is there any meaning in my life that the inevitable death awaiting me does not destroy? ... The spring of life dried up within me, and I despaired and felt that I had nothing to do but to kill myself. And the worst of all was, that I felt I could not do it" (5). These are not unique feelings, for they are a common experience shared by many sufferers of depression. But if the symptoms and the disorder in general are not rare, why are so many sufferers of depression reluctant to seek help right away?

The answer is related to the question of "normality" that we discussed in class: there cannot be any one definition of what is normal because of inherent differences in any individual's brain. These differences in brain structure result in subtle differences between individuals. Despite the fact that depression is regarded by the medical and scientific world as an illness like any other physical ailment, there remains a troubling question of "normality" for the sufferer: a feeling of doubt as to "what is wrong with me?" and even a rejection of any external causes that may be the root of the sadness. For a patient with severe depression, "this sadness may be denied at first. Many complain of bodily aches and pains, rather than admitting to their true feelings of sadness" (1). In the DSM IV's definition, one of the main criteria for depression is an "abnormal depressed mood." Can we always look at depression and other psychological diseases as abnormal? We can never really know whether someone is normal simply based on what we can observe, for reality can be slightly different for different people; it all depends on how each individual perceives the world. It is perhaps this classification into categories of "normal" and "abnormal" that creates the patient's reluctance to come forward for treatment. Depression is commonly seen not as a physiological problem of the brain but as a personal problem, one that must be caused by some sort of personal failure or shortcoming. Many sufferers of depression likewise see themselves as failures for not being able to identify the root cause of their sadness, and this, coupled with the helplessness that stems from such self-defeating thoughts, results in a self-perception of being "abnormal" and therefore inferior or worthless.

In fact, this is not the case at all, and we can examine certain biological factors that contribute to clinical depression, including chemical imbalances or a lack of serotonin receptors on the dendrites. Situations in a person's environment can also contribute to a general feeling of being depressed, but there is a difference between a normal level of sadness and deep depression: the DSM IV defines major depressive disorder as a state of "abnormal depressed mood most of the day, nearly every day, for at least two weeks" (1). The state or mood cannot be attributable to environmental factors or situations such as physical illness, alcohol, medications, or the loss of a loved one. Thoughts of death or suicide, feelings of inadequacy or helplessness, and loss of interest in once-pleasurable activities are all familiar symptoms of this disorder. Depression is often characterized by a marked decrease in neocortex activity and, perhaps even more interestingly, signs of atrophy in the hippocampus. Dr. Yvette Sheline of Washington University conducted MRI research on depressed women and discovered that the overall volume of their hippocampi was smaller than that of the control group. The decrease in volume was also proportional to the number of days depressed (2), and showed a strong correlation with memory problems in these patients, indicating depression's cumulative effects. This is of great significance because the study offers more evidence as to why certain physiological changes within the brain may result in depressed symptoms. Dr. Ronald Duman of Yale pursued these studies on a cellular level, and found that even in adults new neurons are created in the brain, more specifically within the hippocampus. Stress caused by the depressed state prevents new neurons from being formed, resulting in the observed atrophy.
For whatever reason, treatment with antidepressants increases the rate of neurogenesis in depressed patients, most likely because the cells that are being produced in the hippocampus are the same ones related to neurotransmitter transmission and release (4).

This brings us to the concept of "will" in the depressed person. Obviously, the patient wants to be "normal" in the sense of being emotionally content and not at constant odds with the world or with themselves. But however much they try, it is very difficult for the patient to break out of the negative mindset and feel better; that is, the I-function cannot act upon the other processes that must be causing the depressed state. This "failure of will" does not imply any personal weakness on the part of the patient, but rather simply biological limitations of the brain; the I-function cannot change a chemical imbalance, for example, any more than it can deliberately focus the lenses of the eyes. Speaking in class of depression as a "failure of will" was for me an interesting way of looking at depression after the explanation of its biological basis; I had never thought of the condition in terms of "will" before. I had, in fact, always regarded my depression as a failure of my will in the sense that I was not doing something right, that my own personal shortcomings were contributing to the feeling of helplessness and the inability to escape the sadness. If we are to define "will" as the I-function acting upon various internal factors in order to solve a problem within our control, then our will does not apply to depression. Looking at the biological factors that contribute to depression, it is safe to say that such things as hormone imbalances and receptor/dendrite growth are beyond what the I-function is capable of influencing.

Once we conclude that our "will" does little to mitigate major depression, the responsibility falls upon the parts of our nervous system over which we have no control. Since sufferers cannot will themselves to "snap out of" the sadness and apathy, it is clear that the I-function is not connected to, or does not have any influence over, the areas of the brain that shape the depressive state, such as the hippocampus involved in neurogenesis. Once again, depression falls under the growing list of nervous system functions that are beyond the voluntary control of the I-function. There remains, therefore, the question of choosing treatment methods that would affect the neurobiological factors. Yet even though the studies by Drs. Sheline and Duman have shown that antidepressant medication does help to prevent the characteristic atrophy of the hippocampus, medication alone in practice does little without some form of therapy or contact with another person who can offer support. The most common treatment for depression is to administer selective serotonin reuptake inhibitors, or SSRIs, which keep this important neurotransmitter in the synapse longer and prevent its degradation, maintaining a more stable level of the chemical within the brain. However, any effect of the drug may take several weeks to manifest because of the initial lack of serotonin receptors on the dendrites. In this lag time between administering the medicine and perceiving an actual effect, the patient will most likely continue to express the symptoms, which may even be exacerbated because the supposed treatment is not producing the desired effect, leading to an even deeper cycle of depression and self-doubt. It has even been shown that such drugs often have less effect than a placebo in test groups (3), strong evidence that medication alone is not the recommended or safest method of treatment.

With more and more doctors simply prescribing medication at the first signs of depression (3), has society simply become "pill-happy," with an emphasis on being glad all the time (again, a pervasive false conception of what is "normal," equating constant happiness with well-being) and medication as the only recourse? Perhaps a better way to speed treatment and recovery for depression sufferers would be to show a more accepting and tolerant attitude towards their condition, as we would towards a patient with any other ailment, offering support and a positive outlook rather than scorn born of misconceptions about the patient's will. Yes, depression is a failure of will in the sense that we cannot control genetics or the brain's chemical makeup, but depression is most certainly not a failure of the patient's desire to be a productive person. Helping the depressed person through a difficult episode by simply offering support allows the person to better understand their condition, and perhaps, at the end of the episode, to foster a sense of creativity in which to express the depths of suffering and finally experience relief after living so long in the dark.


1)Mental Health: Major Depressive Disorder, A comprehensive definition of major depressive disorder, its symptoms, and related conditions and treatment.

2)Decreased Hippocampal 5-HT2A Receptor Binding in Major Depressive Disorder, Dr. Yvette Sheline's paper on her findings of atrophied hippocampi in depressed patients.

3)Against Depression, a Sugar Pill Is Hard to Beat, The text of a Washington Post article about the effectiveness of the placebo effect in depression studies.

4)The Infinite Mind: Depression in the Brain, A very good general source describing the work done on a cellular level on the brain structures in depressed patients.

5)A Confession, Translated text of Leo Tolstoy's "A Confession"

Full Name:  Amy Johnson
Username:  amjohnso@brynmawr.edu
Title:  Obsessive-Compulsive Disorder
Date:  2005-05-13 10:44:22
Message Id:  15157
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Obsessive Compulsive Disorder

Obsessive-compulsive disorder (OCD) was brought to the attention of many people my age by the movie As Good as It Gets. Before then I had never heard one of my peers use the term, but afterwards I constantly heard people referred to as "being OCD." Usually they were just referring to a person's attention to detail or need to do things perfectly. OCD, though, is much more than that. It is an anxiety disorder that affects both men and women. It does not go away, but with constant treatment the symptoms can be alleviated.

OCD is an anxiety disorder in which people have unwanted thoughts and therefore perform certain behaviors over and over again. The obsessions are the unwanted thoughts and the compulsions are the repeated actions. These thoughts and actions get in the way of the sufferer's daily life. The afflicted person knows that the thoughts and actions make no sense yet still cannot ignore or stop them (1).

The thoughts OCD sufferers have usually make them nervous or scared. They can be anything from a fear of dirt and germs to a fear of harming someone. The compulsions are an attempt to alleviate the thoughts. If a person is afraid of germs, he will wash his hands over and over again to get rid of them (1).

For a person to be diagnosed with OCD there are certain criteria that must be met. There are four criteria the obsessions must meet and two for the compulsions. Also, the person must realize at some point that the obsessions or compulsions are unreasonable and must interfere with the person's daily life. One of the main ways to diagnose OCD is by taking a valid and in-depth history of the patient (7).

This may seem like something every normal person goes through. They get dirty, so they wash their hands. You have a feeling the stove is still on, so you check multiple times before leaving the house just to be sure. But a person suffering from OCD checks more than just a couple of times. They have a ritual that they go through to make sure that things are as they should be. The rituals start off small and soon grow as the doubt becomes greater (2).

One such example is a man named Michael Dunn. He is a 30-year-old father of two who has had obsessive and compulsive thoughts for nine years. He has an intense fear that things have been left on, like the stove, and that not all the doors and windows have been locked. He is scared that there may be a fire or that his children will be abducted. It used to be that he could just check a few times to make sure things were off and locked, but his fear grew over the years. Now he has to go through a ritual every night and every morning. He stares at each knob to make sure it is in the off position. Then he places his hand on each hot plate and counts to ten to make sure it is cold. If he gets distracted he must start over again. He does this with all the appliances. Some days this takes him upwards of an hour and makes him late for work; as a result he was fired. The thoughts did not suddenly appear when his children were born. When he was a child he would sometimes call home to ask his family to make sure all the appliances were switched off. He acknowledges, though, that the thoughts intensified when he got married and had children (5).

His treatment consisted of being exposed to things that would cue obsessional thoughts. He made a list of all his rituals and the thoughts associated with them and rated how anxious he would be if he personally did not check everything on the list. He was then forced, very slowly, to change his rituals: he was not allowed to check the toaster, and so on. Eventually rules were implemented whereby, instead of checking, he would simply turn off the appliance after using it and not turn back to confirm that it was off. After treatment he was resisting the urge to check everything before going to bed and leaving the house, and at a follow-up six months later he was still resisting the urge (5).

Michael's story is a common one in the treatment of OCD. Until recently OCD was thought to be an uncommon disease, but it is now known that 3.3 million Americans between the ages of 18 and 54 suffer from it (9). Since most sufferers are ashamed of their thoughts and recognize them as irrational, they hide their actions, which made it difficult to accurately determine the number of sufferers in the past (1).

OCD also has a high prevalence of comorbidity with other disorders. There are many cases of it coexisting with anorexia, ADHD, depression, and Tourette syndrome. In one case, a young girl was diagnosed with anorexia after her freshman year of high school. She took part in therapy of all sorts, but her weight still dropped, eventually falling to 83 pounds, less than 75% of her normal body weight. Doctors had tried everything, including feeding her intravenously. Eventually a doctor realized she exhibited signs of OCD as well: she had to count and arrange her food in a certain way, for example. She was put on medication and into behavioral therapy to treat the OCD, and her weight climbed to 100 pounds (4).

This study suggests a relation between anorexia and OCD, but there is no definitive link as of yet. There is also a link between Tourette syndrome and OCD. OCD often has its onset in childhood, and this is the only time that there is a difference in who is affected: when OCD appears in children it is more likely to appear in males and is sometimes accompanied by Tourette syndrome and ADHD (7). The lingering question is, what causes OCD? There is no definite cause known, but there are many elements that could have a hand in causing it. Sometimes OCD arises in children and young adults after a strep infection, and occasionally even after herpes simplex; the infection triggers an immune response. In this case, antibiotics are a sufficient treatment (7).

One other theory is that a mutated gene causes OCD. Serotonin is a messenger in the brain that helps keep people from repeating actions over and over again, and some think that OCD sufferers may not have enough serotonin in the brain. A related finding is that the serotonin transporter can be mutated: two variants within the gene have been found to change the regulation of serotonin in the brain. Scientists performed tests and found that six of the seven people they tested with the mutation also had OCD, and some had other disorders such as anorexia. Stress, of course, has been found to worsen the symptoms of OCD (7).

OCD is usually treated in a manner similar to Michael's story, but there are other ways. There is behavior therapy, pharmacotherapy, interventions, and sometimes even neurosurgical techniques. Pharmacotherapy is a process in which medication is taken to treat the symptoms. Behavioral therapy is what was used on Michael: the patient is exposed to things that trigger their obsessive thoughts and is taught techniques to resist the compulsions. Eventually the triggers increase in severity until the patient can resist fully. Neurosurgical treatment of OCD is uncommon and is reserved for patients with extreme symptoms (7).

It is because people use the term OCD with a negative connotation that for a long time sufferers would not admit their problems. OCD is horrible to live with because people who suffer from it know that they are being unreasonable. Yet because the compulsions they feel are necessary are sometimes so odd, others who do not understand the thoughts going through the sufferer's mind automatically think the person is just crazy. OCD, though, affects normal people. They are unfortunately just victims of a brain abnormality.


1)Obsessive-Compulsive Disorder: What It Is and How to Treat It.
2)Obsessive-Compulsive Disorder.
3)Obsessive-Compulsive Disorder (OCD).
4)What is the Relationship Between Anorexia Nervosa and Obsessive Compulsive Disorder? Mariah Smith
7)Obsessive Compulsive Disorder.
8)Mutant Gene Linked to Obsessive Compulsive Disorder.
9)Step on a Crack... Obsessive Compulsive Disorder.

Full Name:  Amanda Davis
Username:  adavis@brynmawr.edu
Title:  Date rape drugs have many uses
Date:  2005-05-13 11:30:20
Message Id:  15160
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

I chose to research the neurobiological effects of date rape drugs. I first found the names of the three major date rape drugs: gamma hydroxybutyric acid (GHB), Rohypnol (flunitrazepam), and ketamine (ketamine hydrochloride) (1). These drugs are effective for date rape because victims often cannot remember what happened while they were under the influence (1). Thus I began my search on gamma hydroxybutyric acid expecting to find how it affected memory and, more generally, date rape victims. Instead, I found its effects relating to its use as a recreational drug and in the treatment of alcoholism, both of which greatly surprised me.

GHB is a naturally occurring short-chain fatty acid that closely resembles the inhibitory neurotransmitter GABA (2). Its highest concentrations are in the hypothalamus and basal ganglia (3). GHB had previously been thought to compete with GABA, but accumulating evidence suggests it acts as a neurotransmitter or neuromodulator in its own right (3). It appears that GHB's neurobiological activity is mediated through GHB receptors and, in higher doses, GABA receptors (7). In mammals, however, GHB is formed from GABA. It astonishes me that a neurotransmitter produced by our own bodies can be used to commit such a heinous crime.

During the 1980s GHB was used by body-builders because it was thought to quicken fat loss and muscle gain (4). It stimulates growth hormone release in a pathway important to protein synthesis (4). Because of the risk of misuse, the FDA outlawed over-the-counter sales of GHB in 1990 (4). Low doses of GHB can cause a euphoric effect in humans (3). In a study of Australians who used GHB recreationally, subjects reported heightened sexuality, euphoria, loss of inhibition, relaxation, and increased sociability (4). Its recreational use is increasing in the rave scene, where GHB is consequently taken with a slew of other drugs (4). A study done with laboratory rats shows the abusive nature of GHB (5): rats not previously exposed to drugs self-administered GHB along a bell-shaped dose curve similar to that of other drugs of abuse such as cocaine, nicotine, and morphine (5). This means that GHB has positive reinforcing effects on users (5).

GHB has been used to treat alcoholism, narcolepsy, anxiety, and depressive disorders (3), and possibly could be used to treat dependencies on other drugs (5). Positive reinforcement of alcohol use is related to the release of dopamine in a region of the brain called the nucleus accumbens (6). Alcohol addiction, however, is associated with the interaction of several neurotransmitters; opioid peptides, for example, may influence alcohol's rewarding effects (6). These peptides and serotonin, under the regulation of the inhibitory neurotransmitter GABA and the excitatory neurotransmitter glutamate, influence dopamine activity in the nucleus accumbens (6). GHB can interfere with the dopamine, serotonin, and opioid neurotransmitter systems (3), which can account for its influence on alcoholic cravings. GHB administered orally has been shown to raise levels of endogenous ethanol (EtOH) in detoxified alcoholics (3). This could be another reason it relieves some alcohol-withdrawal symptoms (6). It can also inhibit voluntary EtOH intake in rats and withdrawal symptoms in physically dependent rats (3). GHB was more effective than a placebo in a group of human alcoholics in increasing alcohol-abstinent days and reducing the number of drinks in one day (3). GHB's use to reduce the nervous system's dependency on EtOH has been compared to the use of methadone to reduce dependency on heroin (3).

Although none of the articles mentioned GHB as a date rape drug, certain effects of the drug made clear to me how it would be effective in that sense. GHB can cause "trance-like" states, once thought to be stupors, or a sleep-like state that electroencephalographs revealed to be non-convulsive seizures (3). A woman having a non-convulsive seizure would appear to be awake, possibly just "zoned out," but would not remember what happened during the seizure. If alcohol is consumed with GHB, the alcohol takes a longer time to leave the individual's system, leaving her impaired for longer, because higher doses of GHB inhibit the rate of removal of EtOH (3). Both of these aspects of GHB can account for its use as a date rape drug.

When searching for information on Rohypnol (flunitrazepam), I found more information relating to date rape, but not much on the neurobiological aspects. Flunitrazepam is used to treat the insomnia seen in midwinter in Northern Norway in otherwise healthy patients (8). It is highly effective in increasing alertness and the feeling of being refreshed in the morning, with minimal side effects (8).

Ketamine, known on the street as Special K (9), has been used to mimic schizophrenic symptoms in order to provide a model for schizophrenia (10). Ketamine was and still is used for anesthesia, most commonly in dental procedures (10). It works by blocking ion channels and thus blocking receptors for certain neurotransmitters (10). When given to schizophrenic patients, it makes their symptoms worse (10). Ketamine also increases blood flow in the anterior cingulate cortices and decreases blood flow in the hippocampus and cerebellum, all areas that are also abnormal in schizophrenia (10).

Date rape drugs are used for recreation, for the treatment of neurological disorders, and for understanding certain neurological disorders. It is surprising that a neurotransmitter, GHB, in large doses can cause seizures and memory blackouts while in a more moderate dose it reduces alcohol cravings. It is strange that one chemical can have so many different effects on the brain and behavior. Ketamine is also interesting because it creates schizophrenic symptoms and is used for recreation. People using it recreationally must therefore enjoy possessing schizophrenic symptoms, which makes me wonder about people with schizophrenia and what it is like to live in their brains. I started off with questions about date rape drugs and am finishing with questions about schizophrenia. The brain is an interesting and unpredictable place at times.


1)Date Rape Drugs, 4women.gov: National Women's Health Information Center.

2)"From the street to the brain: neurobiology of the recreational drug gamma-hydroxybutyric acid", Wong CG, Gibson KM, Snead OC. National Center for Biotechnology Information; U.S. National Library of Medicine

3)"The Role of gamma-hydroxybutyric acid in the treatment of alcoholism: from animal to clinical studies", Flavio Poldrugo and Giovanni Addolorato.

4)"Liquid ecstasy: a new kid on the dance floor", J. Rodgers, PhD and C. H. Ashton. The British Journal of Psychiatry.

5)Invited Symposium: Recent advances in the neurobiology of drug addiction, Liana Fattore, Gregorio Cossu, Cristina Martellotta, and Walter Fratta.

6)"Medications and Alcohol Craving", Swift, Robert M., M.D., Ph.D. Alcohol Research and Health.

7)"Selective gamma-hydroxybutyric acid receptor ligands increase extracellular glutamate in the hippocampus, but fail to activate G protein and to produce the sedative/hypnotic effect of gamma-hydroxybutyric acid", M. Paola Castelli, et al. Journal of Neurochemistry.

8)"Triazolam (Halcion) versus flunitrazepam (Rohypnol) against midwinter insomnia in Northern Norway", Lingjaerde O, Bratlid T. National Center for Biotechnology Information; U.S. National Library of Medicine.

9)Ketamine: a fact sheet, U.S. Department of Health and Human Services, Alcohol and Drug Information.

10)"The Latest Theories on the Neurobiology of Schizophrenia", John J. Spollen III, MD.

Full Name:  Christine Lipuma
Username:  clipuma@brynmawr.edu
Title:  Walking Along the OCD Spectrum
Date:  2005-05-13 11:59:35
Message Id:  15162
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

All people experience problems which cause them anxiety, but for people with Obsessive Compulsive Disorder (OCD), the problems seem irrational and the solutions can be extreme (1). With OCD, the person experiences recurrent and undesirable thoughts and so becomes obsessed. They then feel compelled to perform a certain task to temporarily relieve the problem. Unlike other types of addictions or obsessions, sufferers usually receive no pleasure from their OCD and they understand that what they are doing is irrational (1). There are many different types of OCD because people have varying obsessions. Still, most cases involve an important common theme: the desire for things to seem "right," by being perfect, clean, controlled, or balanced.

Some common obsessions include wanting objects to be lined up correctly or symmetrically, being afraid of catching a disease due to a lack of cleanliness, or worrying about whether the stove was turned off. In order to feel better about the problem, people become involved in compulsions such as activities or rituals (1). There is an important difference between an "activity" and a "ritual," however: rituals often include washing one's hands to the point where the skin becomes dry and breaks, or checking numerous times to make sure locks are closed. There is another class of disorder called Obsessive Compulsive Personality Disorder (OCPD), which tends to focus more on perfectionism, especially in those who are characterized as workaholics or who keep to strict religious or moral guidelines (2). Still, in many cases there is no clear-cut difference between OCD and OCPD. In the case of hoarding, in which individuals are unable to discard objects which seem to have no real value, it is undecided whether the condition is an OCD or an OCPD. A type of OCD in which the obsessions and compulsions happen only in the mind is commonly called Pure O (3). With Pure O, insecurity about a certain aspect of oneself causes the person anxiety. This seems to be something that all people go through, but for people with Pure O it leads to long periods of rumination over the thought, which causes further anxiety and panic. Pure O is characterized as a type of OCD, whereas OCPD is explained as totally separate, but both seem to fall within the OCD spectrum of disorders, conditions related to but not exactly OCD (2).

After reading about these disorders, it has occurred to me that I have been afflicted by all of them at different points in my life, which would lead me to believe that the disorders are simply different ways of dealing with a common problem, namely insecurity. As a child I had an obsession with balance where, for example, if I would touch my right leg, I would also have to touch my left leg. When I would ask myself why I would do this, I would simply reply that, "I didn't want to make the other leg feel bad." At present I have minor issues with hoarding and the ability to trust people. It is not difficult to make the case that all people have faced insecurity, and therefore it is not too far-fetched to say that perhaps some people deal with their general insecurity by trying to bring balance into other areas of their life that can be controlled. The idea that drawing symmetrical houses can make you feel more secure is more unconscious than conscious, and therefore it is important to recognize the functions in the brain that are related to OCD.

Brain scans have shown that OCD affects the frontal cortex and the basal ganglia (clusters of neurons located inside the brain). There is increased activity in these areas, which normally work together, the frontal cortex sending information to the basal ganglia, which then transport it to other parts of the brain (4). The frontal cortex seems to be involved in memories and decision-making, and it can also control inhibition. Because the frontal cortex and basal ganglia use serotonin as one of their neurotransmitters, it has become important to look at the ways that chemicals in the brain affect OCD. When the neurotransmitter serotonin is sent between neurons in the brain, it often happens, especially in people who suffer from neurological disorders, that serotonin does not stay in the gap (synapse) between two neurons long enough (5). Because of this, too much reuptake of the neurotransmitter occurs before serotonin can show its full effect. SSRI (Selective Serotonin Reuptake Inhibitor) medication seems to help some patients with OCD: with SSRIs, this reuptake is blocked so more serotonin can build up in the brain. It is postulated that the frontal cortex controls what is remembered and what is learned, and has general inhibition power. In a person with OCD, the frontal cortex has lost some of its connection to the rest of the brain and so it cannot control whether the brain will learn a repetitive activity or concentrate on a certain rule (4).

It is also important to note that it is not only the increase in serotonin, but also the ability of the brain to use that serotonin, that functions in improving patients with OCD (4). The cells themselves often need several weeks before they are able to change to the extent that they will allow the SSRIs to have an effect on the receptor cells. In my case, the SSRI that I was given, fluvoxamine, produced adverse side effects and did not help the OCD, but perhaps if I had continued with the medication a few more weeks, my receptor cells would have been able to change enough to allow the SSRI to function. Still, many people with OCD see no changes at all with any of the medications, showing that perhaps serotonin is not the only chemical in the brain that is affected. Another cause of OCD could be streptococcal infections in children. In some children, the immune system creates antibodies for the infection, but the system then starts to recognize cells of the basal ganglia as foreign, which is known as an autoimmune response (6). It has been shown that some children with OCD have these antibodies in their system, so perhaps the problems with the basal ganglia made it so that the frontal cortex was unable to communicate with them properly (4).

In addition to medication, there are other psychosocial treatments that have been effective for OCD, including Cognitive Behavior Therapy (CBT) (7). Behavior therapy involves exposure and response prevention (E/RP). In this treatment, the patient is first given cognitive reasons as to why their fears are unfounded or unreasonable. Then they are exposed to what they are afraid of, because it has been shown that with time, the exposure makes the fear seem less frightening. After exposure, the ability to perform ritual responses is blocked. Of course, not all forms of OCD follow this pattern of being afraid of something, so other therapies are used, especially in the cases of OCPD and Pure O. These can sometimes be helped with thought suppression and habit reversal, where the patient substitutes another, non-OCD behavior (7). OCPD is especially difficult to subdue because the patient's entire personality revolves around perfectionism (2). These treatments help to change the behavior, but they do not explain why a certain behavior began, so medication continues to help with the neurological root of the problem. Changing a behavior also involves reversing a neural pathway in the brain, so it would seem that both therapies have a biological basis.

If there is evidence that OCD is caused by problems in the brain, then how could the disorder possibly have to do with subconscious psychological problems, such as the need for security? As one article put it, "It wouldn't be too presumptuous to postulate that OCD evolved out of a basic human need [for order and purity] and somewhere along the way, ultimately became the need itself (4)."


1)Obsessive Compulsive Foundation, Website explaining the disorder

2)Obsessive-compulsive personality disorder, Article about OCPD and its differences from OCD

3)Pure Obsessional OCD, Article about Pure O and how it differs from OCD

4)As Good As It Gets?, Article about OCD and its causes in the brain

5)Selective serotonin reuptake inhibitor, Article on the ways the SSRIs function

6)PANDAS The OCD/Strep Connection, Website about the immune system's role in OCD

7)Cognitive Behavioral Therapy For OCD, Website about CBT and other therapies for OCD

Full Name:  Laura Cyckowski
Username:  lcyckows@brynmawr.edu
Title:  A New Spin on Nature vs. Nurture
Date:  2005-05-13 12:18:59
Message Id:  15165
Paper Text:
Biology 202, Spring 2005
Third Web Papers
On Serendip

Just how blank might our "blank" slates be as we start out life? In the realm of linguistics especially, proponents of a biologically deterministic view of language and human behavior have insisted that the proposed blank slates are not really that blank at all. Instead of a clean slate, prominent figures like Chomsky have argued that language, and furthermore many aspects of behavior and development, are deeply rooted in genes. Recently, a new and interesting idea in favor of a rather blank slate has been brought to the table by child psychiatrist and psychoanalyst Stanley Greenspan and philosopher and psychologist Stuart Shanker. Greenspan and Shanker, instead of relying on genes to explain language development and higher levels of thinking, put all their marbles on emotional development to pave the way for such development. They boldly propose that it is culture, society, and our interactions with other humans during our growth that fuel this emotional development. While both theories, Chomsky's more "nature geared" model and Greenspan and Shanker's more "nurture geared" model, have yet to be either altogether accepted or dispelled, Greenspan and Shanker's new model provides interesting explanations and fresh insights into observations about language and emotional development.

According to a Chomskyan theory of language, language is foremost a product of the brain. Exposure to a language is required for acquisition of a language, and thus environment and nurture are certainly not left out of the equation. However, this theory proposes that, as a part of being human, a child is born with an innate predisposition to acquire and learn a language.

This biological innateness for language is a result of a language acquisition device (LAD), which Chomsky invokes through the idea of a Universal Grammar (UG). In every child's Universal Grammar there exists a finite set of linguistic rules. Rules are proposed to be generative and hierarchical as opposed to linear. Syntactic rules, for example, have deep structures which are converted into surface representations as dictated by such rules. Through exposure to any particular linguistic environment, rules are learned and a grammar for that particular language is built. The Universal Grammar acts as a menu, providing potential for all the differing (and contradicting) rules observed throughout the world's languages. A particular linguistic environment thus acts as a switch for selecting which rules apply to the language(s) a child is exposed to, and the child subsequently builds a grammar of rules specific to that language (or languages, if the child is in a bi- or multi-lingual environment). This point is easily illustrated in two stages known as cooing and babbling. At the stage of cooing, a baby produces many different phonetic articulations regardless of what sounds may or may not be in the phonetic inventory of the surrounding linguistic environment. Sounds which are not represented in the linguistic environment may very well be made by the baby; many sounds may be excluded for simple reasons of inadequate motor development and maturation of the vocal organs, an exclusion that is in no way related to what sounds the baby is or is not hearing in his or her environment. The second stage, babbling, is distinguishable in that the baby picks up on the phonetic units present in his or her particular linguistic environment. Further support for the hypothesis of a UG comes from Chomsky's proposal of generative grammar.
Rather than observing, storing, and then imitating sentences learned from adults, a child uses exposure to language to build rules out of the innate UG and then generate novel sentences and, consequently, an infinite number of them. This theory of language acquisition through innate universals explains the ease with which babies and children acquire any language(s) they are exposed to, and also explains the near impossibility of acquiring native speaker status in a language at a much later adult age. In an innatist view similar to Chomsky's innateness of language, genes also influence and dictate social, emotional, and intellectual development. Social, emotional, and intellectual development, along with language development, are all then placed at the same level and are seen as interacting with each other given the environment, but are still nested in human genes.

Greenspan and Shanker radically refute this model by asserting that symbols, language, and intelligence are not a direct result of genes, but rather made possible by social and emotional interaction with other humans, namely adults. Taking this idea one step further, they assert that social and emotional mechanisms are not as hard-wired as otherwise thought, but are rather made possible by emotional interactions learned very early.

It is important to note that, although Greenspan and Shanker do not place language, symbolic/higher order thinking, and intelligence in the same relationship with genes as an innatist view does, they by no means deny the existence and activity of genes. Just as Chomsky's theory of language acquisition requires a linguistic environment (and therefore experience or nurture), Greenspan and Shanker's theory also involves both sides of the "nature-nurture" model. They acknowledge that genes are required, but emphasize that genes are not sufficient alone. As is illustrated by their work with chimpanzees and their ideas about brain structure, the mechanisms for basic learning must have evolved and be in place before any individual (or animal) can utilize and take advantage of social and cultural interaction with the environment. As such, Greenspan and Shanker offer co-regulated emotional signaling as the process that opens the doorway for emotional development, language, and intellectual development. Co-regulated emotional signaling involves back and forth interaction, accompanied by emotion, with an adult. Symbolic thinking is reached when the child separates perceived emotions and connects them with meaning in a semiotic way. For example, a child may have incentive to learn the words "baby doll" as a result of emotional signaling between the child and adult (Chomsky's theory of UG, however, would not require an incentive, as language acquisition is assumed to be innately and biologically driven). When the child and adult interact, the child experiences happiness and delight playing with a doll while also interacting with the adult. Later, when the child seeks enjoyment through play or wants the baby doll, she connects the words "baby doll" to express herself. Social and emotional development, then, develop alongside each other and give way to (the incentive to learn) language and symbolic thinking (connecting the words "baby doll" with the idea of, or the physical object itself) and later intelligence.

The implications of this way of viewing the sequence of different aspects of development are particularly striking given the innatist assumption that we as babies are predisposed not only to social and emotional development, but also to linguistic and intellectual development. Greenspan and Shanker's theory holds that such social and emotional signaling is continuously learned anew by each generation. Greenspan and Shanker concern themselves not just with how such signaling must be passed on by adult generations and learned by children, but also hypothesize about human evolution and the events that led up to the first group of humans signaling with each other using emotions and symbols. Rather than submitting to the "Big-Bang" hypothesis of the emergence of a human species, which looks to a sudden genetic mutation or natural selection in favor of language, the authors propose that the evolution of the human race was a gradual process, as was the very development of emotional signaling and thinking. Though children hit certain milestones in their emotional and linguistic development (first word, building sentences, and so on), their development is gradual (before the first word is uttered, they are still perceiving language from and interacting with adults); so too with evolution.

Claiming that such processes are passed down socially offers an explanation for cultural universals as well as cultural relativism or specificity. That which constitutes an observed universal may be something that came about longer ago in the past (and was passed down to a larger population before groups of humans separated and spread out over the globe), while something considered more culturally specific today might represent something passed down within the confines of a specific group. Greenspan and Shanker cite these two types as well; some cultural and learning processes have been passed down over an evolutionary time period (the abilities to relate and signal with emotion), while other processes are determined by each individual over a shorter time period (the idiosyncratic ways a person deals and relates with emotions). They note, "the former involve basic learning processes and the latter embrace individual content and behavior stemming from these processes" (Greenspan and Shanker, 5).

These processes have continued up until the present day as the result of a "snowball effect." After our human ancestors began to utilize these processes, they were passed down successively to each generation over and over. Genes, then, are not specific for social and emotional development or language; these processes remain manifested only because they are kept in motion through social and cultural transmission from each generation to the next. A child needs not just interaction with human beings, but must be taught through interaction with a human who has already learned and mastered these processes, in order to utilize the ability to learn them.

Work with various species of chimpanzees led the authors to observe how, in dynamic systems (communities where the chimps are influenced by one another), the chimps can, like humans, demonstrate heightened communicative and intellectual capacities. If human genes are in fact not specific for language, then a species engaged in such emotional signaling should show the same development and heightened processes. And the chimps they observe do. Greenspan and Shanker worked with chimps in groups (and with single chimps in isolation to provide a comparison) who were raised in and lived as a community in an environment enhanced by objects which encouraged communication as well as interaction with each other and with humans (lexigrams, interaction with humans, etc.). Some might argue that if the chimps were capable of showing processes similar to humans', a chimp should, like a child, develop the same language. The chimps do develop language in that they, alongside emotional signaling, develop symbolic thinking and can communicate with each other. Chimps may simply be at a simpler or more elementary stage of emotional processing (which is observed to be passed down to the next generation in the chimps just as in humans), and may not yet have progressed or learned to utilize the higher levels of linguistic and symbolic thinking that humans have. This does not imply that the chimps are not capable of doing so; they may just not be as far along, but on the same track nonetheless.

Greenspan and Shanker's theory of course raises many questions and implications for existing observations of "deviant" development and/or behavior. Cases of feral children, raised in isolation, inevitably demand an explanation in terms of their new model. Followers of the Critical Period Hypothesis, who assume language acquisition is biologically programmed, propose a cut-off (or perhaps a less radical drop-off) in the ability to acquire a language natively. Proponents of this hypothesis would predict that feral children, found after (about the age of) puberty, would never be able to acquire a native language in the same way that a child does. And, as seen in the case of Genie, their prediction is supported. Greenspan and Shanker touch on this case briefly by suggesting that Genie was not able to fully develop language because she was not exposed to emotional interaction. However, after being discovered she was of course in the presence of other people and adults, and doubtless had some type of interaction. This might then imply a similar (in part still biologically wired) critical period hypothesis for the simultaneous development of social and emotional interaction, which would of course need to come before any possible proposition of critical periods for language "acquisition" (or better, development). Through their elaborations the authors even offer a better understanding of autism, and provide ways of overcoming aspects of the condition in children who develop such disorders due to biological deviations. There is no mention of attempted treatment of such disorders in people at older ages, and a comparison between the age groups for apparent "treatment" would provide more insight into a critical period hypothesis.

Another situation that calls for explanation through the authors' model is that of Nicaraguan Sign Language. The language was developed by a community of deaf children in the absence of any other sign language. It might be suggested that Greenspan and Shanker's model would not explain this apparent creation of home sign, because the children were apparently "creating" their own language without input from any other language. However, this can be more concretely accounted for than the case of Genie in that the children still had interaction with adults and teachers who had attempted to teach them a manually signed language; thus there were inevitably emotional exchanges of some sort.

Greenspan and Shanker's theory further offers support for differences between groups and cultures. In their model, both universals and differences are, again, accounted for through the idea of cultural practices being dependent on transmission. Such implications could mean better understandings of the debate between moral absolutism and moral relativism, which would be results of cultural universals and cultural specificities, respectively. A better understanding, sensitivity, acceptance, and embracing of these cultural differences would mean steps forward especially for affairs political in nature and, with a subsequent understanding of cultural memory, a new view of the nature of history.

The authors' ideas lastly raise some concerns about the modern structure of the family and the way in which development through education in children is viewed and handled. If interaction with a caregiver is a requirement for transmission of healthy development, and if there are certainly implications when a child is left in isolation, what are the implications of a caregiver who is not constant? Specifically, how might new understandings change the circumstances of foster care children, who always seem to be uprooted from a family and environment the instant they begin to feel integrated into the new social atmosphere? Furthermore, what might be discovered about the quantity, rather than quality, of child-adult/caregiver interaction? Though an innatist view would similarly require human interaction as a requirement for language acquisition, in the age of information and technology, could there be slight ramifications for parents leaving their infants and children in front of TVs or "Baby Einstein" computer games, when the time might otherwise be spent in real, enriched, nurturing human interaction?

Given that social and emotional development, language, and thinking are argued to be relearned anew by each generation and dependent solely on transmission from older generations to the next, the seemingly short time span from one generation to the next seems to allow for very quick change and an opportunity for "things to go wrong." If there are in fact (possibly irreversible) repercussions of certain situations like the above, how long might it take to recognize differences and changes? Could they be changed through younger generations if acknowledged at all?

While an innate and an interaction-dependent model both require a certain degree of both nature and nurture, neither can provide a definitive answer to the blank slate debate; or at least neither can say exactly to what degree certain aspects at birth are set in stone or written with experience and/or social interaction. While both provide useful explanations and give helpful insights about aspects of human experience, the less commonly held view of Greenspan and Shanker should be given (at least) as much consideration and attention as the more Chomskyan view, in order to explore what further observations such ideas could lead to and any ways this new kind of model might help in understanding not only specific conditions like autism, but the very nature of man itself.


Greenspan, Stanley I. and Stuart G. Shanker, The First Idea: How symbols, language and intelligence evolved from our primate ancestors to modern humans. Cambridge: Da Capo Press, 2004.

Full Name:  Emily Trinh
Username:  tmai@brynmawr.edu
Title:  Memory and Personality
Date:  2005-05-13 12:37:25
Message Id:  15170
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Alzheimer's disease (AD) is currently the fourth leading cause of death in many developed nations, and in the United States alone, an estimated four million Americans suffer from the disease. In 1906, Alois Alzheimer and Emil Kraepelin were the first to identify AD, through observations of a collection of brain cell abnormalities taken from a deceased 56-year-old woman who had been diagnosed with mental deterioration. Some of the abnormalities they found include dense deposits (neuritic plaques) around the nerve cells and twisted bands of fibers (neurofibrillary tangles) inside the cells (1). This degenerative brain disease is a very common form of dementia among the older population, and there is no recovery or cure. People with this devastating disease suffer tremendously because it impairs a person's ability to perform daily activities like coordinating movement and governing emotions. Alzheimer's disease can also take away a person's memory, but is memory the only precious faculty that is lost to this illness?

In order to understand the symptoms that can develop over the course of the illness, it is necessary to recognize some of the etiologies that are associated with Alzheimer's disease. The oldest theory, which is still used today, is the cholinergic hypothesis, which states that a deficiency in acetylcholine is responsible for the development of Alzheimer's disease. Some of the first-generation anti-Alzheimer's medications are used to increase the availability of acetylcholine by inhibiting acetylcholinesterases. The treatments include lecithin, choline, physostigmine, deprenyl, and tacrine hydrochloride. Studies have shown that these medications are effective in treating Alzheimer's symptoms, but they cannot stop or reverse the development of the disease (2). Side effects of the drugs include elevated liver function values.

People with Alzheimer's disease are also suspected to be genetically predisposed to it. Scientists have divided the disease into two types: Familial Alzheimer's Disease (FAD) and Sporadic Alzheimer's Disease (SAD). FAD shows a clear inheritance pattern, while SAD shows only a vague family history of the disease. Most identified cases are SAD, in which the disease occurs around or after the age of 65; FAD, however, occurs between the ages of 30 and 60. In addition, Alzheimer's disease is less common in men than in women for any age group. Some evidence has pointed to defects in the Amyloid Precursor Protein (APP) gene on chromosome 21, the Presenilin 1 (PS1) and Presenilin 2 (PS2) genes on chromosomes 14 and 1, and the apolipoprotein E (APOE) gene on chromosome 19, which increase people's chances of developing the illness (3). Most FAD cases result from PS1 alterations, while a large percentage of SAD cases are associated with APOE gene variants. Changes to certain genes on specific chromosomes can have deadly consequences.

A more recent hypothesis, generated in 2004, suggests that Alzheimer's disease is caused by the interaction between three proteins; this is the triple-protein pathology theory. Some studies have shown that Tau protein abnormalities occur first in the brain, followed later by the accumulation of amyloid beta protein. Amyloid beta clusters into amyloid plaques, and these plaques can settle on the blood vessels and on the outer neuronal surfaces of the brain. Inflammation and neurofibrillary tangles (NFTs) are two processes that follow the formation of amyloid plaques, killing neurons and causing dementia. During inflammation, numerous astrocytes and microglial cells activate to produce prostaglandin-mediated inflammation and harmful free radicals that damage neurons. When Tau protein is phosphorylated, it loses its ability to bind to microtubules and keep the microtubules' structures and functions intact. The tau proteins instead bind with each other into knots, forming the NFTs (4). Neurons that no longer have any functional microtubules will eventually die. A third protein, alpha-synuclein, also forms and interacts with the other proteins to cause Alzheimer's disease. The function of alpha-synuclein has not yet been identified, but studies have shown that it may be related to other illnesses like Parkinson's disease (5).

Other less well-known etiologies of Alzheimer's disease are the autoimmune theory, the slow virus theory, and the toxic chemical theory. For instance, levels of toxic chemicals like aluminum and mercury are found to be extremely high in some people with Alzheimer's disease. The body's immune system can also attack its own tissues, making antibodies against its own cells. Aging neurons might undergo late changes that trigger an autoimmune response, leading to the production of Alzheimer's symptoms. Since a very similar brain disorder (Creutzfeldt-Jakob disease) is caused by a transmissible agent related to mad cow disease, many have speculated that Alzheimer's disease might also develop from a virus entering the system (5). However, no studies have been able to identify a virus that causes the disease.

Due to the numerous etiologies of Alzheimer's disease, there are also many different types of symptoms that can develop during the different stages of the disease. In the beginning of the illness, individuals will feel tired and anxious, and become upset easily. They have problems coping with changes, and so they get confused when they travel to a new place. They will also try to blame others for their forgetfulness. Making decisions becomes very difficult for them, and balancing the checkbook seems like an impossible task. They can also anger easily, and driving becomes dangerous for them and for others. They will seek isolation, and thus are very susceptible to depression (2).

In the later stages of Alzheimer's disease, the symptoms can become so devastating that affected individuals can no longer take care of themselves. Many will lose their ability to identify dates or read words, and some will have severe emotional problems. They will also lose touch with reality as they become suspicious of people around them and develop paranoid notions about being murdered. Dressing, bathing, and feeding themselves are daily routines that they can no longer accomplish (2). Some will encounter problems with walking and are forced to live with a caregiver. Death eventually follows from pneumonia or other health problems.

The most severe and vicious symptom produced by Alzheimer's disease is the loss of memory. Recent, or short-term, memory is the first type of memory to disappear from the individuals' minds. The memory problem continues to worsen with the progress of the disease, and so some people with the disease find themselves asking or repeating the same questions over and over. They will forget people's names and lifelong friends. When people with the disease lose recent memory, they can also lose entire sections of time. For instance, a person can forget who the current president is or what the current year and month are. Many will no longer be able to recognize and identify family and friends.

People suffering from Alzheimer's disease are not only losing their memory; they are also losing their personality. In order to understand the relationship between personality and memory, it is important to define both. Personality, as defined by some neurobiologists and psychologists, is a collection of behaviors, emotions, and thoughts that are not controlled by the I-function. Memory, on the other hand, is controlled and regulated by the I-function of the neocortex. It is a collection of short stories that the I-function makes up in order to account for events and people. Memory is also defined as the ability to retain information, and it involves three important stages. The first stage is encoding and processing the information, the second stage is storing the memory, and the third stage is memory retrieval. There are also different types of memory: sensory, short-term, and long-term. Sensory memory relates to the initial moment when an event or an object is first detected. Short-term memories are characterized by slow, transient alterations in communication between neurons, while long-term memories are marked by permanent changes to the neural structure (1).

It is very possible that when Alzheimer's patients lose their memories, they will also lose their personalities. If memories and personalities are controlled and influenced by certain structures in the brain, like the neocortex, then plaques and tangles forming around these structures can cause individuals to lose their memories and also their personalities (3). For example, one Alzheimer's patient, Margo, left an advance directive requesting euthanasia if she developed dementia. A couple of years later, the Alzheimer's disease left her severely demented. The problem is that Margo's personality had changed dramatically. She enjoys life and does not want to die anytime soon. After Margo suffered the full effect of Alzheimer's disease, she was no longer the same person who wrote the request asking for euthanasia (4). The changes in Margo's brain structures are assumed to have altered her personality and wiped her memory clean.

Another example that supports the theory that one's personality changes as the brain structure alters is the development of a child into an adult. Over the years, as a child's brain begins to mature by changing the size or shape of several brain structures, the child's behaviors in turn will also alter. When my brother was little, he used to play violent video games. He was the big bully in school and was very competitive in sports. As he grew older, however, my brother became very patient and friendly toward the other kids in school. He no longer plays violent video games, but instead took up swimming and chess. Since my brother's brain is not the same brain it was five years ago, it is only natural to expect a change in my brother's behaviors as well. Since there are so many alterations to the brain's structure as one develops, it is expected that personality will also change to fit the brain structure.

Alzheimer's disease can alter the brain structures, and the consequences of this alteration are memory loss and changes in personality. When one learns new things or experiences new events, it is assumed that changes in the brain will take place. Thus, a person's personality will alter as the brain structures change over time. Changes in behaviors that are caused by alterations of certain structures of the brain create and define changes in personality.

1) Wikipedia, Alzheimer's Disease Wikipedia
2) Mental Health, History and Origin of Alzheimer's Disease
3) Alzheimer's Disease and Symptoms , Overview of Alzheimer's Disease
4) Psych.org, Etiologies of Alzheimer's Disease
5) Senior Health, Questions about Alzheimer's Disease

Full Name:  Anna Marciniak
Username:  amarcini@brynmawr.edu
Title:  "How Creativity Arises: Brilliance in Some of Us"
Date:  2005-05-13 13:18:02
Message Id:  15171
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip


Just as a particular cut of diamond is indicative of its brilliance, some diamonds, no matter how polished, will never sparkle as brightly as others. When it comes to "uncovering the secrets of creative thinking," it seems absurd to imply that there is a hidden conventional method by which uninspired individuals might gain enlightenment. In Creativity For Dummies (2), the author abruptly concludes, "there is no deceptively simple scheme of generating creativity...however, take a good hard look at your life and ask, 'does my life suck?' and if your answer is, yes it [sic: your life] does suck, then make the necessary changes to see that you surround yourself with inspiration! Dummy." While Creativity For Dummies could be a little less condescending in tone, one cannot help but feel the author's frustration with the nation's deficit in creativity, and the overwhelming desire for a quick fix. Part of the argument that creativity can be cultivated or unleashed in everyone can be traced back to common assumptions such as that God is inherently good, or that one is innocent until proven guilty; but when did we lose sight of the probability and the selectivity of brilliance in creative people? Perhaps there is a creative spark in all, but what causes a thunderbolt within the brain's mental exchange out of which consistent brilliance arises, ultimately classifying the person as a genius?

As synonyms must be chosen with the utmost care, a brilliant person, aka genius, must be somewhat defined: "GZA (pronounced The Jizza, aka The Genius, Maxi Million, Justice, born Gary Grice August 22, 1966 in Staten Island, New York) is an American rapper and member of The Wu-Tang Clan. GZA is, along with Ghostface Killah, widely considered the best rapper in the nine-person clan, and his solo work has been critically acclaimed" (3)....whoops. "A Genius is a person imbued with distinguished mental prowess. This can manifest either as a foremost intellect, or as an outstanding creative talent. The term also applies to one who is a polymath, or someone skilled in many mental areas. The term does specifically apply to mental rather than athletic skills, although it is also used to denote the possession of a superior talent in any field; e.g., one may be said to have a genius for golf or for diplomacy" (4). Whoever would have thought that golf and diplomacy would be definitive areas of genius? The website goes on to say that geniuses exhibit a "phenomenal brilliance" and "differentiate themselves from the rest through great originality." ...wait, the rest? So then we have just established that yes, unfortunately, geniuses are different from the rest of us in ways. Big ways.


1. Their composition is "99% inspiration, 1% perspiration" (5).
2. "They see things differently, they expect more of themselves than the rest of us" (6).
3. Geniuses look different in that I'm-a-genius-type-of-way. They also act different, either really cool, or totally weird.
4. Isn't Genesis usually left up to individual interpretation?


Biologically, the energy that's involved in creating genius starts in utero. Not only is every child born into sin, but each child has the capacity for creative behavior. Whether or not that behavior is nurtured into genius is almost entirely a function of the child's upbringing and environment. Children that are born with autism or epilepsy are not necessarily handicapped creatively; if anything, children with developmental disabilities have a mentally open door to creativity. But the greater tragedy in today's society is that many gifted children, i.e. those displaying extraordinary intelligence or talent in a certain field, are not challenged at home and in school and develop disorders which hinder their creative capacity. Children with an immense capacity for understanding adult concepts might tend to withdraw from their age-appropriate social group as a result of their peers' lack of understanding. Many public schools have become a training ground for tortured souls and troubled youths, pumping out drug-addicted kids who use out of boredom. This is not to say that drugs don't aid in stimulating and enhancing the creative processes, but education is a fundamental part of a child's development, and if an avid interest in the nature of the inner and outer world isn't encouraged, then the brain simply ceases creative function. This is why schools are so fundamentally important to developing the brilliance in our children, because genius is a mixture of language, perception, consciousness, learning, memory, and thinking. Proper psychological functioning is pivotal to brainwave activity and the central nervous system. Is creativity a function of inhibition? Is that why children are capable of such exuberance, because they simply lack the strongholds that bar adults from openly engaging in conversation with imaginary friends, or playing a spontaneous game of pretend?
Via the deductive method, we can reduce creativity as follows: creativity--> inhibition--> social development--> mental activity--> environment--> home. Brilliance develops inside those children who grow up in an environment which reinforces the importance of asking questions, understanding, and experimentation.


1. "Geniuses are not eccentric...they have the passion...so intense that no force on earth can stop it" (5).

2. They are the only people who make history books.

3. "Geniuses are deprived individuals. I wonder if there's like, a disassociation from society. I mean, these f-ing people who sit around all day writing and reading books, I mean, Jesus, how many people have time to sit around and do that shit all day? I'd be a f-ing genius if I did that too. I mean, shit, that's why my brother's so much smarter than me, because he started reading earlier on than I did. Smart little m&*^%#f**ker" (11).



While there seems to be a never-ending supply of money to fund special-education programs in schools, there are very few public institutions which offer advanced-level programs specifically suited for gifted children; there simply isn't any money to reward those children with an earnest hunger for knowledge and understanding. However, once the genius ages to sweet sixteen, she can take the Mensa entrance exam to find out if she is certifiably "wicked smart." Mensa was founded in 1946 by Barrister Roland Berrill and lawyer moonlighting as scientist Lance Ware. The not-so-secret society had one staggering qualification for applicants: a very high IQ. Also, for anyone who isn't Asian, don't worry, Mensa doesn't discriminate. In fact, Mensa membership includes the club's exclusive magazine, issued only to members. For a small fee (and provided you pass the exam), you can have your own laminated Mensa Membership card which proves your IQ is "in the top 2% of the population." Sweet, geniuses unite, take back the light (7). To find out if you qualify to be a Mensan, and also boost your ego in case you're having a fit of the smarty woes, take the online Mensa Mental Workout (10).


Right Brain Left Brain: Back to the Subject of Creativity and because they're so cute, little kiddies

Where does creativity come from and does it really have to do with using a specific part of the brain? Only in the 1960s did Roger Sperry prove that the "brain is divided into two major hemispheres" (12), which are responsible for very different cognitive processes or modes of thinking. While the left brain is associated with analytical and logical cognitive processes, the right side is thought to be more random, intuitive, and creative. The brain functions through a delicate system of neural networks. Thinking involves a flow of activity from cell A to cell B, a sort of firing action which takes place in stimulating cell activity. The interplay and exchange of information is carried through these neurons to form cell assemblies, or networks in a global movement of information through brainwaves (14). While this free flow is constant, it creates pathways the brain will use over and over again, even after the external stimuli are removed, facilitating parts of the muscular system (15). Cognitive processes can be explained in terms of connections between assemblies of neurons, rapidly firing within the neocortical framework. However, should there be an interruption in the dance of distribution of activity, it could cause information to be lost. Since brain = behavior, it makes sense that cognitive conception of the creative process must be played out in some fashion. Therefore, if there is an abrupt shift from right brain to left brain activity, it inhibits the growth of the brain as a whole entity. Not only is the brain a holistically complete functioning mass of neural networks, but it is a creature of habit, and if certain neural pathways are destroyed or not exercised, they begin to weaken, and thought blockage occurs. When networks of neural connections become stalled, it causes internal conflict, and the entire flow of mental activity is disturbed. Training creative faculties is an integral, if not essential, part of human growth.
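Hebb's idea described above, that pathways the brain uses over and over are strengthened while unexercised ones weaken, can be illustrated with a toy calculation. This sketch is not taken from any of the cited sources; the learning rate, the repetition count, and the two hypothetical cell pairs are invented purely for illustration:

```python
# Toy Hebbian sketch (invented numbers, not from the cited sources):
# repeated co-activation strengthens a pathway; neglect leaves it weak.
rate = 0.1          # arbitrary learning rate
w_ab = 0.0          # strength of the A-to-B pathway (A and B fire together)
w_cd = 0.0          # strength of the C-to-D pathway (never co-active)

for _ in range(20):             # the same experience, repeated
    a, b = 1.0, 1.0             # cells A and B fire together
    c, d = 1.0, 0.0             # C fires alone; D stays silent
    w_ab += rate * a * b        # "cells that fire together wire together"
    w_cd += rate * c * d        # product is zero: nothing to strengthen

print(w_ab)   # 2.0  -- a well-worn pathway
print(w_cd)   # 0.0  -- a pathway that was never exercised
```

The only point of the sketch is that repetition, not any single event, is what carves the pathway: remove the loop and the A-to-B connection barely exists, which is exactly the "creature of habit" behavior described above.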
Children's curiosity and imagination in the early stages of development cannot be replicated later on in life. Playing Barbie or dress-up at 22 is simply not the same, and not as delightful as it was when you were 7. Once a child enters the academic institution, the focus shifts from right-brain activity to the left-brain, solve-the-problem-correctly mode of thinking. As Sperry pointed out, "Our educational system, as well as science in general, tends to neglect the nonverbal form of intellect. What it comes down to is that modern society discriminates against the right hemisphere" (12).

HYSTERIA.... and people with brain damage

Van Gogh cut off his ear, Dali liked to trip, Einstein couldn't get enough women- what's the matter with these artists? What made them lose regard for social norms, and how is it that creativity and brilliance are unleashed in many people who have suffered serious accidents? Since brain damage can be hereditary or due to an injury of sorts, it's important to note just where the brain damage occurs. A creative idea doesn't just come out of nowhere; it is part of cognitive processes that arise in the right brain and are then evaluated and judged by the left. The brain and behavior must be thought of as a complete entity, a whole, thus what affects one side will inevitably affect the other. In his article, Unleashing Creativity, Kraft explains how the left brain's "convergent thinking can inhibit the right brain's divergent thinking" (1). After years and years of schooling where children are predominantly taught to use their left brains, creative development becomes stalled, even squelched, by the left brain's reasoning power. What's so intriguing about people who suffer accidents that affect the left side of their brain is that usually they become less inhibited, showing little regard for their actions both privately and publicly, almost forgetfully reckless. Since the left hemisphere is responsible for conducting the checks and balances, when there is damage, although it may allow a person to transcend barriers, it impairs communication, which is an essential part of life.

The bottom line is, almost everyone is born with the capacity to learn. Whether intellectual or artistic growth is stimulated is entirely a function of environment. Not every child is a genius, but every child is capable of brilliance and creativity. Whenever growth is stalled, whether through education, injury, or environment, it inevitably has an effect on the individual. If the left side of the brain is predominantly used over the right, then the child may not be good at making dioramas, but that doesn't mean the child is not intelligent. True genius is not inside everyone; it is the product of a creative inhibition grounded in the electrical network of neurons all over the brain. Above all, being a genius IS A SPECIAL and UNIQUE THING that distinguishes a person from the rest of us. To achieve great ideas, one must create a special-interest incubation chamber inside the mind to harbor good ideas. The next step is to completely disassociate the self from others' opinions and society, as they simply do not matter. The third step is to finally begin to live life in a manner conducive to the individual desires and longings of the self- why not walk down the path you always stare at instead of just standing there? While these ideas will inspire creativity, they will not make you a genius, and neither will reading this webpaper. There is no test one can take to become a member of that select group, for to be a genius is to become master of your own fishing pond, and unfortunately, people with goldfish are going to have a hard time competing with those who have marlins.


1) Kraft, Ulrich. How Brilliance Arises: Creativity in Every One of Us. Scientific American Mind, Volume 16, No. 1.
This title was almost directly lifted from Kraft's article, and as the reader will see, the author so cleverly seeks to single out that the group who can truly be described as "brilliant" or "creative" is not only small, but selective, and that Scientific American has not only advertised a campy "there's something special in each and every one of us" slogan, but also proves how utterly starved the public is for truthful headlines under this political administration.

2) Creativity For Dummies.
Another absurd title from a book which outlines the step-by-step process by which one might go about achieving creative ideas. Readers should pay particular attention to "Step Number One- the art of smokin' the reefer." This completely fictional book was actually inspired by the author's own creative cognitive techniques, which are not, unfortunately, for sale.

3) The Free Encyclopedia, Comic relief. Yes, it's okay to laugh, that's a fundamental part of being a genius.

4) The Free Encyclopedia, First of all, everyone should know that "this article is about people with exceptional mental abilities." However, it doesn't mean you need to be a genius to appreciate it, you need to be able to read, operate a computer, and have a sense of humor. Make sure to scroll to the bottom of the page, see: stupidity.

5) Genius - An Obvious Truth or an Eternal Mystery?, After reading this article, the reader will ask herself, why is there a question mark after the title? The quote is attributed to Leonardo DaVinci.

6) O'Sullivan Struggles to Show his Genius, Apparently O'Sullivan is a genius at playing Snooker, whatever the heck that is.

7) "Geniuses unite, take back the light." In this statement the author means to imply that the survival of the intellectually fittest could be devastating to the rest of the common people, should the smarties choose to unite and steal fire from the rest of their fellow comrades (1).

8) The use of the term "comrades" is an homage to fellow Genius Ayn Rand.

9) Mensa International Website, Mensa....hahaha, there's also a dating link on this site where single Mensans can meet other single Mensans and talk about all kinds of really intelligent things. Don't ask how the author knows this.


11) Quote attributed to the Author's genius roommate, fellow geek, scrabble lover, and local Henderson High School Video Production teacher-fav.

12) Sperry, Check out this site for information on Sperry and his breakthrough work.

13) Neuro Networks, Complete explanation of the way the neural network processes and distributes information throughout the cortical regions.

14) Neurobio link of Serendip Webpage, Anna Marciniak's fantastic paper on sublimity and brainwave activity in which she asserts that truly creative cognitive processes arise when the brain is vibrating somewhere in the theta state.

15) Hebb Legacy, Donald Olding Hebb first came up with the idea of the neuropsychological cell assembly. "He point(ed) out that every bit of behavior is jointly determined by heredity and environment, just as the area of a field is jointly determined by its length and its width."

16) Right Brain/ Left Brain Website, For kids with easy-to-understand information on Sperry and right brain left brain experiments.

*in case you didn't notice that creative license was taken with this paper, now you do.

Full Name:  Flicka Michaels
Username:  fmichael@brynmawr.edu
Title:  Gray Cells Create a Gray Area in the Distinction Between Male and Female Brains
Date:  2005-05-13 15:14:42
Message Id:  15175
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

When the President of Harvard University implied during a conference held in January that biological differences between male and female brains might explain the underrepresentation of women in fields of science and math, many women attending the conference were outraged. They claimed his comments were offensive to women like themselves who were very distinguished in such fields. The debate surrounding this issue has in turn recalled an old debate existing in biology: Are there innate differences between male and female brains? If so, do these differences necessarily imply that one sex is superior to the other? It seems that some women are anxious to avoid the label of "different" because they believe different means inferior. Given the history of rights that women have fought so hard to obtain, this fear is understandable. However, if biological differences do exist that can help one understand or explain the differences between male and female behavior, aren't we doing a big injustice to society by not even considering the possibility of it? Here, I will explore the differences between male and female brains and the implications these differences have on gender distinction and the nature versus nurture debate, as well as future ways this debate can help one think about behavior in general.

The brain can be divided into three sections: the forebrain, the midbrain, and the hindbrain. The forebrain is the most interesting of the three because throughout one's life it grows until it forms two cerebral hemispheres, which make up about 85% of the entire brain. (1) The thick coating over the cerebral hemispheres is called the cerebral cortex, which is the location of many studies involving gender differences. According to one study done on the brains of male and female rats, some areas of the female cortex were shown to be more developed than those of the male cortex. (1) The same study showed that the thickness of the cerebral cortex was different in males than in females. In 95% of the cases, the female rat demonstrated symmetry between the thickness of the right and left cortex. However, only in 60% of the cases did the male rat show symmetrical cortices. (1) Marian Diamond suggests that the common asymmetry of the male cortices could be useful for the behavior male rats exhibit of finding their female partners and defending their territory. Though many early studies in gender behavior have been (and continue to be) done on rats, technology has rapidly advanced in the last decade, allowing a great deal of research to be carried out on live human brains using functional Magnetic Resonance Imaging (fMRI), volumetric measurements of certain sections of the brain, and positron-emission tomography (PET) (2).

In general, men have about 4% more brain cells than women. However, women are shown to have more connections between brain cells than men. (3) It has also been noted that although men use the dominant hemisphere of their brain (usually the left) to master language, women actually use both sides, giving them a general advantage over men. Further research has shown that women have larger and deeper limbic systems than men, therefore enabling them to be more emotional, better able to express their emotions, and connect more easily to others. (3) Instead, men seem to be better at perception of space, and general cognition. One explanation of this discrepancy can be found in a recent study done by the University of California, Irvine with the University of New Mexico. The researchers found that men have 6.5 times more "gray matter", centers in the brain that process information, than women, and that women have 10 times more "white matter", connections between these centers. (4) The study also found that 84% of the regions that had gray matter in female brains were in the frontal lobe, compared to only 45% in male brains. (4) Therefore, the gray matter is more spread out in the male brain than in the female brain. The study also shows that women have a larger corpus callosum than men, which is why women are better at combining information from both sides of the brain while men are better at general overall cognition.

In addition to the amount and location of gray or white matter in the brain, there are other aspects that influence the variation in behavior between men and women. In a recent study done at Johns Hopkins University, researchers discovered that a region of the parietal cortex, called the inferior-parietal lobule (IPL), is much larger in men than in women. (2) The right IPL is found to be associated with the perception of emotions, as well as the perception of bodily and spatial interaction, while the left IPL is involved with the perception of time, speed, and mental rotation. (2) In the study, the researchers found that in men, the left IPL is larger than the right one, and that the exact opposite is true in women. This could potentially explain why women are generally more emotionally sensitive than men, and why men are generally more mathematical than women.

Another area of the brain that has also been shown to be significantly different in male and female brains is the hippocampus, which is involved with memory storage and spatial navigation. (5) Numerous studies performed on rats have shown that male rats tend to find their way through mazes using directional information while female rats tend to navigate mazes by using the memory of landmarks. One study executed by Tracey J. Shors from Rutgers University found that when they exposed male rats to a series of electric shocks, there was an increase of connection between neurons in their hippocampus and they learned tasks more quickly. (5) The same experiment done on female rats showed the opposite effect, with a decrease in the females' ability to learn tasks. This study demonstrates the way in which different genders respond under stress, and it has interesting implications for the ways male and female children should be taught in school.

It may seem that gender determination is pretty clear-cut, but in reality that is not necessarily the case. There are many cases of Gender Identity Disorder (GID) in which a person deeply associates with the opposite sex and consistently feels as though they were born the "wrong sex".(6) For example, a male with GID tends to dress in female clothing and prefers playing with dolls to participating in sports games. On the contrary, females with GID tend to dress in male clothing and enjoy playing rough sports games. Both sexes who have this disorder have also shown to be extremely uncomfortable with their biological genitalia and often express the wish that they were more like the opposite sex in physical structure.(6) One interesting aspect about gender disorders is that the subject of the disorder usually has a strong sense very early on in childhood that they are the wrong sex.

This raises the question of how much of our gender is determined at birth as opposed to the environment in which we grow. It is an interesting question that has caused much discussion about the nature versus nurture debate. Through experiments involving male and female infants, it has been noted that boys tend to choose toys such as trucks and cars, while girls tend to choose dolls. A study done by Melissa Hines of City University London and Gerianne M. Alexander of Texas A&M University on vervet monkeys showed that, when presented with a selection of toys, the male monkeys spent more time playing with toys such as trucks and cars, while the female monkeys spent more time playing with dolls and other feminine toys. (5) The study also showed that both the male and the female monkeys spent an equal amount of time on the gender-neutral toys such as coloring books. (5) Since monkeys are not influenced by societal pressures like humans, this study suggests that the differences between male and female behavior could be innate instead of environmentally constructed.

Many scientists say that the differences between male and female brains can be explained by evolution. (2) During the primitive stages of humans, men and women each had distinct roles in society that corresponded to which function their brains performed better. For example, men hunted for food and protected their families while women gathered food close to their homes, cared for the children, and looked after the house. According to Professor David Geary, a researcher of gender differences at the University of Missouri, USA, "developing superior navigation skills may have enabled men to become better suited to the role of hunter, while the development by females of a preference for landmarks may have enabled them to fulfill the task of gathering food closer to the home." (2) In addition, the tendency of women to be more developed verbally can be explained by the fact that women socially interacted more than men due to their responsibilities of raising children.

Obviously there are other factors that control behavior of the sexes other than the composition of the brain, such as hormones and environment. However, one wonders how much genes play a role in the differences between male and female brains. Dr. Eric Vilain, an assistant professor of genetics at the David Geffen School of Medicine at UCLA, performed a study in 2003 in which he extracted RNA from 10-day-old male and female mice in order to compare those genes that had different stages of action. (7) Dr. Vilain found variation in 54 genes, half of which were more operational in females than in males. (7) He said this information can help explain some of the behavioral differences between the sexes, and hopes to prove with further research that sexual identity and sexual attraction are not in fact voluntary, but rather predetermined at birth.

So, in fact we have found that there are biological differences between male and female brains. Does this mean that one sex is superior to another? Not at all. As Dr. Godfrey Pearlson says, "To say... that men are automatically better at some things than women is a simplification. It's easy to find women who are fantastic at math and physics and men who excel at language skills. Only when we look at very large populations and look for slight but significant trends do we see the generalizations." (2) In addition to the problem of generalizing, it is important to remember that the biological makeup of the brain is not the only aspect that determines the variation in gender behavior. Hormones and environmental influences, such as society and culture, also play a part in the conceptualization of gender.

Examining differences between male and female brains allows us to explore the diversity in human behavior. However, it does not determine it. The notion that our brains are composed in the way they are because of gender roles in primitive societies does not exclude the possibility of alteration in the brain's composition to accommodate future roles played by the modification in our perception of gender. Furthermore, ignoring the biological differences in male and female brains to avoid stereotyping of gender is wrong. Instead, we should use the information available to us to learn something about gender determination and gender behavior. Even though there is a great deal of evidence pointing towards the conclusion that gender is primarily determined at birth, my guess is that since general human behavior is determined by a combination of genetics and environmental influences, gender behavior can be explained very much in the same way.


1)Male and Female Brains, a speech given by Marian Diamond

2)Are There Differences between the Brains of Males and Females?, essay by Renato M.E. Sabbatini

3)Male-Female brain differences, a general overview

4)Today @ UCI, "Intelligence in men and women is a gray and white matter"

5)Scientific American, "His Brain, Her Brain"

6)At Health, Gender Identity Disorder

7)Daily Bruin, "Research shows gender linked to gene activity"

Full Name:  Kate Matney
Username:  kmatney@brynmawr.edu
Title:  Circadian Rhythms in Shift Workers and Diabetes
Date:  2005-05-13 16:52:05
Message Id:  15177
Paper Text:
Biology 202, Spring 2005
Third Web Papers
On Serendip

Excluding the shifts of sailors and soldiers, before the Industrial Revolution work scheduling of non-daylight hours was unprecedented. However, since industrialists like Henry Ford implemented shift work to maximize profit with 24-hour production, the popularity of scheduling irregular work hours has only grown. From 24-hour gas stations, to health care, to the more recent popularization of the "call industry," which allows phone purchasing at any hour, over 22 million Americans work on shift schedules (3,7). Unfortunately, what is good for the economy comes at a human cost. As a group, shift workers are at a heightened risk for a montage of illnesses, from gastrointestinal problems to depression (9). Among the most concerning risks associated with disruption of a normal 24-hour rhythm is an increased risk for diabetes. Aside from ranking as the sixth leading cause of death in the United States, killing 73,249 people in 2001, diabetes is strongly associated with the number one killer, heart disease (1,2).

Although the complete etiology of increased diabetes in shift workers is not understood, the hazard of shift work is largely attributed to the disruption of the body's internal clock, or Circadian Rhythm. Just as pain is experienced through incongruence between the nervous system's expectations and the input received by sensory neurons, disagreement between the internal clock's expectations and actual activity (e.g. shift work, where the expectation is sleep and the activity is work) is disruptive of physiological functions. Since diabetes is a metabolic disorder requiring careful monitoring of the insulin-glucose equilibrium, disturbance of the body's rhythm is likely to de-stabilize glucose metabolism in diabetics. Furthermore, by encouraging de-synchronized eating patterns in non-diabetics, shift work disrupts the insulin-glucose equilibrium and thus may heighten the risk of developing the disease.

Diabetes is a disorder in the metabolizing of the body's fundamental energy source, glucose. There are three types of diabetes. Type-one diabetes is an autoimmune disease most commonly found in youth, accounting for about 10% of diabetes cases. Type-two diabetes, also known as adult-onset diabetes, accounts for the remaining 90% of cases, most of which are adults. The third type, gestational diabetes, afflicts pregnant women, who are later at a heightened risk of developing type-two diabetes. In all diabetes the hormone insulin, which is needed for glucose uptake from the blood to the cells, insufficiently facilitates the metabolizing of glucose. In the absence or malfunction of insulin, glucose remains un-metabolized in the blood. For this reason, hyperglycemia, or abnormally high levels of plasma glucose, is often used to diagnose diabetes (4). In type-one diabetes glucose remains at unhealthy blood concentrations because insulin production is inhibited by an autoimmune attack on insulin-producing beta cells. In type-two diabetes enough insulin is produced, but the individual is insulin resistant, which means that normal insulin levels are not sufficient to facilitate the transport of glucose from blood into cells (4). Type-two diabetes is most relevant to shift work hazards since it is more likely to develop throughout the adult lifetime rather than during childhood.

The Circadian Rhythm can be thought of as the mechanism by which the nervous system prepares for different activities throughout the day, including glucose metabolism. Largely controlled by the Suprachiasmatic Nucleus (SCN) of the Hypothalamus, the 24-hour clock is responsible for the release of 'sleepy' hormones around bedtime (i.e. melatonin), morning metabolizing hormones to provide energy after the night fast (i.e. cortisol), and the glucose-metabolizing hormone insulin at appropriate meal times (10). Thus, the physiological rhythms of the body are significantly controlled by endogenous factors within the nervous system.

If no other influences contributed to the body clock, shift work might not be so physiologically disruptive. However, the Circadian Rhythm actually synchronizes the internal rhythm with external rhythms, most importantly light/dark cycles. In the case of glucose metabolism the circadian rhythm also synchronizes insulin release with meal times and activity level. In the absence of external influences or "zeitgebers" (German for "time givers"), for example in a consistently dark cave, the body's internal clock actually cycles at a period slightly different from 24 hours (10). Most people easily adjust to the approximate one-hour incongruence between their internal clock and the day cycle. However, a more significant incongruence, such as the 12-hour differential of shift work, is not so easily overcome.
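A back-of-the-envelope calculation shows why a one-hour mismatch is trivial while a twelve-hour inversion is not. The rule of thumb used here, that zeitgebers can pull the internal clock roughly an hour per day at most, and the "close enough" threshold, are assumptions made for illustration rather than figures taken from the cited sources:

```python
def days_to_entrain(offset_hours, max_shift_per_day=1.0):
    """Days until the internal clock re-aligns with the external cycle,
    assuming (hypothetically) it can be nudged at most ~1 hour per day."""
    days = 0
    while offset_hours > 0.25:                     # arbitrary "close enough" threshold
        offset_hours -= min(max_shift_per_day, offset_hours)
        days += 1
    return days

print(days_to_entrain(1.0))    # 1  -- the small daily drift, corrected in a day
print(days_to_entrain(12.0))   # 12 -- a shift-work inversion takes much longer
```

Under these toy assumptions the daily one-hour drift vanishes overnight, while a full inversion takes on the order of two weeks; and since inconsistent schedules and days off keep resetting the offset, a shift worker rarely gets that many uninterrupted days to adjust.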

It is important to note the perpetually de-synchronizing nature of shift work in order to understand why adaptation is so problematic. Jet lag is also caused by discrepancy between the internal clock and external inputs, and shares symptoms with shift work such as daytime sleepiness, depression, digestive problems and diminished alertness (8). However, adjustment for shift workers is less complete. While travelers will eventually synchronize their Circadian Clock with zeitgebers, shift workers' night schedules maintain constant internal/external disagreement. Also, while travelers may experience one cycle of de-synchronization, shift workers' days off and inconsistent shift scheduling produce repeated, consistently disruptive de-synchronization.

Furthermore, full-body adaptation is complicated by the different rates at which each function re-adjusts to environmental changes. While sleep/wake cycles most readily adjust, physiological functions, like metabolism, are slowest to change (10). An example of the effects of differential rate adaptation is overcoming daytime sleepiness from jet lag while still experiencing gastrointestinal irregularities. Field studies show that physiological adaptation to shift work is incomplete. In fact, one study on body temperature rhythm showed that complete inversion did not result even after 21 consecutive shifts (10). Circadian studies on isolated rat tissue suggest that there are clock controls within the insulin-producing pancreas, rather than in the SCN or pineal gland of the central nervous system (CNS). Evidence for tissue-level control mechanisms far removed from the consciousness-creating neocortex of the CNS potentially explains the difficulty in complete environmental adjustment to rhythm changes in shift workers.

Because diabetes' etiology is not fully understood, the precise neurobiological mechanisms through which circadian rhythm disruption spurs development of the disease are unknown. However, research on high-risk factors, most significantly being overweight (but also ethnicity, stress and old age), is telling about the nature of the disease's causes. Since diabetes is essentially a de-stabilization of the vital metabolic equilibrium between insulin and glucose, it is probable that the malfunction is caused, or at least aggravated, by inputs that disrupt that equilibrium. This would explain the disease's association with both obesity and shift work. For overweight diabetics, if neither the disease nor the weight condition is genetic, it is likely that both are, at least in part, a result of perpetual consumption of excess food, especially high-sugar and high-carbohydrate foods. Such individuals may have de-stabilized their glucose metabolism by chronically stressing the glucose-insulin balance with a bombardment of excess glucose.

In viewing the circadian-insulin rhythm, it seems possible that shift work causes a comparable destabilization of the metabolic equilibrium. In vivo human experiments show that insulin secretion oscillates to match anticipated activity level for a normal day, with the greatest secretion during the light hours (9). Glucose-stimulated secretion peaks in the early morning and is maximized at meal times (10). Since shift workers are more likely to consume greater quantities during working hours rather than during the morning hours of peak secretion, these intrinsic circadian oscillations will neither match activity level nor easily adjust to it. As with the excess food consumption of overweight diabetics, this de-synchronization (again, compounded by repeated de-synchronizations and differential physiological adjustment rates) stresses the glucose-insulin equilibrium. In shift workers the glucose-insulin system is probably overwhelmed by large meals consumed at night, when insulin levels are low, while plasma glucose is depleted during morning sleep, when glucose metabolism is greatest.

Other neurologically and circadian-controlled factors may further disturb healthy metabolic functioning and eventually increase diabetic risk in shift workers. Cortisol, a hormone involved in metabolizing stored energy, is released on a 24-hour rhythm (5,10). Like insulin, its secretion correlates with normally anticipated activity levels (5): secretion is greatest in the morning, when night-shift workers are least likely to need energy metabolism, and drops lowest before sleep, when night-shift workers are likely to consume the most energy. Since cortisol secretion is also stress-induced (5), metabolism may be further thrown off by the emotional, psychological and physical strains of shift work.

Gastric emptying is also under circadian control, and experiments in rats show that maximized nutrient retention through slow digestion depends on regular meal timing and on ingestion of complex carbohydrates and proteins. Insulin affinity may actually be improved both by healthy food choices and by proper meal timing relative to gastric emptying (10). However, shift workers tend to have poorer diets, which is believed in part to result from decreased food availability at night. In addition, shift workers, who tend to come from lower socio-economic classes, may already maintain poorer diets and therefore already be at higher risk of contracting the disease. It is further suspected that stress and fatigue result in increased junk-food consumption (6).

The health crisis of shift work, especially as it relates to diabetes, is an extremely relevant topic both for medical research and for socio-economic health politics. While deaths from the biggest killer in the U.S., heart disease, have decreased by over 30% in men over the past 30 years, the figure is cut in half for diabetic men. Even more alarming are the statistics in women, where the past thirty years show a 30% overall decrease in heart disease but a 23% increase in diabetes (4). The increased risk shift workers face from this increasingly hazardous epidemic calls attention to the ethical dilemma of lower-class health exploitation in the name of profit maximization. Furthermore, just as the symptom of obesity led investigation into the un-mapped etiology of diabetes, the relationship between metabolic health and disruption of the circadian rhythm gives clues as to the importance of equilibrium in glucose metabolism.


1) National Center for Health Statistics Homepage; Death— Leading Causes

2) Loyola University Health System Homepage; Overview of Clinical Complications of Diabetes.

3) Tripod Member Website; A Brief History of Shiftwork

4) National Diabetes Information Clearinghouse (NDIC) website; Diabetes Overview.

5) WebMDHealth Website; Cortisol Test Overview.

6) Canada's National Occupational Health and Safety Resource Website; Work Schedules; Rotational Shift Work.

7) Hassen, Farrah. "Sleep Deprivation: Effects on Safety, Health and the Quality of Life." California State University at Fullerton College of Communications Homepage for the Department of Radio-TV-Film. (http://communications.fullerton.edu/facilities/tvfilm_studios/content/safety/sleep.htm)

8) Mercola, Dr. Joseph "Shift Work Dangerous to Your Health" The Lancet; September 22, 2001;358:999-1005 (http://www.mercola.com/2001/sep/29/shift_work.htm)

9) Peschke, D. and E. Peschke. "Evidence for a Circadian Rhythm of Insulin Release from Perifused Rat Pancreatic Islets." Diabetologia: Halle-Wittenberg, Germany; 1998. (http://www.dcmsonline.org/jax-medicine/2001journals/April2001/shiftwork.htm)

10) Scott, Alene, M.D. "Shift Work Hazards." Jacksonville Medicine April 2001. (http://www.dcmsonline.org/jax-medicine/2001journals/April2001/shiftwork.htm.)

Full Name:  Elizabeth Madresh
Username:  emadresh@brynmawr.edu
Title:  Marijuana and Its Potential for Addiction
Date:  2005-05-14 12:25:52
Message Id:  15185
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

There are many different ways to define addiction. In the past, it was defined as tolerance to and physical dependence on a drug of abuse, where tolerance represents an adaptation to repeated exposure to a drug such that the normal pharmacological response is lessened. Recently, however, a new definition has emerged from further research. A more modern viewpoint holds that tolerance and dependence can be major factors associated with addiction but are not essential; tolerance and withdrawal symptoms can resolve within weeks while the addiction persists. This is evident in addicts who have gone years without using but then relapse. Currently, addiction is described as an overwhelming involvement with the use of a drug (compulsive use), together with a number of symptoms or criteria that reflect a loss of control over drug intake and a strong tendency toward drug-seeking (1). Because of this different perspective on what makes someone an "addict," the question of whether marijuana is addictive has become a hot topic. In order to examine this question, one must understand the history of marijuana and previous addiction studies.

In the 1930s, marijuana was portrayed by the Commissioner of Narcotics, Harry Anslinger, as "a social menace capable of destroying the youth of America." During this time, it was believed that marijuana use not only triggered mental deterioration in users but also provoked them to commit violent crimes. In 1937, the Marijuana Tax Act was passed in hopes of thwarting marijuana abuse; in 1969, however, the law was deemed unconstitutional. Currently, cannabis is regulated and regarded as illegal under the Controlled Substances Act of 1970. This has not prevented many people from fiercely debating the dangers of cannabis or its questionable ability to turn someone into an addict. As Earl Thomas, a professor at Bryn Mawr College, said, "You don't see marijuana users pushing their drug dealers for another hit on the street corner." Whether cannabis should indeed be regarded as an addictive drug, and whether that addiction is physical or emotional, is a topic that has spurred a great deal of research. Therefore, it is essential to learn about the very nature of the drug and of addiction (2).

The main psychoactive ingredient in cannabis is a cannabinoid called tetrahydrocannabinol (THC). THC has been known to induce mild relaxation, euphoria, analgesia, and hunger. THC usually constitutes approximately 4% of cannabis by weight; therefore, a 1-gram "joint" contains about 40 mg of THC. When smoked, about 20% of the THC is absorbed by the lungs, though this percentage can be influenced by the many factors involved in smoking (3).
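The dose arithmetic above can be checked with a short calculation. This is only a sketch of the paper's round figures (4% THC content, 20% lung absorption); real potency and absorption vary widely.

```python
# Sketch of the paper's dose arithmetic. The 4% THC fraction and 20%
# absorption are the paper's round figures, not pharmacological constants.

def thc_content_mg(joint_g: float, thc_fraction: float = 0.04) -> float:
    """Milligrams of THC in a joint of the given mass in grams."""
    return joint_g * 1000 * thc_fraction


def absorbed_mg(thc_mg: float, absorption: float = 0.20) -> float:
    """Milligrams of THC absorbed by the lungs when smoked."""
    return thc_mg * absorption


total = thc_content_mg(1.0)       # 40 mg in a 1 g joint, as the paper states
print(total, absorbed_mg(total))  # 40.0 8.0
```

So of the roughly 40 mg of THC in a 1-gram joint, only about 8 mg would actually be absorbed under the paper's figures.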

Recent research has found two receptors for THC, called CB1 and CB2. These receptors are located in the basal ganglia, cerebellum, hippocampus, and cerebral cortex. More importantly, they are concentrated in the substantia nigra and the caudate putamen, areas of the brain often associated with drug abuse. The discovery of these receptors encouraged researchers to find the endogenous cannabinoid compounds that bind to them. The endocannabinoids are a family of chemical signals composed of unsaturated fatty acids with polar head groups, produced by enzymatic degradation of membrane lipids. Although many endocannabinoids have yet to be identified, anandamide and 2-arachidonylglycerol (2-AG) are two that have been discovered.

Marijuana has been shown to have many immediate effects, which last 1 to 3 hours. These can include bloodshot eyes, faster heartbeat, hunger, intensified sensations, dry mouth, fear, loss of memory and coordination, slowed reaction time, etc. There are also long-term effects that can occur if marijuana smoking continues over a long period: it can damage the reproductive system, increase the risk of lung infection and heart disease, and produce the more controversial "amotivational syndrome," a lack of motivation for activities related to school, work, or even previous friendships and interests. It is evident that marijuana has many positive and negative effects, and thus it is important to understand whether this drug is addictive.

The mesolimbic dopamine system is one of the major neural substrates of reinforcement. It consists of dopaminergic neurons that begin in the ventral tegmental area and end in the limbic forebrain. Cannabinoids can stimulate dopamine release in this system. However, does this mean that cannabis is addictive (4)? As mentioned previously, it is difficult to pin down what addiction is and how it differs from habit. The central process of forming habits is reinforcement. To study the addiction process, a drug self-administration paradigm is often used with animals, allowing the animal to supply itself with reinforcement (via the dopaminergic system). Although most studies have not been able to show self-administration of cannabis or THC, there have been studies showing decreased reward thresholds and conditioned place preference in rats; that is, rats came to prefer the environment in which they had received the drug, evidence that the drug is reinforcing. Further support for marijuana being addictive comes from studies using cannabinoid receptor antagonists, which block the effects of marijuana. These antagonists were shown to induce withdrawal symptoms in rats such as shaking, tremors, tongue rolling, biting, etc. Furthermore, it was found that heavy marijuana users need 8 times the dose to get the same effects as infrequent users, which shows that tolerance definitely occurs as well. Since tolerance and withdrawal symptoms are part of the definition of addiction, this leads one to believe that marijuana may be addictive after all (5).

Research makes it clear that the drug causes cravings and changes in the dopamine system (6). It is true that many people go into therapy for cannabis addiction, but there are also many people who try marijuana and do not become addicted. Nearly 40% of the population over the age of 12 has tried pot at least once, but not all have become addicted (7). What is the difference between those who become addicted and those who do not? It is obvious from past studies that the dopamine system is involved in the addiction process to marijuana; however, it does not completely explain it. What initiates the drug use?

It has been postulated that initial drug use is a result of personality traits, peer pressure, or other stress. It is possible that after many uses, the normal homeostatic environment of the brain is disrupted. Possibly the reward system is over-activated during drug use, and a compensatory response to the excessive flood of drug-induced neurotransmitters results, dampening the drug's effect. This leads to tolerance. Another theory is that withdrawal symptoms could cause someone to use again. A final theory is that the drug becomes context-dependent: any environment surrounding users while they smoke becomes a potential incentive to use the drug the next time they are around that person, place, or thing. This ties into the suggestion that social and personality factors also spur marijuana use (1).

The criteria for addiction have been examined and revised quite a bit over the past decade. Originally, it was thought that dependence, tolerance, and withdrawal all had to occur in order to call someone addicted. A different point of view, however, has drawn attention to drugs that only now appear potentially addictive. Through extensive animal models and correlational studies, marijuana has been found to show evidence of being both emotionally and physically addictive. Given that the dopamine system is being stimulated, and thus the reward system activated, one could conclude that almost anything that creates euphoria can be addictive, and cannabis is certainly something that brings about pleasure. As we discussed at the beginning of class, doing anything has an effect on the brain. When you put a chemical like THC into your body, one that changes the way you act and think, it is only common sense that constant use can cause permanent changes. Therefore, I think it is not very far off to conclude that it is possible to create a new "constant state" of your dopamine system. Future research is definitely needed to further validate the point that marijuana is physically addictive; in particular, it would be beneficial to understand what initiates the addictive process. Nevertheless, science has taken leaps in its effort to understand what once looked like a harmless drug.


1) Kopnisky & Hyman. "Molecular and Cellular Biology of Addiction": National Institutes of Health, MD.

2) Meyer & Quenzer. Psychopharmacology. Sinauer Associates, Inc.: MA, 2005.

3) An interesting journal websource

4) Nestler. "Cellular and molecular mechanisms of drug addiction." Substance Abuse Disorders 45 (698).

5) Health Education, a health resource on marijuana

6)About Addiction, a useful addiction site

7) Marijuana Addiction, a useful addiction site

Full Name:  Amelia Jordan
Username:  ajordan@brynmaer.edu
Title:  The Benefits of Pet Ownership
Date:  2005-05-15 01:34:20
Message Id:  15189
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

In the United States, about three of every five homes (roughly 60 percent of households) have a pet (1). Although the pet population is rising, one place still notorious for banning pets is the college dormitory. When my sister came to visit Bryn Mawr in early April, she spent a lot of time just sitting in my room with me while I studied. It was not until she left that I was aware of how much another attentive presence in the room could diminish my feelings of loneliness. It may be true that I am always surrounded by other students because I live in a dorm, but they are typically busy and dealing with their own sources of anxiety. We are so often wrapped up in our own lives that we don't have time for each other, which leaves room for lonesomeness. One might say that my sister's presence was not unlike having a pet in the room. If I needed a study break, she didn't mind being disturbed, just as a pet would not be bothered by receiving attention. When I play with my pets at home, it is almost more for me than for them. They seem to have a calming effect; they are an escape from the chaos of everyday life, and they are thankful for any kind of affection. Owning a pet is not a novel idea; animals have been thought of as human companions for thousands of years. So where did the idea of domesticating a pet come from? Why would mankind even want another animal living with them? Can humans profit psychologically or physiologically from having a pet?

Out of all the living creatures on Earth, humans are the only species choosing to bear the responsibility of caring for other species, which we have been doing for over ten thousand years. The domestication of canines began around 9000 BC (2). Humans observed their hunting abilities and used dogs as an aid to their own survival. Not only did dogs help provide humans with sustenance, but they also offered a form of protection, which is probably how they were discovered to be such loyal companions (2). Cats have also lived in the vicinity of humans for thousands of years; the Ancient Egyptians were known to keep cats for rodent control, as well as worshiping them as deities (3). Ancient art also depicts man and dog side by side, attesting to the value we have placed on them throughout time.

While pets clearly benefit humans by supplying ample camaraderie, recent studies have shown them to positively affect our cardiovascular and autonomic nervous systems too. Before looking at these studies, we must first understand the anatomical structures pets are claimed to influence. The autonomic nervous system consists of the sympathetic nervous system (SNS) and the parasympathetic nervous system (PNS). The sympathetic nerves run alongside the vertebral column and branch off into organs (including skin tissue), glands, and other ganglia. The SNS and PNS regulate internal body functions and are usually not controlled by the I-function (they are involuntary). When the SNS is activated, the preganglionic neurons (neurons that carry signals from the central nervous system to a ganglion) release the neurotransmitter acetylcholine, which stimulates action potentials in the postganglionic neurons. The postganglionic neuron then projects to the "target" organ, and at its synapse it emits noradrenaline (norepinephrine) (4). These neurotransmitters act predominantly on the alpha receptors; when noradrenaline is received on capillaries and blood vessels, for example, the alpha receptors cause vasoconstriction. The release of noradrenaline makes the heart pump faster, causing blood pressure to rise, widens bronchial passages, and dilates the pupils (4). The PNS has the opposite physiological effects of the SNS, bringing the body back to a resting state (4).

One study, done at an assisted living center in Illinois, examined the effects of dogs on human vital signs (heart rate, blood pressure, and oxygen saturation) (5). The subjects were fifteen women between the ages of 76 and 90, and the dogs were "non-bonded" to the women (the women had no prior contact with them). The experiment began by recording the women's blood pressure, heart rate, and oxygen saturation. Then two dogs were brought into the room and the subjects touched and held them. After ten minutes, the women's vitals were taken again, and the dogs were removed from the room. The women were instructed to sit quietly for five more minutes, after which a third reading was done. The results showed that blood pressure decreased in 13 of the 14 readable subjects (the fifteenth subject could not be read), and heart rate showed a significant decreasing trend as well. Changes in oxygen saturation, however, were not noteworthy (5). These researchers concluded that dogs may have a beneficial effect on elderly women's cardiovascular systems, and the experiment indicates that the parasympathetic nervous system response can be boosted by interaction with a friendly animal.
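The analysis described above is a paired before/after comparison, which can be sketched in a few lines. The readings below are invented for illustration only; the study itself reports just the direction of change (13 of 14 decreased), not individual values.

```python
# Hypothetical sketch of a paired before/after analysis like the one in
# the dog study: each subject's blood pressure is read before and after
# interaction, and the per-subject differences are examined.
# All numbers here are invented for illustration.

def paired_changes(before, after):
    """Per-subject change (after minus before); negative means a decrease."""
    return [a - b for b, a in zip(before, after)]


before = [142, 150, 138, 160, 155]  # hypothetical systolic readings (mmHg)
after = [135, 144, 136, 151, 149]   # hypothetical readings after 10 minutes

changes = paired_changes(before, after)
decreased = sum(1 for c in changes if c < 0)
mean_change = sum(changes) / len(changes)
print(decreased, mean_change)  # 5 -6.0
```

Pairing each subject with herself, rather than comparing group averages, is what lets a small study of fifteen women detect a consistent decreasing trend.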

Another group of researchers, at the State University of New York at Buffalo, wanted to demonstrate that owning a pet can have calming effects on the body when it is reacting to psychological stress (6). They conducted an experiment on 48 stockbrokers (men and women), all of whom had no conditions other than hypertension and had not owned a pet at any point in the previous five years. All of the subjects were prescribed an antihypertensive drug called lisinopril and were split into two groups: a control group that went home with just the standard drug therapy, and a group given both drug therapy and a dog or cat (7). Before beginning either type of therapy, both groups had similar responses to the "stress tests" the researchers performed on them. Six months later the tests were repeated, and while the drug therapy lowered each group's blood pressure, the people with pets had a noticeably lower response to mental stress than the control group (6). When told about the results, many of the control-group subjects even went out and bought a pet (8). One of the experimenters said the findings suggest that when a person with high blood pressure is under stress, owning a pet can really help, especially if the person has a "limited support system" (8). On a neurobiological level, pets can help reduce stimulation of the SNS, which is commonly caused by stress, thus controlling blood pressure.

Aside from stress relief, pets can aid humans psychologically in a number of other ways too. For many people, children especially, keeping a pet can improve one's self-esteem (9). It is not that difficult to make your dog or cat happy; all you have to do is play with it or pet it, because basically pets just want to be loved, as humans do. Humans, however, are harder to please, and generally we are not that quick to forgive and forget. Pets, on the other hand, do not pass judgment, and if you care for them they repay you with unconditional love, which makes people feel needed. Owning a pet also teaches children responsibility, because pets require a great deal of care and attention (9). Many people find their pets to be a great conversation starter too, as pet ownership is something numerous people have in common with one another.

Ultimately, it seems as though the benefits of pet ownership outweigh the majority of disadvantages (of course there are exceptions, such as allergies). In particular, there appears to be sufficient evidence that pets can alleviate stress, and few places in the world generate as much stress as college campuses. Massachusetts Institute of Technology (MIT) recognized this, and five years ago it began allowing select residence halls to keep a limited number of cats (10). Not only do the students say the cats offer companionship and stress relief, but they also control the number of mice, which seem to be a growing problem at Bryn Mawr. The pros of pet ownership still need to be explored, but pets are clearly not detrimental; it is only a matter of time until various institutions (colleges, nursing homes, etc.) realize how calming they can be, and permit them.







Full Name:  Anna
Username:  amarcini@brynmawr.edu
Title:  Google an Orgasm (1)..................or.....
Date:  2005-05-23 02:46:36
Message Id:  15225
Paper Text:

Biology 202, Spring 2005
Third Web Papers
On Serendip

Google an Orgasm (1)..................or.....


Although the official "G spot" was first described in 1950 by German sexologist Ernst Grafenberg, the clitoris was recognized long before then as the source from which explosive sexual pleasure is derived. Historically, female sexuality has been repressed and misunderstood, our bodies thought to function specifically for the purposes of reproduction. Now, the puzzling question of the vagina's practical, everyday function (besides peeing) can be answered: it is a device for getting off and maintaining youthful vitality!

According to "Cosmopolitan," science has put forth that orgasms may be a key factor in maintaining youthfulness and a healthy lifestyle. Women who have pleasurable sex at least once a day look about 10 years younger than those who have sex only three times a week (2). Why is this, and why don't more women jump in the sack and start getting busy? When you orgasm, the brain releases endorphins, which act as painkillers to reduce stress, anxiety, and depression, all those horrible things that aid in developing wrinkles and the appearance of haggardness. In addition to its emotional and physical benefits, "doing it" is good for your entire body, boosting the immune system and improving circulation and breathing (3), (4). Sex not only improves your outward appearance; many people who have a lot of sex (3x/week) have healthier bodies and live longer because they tend to eat better and exercise more (5).

Unfortunately, you won't shave years and years off your life by engaging in casual or bad sex, and it is not necessarily orgasming on a consistent basis that makes you feel healthier and younger. Since the pinnacle of the sexual act is the orgasm, the two are difficult to separate. Unfortunately, the reality of life is that very few women orgasm regularly, and even fewer can come from vaginal penetration (3). Sex researchers point to many females' preoccupation with orgasm as inhibiting their ability to have one. This creates excessively stressful levels of expectation in the bedroom, basically paralyzing females' ability to relax during the pleasurable act. A lack of focus psychologically removes the female from the moment, making the altered state of consciousness one achieves during mind-blowing, multi-orgasmic sex nearly impossible (7). The misconception that it will just "happen" if you have really raucous sex or you're really into your partner causes women to become masters at "faking it." Because brain=behavior, the solution is not as simple as magazines suggest, and sexual frustration and tension cause a majority of women to be disappointed in and disinterested in the act of sex, which is not only a tragedy but something of a crisis.

What exactly goes on inside the body just prior to and during orgasm? The process resembles a delightfully intricate dance among all the systems of the body! In extremely sexual women, an orgasm can be aroused by almost any sort of stimulus; however, researchers believe there are three different types of orgasm: those achieved through vaginal penetration (uterine O), clitoral stimulation (vulval O), or simultaneous penetration and stimulation (blended O) (7). Each method of stimulation produces a sensation very different from the others; however, all orgasms involve some direct or indirect activation of the clitoris, which causes the PC (pubococcygeal) muscles to flex and spasm. Stimulation during foreplay allows the woman to become extremely aroused before climaxing: her breathing quickens, her heart rate increases, the muscles inside her body tense and tighten, sometimes the nipples harden, her face and body may flush, the visible part of the clitoris swells, and secretions occur inside the vagina. The labia, vagina, clitoris, and pelvic organs extend and open, creating more internal space (6). This "engorgement" is a result of a rush of blood to the pelvic area, and the more aroused a woman becomes, the more engorged she gets. The nervous system's response to pleasurable sex is a build-up of blood and fluid in the genital area (vasocongestion) and an increase in tension in the voluntary and involuntary muscles (myotonia) (8). When the excitement phase culminates in sexual climax, a release is felt through a series of involuntary muscular contractions experienced all over the body, but particularly in the vagina, uterus, or rectum. This immense liberation is called the "big O" and is most often accompanied by a spurt of clear liquid from the glands inside the urethra, similar to male ejaculation. The wave of endorphins released through the entire body at orgasm causes a woman to feel absolutely magnificent, acting as a natural opiate to relieve any pain and tension.

"The effect of endorphins is said to decrease with age" (9), which may be attributed to lack of physical exercise, laughter, pleasure, etc. Since sex is an integral part of maintaining a healthy lifestyle, mood, and behavior, it is easy to see how many men mature into handsome, distinguished figures in their old age (sleeping with women half their age), while women "turn themselves off to sex, but yearn for love" (10). Only 30% of women actually orgasm through sexual intercourse, which means the other 70% come away from sex either wholly dissatisfied or troubled in some way (10). Developing a sexual identity and being comfortable with your own needs and desires is 99% of a healthy sex life; the other 1% is finding a partner to be intimate with and developing a mutually respectful sexual relationship. The secret to orgasming is different for each individual, but it is clear that, particularly for females, the process of achieving orgasm depends on the mind and body's willingness, interaction, and presence (in the moment). A lack of discussion in the bedroom can leave both partners extremely unsatisfied, which seems so silly in this day and age. If you are a woman, do not be afraid to explore yourself, know what you like and don't like, and communicate openly (and in a demanding manner) what you want done to you (6). Remember, "nothing kills sexual urges as fast as resentment and depression" (10); also, nothing makes you look older than being miserable, so get out there and unleash the sexual tigress inside you!


1) Google Orgasm Page, Google a 'gasm - that's sex jargon.

2), 3) Clitoris Homepage, If you've never had an orgasm, that sucks and the author feels for you. Check out this site to get educated on your vagina, then find someone who will help you realize what you're missing.

4) What's an Orgasm and Why All the Mystery?, This is an excellent site for exploring the basic science behind sex and pleasure.

5) Aging and Sex, An excellent site for understanding how sex keeps you young. Blum incorporates lots of facts and funny stories about old people having sex....ha ha ha, so funny...yeah.....this is more a site for your parents.

6) 7) 8) 9) 10)