Full Name:  Emily Anne Lewis
Username:  ealewis@brynmawr.edu
Title:  Learning Disabilities and the Brain
Date:  2006-04-10 09:18:25
Message Id:  18929
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

In the United States, a great deal of emphasis is placed on disabilities. Our culture seems quick to categorize anyone outside the mainstream as having some type of disability, either learning or physical. If a child is having trouble at school, s/he is often diagnosed with a learning disability, whether s/he actually has one or not. Sometimes the problem could be solved with extra help, tutoring, or a firmer teaching style, rather than medication.


I often feel that this is what prevents a large portion of the culture from taking learning disabilities seriously. Disabilities, especially learning disabilities, are often written off as excuses for not trying hard enough. An example from my personal experience: in eighth grade, I was diagnosed with an information-processing "issue" that, even now, remains unnamed. Over the course of my high school years, I had many teachers who swore to whatever God they believed in that I didn't have anything; I just was not trying to understand them. Besides, they said, I could not have a disability if it did not have a name. They told my parents, and me, that my psychiatrist was a quack and that I was lazy and, in some cases, just plain stupid. At the other extreme, the teachers who did believe me decided that I could not do certain things. When they spoke to me, I often heard, "You won't be able to understand this. Let me dumb it down for you as much as I possibly can." For me, and many other students, this was just insulting.


I am not the only one who feels this way. Over the summer, I attended a conference on learning disabilities. Many of the students I talked with, including those on the panels, feel that their learning disabilities are not being taken seriously, and many blame this on the over-diagnosis of learning disabilities in today's culture. Many of the students also said that teachers never encouraged them to work on their weaknesses; instead, they were told to focus on their strengths, as they would never be any good at their weaknesses. One of the students told me that once she was told that she "couldn't" do math, she had a harder time than before understanding math. People with learning disabilities hear "You can't do that" so often that, consciously or not, they begin to believe it themselves, no matter how much they swore to themselves that they would never let such a thing happen. Sometimes, people with one learning disability develop symptoms of other learning or physical disabilities because of the way they are spoken to. This all seems to go back to conditioning. It makes me wonder: how much of the disability is created by culture? Why does our culture seem to feel the need to knead the disability deep into the brain of the person who has it?


I have often felt that this is an attempt either to make the conditioner feel better about themselves or to make the conditioned person's disability a self-fulfilling prophecy. It could also be an attempt to make the person with the disability dive deep within themselves to find the strength to fight back, but that seems to be a long shot. This culture has a strong tendency to separate things into "us vs. them" situations. Is that why we must condition people to believe that they cannot do things that they actually can?






Full Name:  Christin M. Mulligan
Username:  cmulliga@brynmawr.edu
Title:  Tic, Tic, Tic: Tourette's Syndrome
Date:  2006-04-10 16:25:32
Message Id:  18935
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip


It all began with a deviated septum...or so we thought. Two years ago, my brother Steven began making strange honking noises in the back of his nose/throat. The sound was a minor annoyance at first, but it became a persistent one over the course of several days. After seeing a doctor and determining that his septum was not deviated, my mother began to consider other possibilities. To her, the noise he made seemed like a tic. No matter how many times we asked him to stop, he could not. When our family physician disagreed and the honking sound continued, she took him to see a specialist at A.I. DuPont Children's Hospital. After examining my brother and discussing everything from his diet to his behavior patterns, the doctor raised the possibility of Tourette's Syndrome, a diagnosis that would be confirmed if the condition continued for the next year.

Tourette's Syndrome is named for the 19th century French doctor who discovered it, Georges Gilles de la Tourette. It is a neurological disorder characterized by tics: involuntary, purposeless, repetitive movements or sounds. Like schizophrenia, it is believed to result from hypersensitive dopamine, norepinephrine, and serotonin receptors. (For more information on the function of these neurotransmitters, see http://serendipstudio.org/bb/neuro/neuro06/web1/cmulligan.html). There are both motor and vocal tics. These include but are not limited to blinking, nose twitching, shoulder shrugging, grimacing, kicking, stamping, grunting, shouting, sniffing, barking, and copropraxia, the making of obscene gestures. Complex vocal tics also often include coprolalia, the involuntary repetition of obscene words; echolalia, the repetition of others' words; and palilalia, the repetition of one's own words (1).
Steven's symptoms, however, were all nonverbal. Although the honking noise itself eventually stopped, as time went on we would notice him constantly shifting his weight, spastically moving his legs and arms, and repeatedly adjusting various articles of clothing. He began to seem uncomfortable in his own skin. These difficulties, unfortunately, were only the beginning.

Tourette's is also associated with a number of other disorders, such as ADD/ADHD, oppositional defiant disorder, learning disabilities, and OCD. In Steven's case, obsessive-compulsive behaviors began to occur. There are two components to OCD: uncontrolled, recurrent thoughts (obsessions) and uncontrolled, recurrent behaviors (compulsions). Typical obsessions include fear of germs, excessive doubting (e.g., worrying that one has left the stove on even though one has just checked that it was turned off), need for specific order or symmetry, need for perfection, visions of violence, visions of sexual imagery, or visions of religious imagery. Typical compulsions include washing, excessive checking and re-checking, saving or hoarding, counting, arranging or ordering objects, hair pulling, nail-biting, excessive seeking of reassurance, vocal repetition, compulsive praying, and repeatedly performing an action until it is "perfect" (2).

Steven repeatedly described the need for perfection. While doing his homework, he would erase his name or his answers and write them over and over again because they had to be "perfect". He would arrange objects on his desk in a specific order. While eating, he would set down his glass or his utensils several times until they were "right". If he did not perform these ritual behaviors, he felt uneasy or "wrong".

So how does one deal with Tourette's Syndrome? Tourette's is traditionally treated with neuroleptics like haloperidol and pimozide, which block the receptors for the neurotransmitters mentioned above. Side effects include excessive weight gain, dysphoria, Parkinsonian symptoms, memory problems, intellectual dulling, and personality changes (3). Imagine this range of side effects in conjunction with the hormones that are already raging in an adolescent boy. Consider the social stigma attached to not being able to perform well in school or to drastic changes in appearance and personality. My mother and Steven's doctors thought long and hard about these issues. Rather than deal with this myriad of side effects, they opted to treat him with clonidine (Catapres), which is also used in the treatment of hypertension. It is administered via a transdermal patch. Unlike the neuroleptics, Catapres's primary side effect is sedation. Steven says it produces a calming effect. The tension he feels prior to a tic is alleviated. There is less of a compulsion to perform certain behaviors. As a result of this medication, his bouts of tics are fewer and farther between. He is able to function in a normal classroom environment, interact with his peers, and, most importantly for him, continue to play baseball.


WWW Sources
1) What Is TS?, a rich resource on Tourette's Syndrome
2) OCD, a rich resource on obsessive-compulsive disorder
3) Guide to the Diagnosis and Treatment of Tourette's Syndrome, a rich resource



Full Name:  Stephanie Pollack
Username:  spollack@brynmawr.edu
Title:  Role of Estrogen in Preventing the Onset of Alzheimer's Disease
Date:  2006-04-10 17:19:03
Message Id:  18936
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

More than four million Americans are afflicted with Alzheimer's disease. Alzheimer's patients suffer from a drastic decrease in the size of their limbic system due to hippocampus atrophy (8). The limbic system influences emotion and is vital for memory (3). Early stages of Alzheimer's are accompanied by relatively simple mental struggles, such as "chronic forgetfulness and difficulty handling routine chores" (10). Late stage Alzheimer's patients exhibit loss of speech function and of the ability to walk or sit upright (10). This progressive loss of language, motor skills, and the ability to reason is characteristic of the gradual, yet substantial, deterioration of mental capabilities in Alzheimer's patients (6).

On a molecular level, Alzheimer's manifests itself as plaques and tangles in the brain (4). These senile (6) or amyloid-containing (7) plaques, as they are often called, may begin to form several decades before the patient exhibits Alzheimer's symptoms (4). Tangles are most concentrated "in the cerebral cortex and hippocampus regions", areas of the brain evidently impacted by the disease (10). Plaque formation clogs the brain's pathways and prevents chemicals in the brain from doing their job (11).

Alzheimer's is a complicated disorder that is believed to be brought on by multiple genes working in concert with environmental factors (5). The possible role of the hormone estrogen in sustaining and even enhancing brain function has sparked much research. It has been shown that estrogen has the capacity to "[boost] cells' chemical function, [spur] their growth, and even [keep] them alive by shielding them from toxins" (12). In other words, estrogen may act to safeguard the very neurons that malfunction and form plaques (12), keeping the brain healthy. Estrogen levels in women decrease after menopause, making the brain more susceptible to plaque formation.

Studies have yielded data indicative of an overall improvement in short-term memory when both healthy postmenopausal females and female Alzheimer's patients underwent estrogen replacement therapy (3). Additionally, estrogen testing in animals of both sexes has demonstrated that increasing estrogen levels boosts both long- and short-term memory (3). Clinical trials of estrogen replacement therapy show improved "memory in both healthy women and female patients with Alzheimer's, and [that estrogen] may even stave off the disease if given to women after menopause" (12). Duration of treatment is important to effective results; it is likely that Alzheimer's prevention can only be accomplished by estrogen replacement therapy lasting at least ten years (10). Female patients "who received the highest estrogen doses over the longest periods of time were the most protected" from Alzheimer's (5).

There are significant drawbacks to hormone replacement therapies, such as an increased risk of developing breast cancer, heart disease, stroke, and blood clots (13). Women who develop one of these life-threatening conditions may not survive long enough to test whether they would have become Alzheimer's patients. The studies therefore do not account for this subset of women; not all the subjects studied survived to the end of the trial. It is important that researchers work to eliminate the disadvantages of hormone replacement therapy in order to market estrogen to patients at risk of developing Alzheimer's. Many women feel that the risks associated with hormone replacement therapy outweigh its benefits (relief from the symptoms of menopause). Additionally, hormone replacement therapy was not found to prevent dementia in women aged 65 or older (1). This is consistent with the importance of the length of treatment in averting Alzheimer's.

In spite of the downside to estrogen replacement therapy, estrogen's apparent role as a cognitive enhancer is a promising finding. Next, researchers must work to develop "drugs that might bolster brain function without promoting reproductive cancers in women...or feminine characteristics in men" (12). If this were possible, medicine could exploit estrogen's role in enhancing viability of neurons and, consequently, prolong a patient's functional life.

Once thought to be simply the female sex hormone, estrogen has far exceeded expectations. Hopefully, this discovery will open new doors to research on hormones in general and their potential to work in ways not originally considered. At this point in time, hormone replacement therapy is not yet sophisticated enough to use solely as a means of maintaining mental function; improved memory is simply an unforeseen benefit of its use in menopause. It is evident that estrogen's impact on the body is wide-ranging, hence its numerous positive and negative side effects. If delaying the onset of a disease as destructive as Alzheimer's can be accomplished by supplementing the body's usual dose of estrogen, many lives would be saved. It is known that altering hormone levels does influence brain functioning, as seen in patients suffering from depression. Perhaps utilizing the scientific evidence that estrogen can make us "smarter" in our old age will change the course of Alzheimer's disease.

References

1) Sorting Out HRT Risks and Benefits: What Women Really Need to Know About Hormone Replacement Therapy

2) Managing Alzheimer's Patients

3) Seeking "Smart" Drugs

4) Soothing the Inflamed Brain: Anti-inflammatories may be the first drugs to halt the progression of Alzheimer's

5) At More Risk for Alzheimer's?

6) Researchers Hunt for Alzheimer's Disease Gene

7) Role of Alzheimer's Protein is Tangled

8) Setting a Standard: A British project produces a test for Alzheimer's disease

9) The Oldest Old

10) Preventing Good Brains from Going Bad

11) Alzheimer's Jam

12) Estrogen Stakes Claim to Cognition

13) Hormone replacement therapy: Benefits and alternatives



Full Name:  Danielle Marck
Username:  dmarck@brynmawr.edu
Title:  The Neuropeptide Ghrelin: Improving Human Memory
Date:  2006-04-10 17:36:42
Message Id:  18937
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

Human memory and the mechanisms of its formation are the focus of many researchers in the neuroscience field. Scientists are studying how the body and its hormones may regulate memory formation in response to external stimuli that influence hormone secretion. The research community is now focusing on diet and eating habits and their effects on the chemical and hormone pathways that direct human behavior. New studies support theories linking the secretion of ghrelin by an empty stomach to increased learning capabilities. For many years the neuropeptide hormone ghrelin was associated with appetite stimulation and function, but scientists failed to make the connection between increased ghrelin secretion and other functions. Recent discoveries have identified ghrelin's importance in increasing synaptic connections and altering neuronal morphology within the hippocampus, the neural director of memory. (1) Scientists have identified the major pathways of learning and how manipulation of these biochemical pathways can lead to improvements in memory functioning. As research continues, these new studies are being linked to potential treatments for memory-associated diseases that could benefit from ghrelin supplements. (2) Although recent studies have found that increases in ghrelin concentration facilitate memory enhancement, the ghrelin pathways and their relationships to other hormone pathways have yet to be fully elucidated.
For many years the neuropeptide and gut hormone ghrelin has been associated with appetite and energy metabolism, but new findings support its importance in many of the body's functions. (1) Ghrelin acts as an endogenous ligand, an extracellular substance that binds to receptors, primarily growth hormone secretagogue (GHS) receptors. (2) Ghrelin is released in response to an empty stomach: it is secreted from epithelial cells that line the stomach and then binds to receptors on cells located throughout the body, including the hypothalamus and the hippocampus. The release of ghrelin stimulates growth hormone release and works with the hypothalamus to elicit a hunger response. (1) Ghrelin also influences the pituitary gland, appetite, energy balance, sleep, gastric control, and glucose metabolism. Ghrelin's newest discovered influence lies in its ability to change synaptic connections within the hippocampal region. (3) Ghrelin thus has multiple effects on metabolic and chemical pathways, but how do these pathways interact? If ghrelin holds such importance in the human body, these ghrelin pathways must engage in extensive chemical communication networks, or crosstalk, involving detailed feedback loops.
Ghrelin acts as an endogenous ligand that binds to specific receptors in the hippocampus, causing an increase in synaptic plasticity and the creation of new synaptic connections between neurons. (2) The biochemical effects of ghrelin induce morphological changes in the hippocampus that in turn have long-lasting behavioral effects such as improved memory retention. These synaptic changes involve underlying mechanisms such as the quantity of neurotransmitters released into a synapse and how cells respond to those neurotransmitters. (5) The synapses are regulated by a variety of processes, which differ in the strength and enhancement of chemical signaling pathways. Synapses that undergo repeated use are enhanced, while less-used synapses can decrease in strength. (4) If memory lies within the synapses of the brain, synaptic plasticity is a fundamental morphological mechanism for enhancing the hippocampus and thus memory retention. Perhaps the increase in synaptic plasticity within the hippocampus reflects a dominant process that influences memory: by increasing synaptic plasticity there, the body uses ghrelin to enhance an important memory process, and the hippocampus demonstrates that it can change synaptic form to enhance memory. This evidence introduces hope for treatments for neurological diseases. However, studies have yet to uncover the interplay between ghrelin pathways and other biochemical pathways.
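The use-dependent strengthening described above can be made concrete with a toy model. The following minimal Python sketch, with entirely hypothetical numbers, implements a Hebbian-style rule in which a synaptic weight grows toward a ceiling when its synapse is repeatedly used and decays otherwise; it illustrates the general principle of synaptic enhancement, not ghrelin's specific biochemistry.

# Toy illustration of use-dependent synaptic plasticity (Hebbian-style).
# All constants are hypothetical; this is not a model of ghrelin signaling.
def update_weight(weight, used, rate=0.1, decay=0.02, w_max=1.0):
    """Strengthen a synaptic weight when used; let it decay otherwise."""
    if used:
        return weight + rate * (w_max - weight)  # enhancement at a busy synapse
    return weight - decay * weight               # weakening at an idle synapse

w_busy = w_idle = 0.5
for _ in range(50):
    w_busy = update_weight(w_busy, used=True)
    w_idle = update_weight(w_idle, used=False)

print(f"frequently used synapse: {w_busy:.2f}")  # approaches w_max
print(f"rarely used synapse:     {w_idle:.2f}")  # decays toward zero
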
The public's increased emphasis on learning and neurodegenerative diseases has led the scientific community to explore the underlying factors of memory. Scientists are focusing on areas that might influence the brain's metabolic and synaptic capabilities. For example, scientists are considering ghrelin as a therapy for patients with Alzheimer's, a neurodegenerative disease that results in a loss of mental function due to the deterioration of brain tissue. Since Alzheimer's ultimately affects human memory retention, exogenous ghrelin supplementation could work with hippocampal receptors to increase synaptic plasticity and aid memory. Further, scientists could force other tissues to express the necessary receptors for ghrelin through gene therapy and ultimately increase synapse production and function in those areas. However, many questions remain unanswered, especially concerning the complexity of the ghrelin network and its multiple roles in regulating memory, appetite, and interactions between synapses. Now that ghrelin has been identified as a major contributor to higher-order brain function, total body analysis including DNA expression microarrays will allow scientists to see differences in ghrelin and ghrelin receptor expression in tissues throughout the body. As more discoveries identify the role of ghrelin throughout the body, studies of the effectiveness of ghrelin supplementation, and of its negative repercussions through interaction with other biochemical pathways, should be conducted.
The main question that remains unanswered is how ghrelin's effect on learning manifests itself on a larger scale. Much of the research on ghrelin has been done in mouse models, which show an increased ability to navigate mazes and other simple challenges. Can these findings be extrapolated to complex human learning? Further research needs to be done on human subjects to elucidate the pathways of ghrelin and how they interact with other biochemical processes that enhance learning. It is important to remember that learning is not stimulated simply by increases in ghrelin levels and subsequent synaptic changes; many other neurotransmitters are involved. With so many pathways known and more to be discovered, does ghrelin simply improve recorded memory, or does the neuropeptide interact with other chemical pathways to enhance memory retention?
New challenges are arising in finding treatments for cognitive and neurodegenerative disorders, and while ghrelin has a strong impact on hippocampal synaptology, ghrelin treatments themselves have not proven their medical efficacy. We cannot simply treat neurological disorders with exogenous ghrelin supplements, because the ghrelin pathways have not been fully characterized and such treatment could therefore interfere with other learning pathways or lead to neural pathway constrictions.


1) The Scientist: Magazine of the Life Sciences, scientific news

2) Sabrina Diano, Susan Farr, Stephen Benoit, Ewan McNay, Ivaldo Silvia, Balazs Horvath, F. Gaskin, Naoko Nonaka, Laura Jaeger, William Banks, John Morley, Shirly Pinto, Robert Sherwin, Lin Xu, Kelvin Yamada, Mark Sleeman, Matthias Tschop, Tamas L. Horvath. "Ghrelin Controls Hippocampal Spine Synapse Density and Memory Performance." Nature Neuroscience, Vol. 9, No. 3 (March 2006), pp. 381-388.

3) PubMed, resources about scientific research

4) arjournals.annualreview.org, Bryn Mawr College, journal reference

5) Wikipedia: References, scientific descriptions



Full Name:  Em Madsen
Username:  emadsen@brynmawr.edu
Title:  Well I'll be a Monkey's Uncle: Learning from Primates
Date:  2006-04-10 18:10:59
Message Id:  18938
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

The theory of evolution has gone to humans' heads. Or maybe it's not the theory, but the fact that its rise to prominence "confirmed" for many what had only been suspected before: that humans were the most highly evolved species. That we humans were the real deal; the coolest kids in school. I'm thinking about those diagrams where the drawings stretch from left to right. On one side is an ape on its haunches. The final drawing on the opposite side is a loin-cloth-clad man, ready to walk right off the page. This drawing reinforces the idea that we're the pinnacle. However, we are no more the pinnacle than any of our primate friends. In fact, we have quite a lot to learn from the study of primates, a field that until recently had been dominated by strict thought-processes which obscured useful and significant data about how primates think and function.

In junior high school biology class, our dour-faced teacher lowered his spectacles on his nose and reminded us that we shared 99% of our genetic material with chimpanzees. "Lucky for you, it's only 99%," he continued. Because of these genetic similarities, studies of primates such as the chimpanzee have been used as stepping stones to arrive at more general conclusions about humans and human nature. These observations and studies have, until very recently, focused almost exclusively on gender models where the male primate is seen as dominant and the female as passive. Consider, for example, this passage from a textbook on sociobiology: "These data make it clear that only males are directly involved in differential selection among rhesus [monkeys] and probably all the terrestrial and semiterrestrial primates" (1). This process of generalization prohibits more nuanced observations, both about primates and about the way in which primate behavior can be applied to the study of human nature.

In his article "A Natural History of Peace," neurobiology professor Robert Sapolsky works towards a more finely attuned process of observation among the primates he has studied. As he points out, "Across the roughly 150 primate species, the larger the average social group, the larger the cortex relative to the rest of the brain. The fanciest part of the brain, in other words, seems to have been sculpted to enable us to gossip and groom, cooperate and cheat, and obsess about who is mating with whom. Humans, in short, are yet another primate with an intense and rich social life..." (2). Watching primates can teach us things about human brains, such as the observation about the cortex-size of primates involved in varying social groups. However, as Sapolsky goes on to observe, once primates were discovered to be cold-blooded killers in the 1960s, and humans became one among many species to be violent (instead of the only one, as previously supposed), humans got it into their heads that they were some sort of super-evolved killer ape, "according to which we have as much chance of becoming intrinsically peaceful as we have of growing prehensile tails" (2). Sapolsky strives to use the evidence he has collected to complicate this mindset.

In the baboon group that Sapolsky observed in the 1980s, a number of interesting changes happened. A nearby tourist lodge "expanded its operations, and consequently the amount of food tossed into its garbage dump" (2). The group of baboons that lived near the garbage dump used this as their primary source of food, while the more aggressive males from Sapolsky's observation group would stage raids in order to obtain the food they wanted. When tainted meat caused an outbreak of tuberculosis, most of the group who lived near the dump died. The aggressive males who would travel to the dump from Sapolsky's group also died. His group was "left with males who were less aggressive and more social than average, and the troop had double its previous female-to-male ratio" (2). The group's dynamic changed completely, becoming more peaceful and involved in activities such as grooming and child-care. Sapolsky observed that the change was not the result of a changing gender ratio, for other groups observed in the area had similar male/female numbers but maintained more aggressive lifestyles--it was the "demographic disaster--what evolutionary scientists term a 'selective bottleneck'" (2) that had left the group with its much more social males. This utopian society preserved its fabric even with the entry of outside males: the males were indoctrinated by the group's females, and they quickly learned that this was a slightly different way of living that appealed to them. Sapolsky uses this example to draw conclusions about human nature. Though humans, as primates, are "hardwired for xenophobia..." since "Experiments have shown that when subjects are presented with a picture of someone from a different race, the amygdala--a structure in the brain associated with fear and aggression--is stimulated" (2), humans who have been consistently exposed to people of different races show no activity in the amygdala. Therefore, Sapolsky believes that there is enough flexibility in human/primate nature to accommodate different ways of processing the idea of the "Other," and that this flexibility should not be written off in social activism for peace.

One other mindset being challenged in primate studies is that of gender. As I mentioned before, many primate studies were based on males as active and females as passive. However, this ignores the fact that there are many exceptions to this "rule." As Sarah Blaffer Hrdy, the author of "Raising Darwin's Consciousness," points out, "We could just as easily have focused on any number of lemur species, species in which the females rather routinely dominate males. We could have decided to make an example of the shy and nocturnal owl monkey..., where males and females cooperate in child care with the male playing the major role in carrying and protecting the infant, or we could have focused on the gentle South American monkeys known as "muriqui"..., who specialize in avoiding aggressive interactions, or any of a host of other primate species in which we now know that females play an active role in social organization" (3). However, these types of primates are not generally the subject of primate studies that seek to draw conclusions about primates in particular and humans in general. In emerging studies, emphasis is being put on the agency of female primates in social situations that would have been overlooked under the old model. For example, "Female baboons... actively engage in forging for themselves a network of alliance with different males. In short, there is much more going on than simply males competing with other males" (3). If this type of expansion can continue, the definition of what it means to be a primate, a human, or a specific gender can be complicated in useful ways, directing our thoughts in new directions, towards new discoveries.

As I've pointed out, new movements in primate studies are pushing humans towards acknowledging alternative possibilities in arenas such as human nature and gender. The concept that peace could be a trait passed on through evolution is a powerful possibility. The idea that female primates could have more agency than previously supposed--and that, if we apply the same principles to humans and shift our obscuring gender lens, human females might as well--is also exciting. In conclusion, we do not know just how entrained our thought-processes are until they are challenged by a newer, more multifaceted approach. In this way, our thought process can evolve along with our continually changing understanding of primates, and thus, ourselves.


Sources Used:

1) Freedman, D. Human Sociobiology. New York: Free Press, 1979. p. 33.

2) Sapolsky, Robert. "A Natural History of Peace." Harper's, April 2006, Vol. 312, No. 1871, pp. 15-22.

3) Hrdy, Sarah Blaffer. "Raising Darwin's Consciousness." The Gender and Psychology Reader, ed. Clinchy & Norem. New York: New York University Press, 1998.



Full Name:  Ebony Dix
Username:  edix@brynmawr.edu
Title:  Color: Is it Real and Does it Impact Behavior?
Date:  2006-04-10 19:39:26
Message Id:  18941
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

Color is a lot more than the quantification and movement of light. It attracts attention, helps us recognize objects, lets us distinguish among objects of varying hue and saturation, and even appeals to our aesthetic sense (2). The sum of all these effects suggests that color perception is innate and can possibly affect our emotions and behavior. The scientific definition of color fails to capture the importance that color has in our world.

From a scientific perspective, color is defined as a sensation produced in the brain by light that enters the cone cells (one of the two types of photoreceptors found in the retina of the eye) via the absorption/reflection of different wavelengths and frequencies of photons (1). When light is transmitted from an object to the eye, it stimulates the different color cones of the retina, making the perception of various colors in the object possible. This definition is limited because it does not answer the important question of whether color is real or just a function of our brain. Nor does it establish that color is extremely important in the way we perceive the world today, regardless of whether color is real or not.

The fact that those of us who can see color can generally recognize and distinguish between the main colors of the electromagnetic spectrum (red, orange, yellow, green, blue, indigo, violet) is evidence that color perception is innate. Additional evidence can be observed in blind individuals who have their sight restored later in life and are able to recognize colors long before they are able to identify objects (2). Being able to distinguish between varying hues and saturations of color in the absence of a learned skill also makes it plausible to assume that colors impact our behavior, just as our senses of taste, smell, touch, and hearing cause us to respond to a series of inputs.
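Since the named spectral colors correspond roughly to bands of wavelength in the visible spectrum, a simple lookup makes the physical side of this concrete. In the Python sketch below, the band boundaries are approximate, commonly quoted values rather than authoritative ones; sources differ on where one color ends and the next begins.

# Approximate wavelength bands (nanometers) for the named spectral colors.
# Boundaries are rough and vary by source.
BANDS = [
    (380, "violet"), (430, "indigo"), (450, "blue"), (495, "green"),
    (570, "yellow"), (590, "orange"), (620, "red"), (750, None),
]

def spectral_color(wavelength_nm):
    """Return the rough color name for a visible wavelength, or None if invisible."""
    for (low, name), (high, _) in zip(BANDS, BANDS[1:]):
        if low <= wavelength_nm < high:
            return name
    return None

print(spectral_color(680))  # red
print(spectral_color(475))  # blue
print(spectral_color(300))  # None: ultraviolet, undetectable by the cones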

Many argue that color is merely a fabrication of the mind, and not a physical reality, which is a valid point because color is intangible. But it seems to me that the effect color has on people is very real. The ability of an individual to distinguish between light and dark, or between red and violet, and to behave differently under circumstances where there is more dark and less light, or more red and less violet, is a real phenomenon. Take for instance the theories behind seasonal affective disorder (also known as seasonal depression), which state that some people become depressed either as a result of a hormonal imbalance or a delay in the biological clock, both brought on by the reduced availability of sunlight (3). Sunlight in the physical sense goes hand in hand with color, because sunlight represents the full spectrum of electromagnetic radiation and color is the detection of this radiation by the nervous system. Because it is possible for the body to detect changes in light, it follows that the brain is able to detect variations in the hue and saturation of color, which in turn evoke responses that impact behavior.

While the impact that certain colors may have on individuals can be the result of social and cultural conditions, it seems that colors in the most primitive context are able to induce a response in individuals. For example, some studies show that an individual placed in a room whose ceiling, floor, and walls are painted a hue of bright red is more likely to experience enhanced autonomic nervous system activity, evoking tension and excitement, than one placed in a room painted in a similar fashion but in a faint hue of light blue (4). While such responses to red may not be consistent across a large sample of individuals, it still seems reasonable to conclude that colors have an impact on the brain. It is difficult to determine whether those impacts are due to conditioning or an innate response to certain wavelengths of light. I can draw from personal experience to suggest that soft colors (pastels, light shades of yellow, pink, and blue) do not cause tension or excitement but rather a calming sensation. Because I am one individual out of an enormous population, and it is impossible to determine whether my response was due to conditioning or an innate disdain for the color red, I can only use my experience to support the reported experiences of others. I can say with certainty that each individual's brain is wired slightly differently, so variations in the frequency and intensity of responses to color most definitely exist.

The ability of the brain to process information entering the eye and then furnish a response upon detecting this information is quite an amazing accomplishment, considering that the information (visible light) is intangible. The light entering the retina, and the perception of the colors due to that light, is undoubtedly a function of the brain. But whether that perception is separable from physical reality is a difficult concept for me to grasp. I firmly believe that when an electron returns to its ground state from some higher excited state, and there's no one around to observe it do so, it still emits a photon, which has an associated wavelength and frequency.
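That closing claim, that an emitted photon carries a definite wavelength and frequency whether or not anyone observes it, rests on two textbook relations: c = λν and E = hν. A quick Python sketch of the arithmetic:

# Photon frequency and energy from wavelength: c = lambda * nu, E = h * nu.
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

def photon(wavelength_nm):
    """Return the frequency (Hz) and energy (J) of a photon of the given wavelength."""
    wavelength_m = wavelength_nm * 1e-9
    nu = C / wavelength_m
    return nu, H * nu

nu, energy = photon(650)  # a red photon
print(f"frequency: {nu:.3e} Hz, energy: {energy:.3e} J")
# roughly 4.6e14 Hz and 3.1e-19 J for 650 nm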

It is clear that there are many schools of thought with theories about whether color is based only on the brain's perception of it, and not on reality. But it seems equally clear that the concept of color as a function of the brain, exclusive from physical reality, depends on how one chooses to define what is physically real. While color may not be tangible or "physically real," it is a very important phenomenon that impacts many dimensions of visual perception and behavior. Our world is made up of many colors, whether we can see them or not, and because the physics of electromagnetic radiation is a reality, color in some sense of the word must also be real.


Sources:
1) Howard Hughes Medical Institute webpage, an article from the Howard Hughes Medical Institute website that briefly discusses how we see colors

2) Levine, Michael W. Levine & Shefner's Fundamentals of Sensation and Perception. New York: Oxford University Press, 2000.

3) Cleveland Clinic Health webpage, a website that describes seasonal affective disorder, also known as seasonal depression

4) PDF file from Midwest Facilitators homepage, an article written by a Loyola University Chicago professor on the impact of color on behavior



Full Name:  Whitney McDonald
Username:  wmcdonal@brynmawr.edu
Title:  Aggression: Shall We Always Blame It on the Genes?
Date:  2006-04-10 20:15:09
Message Id:  18945
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip


I noticed in my childhood that whenever I or one of my siblings did something mean or aggressive, one of our parents would ask, "I wonder where they get that from? Oh, it must be from your side." I have noticed the same emphasis in the biological/psychological world on blaming genes for behavior, but what about the environment? I looked into various areas of aggression (social hierarchies, play fighting, and harm avoidance) and their social contexts, investigating how, where, and why aggression is stimulated and induced. Furthermore, dopamine, the chemical in the nervous system that facilitates aggressiveness and/or passivity, is explored in its role in aggression, along with the parts of the brain where this chemical is more prevalent and at what times. In this paper you will find that aggression is stimulated by the social environment, not by chemical concentrations in the body; those chemical concentrations respond to environmental stimuli by creating chemical balances. Animals have an equal capacity for aggression; what differs individually is the distribution (i.e., control) of aggression, and this distribution is a function of chemical balances in the body.

Let's start with social dominance. An experiment was done with Anolis carolinensis lizards examining a visual stimulus that modulated aggressive behavior. This stimulus was a darkened patch of the lizard's skin called the "eyespot". When this spot is seen by an opponent, the opponent becomes less aggressive and thus submissive to the lizard with the dark spot (1). The presence of dopamine (DA) was measured in the submissive and aggressive lizards. It was found that the subordinate lizards produced an increased amount of DA in the raphe and in a section of the brain that deals with emotion and fear response (4e) & (4a). There was a decreased amount of DA in the hippocampus, which deals with memory and navigation, and in the locus ceruleus, a part of the brain that deals with stress and panic (1) & (4). The complete opposite effect happens to the dopamine levels in the dominant lizard. Dopamine is often compared to adrenaline because it is the chemical that deals with action and pleasure (4b). From this experiment it might be interpreted that dopamine levels were the stimulus for aggressive behavior, when in fact the stimulus was the "eyespot". This environmental stimulus created the dominant and subordinate social statuses in the lizards. It was also found that when a lizard was put in front of a mirror image, no hierarchical status was formed, and that when a lizard with a covered "eyespot" encounters a lizard with a darkened "eyespot", the lizard with the covered "eyespot" becomes submissive (1). This finding alludes to the idea that genes do not determine hierarchical status; the environment does. The environment creates the stimulus, and it is that stimulus that influences the dopamine levels in the body toward submission or aggression. These lizards count on the environmental stimulus of the darkened "eyespot" to determine social hierarchy. This can be analogous to how humans form hierarchies (i.e., determining whether one could beat another in a fight by seeing if the opponent looks physically fit).

Let's further investigate this idea through research on aggressive play. Rats were studied as they engaged in aggressive play with rats dominant or subordinate relative to themselves. The most concrete factor defining hierarchy was age. One group of rats in adulthood and another in infancy had their orbital frontal cortex (OFC) damaged to see the effects on social behavior during aggressive play. In the control group it was observed that subordinate rats would play more aggressively with the dominant rat, while the dominant rat would make fewer playful attacks on the subordinate rat. Rats whose OFC was damaged in adulthood did not exhibit many changes in social behavior; the infants, however, treated dominant rats the same as they would peers and subordinate rats (2).


From this analysis it is clear that the infant rats were not able to receive the proper stimuli to differentiate between the dominant, the subordinate, and the self as figures in aggressive play. They therefore treated all playful opponents the same, without altering their aggressive behavior. This failure to discriminate between dominant figures often led to an actual fight between the dominant and the infant rats (2). This also presents the idea that the OFC damage did not inhibit the rat from becoming aggressive; it did, however, influence how the aggressive behavior was used. This brings us back to my previous argument that environment is what most influences aggression. The rat clearly was still able to be aggressive, but the distribution of its aggression was changed by the change in its biology. Aggression is not inhibited when a certain part of the brain is altered, but the way in which that aggression is used is.


Lastly, let's examine this idea further in an experiment investigating the chemical roots of harm avoidance in rats. The experiment measured rats' levels of harm avoidance alongside the levels of dopamine in the brain. Rats were placed in an open apparatus with square holes; how much a rat explored the holes indicated its level of harm avoidance or aggression (3). Afterward, chemical levels were measured as a more concrete gauge of aggressiveness. It was found that low dopamine levels were associated with more ambitious, aggressive rats (3). However, this research lacks the social dimension found in the other experiments, since it uses dopamine levels alone to account for aggressive behavior. Aggression is a social construct, and social interactions comparing one partner against another would therefore be a better way to categorize aggressiveness in animals. I would conclude from this experiment that under non-social conditions, aggressive potential can be estimated merely from the levels of dopamine in the system. Under social conditions, however, it is the environment that stimulates aggressive behavior and determines the distribution of social hierarchy, and thereby the level of aggressive potential. In non-social conditions, chemical balances of dopamine in the body do not indicate how aggressive an individual is or why aggressiveness starts, but they do hint at that individual's aggressive distribution (i.e., preference) for that particular situation.


In conclusion, there are many debates in the biology and psychology worlds that question one's capacity for aggression, yet many of those who question fail to take into account the environment and how it influences aggression. Aggression is a social construct, and the only way one can be categorized as aggressive or passive is to be measured among others. It is therefore my finding that every animal is capable of aggression; what differ are the environmental stimuli and the distribution of aggressiveness. The stimulus of aggressive behavior is based on the environment, yet how aggression is used is based on the chemical balances of the nervous system.
Reference List

1) Korzan, Wayne J., et al. "Dopaminergic Activity Via Aggression, Status, and a Visual Social Signal." Behavioral Neuroscience 120 (2006): 93-100.
2) Pellis, Sergio M., et al. "The Effects of Frontal Cortex Damage on the Modulation of Defensive Responses by Rats in Playful and Nonplayful Social Contexts." Behavioral Neuroscience 120 (2006): 72-84.
3) Ray, J., & Hansen, S., et al. "Links between Temperamental Dimensions and Brain Monoamines in the Rat." Behavioral Neuroscience 120 (2006): 85-92.
4) Wikipedia
4a) Amygdala. Updated 2 April 2006. Cited 30 March 2006.
4b) Dopamine. Updated 2 April 2006. Cited 30 March 2006.
4c) Hippocampus. Updated 2 April 2006. Cited 30 March 2006.
4d) Locus ceruleus. Updated 2 April 2006. Cited 30 March 2006.
4e) Raphe. Updated 2 April 2006. Cited 30 March 2006.
4f) Striatum. Updated 2 April 2006. Cited 30 March 2006.



Full Name:  Nicolette Belletier
Username:  nbelletier@brynmawr.edu
Title:  Beyond Prozac: A Deeper Understanding of Depression
Date:  2006-04-10 22:13:59
Message Id:  18947
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

Depression is a condition that affects 9.5% of Americans in a given year (1). Depression ranges in severity from mild to severe, and there is accordingly a wide range of effective treatments. Psychotherapy can help those with mild depression, while Electro-Convulsive Therapy may be the only hope for those with major depression. Research to help those with depression focuses on the interaction of neurons in the brain. Medical treatments for depression include drugs which target the chemical interactions of neurons. However, these treatments are limited, since some people with major depression do not respond to drug treatments.

Antidepressant medication as a treatment for depression approaches the workings of the brain chemically. Although neurotransmitters like serotonin regulate interactions between neurons and therefore play an important role in treating depression, it is clear that the workings of the brain are more complex, since antidepressant medications do not always improve patients' depression. There is even evidence that medications like Prozac increase the risk of suicide in teens and children who take them (2).

Given the limitations of antidepressant medication, examining the brain from a different perspective is illuminating alternative possibilities for treatment. Instead of focusing on the chemical interactions between neurons targeted by medications, scientists are seeking information about how neurons are organized into networks. Furthermore, moving beyond antidepressant medication requires that the patient be more aware of the treatment process he or she is going through.

According to the Hamilton rating scale for depression, symptoms of depression include insomnia, guilt, inability to work or pursue hobbies, and anxiety (3). However, one of the most important symptoms is a feeling of helplessness. Even in the case of major depression, when a person is incapable of carrying on daily life or is constantly thinking about death, the person may well be confused about why he or she is in such a state and still feel completely helpless.
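The Hamilton scale itself works by summing clinician ratings of individual symptom items into a total severity score. The Python sketch below is a deliberately simplified, hypothetical version of that scoring logic; the item names echo the symptoms listed above, and the numeric cutoffs are illustrative rather than the clinically validated thresholds of the actual instrument.

# Simplified sketch of rating-scale scoring. Items, ratings, and cutoffs
# are illustrative; this is not the actual Hamilton instrument.
ratings = {               # clinician rating per item, 0 (absent) to 4 (severe)
    "insomnia": 2,
    "guilt": 3,
    "work_and_hobbies": 3,
    "anxiety": 2,
    "helplessness": 4,
}

total = sum(ratings.values())
if total <= 4:
    severity = "minimal"
elif total <= 9:
    severity = "mild"
else:
    severity = "moderate to severe"

print(f"total score: {total} -> {severity}")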

Some people who suffer from depression and are able to recognize that their problem is not just a temporary slump can find relief in a variety of ways. Psychotherapy involves engaging the patient in understanding the roots of the depression, whether they lie in a difficult relationship or a traumatic event. Talk therapy can help a patient change his or her behavior without physically or chemically altering the brain. Sometimes regular exercise or even meditation is enough to lead a patient to recovery.

One patient described how meditation helped him overcome depression which plagued him for years (4). Even though it is impossible to know exactly what effect meditation has on the brain, it certainly involves becoming aware of one's mental state. In doing so, perhaps the person may be able to understand the way his or her brain reacts to situations and lessen the effects of depression. It is interesting to see how in some instances a person can use his or her own mind to fight against depression, even if medication or psychotherapy is still needed (5).

However, for some people, depression causes a feeling of utter hopelessness. Even though the patient may be aware that something is not normal, he or she is still stuck in depression and feels like there is no hope but to die. In cases of major depression in which there is a high risk of suicide, psychotherapy and antidepressant medications may not bring improvement.

The last-resort treatment has been Electro-Convulsive Therapy (ECT). The treatment is controversial due to possible side effects of memory loss as well as frightening depictions in pop culture. Also, even though it is understood that ECT causes changes in serotonin receptors in the brain (6), little is known about how the procedure works and how long it is effective. However, developments are being made that use an electrical current to locally stimulate parts of the brain without the side effects of ECT.

An experimental procedure called Deep Brain Stimulation (DBS), which electrically stimulates a specific area of the brain, has been effective in four of six patients (7). The procedure requires the patient to be conscious during surgery to describe the effects of the electrical current so that doctors can find the correct area of the brain, called "area 25". One patient described the sensation of having this specific area of her brain stimulated as suddenly feeling connected to the surgeon. When the stimulation was turned off, the feeling went away. The patient understood that there was a difference in her state of mind when the brain was stimulated, but had trouble articulating it. Still, she had a desire to break free from her depressed state of being, but she did not know how.

Tests show that another experimental procedure, called transcranial magnetic stimulation (TMS), can enhance a person's ability to perform certain tasks, like drawing. In one experiment, the subject of TMS is told to read a common saying out loud, and then to read it again while the brain is stimulated. Under TMS, the person noticed that an extra word had been placed into the phrase (8). When stimulated, the brain actually read the words instead of filling in what the person thought it saw.

Although TMS may be an exciting way to improve human skills in art or mathematics, it is also exciting to see how a person's outlook on the world can change dramatically when different parts of the brain are active. Certain parts of the brain must be active in order to react to stressful situations in life, just as they must be active to make mathematical deductions or translate an image inside one's head into a drawing (9). In some cases, the magnetic current actually temporarily weakens the frontal lobe of the brain in order to produce effects. When the subject realized that the card he was reading did not simply hold a common phrase, something happened to remove the instinct to simply recite the phrase. It is unclear exactly how either TMS or DBS works, except for the fact that they regulate the interaction of neurons in a complex network. However, there is evidence that the treatment causes at least temporary effects.

The procedures may still be experimental, but the implications of these studies are helpful to our understanding of the brain, and therefore of depression. If someone can consistently look over a word and then instantly notice it, it is easy to see how stimulation of a certain part of the brain could change the way a person feels about his or her life in a much broader sense. Also, after seeing the extra word, the subject came to a realization about how his brain interpreted the outside world. Perhaps after undergoing this therapy, patients with depression could understand the assumptions their brains made that caused life to seem so hopeless, and find relief.

Commercials for antidepressant medications like Prozac and Zoloft are common, and in many cases these drugs provide at least some relief for those suffering from depression. However, it is clear that though these treatments come in a convenient pill form, they are not effective for everyone. Moving beyond thinking about the brain in terms of neurotransmitters, and involving the patient more directly in treatment, is allowing people to find relief when medication was not an effective option.


REFERENCES

1) NIMH: Depression

2) NPR: FDA Study Links Antidepressants, Youth Suicide. August 20, 2004

3) Hamilton rating scale for depression

4) NPR: Andrew Weil, NPR program about "integrative medicine" treatments for depression

5) Meditation and Depression

6) ECT and receptor function

7) Mayberg et al.

8) Savant for a Day, New York Times article on TMS

9) PubMed abstract



Full Name:  Rachel Freeland
Username:  rfreelan@brynmawr.edu
Title:  Traditional Versus Atypical Antipsychotics
Date:  2006-04-10 22:36:01
Message Id:  18948
Paper Text:


Biology 202

2006 Second Web Paper

On Serendip

About 3 million Americans suffer from schizophrenia. Although this disease is not curable, there are many treatment options that suppress the symptoms. Doctors routinely recommend the use of antipsychotic drugs as the primary treatment. Over the years, two different categories of antipsychotic drugs have been developed: traditional and atypical antipsychotics. The question remains, however: should traditional or atypical antipsychotics be prescribed to reduce the symptoms of this debilitating disease?

Schizophrenia comes from the Greek words meaning "split" and "mind", but people with schizophrenia do not have split personalities. Rather, "split mind" refers to the fact that people with schizophrenia are split off from reality and cannot distinguish what is real from what is not.

Schizophrenia is a misregulation of information in the brain. There are three hypotheses as to its etiology. The first is that many different neurotransmitter pathways are presumed to be involved in the biological basis of the disorder. The second is that genetics may play an important role, and the third is that the environment may trigger a possible genetic predisposition. Most likely, a combination of the three triggers the onset of schizophrenia.

The symptoms of schizophrenia have been broken down into two categories, positive and negative. Positive symptoms include hallucinations, delusions, disorganized speech, increased goal-directed activity, and illogical thoughts. Negative symptoms include blunted affect, impaired emotional responsiveness, apathy, loss of motivation and interest, and social withdrawal.

Despite the fact that schizophrenia is not curable, there are treatment options that help suppress the symptoms. Psychological treatments, such as supportive psychotherapy and reality-oriented family therapies, are often used. In addition, social interventions aimed at reducing relapses and facilitating reintegration into society have proven useful. The primary treatment option, however, is usually antipsychotic medication.

"Antipsychotics are a group of drugs that are used to treat a handful of psychiatric disorders characterized by disturbed thought and behavior, most notably schizophrenia. Although they are not curative, they relieve some of the debilitating symptoms of this group of disorders" (1). The precise mechanism of action that accounts for the effects of antipsychotic medications is still unknown. However, the dopamine hypothesis is the predominate theory used to explain the action of these drugs. Dopamine produces its effects by activating dopamine receptors on postsynaptic neurons. Many antipsychotics appear to act by blocking dopamine receptors in the brain.

There are two categories of antipsychotics: traditional antipsychotics and atypical antipsychotics. Traditional antipsychotics were first developed in the 1950s and were used to treat psychosis, particularly schizophrenia. They are especially good at reducing the positive symptoms, but do not reduce the negative symptoms. Traditional antipsychotics are broken into two classifications: low-potency and high-potency. Common side effects of traditional antipsychotics include dry mouth, weight gain, muscle tremors, and stiffness. In addition, traditional antipsychotics yield extrapyramidal side effects, including motor disturbances, parkinsonian effects, akathisia, dystonia, akinesia, tardive dyskinesia, and neuroleptic malignant syndrome. Some of these side effects have been described as worse than the actual symptoms of schizophrenia.

The first atypical antipsychotic, clozapine, was discovered in the 1950s but was not introduced clinically until the 1970s. However, it quickly fell out of popularity due to drug-induced agranulocytosis (loss of the white blood cells that fight infection). During the 1990s, olanzapine, risperidone, and quetiapine were introduced onto the market. Atypical antipsychotics treat both the positive and negative symptoms of schizophrenia. Side effects of atypical antipsychotics include agranulocytosis, weight gain, and some extrapyramidal side effects. Atypical antipsychotics are considered to be the first line of treatment for schizophrenia and are gradually replacing traditional antipsychotics.

Are atypical antipsychotics always better than traditional antipsychotics? Some advantages atypical antipsychotics have over traditional antipsychotics are that they produce fewer anticholinergic side effects and fewer parkinsonian and dystonic side effects, and they suppress the negative symptoms. They also have a lower propensity for causing extrapyramidal side effects. However, each atypical antipsychotic has a different chemical structure, and therefore side effects vary from drug to drug. In addition, many atypical antipsychotics are compared to haloperidol (a traditional antipsychotic that yields numerous extrapyramidal side effects), so it is not surprising that the atypical antipsychotics have an improved extrapyramidal side effect profile in comparison. Many atypical antipsychotics produce fewer side effects at lower doses, but once the dose is increased to maintain a therapeutic effect, the severity and number of side effects also increase. In a study by John Geddes, Nick Freemantle, and Paul Bebbington, Atypical antipsychotics in the treatment of schizophrenia: systematic overview and meta-regression analysis, the researchers examined 12,649 patients in 52 randomized trials comparing atypical antipsychotics (amisulpride, clozapine, olanzapine, quetiapine, risperidone, and sertindole) with traditional antipsychotics (haloperidol and chlorpromazine). They found that "there is no clear evidence the atypical antipsychotics are more effective or are better tolerated than traditional antipsychotics. Traditional antipsychotics should usually be used in the initial treatment of an episode of schizophrenia unless the patient has previously not responded to these drugs or has unacceptable extrapyramidal side effects" (2).

Even though traditional antipsychotics and atypical antipsychotics are both effective in treating some of the symptoms of schizophrenia, atypical antipsychotics seem more effective because of their ability to suppress both the negative and the positive symptoms. Although they do produce some side effects, the severity of those side effects is generally lower. Perhaps a combination of traditional and atypical antipsychotics is the best way to treat schizophrenia.





Works Cited
1) Drugs and the Brain, information about antipsychotics
2) Geddes, John, Freemantle, Nick, and Bebbington, Paul. Atypical antipsychotics in the treatment of schizophrenia: Systematic overview and meta-regression analysis. British Medical Journal, Vol. 321, 1371-1376.

Works Consulted
1) Schizophrenia Society of Canada, information about schizophrenia for families
2) National Institute of Mental Health, information about schizophrenia
3) Mental Health Medications, information about antipsychotics



Full Name:  Julia Patzelt
Username:  jpatzelt@brynmawr.edu
Title:  Drug Addiction: Free Will and Standards of Normalcy
Date:  2006-04-11 00:11:31
Message Id:  18952
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

In my last paper, I explored the ways in which environment and genetic trends influence the disease of drug addiction. Using research on brain receptors, behavioral patterns, and neurotransmitter pathways, I found that, at the very least, genes and personal experiences may affect an individual's participation in the vicious cycle of addiction. Much of the research asserts that the seeming loss of free will during addiction is a direct result of the predisposing factors and the direct biochemical effects of the substances themselves. I followed these assertions without questioning IF the I-function and free will are inextricably linked to the problem of addiction.

Another assumption inherent in much of the research and intellectual debate is that drug abuse is an abnormal or overriding force in contrast to 'normal'/healthy neuro-function. Again, while this assumption is a popular and comfortable one, the boundaries of 'normal' appear to be arbitrarily drawn, and the distinct categories of normal biochemistry and normal behavior are either conflated or ignored. I plan to further our investigation into the realm of drug addiction by addressing these issues from the perspective of an amateur neurobiological scientist and a critic of the research constructs that frame the studies of drug addiction.


Does the I-function Have Any Autonomy "Under The Influence?":

Researchers have used the correlations between drug addiction and trends in genetic/receptor patterns in the brain in an attempt to establish causation between genetic 'predispositions' and the disease of addiction. While the biological trends are supported by strong data, there is little explanation for individuals who follow the trend yet avoid addiction, or for individuals who don't follow the trend and fall prey to the disease regardless.

Lab rats are more predictable than humans in their behavioral patterns under the influence. While rats have been used to justify many theories on addictive behavior, there have also been contradictory findings that are largely ignored in favor of the data that supports the current party line. The current explanation for the slippery trajectory of addictive behavior states that the survival/decision-making parts of the brain become hijacked by the addictive substance and are therefore rendered useless. Why, then, have there been groups of rats, and more commonly and importantly humans, that are able to access the dopamine reward pathway with drugs and not become dependent on the substance? The dependence must be extricable from the addictive substance, and therefore there must be mechanisms in the I-function that can either process or express immunity to the chemicals (1).

If we assume that the I-function (and thus the individual) can have some level of autonomy in the presence of psychotropic substances, then what divides users with "self-control" from users who seem to lose their sovereignty over the substance? A recent article's title inadvertently draws attention to this problematic distinction based on contemporarily codified patterns of psychology: "Drug Abuse: A Preventable Behavior; Drug Addiction: A Treatable Disease" (2). Part of the definition that divides abusers from addicts depends on researchers' attempts to loosely quantify a user's "motivational strength" by rating behavior based on categories of effort and desperation to obtain the relevant substance (1). Obviously these distinctions make sweeping assumptions about the relationship between a person's behavior and their relative feeling of desperation, but what about variations among and between populations? What about the variance in I-function autonomy? How can one possibly establish a gradation of usage behavior when we struggle to categorize quotidian behavior? (This issue arises later in the paper when the concept of normal behavior is challenged.)

Another issue with the causation arguments is the fact that almost all of the biochemical research is done on lab animals/rats. Lab animals are arguably much simpler biochemically than human beings, and our understanding of animal I-functions (if they exist) is more extensive than our limited understanding of our own I-function. Some of the lab animal experiments have shown that there are common neurochemical mechanisms at work in many animals that show addictive behavior. The findings include not only shared pathways but the effects the drugs have on those particular pathways (accelerating the firing rate of the neuron cell bodies/accelerating the rate of action potential transfer). Establishing similarities in genetic expression and action potential activity in lab rats does not automatically explain the connection between human neurochemistry and behavior with regard to drug addiction (1).


What Is Normal?:

"Current neurobiological research on drug addiction assumes that addiction interferes with an individual's healthy psychological and mental development and lifestyle. But do the genetic patterns and biochemical changes during addiction necessarily indicate a flaw in the brain's evolutionary development, or do they represent a variation on normal neurochemistry?" (3).

Connecting the neuroscience of natural rewards to drug addiction can explain the ability of addiction to progressively hijack the brain and prevent decisions that support an individual's survival, such as eating and sleeping. "Indeed, a recurring theme in modern addiction research is the extent to which neuroadaptations responsible for various aspects of the addiction process are similar to those responsible for other forms of neural plasticity studied in cellular models of learning, such as long-term potentiation and long-term depression." (4) But drugs do not replace components in the brain; they only alter them. If a brain's survival mode or natural state of being can be altered by the pattern of addiction, could addiction not be an inherent part of our neurochemical infrastructure, falling into a category with other external stimuli that affect our homeostasis?

Could drug abuse have an exacerbating rather than a competitive effect on the reafferent loop? Is it not possible that addiction is an extreme example of reafferent neurochemical function? What justifies the categorization of addictive behavior as an outlier? My last paper in this series will focus on the issue of the reafferent loop, as well as on the role of central pattern generators. We will come full circle in our investigation by addressing the relationship between neurophysiology/neurochemistry and behavior/psychology.

Sources:
1.) 1)"Addiction controversies.", M.A. Bozarth (1990). Drug Addiction as a
Psychobiological Process. In D.M. Warburton (Ed.)
2.) 2)HealthWise Newsletter, July 1997
3.) 3) Patzelt, Julia. Neurobiology Paper #1: "Drug Addiction: Which Comes First- Brain or Behavior- and Does it
Matter?"
4.) a name="4">4)
A Behavioral/Systems Approach to the
Neuroscience of Drug Addiction
, The Journal of Neuroscience, May 1, 2002



Full Name:  Andrea Goldstein
Username:  agoldste@brynmawr.edu
Title:  Photographic Memory: A Look at Eidetic Imagery in the Brain
Date:  2006-04-11 00:15:50
Message Id:  18953
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

"Everyone has a photographic memory. Some don't have film." As evidenced by this humorous quotation, the topic of photographic memory is quite prevalent in pop culture. Like much of the often talked about subjects in pop culture, however, the actual phenomenon is not very well understood by the general public. Photographic memory, or as it is technically called, eidetic memory, is not a well understood phenomenon in the world of neuroscience either. Much evidence points towards the concept being entirely fictional, as research has not been able to consistently verify the presence of such memory. The question, then, is not only whether photographic memory exists, but also whether it is neurologically feasible.

In theory, photographic memory involves the ability to remember things so vividly that an actual image is retained in the mind. (1) People with photographic memory can supposedly remember an unlimited amount of information with accuracy far superior to the average person. There have been a few well-documented cases of such remarkable recall, such as "S", the subject of Luria's The Mind of a Mnemonist, who could memorize anything from the books on Luria's office shelves to complex math formulas, and "Elizabeth", a woman who could mentally project images composed of thousands of tiny dots onto a blank canvas. (2) Both could also reproduce poetry in languages they could not understand years after seeing it written. (1) (2) Such recall seems as though it might be correlated to the phenomenon of flashbulb memory. In highly emotional situations, people tend to remember events so vividly that the memories take on a photographic quality. (3) Such memories were, until recently, believed to be permanent, never fading in quality. Recent studies, however, indicate that over time, people's memories of such events actually do fade. How accurately people remember such events appears to be directly proportional to how strong the emotional ties to the event were. (4) Photographic memory, which looks on the surface to be the same phenomenon, should not be long-lasting, since it does not generally have any emotional content.

If eidetic memory, which is so often referenced with respect to unemotional images or events, cannot be tied to the emotionally-linked phenomenon of flashbulb memory, then in what situations could it be observed? One frequently researched area is that of chess board configurations. It has been found that chess experts have much better recall for the location of pieces on a chess board than novices. The advantage in memory, however, is completely neutralized when the pieces are arranged in a way that could never occur in the course of an actual chess game. (5) This evidence seems to indicate that expert chess players are not actually using any sort of photographic imagery to recall the location of the pieces on the board. They are instead relying on having seen many different board configurations in the past and using this experience to recreate the situation they were shown.

If not in experts, then perhaps eidetic memory can be found in children, who have little experience with the world, and thus may have no knowledge base to help them memorize things. According to Lev Vygotsky, one of the most influential theorists in the field of developmental psychology, young children do indeed rely on eidetic imagery to help them remember things. He references a child's closing his or her eyes and moving them around when asked to recall an image as evidence that children retain a mental picture of objects they have seen. In adulthood, he theorizes, these memory techniques are replaced by verbal techniques, such as mentally rehearsing a list of objects. (6) More recent studies have indicated that children's eidetic memory is not as universal as Vygotsky originally perceived. In a number of experiments, only 2-15% of elementary school children were able to project an image they had seen onto a blank easel and describe it afterwards. (7) Vygotsky's theory, then, that children use primarily eidetic imagery in memory until it is replaced by "higher mental functions" (6) involving verbal behavior in middle childhood, cannot be the complete answer.

Because only isolated examples of eidetikers (people who are capable of eidetic imagery) have been found, there doesn't seem to be any explanation for how such a phenomenon works neurologically. According to a well-accepted theory of memory, the first step in memory storage is sensory memory. Generally, information is stored here only very briefly, and is either lost entirely, or, if given proper attention, processed further. While still in the sensory memory, visual information is believed to be stored as an actual image. Any further processing is thought to change visual information into conceptual information. (8) The chess player, for example, no longer sees the actual chess board, but rather an internal, abstract concept of a chess board. Since photographic memory involves seeing visual images, it must be on the very basic sensory level that eidetic memory functions.

Is it possible then that something in the brains of these so-called eidetikers has been wired incorrectly, causing traces of memory that should only last mere seconds to remain in a person's memory for minutes, hours, or, in cases like S or Elizabeth, years? Absolutely. Memory is believed to be facilitated by changes at the neuronal level due to long-term potentiation. This phenomenon is essentially the strengthening of synaptic efficiency through repeated use over time, producing long-term memories. (9) Normally, this type of induction takes several rounds of stimulation in order to produce the increased proficiency of the neural circuit. It is conceivable that in a small portion of the population, genetic or environmental factors that have yet to be discovered lower the threshold for this potentiation, resulting in sensory memory that remains stored as a visual image instead of being lost or processed conceptually. Multiple stimulations would not be necessary to retain these images; one brief presentation of a stimulus would be sufficient.
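
The lowered-threshold hypothesis can be illustrated with a toy model (a sketch of my own, not drawn from the cited sources; the threshold and increment values are arbitrary): if each presentation of a stimulus adds a fixed amount of synaptic drive, then lowering the potentiation threshold changes how many presentations are needed before the trace is retained.

    # Toy model of the lowered-threshold idea (all values hypothetical).
    # Each identical stimulation adds a fixed increment of synaptic drive;
    # the synapse potentiates once the accumulated drive crosses a threshold.
    def stimulations_needed(threshold, increment=1.0):
        total, count = 0.0, 0
        while total < threshold:
            total += increment
            count += 1
        return count

    print(stimulations_needed(threshold=5.0))  # typical case: several rounds needed
    print(stimulations_needed(threshold=0.5))  # lowered threshold: one presentation suffices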

Such a perspective on the neurological basis for eidetic memory would explain many of the unanswered questions on the topic. Photographic memory may be so rare that it appears to be fictional because it is the result of an uncommon genetic mutation or an unlikely combination of environmental and genetic factors. The greater prevalence of photographic memory in children can be explained by the re-appropriation of neurons from sensory memory circuits to verbal memory circuits as verbal behavior increases. That it is an event linked to sensory memory, and not episodic (autobiographical) memory, explains why emotional or experiential ties to the object do not increase memory for it.

Advancing the field of photographic memory research would require scientists to find more subjects with unusual memory abilities. One recent case is that of "AJ", who seems to remember every detail of even the most trivial events in her lifetime. (10) Neurological testing may lead to a greater understanding of the location of memory in the brain, and more specifically, of what causes such extraordinarily clear and detailed memories to form. With increasingly sophisticated technology and the hope that more people with exceptional memories will come forward, it is possible that the many unanswered questions about photographic memory will someday be less of a mystery.

Resources:
1) Eidetic Memory, from Wikipedia

2) An Adult Eidetiker, from the Sarah Lawrence College website

3) Flashbulb Memory, from Wikipedia

4) A study on flashbulb memory, from The Discovery Channel Canada Online

5) Photographic Memory, from the MadSci Network

6) Vygotsky, from the Massey University website

7) Children's Eidetic Memory, from the Magic Mnemonic Website

8) Memory, from the Memory Disorders Project of Rutgers University Online

9) Memory, from the University of Memphis Neuropsychology website

10) The Woman With Perfect Memory, from ABC News Online



Full Name:  Tamara Tomasic
Username:  ttomasic@brynmawr.edu
Title:  Dominance, Handedness, and Ambidexterity
Date:  2006-04-11 00:29:44
Message Id:  18955
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Why do we have a dominant and non-dominant side? We have always known that we are not perfectly symmetrical, but why should our asymmetry translate into preference for one side of our bodies? Thinking in an evolutionary/selective way, why are there still two (or three or more) different types of dominance? Wouldn't it have made sense that, since an overwhelming majority (90-95%) of the population is right-handed and the world is consequently suited best for right-handed people, left-handed preference would have been systematically selected against? Or maybe it would have made the most sense for no preference to exist at all, and for the entire world to be ambidextrous?


It might seem like a good idea to be equally proficient with both hands: if one cannot be used for a particular task at any point in time, the other can substitute, as it can be equally well controlled. Interestingly, observations of human and animal behaviour have shown that ambidexterity is not as advantageous as it seems at first. In observed termite-eating behaviour of chimpanzees, both right- and left-handed preference were seen, as well as ambidexterity. But those individuals that showed a strong preference for either one hand or the other ate a third more termites than those individuals that showed no preference and used both hands (1) Science and Nature: Animals, an article about handedness in apes. One explanation for this was that those who preferred only one hand used it more often, thus becoming more specialized and precise in their task. The ambidextrous individuals were not as proficient at catching termites, though their level of proficiency was equal in either hand.


Looking at handedness from a historical point of view, being right-handed was a distinct advantage when it came to defense. Because the heart is on the left side of the body, it would make sense to hold the shield on the left side, and thus on the left arm, to defend it. This means that the left arm was passive for most of a man's life, while the right arm was actively used. Could it be that handedness was determined through necessity? If this is the case, it means that we have evolved to have a preference for one hand over the other. But because left-handedness is so rare, wouldn't the element of surprise be reason enough to keep left-handedness in the gene pool? And if this were the case, shouldn't dominance in hand preference shift between right and left? It would also make sense to be able to use both hands equally well in a battle: if one of your arms was injured, you could continue the fight with the other arm; this would be a very advantageous adaptation.


Despite this argument for ambidexterity, historical evidence points to the fact that right-handedness was the predominant (and favored) preference. Recent studies have shown that handedness can be linked to bone length, or rather, that bone length can be used to determine handedness in individuals from other eras. By measuring the bone lengths of modern-day British individuals with a known preference, a correspondence was found: longer arm bones were present in the arm that was preferred (or bones were of equal length if both hands were preferred equally). The numbers obtained from living individuals were 82% right-handed, 3% equal preference, and 15% left-handed. When the skeletons of medieval English villagers were measured, surprisingly similar numbers were found, with 81% having longer bones in the right arm, 3% bones of equal length, and 16% longer bones in the left arm (2) New Finding on the Frequency of Right and Left-handedness in Mediaeval Britain, a study about handedness using modern-day and mediaeval British subjects. This shows that natural handedness seems not to have drastically changed since the middle ages, when literacy was at a low point and the left was associated with evil.


Also interesting is seeing whether humans and hominoids are the only ones with dominance and handedness. Looking at the fiddler crab, we see that handedness is present even within this organism. But while handedness exists, the ratios vary among the species: some species show a virtually 50:50 ratio of right to left, while in others the right predominates (3) Science Week: Zoology: on Fiddler Crabs, an article about handedness in fiddler crabs. Why this difference in dominance/preference? The fiddler crabs would seem to suggest that either the environment or genetic isolation has the determining effect on ratios of handedness within a population.


To this day, science has been unable to come up with a satisfactory explanation for handedness and dominance in general. Organisms with preferences for one side of the body over the other seem to function better, but why they would pick one side over the other has yet to be understood.



Full Name:  Jennifer Lam
Username:  jlam@brynmawr.edu
Title:  Searching for God Within
Date:  2006-04-11 04:07:14
Message Id:  18958
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Religions are as diverse as the cultures of the world. Although we worship different deities, have different values and perform different practices, the human race has searched and continues to search for the meaning of life and a refuge in a higher power. Wars have been fought, lands have been conquered, and holidays have been celebrated all in honor of these Gods. It seems as though religion and faith are inescapable these days; they are integrated into our lives whether we invite them or not. The fact that every culture has sought or is seeking a higher power has some peculiar neurological implications. The universal need for religion can at least be partially described by science, genetics and neurology.

Until recently, religion and science have been like oil and water; neither scientist nor theologian dared cross over and integrate one field into the other. However, times have changed. Neurotheology is the study that unites these seemingly different entities in hopes of being able to understand the inexplicable: God and consciousness (1). Its goal is not to materialize God or to reduce religious experiences to brain functions, but rather to shed light on the interplay between the brain and these religious encounters. Essentially, neurotheologists wish to learn more about the brain and religion by using one to study the other.

With the advent of high-tech brain imaging techniques, researchers are able to see the inner workings of the brain and observe neurochemical changes in the brain due to different stimuli (2). Specifically, neurotheologists have employed single photon emission computed tomography (SPECT) in order to visualize brain activity during deep prayer and meditation. SPECT uses radioactive tracers, a scanner, and a computer to read, record and produce a picture of the brain (2). Dr. Andrew Newberg of the University of Pennsylvania performed SPECT scans on Tibetan monks and Franciscan nuns to take snapshots of their brains during their deeply religious experiences (1), (3). He and his colleagues found that the prefrontal cortex, which is associated with thoughts and actions affiliated with internal goals, lit up (1). More interestingly, the superior parietal lobe, which is correlated with spatial and temporal orientation, grew dim (4).

Words do little to describe the feeling one gets when he/she enters a mystical state. Often, a sense of unity comes over a person deep in meditation, which is sometimes accompanied by a perceived voice. For this moment in time, all boundaries cease to exist and a feeling of euphoria usually occurs. The superior parietal lobe, also known as the orientation association area of the brain, is responsible for this religious experience, according to the SPECT scan data (4). This region of the brain combines visual and somatosensory signals to distinguish the body from the rest of the world (1), (5). So, it makes sense that, when the superior parietal lobe is "turned off," the feeling of unity and oneness arises.

To account for the turning off of this area of the brain, we can turn to the input side of the nervous system. Since the orientation association area integrates visual, temporal and spatial inputs in order to situate ourselves with respect to the external world, blocking any sensory input will hinder the brain's ability to segregate and set boundaries (1). This is what occurs during meditation and prayer. Without these inputs to the nervous system, infinity can be experienced, which in and of itself can be a rather sensational experience, with or without religious intentions.

Interestingly, another way to quiet the orientation association area is to participate in a ritual. In their most basic form, rituals can be a vehicle for species survival. Being able to create some sort of unity among a group of individuals can increase their chances of survival; think of mating and hunting rituals (6). Not only does the repetitive nature of rituals allow a sense of unity to be felt by all participants, but it also sends the sensory input-processing unit into overdrive. When this occurs, the hippocampus intervenes and inhibits neuronal signaling in order for the brain to be able to process all the inputs it is receiving (1). The orientation association area is under hippocampal control and thus becomes inhibited during neuronal overdrive (1). Essentially, the "I" function is turned off, and the sense of self disappears.

With the "I" function shutting down, consciousness is impaired, and therefore, the unconscious brain is the main player behind the experiences of those who are undergoing a religious experience. Besides a sense of unity, another sensation correlated with deep prayer and meditation is the ability to hear voices that do not seem to be associated with your own inner voice. Sometimes, one assumes that this voice is that of God. Those whose unconsciousness is able to overcome their conscious thoughts are thought of to experience this phenomenon more so than others. What neurotheologists believe is happening is dissociation between different regions of the brain, which misidentifies inner speech with something existing outside of oneself (1). The region of the brain that is responsible for producing speech, the Broca's area, does not match up with the sensory processing unit of the brain since, during meditation and prayer, this unit is overloaded with input (1), (7). Therefore, the distinction between self and non-self do not align, and the voice seems to be coming from an external source.

Relating these mismatches to topics discussed in class and the forum, we could look at the reafferent loop to help us further understand this issue. Some neural outputs do not require any input or stimulus for the output action to occur. Many times, the nervous system's pattern generators will create outputs in order to produce inputs; this feedback mechanism is called the reafferent loop. The pattern generators that spontaneously create the output in the middle of the "box model" of the nervous system partake in a corollary discharge symphony where they essentially relay information to other pattern generators, so as to create coordination among them (8). During meditation and prayer, perhaps the corollary discharge harmonization becomes impaired in such a way that internally generated output signals do not match up with inputs received by other pattern generators, thus creating a perception of an externally created input.
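
One way to picture this mismatch idea is as a toy comparison between a corollary-discharge prediction and the input actually received (a sketch of my own with made-up numbers, not a model from the course materials or the cited sources): when a matching prediction arrives, the input is recognized as self-generated; when the prediction is lost in the overload, the same input reads as external.

    # Hypothetical sketch of the corollary discharge mismatch described above.
    # A pattern generator's predicted input is compared with the input actually
    # received; anything unpredicted is attributed to an external source.
    def attribute_source(predicted, actual, tolerance=0.1):
        if abs(predicted - actual) <= tolerance:
            return "self-generated"  # prediction matches: recognized as inner speech
        return "external"            # no matching prediction: heard as an outside voice

    print(attribute_source(1.0, 1.0))  # corollary discharge intact -> "self-generated"
    print(attribute_source(0.0, 1.0))  # discharge lost in overload -> "external"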

Although the development of neurotheology excites some, it offends others. It is quite obvious where the controversy lies. It's important to note that neurotheologists neither claim that God is a product of the mind nor seek to reduce religious experiences to brain behavior; instead, they simply wish to understand the link between minds and spirituality (1), (3). As with anything in neurobiology, evidence that one thing happens offers no grounds for ruling out other events. Just because an experience is correlated with a certain neuronal activity does not mean that the experience only exists in the brain (1). However, there is no way of knowing whether the brain is causing the experience or reacting to a spiritual encounter. As long as our brains are functioning the way they do and our innate need for finding explanations for the unknown fuels our exploration, the debate will continue and perhaps will never be resolved.

References:

1) Newsweek Article, Religion and The Brain: In the New Field of "Neurotheology," Scientists Seek the Biological Basis of Spirituality. Is God All in Our Heads?


2) An Introduction to Brain Imaging.


3) Tracing the Synapses of Our Spirituality.


4) Religion and the Brain.


5) Parietal Lobe


6) Neurology, Ritual, and Religion: An Initial Exploration.


7) How the Brain "Creates" God: The Emerging Science of Neurotheology.


8) Serendip Website, Neurobiology and Behavior Spring 2006



Full Name:  Anna Dejdar
Username:  adejdar@brynmawr.edu
Title:  Borderline Personality Disorder: Exploring the Etiology
Date:  2006-04-11 05:40:27
Message Id:  18959
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

In one part of the book "Lost In The Mirror", Richard A. Moskovitz, M.D., writes, "Elton John's characterization of Marilyn Monroe as a candle in the wind captures the essence of the borderline personality. She is an elusive character lacking in identity, overwhelmed by a barrage of painful emotions, consumed by hunger for love and acceptance, and careening from relationship to relationship and impulse to impulse in a desperate attempt to control these feelings" (3). This portrayal vividly shows the key characteristics of Borderline Personality Disorder (BPD), which is a "Cluster B Personality Disorder". Personality disorders are identified as being "pervasive, persistent, inflexible, maladaptive patterns of behavior that deviate from expected cultural norms" (1), and "the symptoms are seen in at least two of the following areas: Thoughts, Emotions, Interpersonal Functioning, Impulse Control" (2). BPD is a serious disorder that affects approximately 2-4% of the population in the United States (13). The DSM-IV, the Diagnostic and Statistical Manual of Mental Disorders used by the American Psychiatric Association (14), has a list of official criteria for diagnosing BPD. A person must have at least five of the following: "Intense and unstable personal relationships, Frantic efforts to avoid real or imagined abandonment, Identity disturbance or problems with sense of self, Impulsivity that is potentially self-damaging, Recurrent suicidal or parasuicidal behaviour, Affective instability, Chronic feelings of emptiness, Inappropriate intense or uncontrollable anger, and Transient stress-related paranoid ideation or severe dissociative symptoms" (4).

Furthermore, the description of Marilyn Monroe demonstrates one of the main characteristics of people with BPD, which is that they have very low self-esteem and feel that they are worthless; as a result, they become extremely attached to anyone they are in a relationship with because of a fear of being abandoned by that person. They are clingy, need a great deal of attention, and become overly involved. Then the fear often becomes even more extreme, and they actually begin to push the other person away and out of their life so that they will not be left. As a result of this treatment, the other person often does end up leaving, which ends up validating the feelings of worthlessness that the person felt in the beginning. This is a vicious cycle that keeps repeating for people with BPD, and it becomes very difficult for them to get out of it and move away from those feelings. People with BPD can become so desperate that they might engage in self-mutilation in order to try to get the other person to come back to them out of concern (5). As seen from this description, BPD has a lot to do with a person's thoughts and feelings about themselves and others, which makes it difficult to treat and also to identify its etiology (cause).

The exact etiology of BPD has not been found, but there are multiple theories: one concerns childhood abuse, and the other is a biological etiology. A very strong theory about the etiology is that people with BPD have suffered from early childhood trauma. It has been found that approximately 87% of people with BPD suffered from some sort of childhood trauma, 40-71% from sexual abuse and 25-71% from physical abuse. It has also been shown that when the abuse occurs early in a person's childhood, he/she has more damaging problems later in life. The explanation for this is that when the child is experiencing the abuse, he/she does not know how to make sense of what is happening, and the resulting confusion affects the person's thoughts and feelings. The abuse also affects the relationships that the person will have in the future, because he/she develops difficulty understanding the feelings and thoughts of others and his/her relationships with other people. Another important aspect of the abuse is that while it is occurring, the child seems to enter a "dissociated state" (4), in which he/she no longer feels the pain, as a way of defending against it. This idea is supported by the fact that people with BPD often cut their own bodies; it is reported that they do not feel any pain, which would suggest that they are also doing this in a "dissociated state" (4). Childhood abuse also points to the family environment as an important factor: families that are unable to protect their child from the abuse, or that inflict it themselves, have serious problems, which could be another strong contributing factor (4).

The second theory of etiology consists of three different biological explanations for the development of BPD. The first is that there is a problem in the limbic system, specifically in the amygdala and the hippocampus, of a person with BPD. Both the amygdala and the hippocampus are in charge of regulating the expression of emotions, particularly the expression of "fear, rage, and automatic reactions" (6). All of these are very important components of BPD, in which people have excessive anger and fear in their relationships, demonstrated through impulsive acts like self-mutilation, an example of an automatic reaction. The limbic system in general is considered the "emotional centre" (6) of the brain. Studies have found that the volumes of the amygdala and of the hippocampus are significantly smaller in people with BPD than in people who do not have any mental illness, indicating that there could be a link between BPD and a dysfunctional amygdala and hippocampus (6).

Another part believed to be involved is the orbital prefrontal cortex, which also has a very important role in regulating emotions. In particular, it has been found that the orbital prefrontal cortex plays a key role in inhibiting the limbic regions that generate aggression. Aggression is inhibited through the serotonin system: serotonin regulates activity in the prefrontal cortex, so when serotonin is reduced, the inhibition of that aggressive activity becomes a problem (7). Therefore any damage to this region could also result in problems with emotional expression, which can definitely be seen in people with BPD as they move through very dramatic and severe emotions. One study done by Paul Soloff, M.D., and his associates found lower levels of glucose in the prefrontal cortex of people with BPD; since glucose levels are associated with serotonin, low levels of glucose signify a deficient amount of serotonin (6), which could support the theory linking the serotonin system, the prefrontal cortex, and BPD.

Another theory about the biological etiology of BPD looks at the function of the orbitofrontal cortex. One study compared people with BPD, people with lesions in the orbitofrontal cortex, people with lesions in the prefrontal cortex but not the orbitofrontal cortex, and a control group of "healthy subjects" (8). The study explored the possible etiology of BPD by giving all the subjects tests and questionnaires and comparing the groups' performances, reactions, and responses with those of people with BPD. The results and conclusions of this study were interesting because they introduced another possible aspect of the development of BPD. The researchers found many similarities between people with BPD and the people with orbitofrontal cortex lesions; in particular, both groups were shown to be "more impulsive" (8), reported "more anger and less happiness" (8), and displayed more inappropriate behaviors than the other two groups. This would suggest that those aspects of BPD are related to a problem in the orbitofrontal cortex, since both groups displayed the same responses in those areas. However, there were differences between the two groups suggesting that not all of the traits of BPD are due to problems in the orbitofrontal cortex. For example, people with BPD "were more neurotic, less extraverted, and less conscientious than all other groups" (8), even the group with lesions in the orbitofrontal cortex. Based on this finding, the researchers concluded that there must be a problem somewhere else in the brain that is responsible for these other aspects of BPD. They also suggested that this area might be in the limbic system, specifically in the amygdala, which is involved with emotion (8).

All of these theories about the etiology of BPD offer different explanations and point out various aspects of the disorder; however, there does seem to be an interaction among them. There is something known as the "diathesis-stress model" (14) in psychology, which states that in the development of an illness or disorder there can be a relationship between a predisposing factor, known as a "diathesis," and the stress of the environment that the individual grows up in (14). This could be applied to BPD: a person could have a problem in the orbitofrontal cortex, the orbital prefrontal cortex, the limbic system, or all of them, which would make him/her more susceptible to stressful situations in his/her environment. Then, on top of these problems, abuse or trauma in childhood could make the situation even more problematic and cause serious problems in the future in the form of BPD. It could even be looked at in reverse: abuse or trauma in childhood could produce problems in the orbitofrontal cortex, the orbital prefrontal cortex, or the limbic system, and then the person could develop BPD (6). This could explain why it is difficult to find the exact etiology: there are different factors that contribute to its development, not one specific factor that can be identified. Also, not everyone who experiences childhood abuse or trauma develops BPD; some develop other disorders and some none at all, which supports the "diathesis-stress model," where the interaction between the two is what might be responsible for BPD (14). More research would have to be done to see if this is the case.
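
A crude way to see why neither factor alone need be sufficient is to treat the diathesis-stress model as a simple threshold (a sketch of my own with arbitrary numbers, not a quantitative claim from any of the sources): the disorder emerges only when predisposition and environmental stress together cross some line.

    # Illustrative threshold version of the diathesis-stress model.
    # A disorder emerges only when predisposition plus environmental
    # stress together exceed a threshold; every number here is made up.
    def develops_disorder(diathesis, stress, threshold=1.0):
        return diathesis + stress > threshold

    print(develops_disorder(0.8, 0.1))  # predisposition alone: False
    print(develops_disorder(0.1, 0.6))  # stress alone: False
    print(develops_disorder(0.8, 0.6))  # both together: True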

BPD is a very difficult disorder, and it affects both the people who suffer from it and the people involved in their lives, as it is challenging to help the person with BPD with his/her cycle of thoughts and emotions about relationships and actions. There are various treatments offered for BPD, ranging from therapy to medication. Five forms of therapy that are used are Cognitive Analytic Therapy, Brief Psychoanalytic Psychotherapy, Interpersonal Psychotherapy, Dialectical Behavior Therapy, and Schema-Focused Cognitive Therapy (4). In Cognitive Analytic Therapy, the therapist and the patient discuss possible connections between the patient's present behavior and his/her childhood experiences. The therapist and the patient work together, which also helps to give the patient an example of a healthy, good relationship (4). Another therapy is Brief Psychoanalytic Psychotherapy, which has roots in the psychoanalytic psychotherapy founded by Sigmund Freud, who used hypnotism and "free association" in order to uncover difficult memories that patients had suppressed (9). Brief Psychoanalytic Psychotherapy is a modified version of this in which the therapist is specifically licensed in psychotherapy and takes a very active role in the treatment; therapist and patient discuss the patient's present experiences (4), (9). There is also Interpersonal Psychotherapy, which likewise focuses on the present, helping patients with their personal activities and their relationships (10). Then there is Dialectical Behavior Therapy, in which the therapist helps patients regulate their own emotions and teaches them how to tolerate distress and accept reality (11), while being very warm and understanding throughout the patient's process (13). Lastly, there is Schema-Focused Cognitive Therapy, in which the therapist and the patient work on reshaping the patient's "maladaptive schemas" (4), the negative thoughts and feelings that patients have about themselves and their relationships with other people, which started in their childhood and progressed throughout their lives (4). These therapies rely heavily on the newly formed relationship between the therapist and the patient, which can be very helpful because it establishes a stable, long-lasting relationship that the person with BPD can use as a model in the future. However, a problem could occur if the patient begins his/her cycle of difficult relationships with the therapist, becoming overly attached; the BPD would then be even harder to discuss and to treat because the therapist would be directly involved in the dysfunctional cycle (4).

Another treatment technique is Eye Movement Desensitization and Reprocessing (EMDR), in which the patient follows with his/her eyes something that the practitioner is holding. The theory is that these rapid eye movements allegedly unblock "the information-processing system" (12), curing the brain (12) by allowing the central nervous system (13) to re-process the difficult memories and eliminate the previous beliefs. However, there is no strong evidence supporting this theory, nor evidence that the eye-movement techniques are what really help with the problem. EMDR is a controversial treatment because many practitioners have been "certificated" to perform it even though the American Psychological Association has not approved it. Further research still needs to be done on EMDR to see if it is truly effective and to what extent it should be used (12).

Lastly, there is also the option of medication, which consists of Selective Serotonin Reuptake Inhibitors (SSRIs), specifically antidepressants like Prozac and Zoloft. The antidepressants help with the very strong feelings of anxiety or despair that are experienced by people with BPD. There are also mood-stabilizing drugs like Neurontin and lithium, which help with the radical and abrupt changes of mood that occur in people with BPD (13).

The therapies are all very different and apply to different people, who might have different needs for dealing with and treating their BPD, and possibly different causes of it. The large range makes the treatments helpful for many different people. There are options for people who need to be able to discuss their thoughts and feelings, and medications for people who feel that the problem is strongly biologically based and who need that help alongside their therapies. The various therapies truly reflect the diverse explanations for BPD and are made adaptable to the people. There is still research to be done on the exact etiology of BPD, on possible relationships between the etiologies, and on effective treatments; however, a lot of progress has been made in trying to treat BPD effectively. The theories have looked at various approaches to the etiology, embracing different options, which shows that research is still open to understanding the cause.


WWW Sources:

1) eMedicine - Personality Disorders: Article by Michael S Beeson, MD, MBA

2) Personality Disorders: Etiology, Symptoms, Treatment, and Prognosis

3) Excerpts From The Book "Lost In The Mirror"

4) Recent developments in borderline personality disorder - Winston

5) Borderline Personality Disorder

6) Borderline Personality Disorder Label Creates Stigma

7) Mental Health InfoSource

8) Attentional Mechanisms of Borderline Personality Disorder

9) Complementary Health and Alternative Medicine - Psychotherapy

10) Interpersonal Therapy

11) Dialectical Behavioral Therapy

12) Eye movement desensitization and reprocessing (EMDR)

13) Understanding borderline personality disorder

14) Butcher, James N., Susan Mineka, and Jill M. Hooley. Abnormal Psychology, Twelfth Edition. Boston: Pearson Education, Inc., 2004.



Full Name:  Sylvia Ncha
Username:  sncha@haverford.edu
Title:  Nothing to Fear But Fear Itself?
Date:  2006-04-11 06:13:50
Message Id:  18960
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

How can fear have such a strong hold on an individual? Fear can be seen as an emotion that protects humans or animals from potential danger (1). Fear can also be seen as a feeling of uneasiness and hypervigilance. Experience and culture play a role in our feelings because they teach us what to fear in the world around us. Perhaps this is why people are disproportionately afraid of some things but can ignore others. Fear can be a healthy emotion because it keeps us alert and watchful for possible threats. Death, I think it is safe to say, is our most basic fear. I believe this because the fear of death functions to make us alert in dangerous situations. This idea definitely accounts for some fears, but what about fears that just have to do with being uneasy in certain settings or situations? What about fear of things that do not even pose a major threat to our lives?

Fear can be innate and/or acquired through experience. Acquired or learned fear comes as a conditioned reflex in response to two types of stimuli. One stimulus is usually neutral or harmless, for example a bell ringing, while the other can be potentially harmful, like an electric shock. If an animal is given a shock immediately after the bell rings, it learns to associate the ringing of the bell with the shock. After this conditioning, the animal will exhibit fearful behavior when it hears the sound of a bell ringing. Of the two types of fear, acquired fear can often manifest itself as an anxiety disorder or mental illness. So is there a neurological difference between the two types of fear?
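
The bell-and-shock example can be captured by the classic Rescorla-Wagner learning rule, in which the bell's association with the shock strengthens a little on every pairing. The model itself is standard in learning theory, but its use here and the parameter values are my own illustration, not something drawn from the cited sources.

    # Rescorla-Wagner sketch of the bell-shock pairing described above.
    # V is the bell's associative strength; on each pairing it moves a
    # fraction alpha of the way toward the asymptote lam (values illustrative).
    def condition(trials, alpha=0.3, lam=1.0):
        v, history = 0.0, []
        for _ in range(trials):
            v += alpha * (lam - v)  # delta-V = alpha * (lambda - V)
            history.append(round(v, 3))
        return history

    print(condition(5))  # [0.3, 0.51, 0.657, 0.76, 0.832]: fear grows with each pairing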

What is happening in the brain when the feeling of fear arises? The emotion or feeling of fear in mammals, including humans, is processed in a pair of structures located deep within the brain, called the amygdala. This processing occurs in the amygdala regardless of whether the fear is innate or conditioned. Studies by Dr. Gleb Shumyatsky and colleagues, according to India's national newspaper The Hindu, indicated that a gene known as GRP, or gastrin-releasing peptide, appears to inhibit the action of the circuitry in the amygdala that is linked with the learned or conditioned fear reflex (1). So we know that the amygdala is the area where fear is turned on and processed, but I wonder if there is a way to turn off fear. Just as fear can be triggered by a threat (i.e., turned on), can fear be turned off? Is there a place in the brain that, if triggered, can completely remove a fear from a person's memory? I pose this question because it seems as if fear, or most fear, comes from a memory or an experience. If this is the case, can there be a way to totally erase the memory that causes the individual's fear of an object or situation?

According to Gregory Quirk of the Ponce School of Medicine in Puerto Rico, the medial prefrontal cortex (mPFC) suppresses the activity of fear-generating nerve cells in the amygdala and elsewhere in the brain (3). In the September 24, 2003 Journal of Neuroscience, the researchers reported that electrically stimulating the mPFC reduces the responsiveness and activity of nerve cells in the amygdala's central nucleus. According to Quirk, the fearful memory is still stored in the brain, perhaps elsewhere in the amygdala, but the mPFC actually prevents the memory from generating fear or anxiety (3). Researchers are investigating whether they can extinguish fear in people by directly stimulating the mPFC. I am pretty sure, though, that stimulating the mPFC would not involve shooting lasers or anything of that sort into the brain; it would probably be carried out through pharmaceutical drugs or another less invasive treatment. I am a little skeptical about this idea because fear seems to be such a strong emotion that stimulation of the mPFC alone does not seem likely to erase all of a person's fearful memories. I think the investigation of the mPFC is still worth doing, though.

The idea of shutting off fear seems to be valid because the purpose of fear is to warn us about or keep us from future discomfort, yet experiencing fear is itself uncomfortable. I guess the overall goal of Quirk's study is to help individuals feel comfortable while still avoiding possible threats or situations that could cause future discomfort. From what I understand, when we experience fear we are in a state of caution about a possible threat. Some people could argue that fear is always based on something that has not happened yet, and is therefore a fantasy of our mind rather than fact. However, if someone knows that lions like meat, is it not reality (and not fantasy) that if I sleep next to a lion in the zoo, 9 times out of 10 I will not wake up with all of my body parts? In fact I may not even wake up...why? I WOULD BE DEAD! Just because it has not happened yet does not mean that it is just part of my imagination. What we know to be true or factual comes from what we have learned in life experiences and from our culture, or from TV shows like "When Animals Attack". People who have not had experiences with certain animals but are still afraid of them are sometimes seen as irrational or abnormal. Why?

What classifies an individual as abnormal or normal when it comes to fear and disorders like phobias? What I find a bit unsettling is the fact that I, to some, may seem abnormal because I have a phobia of cats. What level must be reached to qualify me as abnormal simply because what scares me does not scare others? Where is the line drawn between normal fears and abnormal or irrational fears? I feel that at times people with irrational fears, as with phobias, are immediately shut down by others because their fears are not rational; but where is the line between realistic and non-realistic fear? According to Dr. Jeffy Ricker, a fearful individual may be classified as abnormal if the fear is more severe than is warranted by the actual threat and/or if the individual's behavioral response to the fear is severely maladaptive (2). I concur with Dr. Ricker to some extent, because some fears, like those of fish or heights, may not seem rational since those things are usually not fatal. However, the question still remains of how exactly you determine when a fear is more severe than is warranted. Is it just based on what the majority of people feel towards the animal or situation?

Fear is a tough subject to tackle because it seems to be relative to one's own experiences. At the same time, there seems to be a clear difference between what I think poses a threat to a person and what another person might think poses a threat to them. Since this difference exists, I do not think it is fair to minimize someone else's fear simply because it does not fit in the category of accepted fears. The distinction between innate fear and acquired fear does not even help in deciding what is normal and what is abnormal, because then you would be making a nature-versus-nurture argument. I think that fear plays a very important role in how we live our lives: it can definitely protect us from certain situations, but it can also hold us back from situations where there really is no threat. That, I think, is where the idea of having nothing to fear but fear itself comes in.


Bibliography

1) The Hindu: India's National Newspaper, 2005. http://www.hindu.com/seta/2005/12/15/stories/2005121500071600.htm
2) Ricker, Jeffrey. "Fear and Anxiety." http://www.sc.maricopa.edu/sbscience/psy266/course/fear-anxiety.html
3) Travis, John. "Fear Not." Science News. http://www.sciencenews.org/articles/20040117/bob9.asp



Full Name:  Astra Bryant
Username:  abryant@brynmawr.edu
Title:  I'll Help You, But Why Do I Have To Want To? (Morality, Evolution, and Survival)
Date:  2006-04-11 07:36:37
Message Id:  18961
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Let us take for granted that we, Homo sapiens sapiens, are moral creatures. By
this, I mean that we are able to experience chemically generated emotional responses
which societal infrastructures label as "moral feelings". Whether we act upon these
emotions is neither here nor there: humans, barring biological deformity, experience
morality. In a previous paper, I argued that morality was generated by both our biological
structures and by the pressure of social culture - let us assume this without arguing its
validity.

The experience of morality cannot be traced to a single emotion. Morality is a
complex behavioral response that requires the coordination of a range of basal emotions
and neurological processes – love, hate, anger, guilt, responsibility, recognition, and
many more. Morality's component emotions can be mixed, matched, and measured out in
many different arrangements; this plethora accounts for the lack of a single feeling that
can be described as "morality". Morality is a complex behavior, and comes in many
different flavors. Examples of the flavors of moral emotions (keeping the distinction from
moral actions in mind) range from the everyday: wanting to share your lunch with the
person who just spilled hers; to the rare: the guilt and feelings of responsibility you
experience when your brother needs you to donate your kidney to save his life.

But why does morality exist? It could be argued that our moral feelings are
constructs of societal pressures - that we do not actually have moral feelings, but that as our societies dictate, so we create feelings to match. But the presence of specific neural pathways and structures that influence our feelings of morality (remembering individuals whose complete lack of moral feelings has been identified as a product of neuronal deformations) indicates that the ability is not merely following the mores. Instead, I would argue that feelings of morality evolved, and that our innate ability has been incorporated into our societies. The idea that morality is evolved - and therefore a heritable trait - is crucial. For if morality is a heritable trait, then its existence is a product of the action of natural selection.

The idea that morality is a product of natural-selection pressures allows us to
examine morality in terms of its "usefulness". For a heritable trait to survive the pressure of natural selection, it must be one of two things: either the trait must have no effect upon the survival of the organism and its offspring, or it must have a positive effect. Given that our moral feelings prompt us, for example, to donate parts of our body to others (thus decreasing the chances of our own survival), I argue that morality is not a neutral trait. Instead I believe that morality is, in fact, thoroughly integrated into the struggle to ensure blood-line survival.

But how could morality ensure survival? It should be clear that moral feelings
often do not aid in individual survival - giving away food, no matter how noble the
feeling that prompted the action, still results in less food available for sustaining you. So natural selection should have selected against moral feelings. But the continued existence of morality becomes explainable if the purpose of natural selection is rigidly defined. It should be understood that the point of natural selection is not to ensure individual survival; one lifetime is far too short a time over which to discuss natural selection. More reasonable is the ensured survival of the species, or, as current Darwinian theory would have it, of a specific blood-line. With this firmly in mind, the fact that morality has not been selected out can be considered the result of the catalytic nature of moral feelings.
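
The blood-line argument here is close to what evolutionary biologists call kin selection, usually summarized by Hamilton's rule: an altruistic trait can persist under selection when r x B > C, where r is the genetic relatedness between actor and recipient, B is the survival benefit to the recipient, and C is the survival cost to the actor. The paper does not invoke the rule by name, so the short Python sketch below is only an outside illustration of the same logic, and its kidney-donation numbers are invented for the example.

# Toy illustration of Hamilton's rule (r * B > C): an altruistic act can
# persist under selection on a blood-line when the relatedness-weighted
# benefit to the recipient outweighs the cost to the actor.
def favored_by_kin_selection(r, benefit, cost):
    """Return True if Hamilton's rule says the altruistic trait can persist."""
    return r * benefit > cost

# Donating a kidney to a brother: siblings share roughly half their genes
# (r = 0.5). Suppose the donation greatly raises the brother's survival
# odds (benefit = 0.9) at a modest risk to the donor (cost = 0.05).
print(favored_by_kin_selection(0.5, 0.9, 0.05))   # True: trait can persist

# Sharing lunch with a stranger (r close to 0): kin selection alone cannot
# favor it, which is why species-level or cultural pressures enter the essay.
print(favored_by_kin_selection(0.0, 0.9, 0.05))   # False

On this reading, the "catalytic" moral feelings are the proximate mechanism, and Hamilton's inequality is one candidate for the ultimate accounting that keeps them from being selected out.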

We experience moral feelings. Generated by our brain in response to defined
situations, they instruct us as to what actions would be most beneficial to the overall
survival of our blood-line. Societies have constructed social rules which fit these
responses - labeling these survival-beneficial actions as socially acceptable. But in order
for survival to be affected (thus satisfying natural selection's pressures), we must act upon
our feelings. Earlier, I made a distinction between the internal feelings of morality, and
the performance of the physical activity suggested by those feelings. To have an impact
upon survival, it becomes important to have more than internal feelings. Actual physical
behaviors are the key to favorably changing our survival quotient.



Under this circumstance, the two faces of morality - the internal feeling and the
external behavior - can be seen as a means and an end. The end is the external behavior -
the physical action that will increase the chance that the bloodline will survive into the
next generation. The means are our feelings of morality - electrochemical signals that,
because of the way our nervous system works, influence our behavior in very specific,
survival-increasing ways.



The label of replication tool, with a positive effect on blood-line survival, can be
extended to include what we think of as our consciousness. However consciousness is
generated, whether through emergence or quantum mechanics, it has the effect of
motivating us beyond basal stimulus/response behaviors. Consciousness is most likely at
least influenced by heritable traits – so the same argument for the selected status of
morality can be applied. Our consciousness, including its subset of morality, encourages
replication through what amounts to neurological mood music.

This coolly biological explanation for morality and consciousness is somewhat
disquieting. It categorizes what we see as our individuality as nothing more, or less, than
a tool used to ensure replication. Our moral sense is one of those experiences
that bridges quantifiable neurobiology and the idea of the 'human soul'. Morality, along
with honor, wisdom, and language, is regarded as a defining characteristic of humanity.
As we make inroads into demystifying morality and consciousness, we perhaps will come
to find that the boundaries between the quantifiable and the ethereal, between the human
soul and the animal existence, are not actually there at all. The realization that, as Thomas
Metzinger puts it, "you are basically a gene copying device" (1), with no soul, and worse,
no innate nobility that is not the by-product of a blind attempt to ensure survival, forces
us to examine our own mortality, as well as every feeling that we experience.



But for me, what is truly disquieting is not the fact that our ultimate purpose is
replication. Nor is it the fact that our behavior, including our consciousness, evolved as a
chemically induced motivation tool used in the pursuit of said replication. It is the
enormous trouble to which our brain goes, adapting these chemical pathways, in order
to distract us from the task of replication.

Sources:

1) Blackmore, Susan. Conversations on Consciousness. Oxford: Oxford University Press, 2006.
2) Can Evolution Explain Morality? An influence I came across while writing my first web paper - I didn't read it this time around, but it helped guide the genesis of this paper.
3) Darwin on the Evolution of Morality. Another influence I came across while writing my first web paper - it has some nice quotes from Darwin's writings.



Full Name:  Erin Schifeling
Username:  eschifel@brynmawr.edu
Title:  The Biological Basis of Memory Manufacture
Date:  2006-04-11 07:56:56
Message Id:  18962
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


In my last paper, I examined memory in terms of what its loss, retention and recovery revealed about how it worked and what it was (1). This macro-level, behavior-oriented approach revealed several characteristics of memory: it is dependent on molecules within the brain and on brain cells, it includes long-term and short-term memory, and it is affected by the external environment. From another angle, memory, especially human memory, is essential for the development and evolution of human culture, arguably the trait that most clearly sets humans apart from other animals (2). The ability to form and use memories of different kinds played an increasing role in the evolution of humans. However, culture and the behavioral results of memories arise out of a biological framework, which must create or allow for each of these characteristics and capabilities.

The goal for this paper is to reach an understanding of the biology of memory, the basic mechanism behind these more macro-level patterns and interactions. Specifically, how are memories stored, where are they stored, and what exactly are memories? The search for answers to these questions leads to secondary questions: Is there a difference on the cellular level between long-term and short-term memory? Are there qualitatively different kinds of memories? Where, in the brain, does remembrance occur?

To begin to look at a set of processes as complex as those involved in memory, it is useful to start with more straightforward memories in less complex nervous systems (3). Kandel studied sea snails and other animals with simple nervous systems in an attempt to explain what memories are: synapses or cells? In the 1950's the debate centered on whether memories resulted from the growth of cells or from a change in the properties of a synapse (the area between a sending and a receiving cell) that would make synaptic signals between the two cells more or less frequent. The nerve cells of organisms taught a certain response were observed at all points in the learning, or creation of the memory. Repeated firing of certain synapses leads to a build-up of serotonin, which fixes short-term memory in the cell by altering synaptic potentials (making signals more likely to be sent between nerve cells). Repeated and temporally spaced serotonin accumulation eventually causes the activation of certain DNA segments through a chain of molecular interactions. Proteins are produced that converge on the serotonin-high synapses of the nerve cell and begin the interactions necessary to start the growth of new synapses between the two cells. (3)
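
To restate the mechanism just described: short-term memory can be pictured as a transient boost in synaptic strength that decays, while repeated, temporally spaced boosts accumulate a consolidation signal that eventually triggers gene activation and the growth of new synapses. The Python sketch below is a deliberately crude toy model of that spaced-repetition logic; every constant in it is invented, and it is not Kandel's actual molecular cascade.

def train(pulse_gaps, decay=0.5, threshold=3.0):
    """Toy model of the scheme above: serotonin pulses transiently raise
    synaptic strength (short-term memory); if enough temporally spaced
    pulses accumulate, gene activation 'grows' a new synapse (long-term
    memory). All constants are invented for illustration."""
    strength = 1.0        # baseline synaptic strength
    signal = 0.0          # stand-in for the molecular consolidation signal
    long_term = False
    for gap in pulse_gaps:          # gap = time since the previous pulse
        strength *= decay ** gap    # the short-term boost decays with time
        strength += 1.0             # each pulse transiently facilitates
        signal += 1.0 if gap >= 1 else 0.2   # spaced pulses count for more
        if signal >= threshold:
            long_term = True        # new synapse: the change persists
    return strength, long_term

# Massed pulses give strong transient facilitation but no lasting change;
# spaced pulses give weaker transients but cross the consolidation threshold.
print(train([0, 0, 0, 0]))   # (5.0, False)
print(train([2, 2, 2, 2]))   # (about 1.33, True)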

This cellular-level understanding allows for experience to cause changes in connections between cells - strengthening or weakening them - and allows for both long-term and short-term memory, which are qualitatively different. But the model is limited to memories that pattern behavior, called implicit or procedural memories. It needs to be expanded to explain what are called declarative or explicit memories - the memories of events that occurred years ago that people usually refer to when they discuss memory (3). Memories that can be narrated are different from learned behaviors. Strengthening connections between input and output neurons that already exist results in an increase in a certain behavior, but remembering smells from ten years ago requires a storage site beyond input-output chains and must in some way link to consciousness.

The current theory for declarative memories does build off of the simpler implicit memory model to explain more complex memory abilities (3). It is thought that cells in the hippocampus (a part of the telencephalon, the most recent part of the brain to evolve (4)) process sensory input signals - images, sounds, smells, emotions - storing them in other parts of the brain by connecting the related inputs to certain cells in the hippocampus with the same cellular-level process used for procedural memories (5). Over time, the connections between storage cells may increase to the extent that the hippocampus cells are no longer required by the memory. Procedural memories are learned in many parts of the nervous system and in a wide range of animals. Declarative memories seem to be processed through the hippocampus, but connected and stored throughout the brains of a select group of animals (5). Kandel (3) described memory as "the pattern of functional interconnections of . . . cells" (p. 567), and this description seems to apply to both kinds of memory, although more research remains to be done on declarative memories.
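
One way to make this indexing theory concrete is to treat the hippocampus as a lookup table that binds features stored elsewhere, with each recall strengthening direct links among the features until the index is no longer needed. The Python sketch below is a cartoon under that assumption; its names and structure are invented, not drawn from the cited sources.

from collections import defaultdict

# Cartoon of the hippocampal-indexing idea: the hippocampus first binds the
# cortically stored features of an event; each reactivation strengthens
# direct cortex-to-cortex links until the index is dispensable.
hippocampal_index = {}              # event -> the set of its cortical features
cortical_links = defaultdict(set)   # direct feature-to-feature connections

def encode(event, features):
    """Store a declarative memory by binding its features via the hippocampus."""
    hippocampal_index[event] = set(features)

def reactivate(event):
    """Each recall wires the event's features directly to one another."""
    features = hippocampal_index[event]
    for f in features:
        cortical_links[f] |= features - {f}

def recall_from_cue(feature):
    """After consolidation, one cue retrieves the rest without the index."""
    return cortical_links[feature]

encode("tenth birthday", {"smell of cake", "grandmother's voice", "blue kitchen"})
reactivate("tenth birthday")        # remembering consolidates the trace
print(recall_from_cue("smell of cake"))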

Declarative memories, while they appear later in life and in evolution, are not less important than procedural memories. In fact, for about the first year of life, humans very actively create procedural memories, and only later begin to form declarative memories (6), (7). Children and adults continue to form procedural memories throughout their lives. In many ways these procedural memories dictate behavior all the more because we are less conscious of them and have a more difficult time describing them with language (8). Other studies have found that declarative memories of events preceding positive feelings are more likely to form than memories of events followed by negative feelings. This suggests an evolutionary connection between procedural and declarative memories: procedures that had positive results should be remembered and performed again for improved survival. While this is less important for declarative memory, if it is built on procedural mechanisms, the trait may remain. (9)

A biological basis for learned behavior and memory does exist at the cellular level. The biological observations and theories (where observations remain to be made) provide for memory formation, learned behavior, and the exchange of culture/behavior, which in turn changed the forces moving human evolution. Evolution occurred in some sense through behavioral changes passed down culturally in place of physical changes passed down genetically. Also, this biological model allows for biology and experience to interact directly. Experience quickly leads to changes in the brain, which allow for a change in response to the world beyond the individual. Thus, the external and internal processes are interdependent. The biological model explains how molecular- and cellular-level problems cause memory loss: chains of molecular reactions are necessary for memory production, and cell connections serve to store memories. At the same time, the biology allows for cultural, emotional, and attention-related influences on memory formation, though more research is required for a full understanding of these relationships. Finally, this model clearly and qualitatively differentiates between short-term and long-term memory.

This essay began with a series of questions. The answers detail a biological process for procedural memory, hint at the workings of declarative memory and at pathways for outside influences on memory, and raise more questions: What is the biological difference between declarative and procedural memories? How much of the brain's framework is laid out through genetically controlled development, and how much is created through memory and learning? Can new cells imitate the connections of the cells they replace, or must the whole process of memory formation reoccur for memory recovery? Through what biologically mediated pathways do attention, emotion and culture affect memory formation?

Sources

1) Schifeling, Erin. "Memory Loss and Recovery."

2) Donald, Merlin. The Origins of the Modern Mind: Three Stages in the Evolution of Culture and Cognition. Cambridge: Harvard University Press, 1991.

3) Kandel, Eric R. "The Molecular Biology of Memory Storage: A Dialog Between Genes and Synapses." Bioscience Reports, Vol. 21, No. 5. Plenum Publishing Corporation, 2002.

4) "Adventures in Neuroanatomy: Parts of the Nervous System."

5) "The Hippocampus: Memories are Made of This."

6) Edelson, Mat. "Researchers Map the Biology of Memory Formation." Johns Hopkins University, 2001.

7) Ingram, Jay. "Making Memories: Why You Can't Remember Your First Birthday." Muse, April 2006.

8) Kirschner, Gordon. "Implicit Memory and Psychotherapy."

9) Pendick, Daniel. "Have a Nice Memory." Memory Loss and the Brain, 2005.

10) Rohatgi, Ruchi. "Learning and Memory."

11) Webster, Jennifer. "Memories are Made of This."



Full Name:  Suzanne Landi
Username:  slandi@brynmawr.edu
Title:  Scientology and the Brain
Date:  2006-04-11 08:21:19
Message Id:  18964
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Scientology's stance on psychiatric medicines and the brain has become popular tabloid fodder since celebrities like Tom Cruise launched a highly publicized crusade against modern treatments for mental disorders like post-partum depression. The religion has crossed over to popular culture, spouting wisdom on parenting, job administration and education. In less than a century under the direction of now departed founder L. Ron Hubbard, Scientology has become a religion, a trend and most importantly, a new way of looking at the brain and behavior.

One of the basic tenets of Scientology (more formally referred to as the Church of Scientology) is that man consists of three parts: the thetan, the mind and the body. The thetan is the immortal soul, which Scientology claims is the individual. Specifically, the actual energy that makes up the thetan is appropriately called theta, named after the Greek letter that was used to represent thought. Even though this dynamic includes the mind, Scientology insists that the individual and personality is the thetan. The mind has a simpler role, used by the thetan as a communication and control system between the person and his or her environment, reminiscent of the theory that "the mind is what the brain does". The body serves as a vessel, with no role important to behavior (1).
For the neurobiologist, this interaction of thetan, mind and body reflects some brain science, but at the same time is incongruent with the models we follow. The thetan is parallel to the "I-function," the complex part of the brain that we often consider the source of decision, choice and a person's overall identity. Presumably, the mind is the leftover: the neurons and physical matter that control basic functioning. The definition of mind is further separated to include the reactive mind, the part of the mind that has no volitional control and works on a simple stimulus-response model. As neurobiologists, we know that signals can start without a stimulus and can end without a response, but perhaps Scientology attributes this to the thetan or to an undefined "unreactive" mind.

The separation of the brain into both the mind and thetan is not different from other religions that separate the soul, brain and body. We've seen these themes before without intense controversy – mostly debate on cognition.
Additionally, Scientology uses a form of meditation known as "auditing." An auditor conducts a session with a fellow Scientologist to contemplate existence, and examine areas of their existence in order to "rid themselves of unwanted spiritual conditions and increase awareness and ability" (1). So far, this doesn't sound so different from regular visits to a psychiatrist or even a Catholic confession. But auditing sessions include a device known as the "E-meter," short for Electro-psychometer. Its function is to measure the mental state of the person being audited, and to trace harmful energy, known as a "charge," that the session intends to fix. This works, presumably, by sending a weak electric current into a person's body, which then interacts with the energy involved in thought and registers on the E-meter. Somehow, this process steers these charges away from the reactive mind, allowing for clear thought and life assessment (2). With the exception of science fiction-like meters, these tenets of Scientology sound similar to meditation, which has proven neurological benefits (3), and even to psychological counseling. So why does Scientology seem to have such a strong vendetta against psychiatry and drugs?

The beginnings of Scientology were developed by Hubbard in the 1950s, based on his self-help philosophy of Dianetics. The foundations for the kind of psychotherapy that Scientologists are permitted to seek were established in these early developmental years of Hubbard's philosophy. However, Dianetics does not equal Scientology, nor is it forgotten in favor of the Church. Although Hubbard relegated Dianetics to a subfield of Scientology, there are some key differences between the two, chiefly that Dianetics focuses on the individual's quest for health and truth, a more psychological project. Scientology explores more cultural aspects of life, like ethics, morality and solutions to broader, real-life complications. Dianetics actually resolves to rid a follower of the reactive mind, where painful memories are stored (6).

Hubbard claims that the reactive mind is also known as an "engram bank," and engrams are blamed for afflictions like allergies, asthma, hypertension and other psychosomatic troubles (4). Whether or not this is true is difficult to tell. The official Dianetics website reports sketchy statistics claiming that 98% of participants have had their lives improved by Dianetics, and it would be difficult to test every follower of Hubbard for significant improvement in neurological functioning and happiness. The beliefs of Dianetics are found in Scientology doctrine, but one method can be utilized without the other (5).

Scientology and Dianetics exploit a basic human desire to explore thought and pain, commonly found in psychotherapy and other religions. There is no way to distinguish theta from ordinary thought in an MRI, or to tell whether attempts to rid a person of the reactive mind, if it indeed exists, could ever succeed. This task is not entirely different from trying to detect the presence of God or the effect of prayer on one's brain. Scientology offers a support system that incorporates some science, and banks on people using auditing and other applications of Scientology instead of chemicals and therapies that could actually improve their condition.

This reliance on a faith to solve medical problems, and the simultaneous rejection of more established medical practices, can be dangerous, but as noted before, it is not unfamiliar. Scientology and its proponents, most famously Tom Cruise, need to consider the benefits of established psychiatric medicine. While it is true that vitamins and exercise affect both brain and body in a positive way, it is not accurate to prescribe them as a method to cure post-partum depression (7). Both Dianetics therapy and the religion of Scientology are so modern that at times they seem like science fiction, but followers of other major religions indulge in beliefs that seem equally strange and unethical to science. The study of the brain and behavior requires an enormous amount of faith in itself, whether we believe that behavior is dictated by neurons or engrams.

1)What is Scientology? The Parts of Man

2)The E-Meter


3)Psychology Today: The Benefits of Meditation


4)Dianetics (The "Bible" of Scientology)

5)Dianetics Results and Statistics


6)Scientology Glossary


7) Shields, Brooke. New York Times Online. 01 July 2005.



Full Name:  Caroline Troein
Username:  ctroein@brynmawr.edu
Title:  "Honey, I have something to tell you"
Date:  2006-04-11 08:32:56
Message Id:  18966
Paper Text:
<mytitle> Biology 202
2006 Second Web Paper
On Serendip

Across the industrialized world, the average age for childbirth is increasing. Birth rates in many industrialized countries have fallen so low that governments face an economic crisis waiting to happen, as aging populations become unable to be supported by a dwindling number of working adults. And soon the baby boomers, the largest generation in history to retire, will do so and tip the balance. Society was not constructed for these challenges. Part of the problem is that women are having fewer children, and having them later. Because of evolutionary theories and religious doctrines, we assume that childbirth would be desired rather than shunned. This mystery has more to do with society and culture than with the structure of the brain.

To examine this dichotomy of societal versus biological influences on the desire to procreate, one must examine the brain. Reproduction is governed by a complex balance of hormones. Aside from the effect of pheromones on reproductive cycles and tendencies, five hormones play central roles: gonadotropin-releasing hormone (GnRH), secreted by the hypothalamus; follicle-stimulating hormone (FSH) and luteinizing hormone (LH), secreted by the pituitary gland; and estrogen and progesterone, secreted by the ovaries. (1) The levels of these hormones vary over the course of a woman's menstrual cycle. (2) Logically, affecting one of these hormones should affect the reproductive capabilities and urges of a woman.

In normal conditions, these hormones are regulated in feedback loops to maintain optimum conditions for reproduction. The hypothalamus plays a key role in this process: it communicates through a blood portal to the anterior pituitary gland and stimulates the creation of the gonadotropins, luteinizing hormone and follicle-stimulating hormone. (3)

At the University of California, Berkeley, researchers have found that a previously ignored hormone called gonadotropin-inhibitory hormone (GnIH) halts reproduction by inhibiting GnRH. (4) Whereas GnRH stimulates the pituitary gland to activate the reproductive system, GnIH simply inhibits this function. While this finding has only been established in rats, it is likely that GnIH is also present in humans, since the human genome contains a gene for it. Thus, if one wanted to increase a person's natural propensity towards having children without a balancing mechanism, one would need to eliminate the GnIH protein.

The GnIH protein is itself controlled by estradiol, which reduces the production of GnIH when necessary. George Bentley, who was involved in the UC Berkeley study, says of the results that "this is an example of the reproductive system being fine tuned." Fine-tuning the system is an essential element of regulating how reproduction functions. This fine-tuning may also be one of the many factors that create the sensation of a "biological reproductive clock" urging women to have children while they are still fertile.
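
In engineering terms, the circuit described over the last few paragraphs is a negative feedback loop: GnRH drives the pituitary gonadotropins, the gonadotropins drive estrogen, and estrogen damps GnRH both directly and by adjusting GnIH, which inhibits GnRH. The Python sketch below is a toy set of difference equations with made-up rate constants; it is offered only to show how removing GnIH shifts the loop's steady state upward, not as a physiological model.

# Toy negative-feedback model of the hormonal loop described above.
# Every rate constant is invented; only the feedback structure matters:
# GnRH -> LH/FSH -> estrogen, with estrogen damping GnRH and reducing
# GnIH, and GnIH inhibiting GnRH directly.
def simulate(gnih_present=True, steps=500):
    gnrh, gonadotropins, estrogen = 1.0, 1.0, 1.0
    for _ in range(steps):
        # estradiol reduces GnIH production "when necessary"
        gnih = max(0.0, 0.6 - 0.2 * estrogen) if gnih_present else 0.0
        # hypothalamus: constant drive, damped by estrogen and by GnIH
        gnrh += 0.1 * (1.0 - 0.3 * estrogen - 0.5 * gnih) - 0.05 * gnrh
        # anterior pituitary: LH/FSH follow GnRH through the blood portal
        gonadotropins += 0.1 * gnrh - 0.05 * gonadotropins
        # ovaries: estrogen follows the gonadotropins
        estrogen += 0.1 * gonadotropins - 0.05 * estrogen
    return round(gnrh, 3), round(gonadotropins, 3), round(estrogen, 3)

print(simulate(gnih_present=True))    # baseline steady state
print(simulate(gnih_present=False))   # knocking out GnIH raises GnRH output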

People, women more than men, are drawn to protect children. The features of a young face inspire protective and nurturing instincts in people. Evolutionarily, we have a tendency to protect our next of kin, but in general we tend to protect children as a whole. This is logical in evolutionary terms: if one is not in a position to protect one's own genetic material, one automatically defers outwards to a more general, species-level protective instinct. Humans have long displayed their favoritism of children over many other age groups. Children are given specific rights and privileges – something that today would be taboo if done on the basis of gender. In addition, the United Nations' High Commission for Human Rights implemented the Declaration of the Rights of the Child, specifying that "special safeguards" are necessary for children because "mankind owes to the child the best that it has to give." If we have an instinctive biological desire to reproduce, this is logical behavior, given the limited resources and dangers involved in reproduction.

Selection of whom to protect stems from psychological mechanisms related to choice and risk. Most choices we make in the world have ambiguous circumstances – when out for a walk, it is equally logical to walk left or right. (5) But people will more likely choose a direction with which they are familiar rather than explore. When it comes to choosing social groups, we also display this tendency: in any lunchroom in America, students tend to self-segregate based on visual cues. The same tendency can be assumed when a person decides which children to protect. First, protect those whom you know. Then, protect those who look like your own child. Finally, protect other children.

Making the distinction of who is protected is important from a brain-behavior view because of the time delay involved in reproduction and the immediacy of one's own genetic material. It is an investment for a woman to carry a child to term: it is extremely taxing on the body, dangerous, and a child requires significant resource expenditure even after birth, as any new parent will attest. Because there is an instinctive desire to reproduce, we are willing to care for children who are not even our own, since doing so means there is a greater chance that our genetic material will be passed on. Not only do we prefer our genetic kin because of subconscious genetic bias, but we prefer what we believe to be safe.

Considering these two aspects - that women have a complex balance of hormones designed for reproduction, and that there are simple rules to the protection of children - we must return to the original premise: biologically, why are women in industrialized nations not having as many children? This question remains unanswered by biology, because there has not been a significant hormonal change across all women in industrialized nations, and such a change is unlikely given the complex nature of these hormones. The question is also unanswered by evolutionary theory and choice biases. It is safer to have children in Western society, and mothers are likely to receive help from the state if they need it. There should be no need to defer survival of one's genetic material to anyone other than direct offspring.

Here society's telling role comes into play. Women in industrialized nations are often delaying childbirth, or foregoing it altogether, for careers or their own lives. The implication is that the desire to reproduce is strongly affected by society. One could rationalize the behavior by stating that, since there is already a high likelihood that one's genetic relatives will survive, one may expend energy on other pursuits.

Regardless, the trends of industrialized nations are contrary to the assumed biological mechanisms of capitalist existence. Survival of the fittest, with intense personal preference, is no longer the case. The two influencing aspects explored here, hormones and preferences, are not enough on their own to explain the entire process surrounding the desire to have children. Yet both are tied to other causes that influence them. Biology and society have evolved together into an elastic cohesion dependent on circumstance. Logic does not always fit into this relationship.

Web Resources

1) "Reproductive Physiology" , Resource of the overall features of reproduction

2) The Menstrual cycle, Resource explaining specifics in regards to reproduction.

3) "Hormones of the Reproductive System", Source on the specifics of reproductive hormones.

4) "Brain hormone puts brakes on reproduction", Article on study done about halting certain reproductive hormones.

5) Aronson, Elliot. The Social Animal. New York: W.H.Freeman & Co Ltd, 2003.




Full Name:  Bethany Canver
Username:  bcanver@brynmawr.edu
Title:  Understanding Self-Injurious Behavior
Date:  2006-04-11 08:35:48
Message Id:  18967
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Intentional and often repetitive self-injurious behavior (SIB) is exhibited by approximately 1-2 million people in the United States. The typical self-injurer is female (women are 1.5-3 times more likely than men to self-injure), adolescent or young adult, single, middle to upper-middle class, and intelligent. Though it is often conceptualized as a "derivation of suicide" (4), the primary objective in approximately 85% of self-injurious events is tension relief as opposed to suicide. Favazza classified the nature of self-injurious events into four categories: 1) stereotypic, 2) major, 3) compulsive, and 4) impulsive. Stereotypic SIB is primarily exhibited by individuals with developmental disabilities and occurs without regard for social context or without thought and feeling. Major SIB is very dramatic and occurs as an isolated event, whereas compulsive SIB occurs repetitively, sometimes multiple times a day. Impulsive SIB is episodic, interspersed with periods where no SIB occurs. Generally, SIB is accomplished in the absence of pain, due to the dissociation the individual achieves, and is followed by a feeling of relief or normalcy which continues until the cycle begins again (Yates). Many cases of SIB occur in conjunction with other disorders like alcohol and drug abuse, post-traumatic stress disorder, eating disorders, personality disorders, or developmental disabilities. In fact, SIB rarely occurs in isolation from other symptoms or disorders.
According to the current definition of SIB, in which it is described as "the destruction or alteration of body tissue [that] occurs in the absence of conscious suicidal intent" (4), an exhaustive list of SIB includes tattooing, piercing, surgical implants, scarification, pigmentation changes, radical dieting, hunger striking, fasting, stigmata inducing, cutting, and burning (2). Because SIB varies etiologically over a continuum, it is impossible to assign a definitive causal argument, which becomes problematic in attempting treatment. An important question that arises in the treatment of SIB is whether the behavior is a response to neurochemical stimuli or whether there is something that is being communicated by the individual who is exhibiting self-injurious behavior. If there is something that the SIB is nonverbally articulating, what follows is whether the individual is aware of this and how they themselves interpret their behavior. To address these nuances, motivating factors behind SIB have been categorized as either interpersonal, in which attempts are made to effect change in the interpersonal environment, or intrapersonal, in which attempts are made to "quell intrapsychic distress" (4).
At the biological level, SIB is attributed to abnormal neurochemistry involving the neurotransmission of serotonin, dopamine, and endorphins. Serotonergic deficits, or decreased serotonin levels, have been observed in SIB individuals by analyzing the breakdown products (metabolites) of serotonin in spinal fluid. Serotonin levels can also be assessed by measuring imipramine binding sites on platelets; there is a direct relationship between the number of imipramine binding sites and serotonin levels. The most precise method used to determine serotonergic levels is through investigation of the response of a hormone called prolactin to a drug called d-fenfluramine. A muted response to d-fenfluramine is indicative of lower serotonin levels. Serotonergic deficits, determined by imipramine binding sites on platelets, were linked to aggression and impulsiveness by Stoff et al. (1987) and Birmaher et al. (1990), which suggests that SIB is akin to impulse disorders like kleptomania and trichotillomania. Another neurochemical explanation of SIB is that the body becomes addicted to endorphins, the opiate-like, pain-relieving neurotransmitters released by self-mutilation. Individuals with SIB have abnormal endogenous opioid systems, which may be congenital or a result of neurochemical responses to events in early childhood (4). However, the applicability of the endorphin theory is thrown into question by the fact that the vast majority of research on the effect of endorphins on SIB has been conducted on autistic and/or mentally retarded individuals, whose brain chemistry differs from that of the non-autistic/mentally retarded population.
SIB is oftentimes treated using psychopharmacology, the branch of pharmacology that deals with drugs that influence the brain (psychoactive drugs). Favazza (1998) recommended high doses of selective serotonin reuptake inhibitors (SSRIs), which increase synaptic levels of serotonin. Another class of drugs known as opiate antagonists (i.e. naltrexone or naloxone) have been prescribed with the intention of minimizing the need for endorphins. Atypical neuroleptics (i.e. clozapine, risperidone, and olanzapine), which bind to dopamine and serotonin receptors, have also been used in SIB treatment. In addition to psychopharmacology, SIB is treated with psychotherapy and support groups, the ultimate goal of which is resocialization by way of finding substitutes for self-injurious behavior and developing alternative coping skills.
In addition to the neurochemical theories of causation, there are a number of psychological and sociological explanations for the occurrence of SIB. Self-injury as an adaptive response is thought to exist in the absence of other coping mechanisms (1). Not only can SIB be tension-relieving, but it also allows for self-care to occur, which can be a significant function in the case of individuals who have been victims of physical or sexual abuse. Depression and low self-esteem have also been pointed to as likely causes for SIB, as has emotional proprioception, or a feeling of disconnect between self and body (1). This failure to distinguish self from non-self is paramount in the psychosomatic perspective's explanation of SIB. Experimental evidence (Favazza 1999) has linked parental loss, chronic illness, and emotional neglect to SIB later in life (4). From a psychoanalytic perspective, SIB allows victims of abuse to gain control over traumatic experiences by recreating their victimization. The object relations perspective attributes SIB to a lack of nurturance or protection during early childhood, which results in a self-care system in which a "false self" acts as the protector of a "true self" (4). According to the attachment perspective, a child will have been made to feel that their caretaker is unreliable or threatening, and this can result in the child viewing him/herself as undeserving of care (4). This confusion centered on attachment to the caregiver, according to Liotti (1992), "may render the child more vulnerable to dissociative defenses" (4) such as SIB that serve to reconcile a caregiver who is at once nurturing and threatening. The psychological explanations rely heavily on the nurture side of the nature vs. nurture dichotomy while downplaying or ignoring the role of genetic predisposition in SIB. It has recently been shown that psychological experiences like the aforementioned examples can actually alter neurochemical pathways, which illustrates the effect of environmental stimuli on internal biology.
The commodification of the body, which has been exacerbated by late capitalism (Potter), has been used to explain the use of the "body as text," or a medium of communication. From this sociological perspective, the body is "being used to communicate something that is difficult or impossible to articulate in conventional modes" (2). SIB elicits a particular response from others via a nonverbal system of signs and symbols that is culturally determined. SIB can be religiously, politically, or aesthetically motivated (2), as is the case with religious fasting, hunger strikes, and tattooing. At the individual level, wounds can be "event markers" that signify important or traumatic events, or wounds can reflect something much broader than individual experience. For women, the need to feel autonomous and in control that SIB affords may reflect the subordinate position of women in many societies. It may also be a response to fears of being passive victims in violent or sexual attacks (2). Self-loathing is also a likely motivation for SIB, particularly for women living in societies where such significance is placed on beauty and outward appearance. From this perspective, SIB becomes less random, in that who exhibits SIB and why is predictable based on certain cultural factors. Self-injury is then less a response to neurological stimuli and much more a reflection of cultural stimuli.
Deviance theory argues that SIB serves to set a boundary between what is and what is not acceptable behavior; the distinction between socially acceptable and socially unacceptable SIB is differentiated by social context. Though the end result of all self-injurious behavior is the same, how it is perceived and understood is largely decided by psychiatry and popular culture, which classify "some acts as fashionable, others as transgressive, and still others as pathological" (2). For example, tattoos and piercings may be socially sanctioned (even rites of passage), whereas cutting and/or burning may be interpreted as mental illness. Typically, the delineation between acceptable and unacceptable SIB is made between those self-destructive acts that are committed in the presence of others and motivated by ritualistic, symbolic, or sacred mores, and those committed in isolation that lack ritualism, symbolism, or sacredness extending beyond the individual. SIB that occurs in conjunction with other self-destructive, deviant behaviors, like drug and alcohol abuse - behaviors that are already negatively sanctioned by society - further vilifies some types of SIB.
In examining the phenomenon of SIB, it remains unclear whether the cause is neurobiological or psychological, whether psychological trauma alters neurobiology, or whether cultural forces influence who will exhibit SIB and how it will be manifested. At this point in time, the way to answer these questions in the "least wrong" manner would be to say that there is a complex interplay among all of these psychological, neurological, and sociological factors, which cannot be distilled down to a simple equation used to predict who will self-injure, whether treatment is needed, and what treatment is most appropriate.

1) Understanding Self Injurious Behavior, by Lisa R. Ferentz
2) Commodity, Body, Sign, by Nancy Nyquist Potter
3) Psychopharmacological Treatment of Self-Injury
4) Developmental Psychopathology, by Tuppett M. Yates
• Ferentz, Lisa R. "Understanding Self-Injurious Behavior." 3/24/06. www.prponline.net/School/SAJ/Articles/understanding_self_injurious_behavior.htm
• Potter, Nancy Nyquist. "Commodity/Body/Sign: Borderline Personality Disorder and the Signification of Self-Injurious Behavior." March 2003. The Johns Hopkins University Press. http://muse.jhu.edu
• "Psychopharmacological Treatment of Self-Injury." 3/24/06. www.palace.net//llama/psych/pharm.html
• Yates, Tuppett M. "The Developmental Psychopathology of Self-Injurious Behavior: Compensatory Regulation in Posttraumatic Adaptation." 2003. 3/23/06. www.sciencedirect.com/science?_ob=ArticleURL&_aset=V-WA-A-W-WAW-MsS...



Full Name:  Rebecca Woodruff
Username:  rwoodruf@brynmawr.edu
Title:  Thinking is Healing? Examining Alternative Therapies for Treating Obsessive Compulsive Disorder
Date:  2006-04-11 08:51:57
Message Id:  18968
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Notable for its dynamic nature, its plasticity, its complex structure, and its ability to both receive and generate inputs, the human nervous system in and of itself both complicates and offers solutions to various neurological disorders. New ways of understanding and treating Obsessive Compulsive Disorder (OCD), an anxiety disorder that affects roughly 7 million Americans (1), have shed light on both the structure and the function of the human nervous system. The I-function appears to play a rather unusual role in OCD, and several independent researchers have found that this very complication offers a course of treatment. Observations suggest not only that interaction with the environment, both involving and bypassing the I-function, can change the brain, but that the brain has the intrinsic power to change itself structurally and thereby functionally. Ultimately, research about the role the I-function plays in OCD has played a key role in our understanding of the brain. In contrast with other common treatment options such as psychopharmacology and physical restraint, this involvement of the I-function suggests that millions of OCD sufferers have an inherent power to capitalize on the plastic nature of their brains and reduce the limits caused by the disorder.

The National Institute of Health characterizes OCD obsession as "recurrent and intrusive thoughts, feelings, ideas, or sensations" (1) and compulsion, the counterpart to obsession, as "a conscious, recurrent pattern of behavior a person feels driven to perform" (1). While in some cases exhibiting either obsessions or compulsions comprises an OCD diagnosis, many people experience the obsessions and compulsions concurrently. Obsessions often come about as a barrage of unwanted, disturbing, and fear-invoking thoughts, and the person often feels that the only way to assuage the fear is by fulfilling the compulsion. However, a unique component of OCD is ego dystonia. Researchers and clinicians alike have observed that in the middle of an OCD cycle, many patients are fully aware that their obsessions and compulsions have no bearing in reality. This separation, this internal voice of reason, is termed ego dystonia, and is a defining characteristic of OCD (2). It is notable that the term is rarely used in reference to any other neurological disorder. However, this internal voice of reason is often not enough to overrule the intense fear sparked by the obsessive-compulsive cycle, making the disease incredibly debilitating for many Americans (4). Interestingly, this characteristic also forms the basis for a new and controversial treatment for OCD, as well as for new understandings of the structure and the function of the brain.

Currently, many kinds of treatment are available for OCD sufferers, each resulting from a different hypothesized source of the disorder. These hypothesized causes range from an overused neural pathway in the forebrain (2), to a serotonin imbalance (1), to some mysterious autoimmune mechanism (3). Due to so much dissent within the medical community as to the origin of the disorder, researchers disagree about the best way to treat OCD. To exaggerate this dissent, many of the published psychiatric studies have generated frustrating results due to conflicting clinical-significance determinations between research groups (5). That said, there are few reliable and easily comparable numbers to establish whether pharmacology offers the best treatment option, as compared to aggressive exposure therapy, talk therapy, etc. This paper will not go into depth on these treatment options; rather, it will examine the possibility that the very component of this disorder that makes it so frustrating may also unlock the secret to curing it and a whole host of other neurological disorders: the involvement of the I-function.

Based on research conducted on the disorder, OCD appears to have ties on many levels to the I-function. First of all, obsessions clearly link to I-function processing as the very definition implies that obsessions are at the forefront of consciousness. However, these obsessions cannot be simply dismissed as other thoughts can. This complication makes the obsessive compulsive cycle very difficult to treat.

As is the case with obsessions, ritualistic compulsions involve the I-function in different ways than the same actions do in people who do not have OCD. Dar and Katz's study on obsessive-compulsive washers suggests that the compulsion of washing in OCD patients takes on a much higher level of identification than the act of washing does in non-OCD sufferers (6). One theory used to explain this phenomenon, the Action Identification Theory (AIT), says that low levels of identification of a specific action often are automatic; in other words, the action can be performed with limited I-function processing (7). However, high levels of identification, as seen in OCD patients, take on a ritualistic quality, so that performing the action uses the I-function to a much larger extent. Therefore, OCD patients identify with their obsessions and their compulsions on a high level, which closely involves I-function processing.

Furthermore, the very nature of ego dystonia has interesting implications for the role of the I-function in OCD thought processes. Going back to the definition, ego dystonia is a separate voice inside of the nervous system that recognizes the mind tricks that the brain is playing on itself. While documentation of this mental separation is less empirical and based more on interviews, it is still a valid observation. So valid, in fact, that one researcher and clinician, Jeffrey Schwartz, has used it to the patient's advantage. In his work, Schwartz teaches patients about what he believes is the major cause of the disorder, an imbalance in the firing of two pathways in the caudate nucleus, and has them practice resisting the obsessive-compulsive cycle without the use of SSRIs or physical restraints (2). The mere fact that there are any numbers at all to support this treatment option seems to me quite in line with the power and involvement of the I-function in OCD. Patients employ reasoning and choice, both attributed to the I-function, to deal with an onset of the obsessive-compulsive cycle, and find that they can deal with the problem effectively.

The case of OCD offers a great deal of insight into the enormous power of the I-function and its role in neuroplasticity. The nature of obsessions and compulsions, as well as ego dystonia, a defining characteristic of OCD, is closely related to the I-function. While this close connection between the disorder and the I-function often makes OCD inaccessible and difficult to cure, by properly harnessing the brain's power to change itself, new and effective treatment options emerge. On a larger scale, if the brain has the power to reduce the negative effects of OCD, what else is in its power to change?

1) Medline Plus

2) Schwartz, Jeffrey M., M.D., and Sharon Begley. The Mind & The Brain. New York: Harper Collins, 2002.

3) Obsessive Compulsive Foundation

4) National Institute of Mental Health

5) Proquest Research Library, "How effective are cognitive and behavioral treatments for obsessive-compulsive disorder?"

6) Proquest Research Library, "Action Identification in Obsessive-Compulsive Washers."

7) The National Academies Press, Workload Transition



Full Name:  Marissa Patterson
Username:  mpatters@brynmawr.edu
Title:  Deep Brain Stimulation and its Possibilities
Date:  2006-04-11 09:11:58
Message Id:  18969
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Treatments for many different types of brain disorders, such as attention deficit disorder, depression, and schizophrenia, have evolved throughout the ages, from times when nothing could be done, through shock therapy and talk therapy, until finally doctors discovered that giving a pill that filled the whole brain with a certain neurological chemical, like dopamine, could often seemingly free the body of symptoms. The tremors, rigidity, balance difficulties, and slowness of movement (bradykinesia) associated with Parkinson's disease, a result of the unexplained death of the dopamine-producing cells in the part of the brain known as the substantia nigra (1), have often been treated with medications that replace dopamine in the body, such as Levodopa (L-dopa). However, these pills do not always work perfectly: dosages are constantly being adjusted, and this treatment can cause dopamine dysregulation syndrome, resulting in "severe dopamine addiction and behavioral disorders such as manic psychosis, hypersexuality, pathological gambling, and mood swings" (2). The pills can also cause involuntary movements, or dyskinesia (3).

It seems that this "bowl of soup" vision of the brain, where large changes can occur by adding different "seasoning" chemicals, cannot be entirely true. A recent advance in brain technology called deep brain stimulation has demonstrated that the brain can in fact be influenced by the stimulation of a small area, and that different parts of the brain in close proximity can control different aspects of the same disease. This presents a more complicated picture of the brain, in which "normal" functioning arises only from the specific interaction between each infinitesimal part, and a small error in the patterns of one can disrupt the delicate workings of the whole. Large-scale treatments that attempt to affect the entire nervous system with a wash of chemicals may be entirely too broad to effect an advanced level of change and control.

In deep brain stimulation (DBS), a small metal electrode is inserted into the brain region that is overactive and is attached to an Implantable Pulse Generator (IPG), located under the skin of the chest, that controls the strength of the stimulation (1), like a pacemaker. This procedure was invented in France in the 1980's (3) but was not approved by the Food and Drug Administration until recently. Each of the different targeted sites for DBS can help to abate particular symptoms. Stimulation can be applied to the thalamus, the globus pallidus, or the subthalamic nucleus, and is adjustable and reversible (3).

Thalamic stimulation became the first approved type of DBS for Parkinson's in 1997 (1) and has been shown, in a long-term study, to cause a significant improvement over drug-only conditions in tremor and in upper-extremity akinesia, the difficulty of initiating movement (4). Often this procedure is performed bilaterally and is considered much safer than a bilateral thalamotomy, in which the thalamus is irreversibly lesioned (3). Thalamic stimulation occurs in the ventro-intermediate nucleus and causes a reduction or suppression of tremor in 80% of patients who are treated, and follow-up over ten years shows control over tremor that remains constant over time (3). The region it stimulates is very particular to the single symptom of tremor, showcasing the specificity of this section of the brain. It is suggested that this option is best for patients with longstanding, non-progressive, tremor-dominant Parkinson's, such as those who are elderly (4).

Stimulation of the globus pallidus (GPi) can be used to treat both dyskinesia and rigidity in Parkinson's patients (3). It gives benefit to 70-80% of recipients, though it can sometimes worsen bradykinesia in those who have received it (3). Much more effective is stimulation of the subthalamic nucleus (STN). With this type of stimulation, rigidity and tremor can be reduced in only seconds, with movements improving after a few hours (5); it can also affect postural stability, freezing, and gait, and often allows a significant reduction of the L-dopa dosage (3). An early study reported a mean 58% improvement in motor function, with over 50% improvements in akinesia, rigidity, and gait and balance, as well as an 82% reduction of tremor (6). Incredibly, it was also shown that there was a 17% improvement in the "stimulation-off" state, a possible result of the electrode insertion or a carry-over of chronic stimulation (6). Later research showed a 52.5% increase in mean gait velocity and stride length, as well as a larger range of motion and a decrease in forward trunk inclination (7). This procedure is preferred because of increased safety, what is thought to be the highest reduction in anti-Parkinson drug use, and the lower stimulation voltage needed, which leads to a longer battery life in the IPG (3). It is thought that GPi stimulation is best for patients with dose-limiting dyskinesia, while STN stimulation is more helpful for younger patients who have prominent bradykinesia (8).

Deep brain stimulation can improve the lives of many people living with the debilitating symptoms of Parkinson's disease. The results of these studies offer an interesting contrast to what scientists have "always known" about brain disorders: that they are caused and controlled by an increase or decrease in a particular neurotransmitter, so that the best way to treat the symptoms is to give a drug that floods the brain with replacements. This new treatment flies in the face of conventional ideas. It shows that a more specific method of treatment, in which a particular location of the brain is stimulated, can be drastically more effective than a blind dumping of neurochemicals into the entire brain. Brain function cannot be as simple as a mere change in transmitter amount. DBS has no effect on the substantia nigra, where damaged cells stop producing dopamine. Instead, its effects are on small areas of the brain, each able to help abate different symptoms.

It is also important to note that there is currently no single location that can be stimulated to miraculously cure all Parkinson's symptoms. Just as the disease is not simply a lack of dopamine (in which case medication alone could cure it), it is not a result of overactivity in just one area of the brain. The actions of the brain are much more complex than a simple cause and effect: a complex arrangement of neuronal activity must underlie these symptoms, otherwise symptom alleviation would be easy; one would just find the cause (a lack of dopamine, perhaps?) and replace the missing piece. The effects of deep brain stimulation in Parkinson's suggest many interesting ideas about the future of neurological treatment. DBS has already been shown to relieve severe depression (9) and holds great promise for the treatment of other diseases. The concepts associated with this treatment challenge the common perception that the brain is simply an amalgam of chemicals and will hopefully lead to a revolution in neurological treatment.

1) About Parkinson's Disease, a broad description of the disease from the National Parkinson Foundation.

2) Witjas T. "Deep Brain Stimulation Can Cure Dopamine Addiction." Pain and Central Nervous System Week, 24 October 2005, p. 194.

3) Deep Brain Stimulation, a PDS information sheet from the Parkinson's Disease Society of England.

4) Tarsy D et al. "Progression of Parkinson's Disease Following Thalamic Deep Brain Stimulation for Tremor." Stereotactic and Functional Neurosurgery 2005;83:222-227.

5) Deep Brain Stimulation - Latest Research, information from the Parkinson's Disease Society of England.

6) Kumar R et al. "Double-Blind Evaluation of Subthalamic Nucleus Deep Brain Stimulation in Advanced Parkinson's Disease." Neurology 1998;51(3):850-855.

7) Rizzone M et al. "High-Frequency Electrical Stimulation of the Subthalamic Nucleus in Parkinson's Disease: Kinetic and Kinematic Gait Analysis." Neurological Sciences 2002;23:S103-S104.

8) Anderson V et al. "Deep Brain Stimulation in Parkinson Disease Reduces Uncontrolled Movements." Journal of the American Medical Association, 17 April 2005.

9) A Depression Switch?, an article by D. Dobbs from the New York Times Magazine, 2 April 2006.



Full Name:  Lori Lee
Username:  llee01@brynmawr.edu
Title:  Lucid Dreams
Date:  2006-04-11 09:14:51
Message Id:  18970
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Lori Lee

Lucid Dreams
Human existence thrives on the idea that we are significant, and most people spend a generous portion of their lives under the overarching question of the meaning of life; as a result, we are aware of our behavior and of ourselves. The I-function serves as the "self" of which an individual is aware, and consequently our waking lives are governed by the I-function. But if we are conscious even in sleep, what is it then that governs dreams and dreaming? And how does that change, if at all, for lucid dreams?


Donald J. DeGracia writes, "Dreams are a form of conscious awareness during sleep. When we dream, we are consciously aware of visual, auditory, tactile, kinesthetic and emotional content, as well as thought (both cognitive and metacognitive) and to lesser extents smells, taste and pain" (4). In effect, dreaming is analogous to being awake during the day except for a few depressed senses. In terms of sensory perception, dreams are seemingly hallucinations, but they are still conscious experiences. The issue to consider in any discussion of consciousness and sleep is that "conscious" is different from "aware," and that one does not imply the other. In dreams, we are aware of ourselves within the dream but not of ourselves in the reality outside it. We know that we are conscious because we are aware of ourselves in our dreams, but the fact that we cannot control what happens in them makes it evident that we are not actually, physically, aware of ourselves (4).


In analyzing the different types of conscious dreaming, we see a potential role for the I-function in each. In sleep terrors, individuals become intensely afraid and anxious, sweat, and experience a racing heartbeat; because they are consciously aware of this fear, they live out its reaction. We interpret our fear, and then we become aware of it. Another type of conscious dreaming is sleep paralysis, a seemingly hallucinatory perception in which the dreamer is unable to move despite "intense efforts to do so." In this state, many dreamers believe that they are awake and lucid.


In both sleep paralysis and sleep terrors, the dreamer is consciously sleeping, but in neither case is he or she aware of what is really going on in the physical world. The I-function allows dreamers to be aware of their own aware selves, and this seems to be absent in the behavior of people in sleep paralysis and sleep terror states. Unlike those states, lucid dreams allow the dreamer complete awareness and consciousness: the ability to know that one is dreaming and to control the dream as a result of this knowledge. This one major difference between lucid dreams and regular dreams is vital. Lucid dreams become an incredible tool for the I-function, while regular dreams tend to be "typically mundane, realistic experiences in which the dreamer has modest feelings" (2). Lucid dreams allow the I-function to loosen up and create new and unique situations in which the dreamer's fantasies can actively be lived out. They become more vivid and memorable as the I-function becomes increasingly able to add interesting situations to the dream. And because the lucid dream is controlled through the I-function, dreamers are able to do anything they want, or have ever wanted to do, seemingly at their own command. The I-function can also use lucid dreams as a tool to prepare the dreamer for situations that may arise in reality. It may serve as reassurance during a nightmare, a reminder that it is not real, or help dreamers walk through a real-life situation with the confidence, or some other quality, that they wish they had, in preparation for a similar situation in waking life (1, 2).


Ultimately, it becomes clear that in lucid dreaming the I-function and the nervous system work hand in hand: the I-function creates the idea and the nervous system brings it to life. Ironically, the nervous system seems to take the back seat in lucid dreaming, obeying the I-function and producing whatever it pleases. The I-function has so much power that it can bring forth a response to an input that does not exist. In analyzing the I-function, one can easily see the potential that it holds. As rarely as lucid dreaming occurs, only one instance is needed to convince one that the I-function is what seemingly determines everything an individual chooses to do in life: free will. This opens the path to an infinity of new questions about the I-function and its ability, potential, and role in life outside of sleep. As lucid dreams are almost euphoric in their power, it is really the I-function that most people forget to credit. The complete role of the I-function, like most concepts, cannot be known with certainty, but something of its ability is visible through the small window offered by sleep, dream sleep, and lucid dreaming.


1) Neurobiology and Behavior 2004 class forum, a great place for multiple perspectives.
2) Does the I-function control dreaming?, a paper about dreaming and the I-function.
3) Current ideas about REM sleep, dreams and dreaming, a good and quick reference about sleeping.
4) Paradigms of consciousness during sleep, a good reference.



Full Name:  Fatu Badiane
Username:  fbadiane@brynmawr.edu
Title:  The Chemistry of Cupid's Arrow
Date:  2006-04-11 09:34:12
Message Id:  18972
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

"[It is better] to have loved and lost, than to have never loved at all," is one of the most common expressions one hears discussing the subject of love. Claimed to have been said by Samuel Butler and Saint Augustine, this quote embodies one of society's major obsessions: love. It is written about in the tabloids between movie starts, it is deeply discussed and described in poetry, and it is manipulated with aphrodisiacs and love potions. All of this focus on one of the many ranges of human emotion brings to mind the chemistry that underlies it. What is love and does it even exist? Or is it just a fantasy that can only truly be mastered by poets and artists? A look at the beginnings of love research, as well as current findings on brain scans, and neurotransmitters may shine some light on this interesting subject.

Love was first studied scientifically in the late 1970s by Dorothy Tennov, who coined the word "limerence" to describe the state of being in love. Limerence is more closely related to a crush, infatuation, or puppy love than to romantic or sexual love. It is defined by the limerent reaction toward the limerent object, the object of affection. The reaction consists of intrusive thinking, fear of rejection, hope of success, physical symptoms, and sexual attraction. All of these physical and emotional factors play a role in the formation of one of three bonds with the limerent object: an affectionate bond, a limerent-nonlimerent bond, or a limerent-limerent bond. An affectionate bond is a relationship in which neither partner is limerent; it is more closely related to a strong friendship than to an infatuation. The limerent-nonlimerent bond is a relationship in which only one partner is limerent, while in the limerent-limerent bond both partners are in a state of limerence. Once one of these three bonds is formed, the relationship can end up anywhere: it can last or quickly deteriorate. Tennov's study was one of the first scientific studies of love (3). This pioneering study could not look deeply into the neurobiology of love, but it did a decent job of examining the emotions, thoughts, and physical attributes of those in love, and it opened the doors to current research capable of looking inside the brain at the physical structures involved.

Helen Fisher, Arthur Aron, and Lucy Brown have conducted a recent study on the brain biology of love. Using functional magnetic resonance imaging (fMRI), they studied 17 young men and women who claimed to be madly in love. The fMRI scanner recorded brain activity while the subjects looked at pictures of people they knew; some of the images were of close friends and others of the beloved partner (4).

Fisher, Aron, and Brown found that intense romantic love is associated with activity in dopamine-rich areas of the brain, regions associated with the pathways of reward and motivation: specifically, the right caudate nucleus and the right ventral tegmental area. The conclusion reached from this study is that romantic love should be classified as a motivational drive, paired with a range of emotions, that is primarily controlled by the neurotransmitter dopamine (4). Motivation alone, however, cannot explain attraction: the sweaty palms and beating heart, the feeling of flying that one gets when in love. But dopamine is at least one step in the right direction toward the final answer.

Looking inside the brain has been informative about some of the biological and chemical structures important to this phenomenon. But the research by Tennov and by Fisher, Aron, and Brown has examined only one half of the pair in love. It is now time to look at the interaction between two people attracted to one another. The anatomy of attraction will lead to more clues about the other players in this game of love.

The scene for this classic love story is a dinner party, a get-together of good friends celebrating the host's birthday. There is cake, punch, and yummy hors d'oeuvres for all to enjoy. Bianca is settled in a corner chatting away with her two closest friends when Damian walks through the door. After greeting the host and finding a seat to enjoy some punch and cake, he scans the room. Unconsciously, he is making mental calculations about all of the women he sees, noting facial bone structure as well as waist-to-hip ratio, both crude indicators of a woman's health and fertility (5). These are important factors for men in choosing a mate. The chemistry starts when Bianca and Damian make eye contact from across the room: the midbrain releases dopamine as the motivational drive for the two to approach one another.

After a nice greeting and the flash of a smile, the hypothalamus starts working (5). The two are close enough to pick up each other's pheromones, chemical substances produced by animals as stimuli to elicit responses from others (Webster's Ninth New Collegiate Dictionary). Pheromones in humans are odorless and therefore go unnoticed. They are another factor used to gauge the health and fertility of a potential mate (5). This first stage of love is called lust. Lust, however, quickly turns into attraction. At this stage, Damian and Bianca's intriguing conversation about recent travels continues while the hypothalamus manipulates the body to its owner's advantage: it causes the pupils to dilate, the heart to pump harder, and a slight sweat to appear. Some of these changes, such as the dilation and the sweat, make the person more attractive; the racing heart is a result of the stress that first encounters produce (5). All goes well for Damian and Bianca. At the end of the evening they exchange numbers and make plans to call each other. At this stage, romantic love begins: Damian and Bianca are both drowning in dopamine, which promotes strong feelings of pleasure (5). The happiness they feel encourages them to call one another and schedule a date.

The first date takes place at a quaint little French restaurant, where they enjoy a light meal of quiche lorraine and salade niçoise followed by a savory chocolate fondue. Throughout the wonderful evening, their brains are churning out dopamine to produce the natural high of being in love (5). After several more dates, the two grow closer and enter the stage of attachment. This is where dopamine retires and oxytocin comes into play. Oxytocin is important in producing the emotions of love and is increased by physical contact.

Eighteen months after Damian and Bianca met, the neurotransmitter cocktail produced by their brains will decrease. Just as a drug user needs more and more of a drug to feel the same high, the brains of a couple in love become habituated to the dopamine and oxytocin that were so abundantly produced at the beginning of the relationship (5). It is at this point that the relationship will either end or continue.
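
The staged story above can be condensed into a small summary in code. The following Python sketch simply restates the narrative; apart from the eighteen-month figure from source (5), the stage boundaries and groupings are invented for illustration, not a biochemical model.

    # A compact restatement of the stages described above; a summary
    # device only. Boundaries other than the 18-month figure are invented.

    STAGES = [
        ("lust",       ["pheromones", "dopamine"]),
        ("attraction", ["dopamine"]),
        ("attachment", ["oxytocin", "vasopressin"]),
    ]

    HABITUATION_MONTHS = 18  # the neurotransmitter cocktail wanes (5)

    def stage_after(months):
        """Roughly map relationship age to the dominant stage."""
        if months < 1:                    # invented boundary
            return "lust/attraction"
        if months < HABITUATION_MONTHS:
            return "attachment"
        return "end or continue"

    print(stage_after(0.5), stage_after(6), stage_after(24))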

The new neurotransmitter introduced in this story is oxytocin, which works closely with another neurotransmitter, vasopressin. Dopamine, oxytocin, and vasopressin are the three best-studied neurotransmitters known to play a role not only in Damian and Bianca's relationship but in all love relationships. Their roles are better known in animal models, such as the monogamous prairie vole, than in humans, but scientists believe they play similar roles in people.

Dopamine is involved in pair formation, or bonding. It is the first neurotransmitter to be produced after a suitable mate has been identified. Experiments in prairie voles have found that blocking dopamine activity interferes with pair bonding: voles given dopamine blockers did not show as strong an attachment to their mates as those not given blockers (1). In addition, dopamine is known to play an important role in the reward system of the human brain, the mesolimbic pathway, and in mood. Dopamine is used to form bonds, reward positive interactions, and create the happy mood of being in love; these are the essential roles it plays in the primary phases of love (2).

Most importantly, dopamine is involved in pleasure, which is an essential part of love and encompasses all of its subroles. Pleasure is a subsystem of the reward and motivation pathways in the brain, which are controlled by the autonomic nervous system (2). The autonomic nervous system is the part of the nervous system that governs involuntary actions (Webster's Ninth New Collegiate Dictionary). The secretion of dopamine is outside the control of the brain's owner. These secretions produce changes within the body that stimulate certain emotions, such as pleasure; it is the integration of these changes that produces the feeling. The same euphoria and reward associated with love plays a role in drug use and the high that results from it (2).

The next vital neurotransmitter is oxytocin, which is responsible for facilitating pair bonding. Experiments in prairie voles have found that voles treated with oxytocin formed pair bonds quickly, whereas those given oxytocin blockers did not show as strong a partner preference and were less faithful to their mates (1). In humans, oxytocin is believed to work as a stress reducer. Falling in love is a very stressful time: symptoms such as sweating, accelerated heartbeat, increased bowel peristalsis, and sometimes even diarrhea are not pleasant to deal with when trying to catch someone's attention. Oxytocin is one of the neurotransmitters released to inhibit such stress responses, which are produced by other parts of the nervous system; it encourages pair bonding, or social attachment, by acting as a stress reducer (2). As stated earlier, oxytocin is released during physical contact, such as hugging or cuddling. Putting all of this together, one can see that the stress of being in love is reduced when oxytocin is released through the touch of the loved one. It is a comforting feedback loop.

Vasopressin similarly plays a role in partner preference. Research on prairie voles led to the conclusion that vasopressin facilitates partner selection: voles that received vasopressin treatments were more faithful to a particular partner than those that did not (1).

All of these neurotransmitters, especially oxytocin and vasopressin, act within the autonomic nervous system (ANS), the automatic control center of the brain that operates outside its owner's control. The ANS is key to social attachment and love, and oxytocin and vasopressin are its most important players (1). Within the ANS, oxytocin takes part in the reward and reinforcement pathway and helps ensure trust, loyalty, and devotion, the key pieces of a beneficial and lasting relationship.

Love, indeed, is not just a fantastical state that many experience. It is an emotional state that is, to an extent, organized by the brain. The bonds people make, the parts of the brain involved in those bonds, and the neurotransmitters that zip through the brain to elicit particular responses are very real. Evidence from the late 1970s to the present day supports this point. Although science may have a grasp on how the brain interprets love, or what causes the initial attraction, the secret of what makes some relationships work while others fail is still a mystery. This may have less to do with the functioning of our brains and more to do with the workings of our minds and souls.

Works Cited
1) Carter, Sue C. (1998). "Neuroendocrine Perspectives on Social Attachment and Love." Psychoneuroendocrinology 23(8): 778-818.

2) Esch, Tobias, and Stefano, George B. (2005). "The Neurobiology of Love." Neuroendocrinology Letters 26(3): 175-192.

3) Wikipedia.

4) Society for Neuroscience.

5) Wikipedia.



Full Name:  Perrin Braun
Username:  pbraun@brynmawr.edu
Title:  Performing the Body
Date:  2006-04-11 09:36:19
Message Id:  18973
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

In 1963, at the height of the revolution in body art, artist Carolee Schneemann wrote in her notes that "the body is in the eye; sensations received visually take hold on the total organism." That same year, Schneemann covered her body in paint, grease, chalk, ropes, and plastic and placed herself amongst a seemingly random array of objects in a work she entitled Eye Body (Jones 1-2). In such performative and theatrical works, often referred to as "body art," the human figure is essentially transformed into a canvas that obscures the distinction between life and art (Green 6). This means of visual communication creates a whole new conception of the body for the audience, especially since the body of the artist is no longer his/her own but instead belongs to the viewer. Body art therefore creates an interesting dichotomy between the body and the self, expanding the concept of the I-function into conflicts between the artist and the viewer, and between the artist and the body.
In a collage from the 1960s, Japanese artist Yayoi Kusama posed herself in an odalisque position on a couch amidst uncooked, and decidedly phallic, macaroni. She is nude save for high heels and 1960s-style polka dots covering her flesh, knowingly gazing straight at the viewer. Kusama is unapologetic about her status as a non-male, non-white artist, visually performing her identity and eroticism as well as enacting the role of the artist as a public figure. In images such as these, viewers are forced to entertain the notion of the artist staging herself as her work. Indeed, Kusama's body is literally absorbed into the subject of the work and therefore becomes it, intertwining viewer, artwork, and artist/artwork (Jones 6-8).
However, Kusama's work displays her conflict over her identity in the piece. She is performing the question of whether she is an object or a subject, celebrity pin-up or artist. Regardless, this image of Kusama is "deeply embedded in the discursive structure of ideas informing her work that is her 'author function'" (Jones, "Presence" 14-15). There is a certain dualism inherent to body art that differentiates between the self of the artist and the subject of the work (the artist's body), because the artist transcends the body in order to convey a larger message to the audience.
The revolution in body art has radically transformed the ways in which the public views the means of interpretation that govern our comprehension of visual culture. The work of body performers such as Kusama and Schneemann is contingent on being watched by an audience, as opposed to existing independently like a painting. Michel Foucault said that "the body is the inscribed surface of events (traced by language and dissolved by ideas), the locus of a disassociated Self (adopting the illusion of a substantial unity), and a volume in perpetual disintegration" (qtd. in Jones 12). The body, therefore, is the locus of a dispersed "self" that is transmitted through the audience, with all of its racial, sexual, gender, and class identifications (Jones 13).
Throughout history, the artist has always been the "I" in a work of art. Self-portraiture, and even the practice of signing one's name at the bottom of a painting, is indicative of the artist's self-awareness. It is a fairly modern development that the subject has become so dependent on the expectations and prior experiences of the viewer, which influence the perception of the work.
Jackson Pollock, a twentieth-century artist, is particularly renowned in the art world for his series of paintings concerning the act of performing art. His images display a shift in the relationship between artists and their work in the sense that his means of creating an artwork was drastically different from the conventional artist-at-easel image. Instead, Pollock stood above his canvas and performed the act of painting by pouring a stream of paint directly out of a can as he moved (Jones 53). He famously said "I am nature," meaning that anything he created became an extension of himself and consequently of nature.
The I-function in performance art is therefore an extremely complex issue due to the multiple identities involved in the performance, especially since the artist, the viewer, and the subject of the work are all integral to the interpretation and analysis of the piece. Viewers are forced to engage with works so closely intertwined with their own experiences and those of the artist that the I-function often seems to take more than one form in body art.
Works Cited

Green, Gaye Leigh. "The Return of the Body: Performance Art and Art Education." Art Education. Vol. 52. pp. 6-12.

Jones, Amelia. Body Art: Performing the Subject. University of Minnesota Press: Minnesota, 1998.

Jones, Amelia. "Presence in Absentia: Experiencing Performance as Documentation." Art Journal. Vol. 56. pp. 11-20.



Full Name:  Carolyn Theresa Dahlgren
Username:  cdahlgre@bmc
Title:  You Smell!: A Look into Olfactory Hallucinations
Date:  2006-04-11 10:00:09
Message Id:  18975
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

If you were to take a poll asking people to name the one of the five senses they could most easily live without, I am willing to bet that smell would be the top choice of many. There are, however, many people who actually do live without the sense of smell, just as a blind person lives without sight or a deaf person without sound. Anosmia is the condition of lacking the sense of smell. Dysfunctions of the sense of smell can be congenital, but they can arise in other ways as well: "A number of different diseases, conditions, and medications can lead to olfactory disturbances. Major causes of olfactory dysfunction: obstructive nasal and sinus diseases, upper respiratory viral infections, head trauma, and in 22% of cases no cause is ever found (idiopathic)" (1).
Your sense of smell may seem trivial compared to sight or hearing; the role of the nose, however, is severely underappreciated. Most people realize that the senses of smell and taste go hand in hand. When I was a child, my mother used to have me hold my nose before drinking medicine in order to diminish the taste. When you have a cold and your nose is stuffed up, things taste bland or strange. The sense of smell enhances the flavor of the foods we consume. Lacking a sense of smell may seem like it could be a blessing, especially if one has to take some noxious medicine or if a skunk is nearby. The idea, however, loses some of its charm when you think about what it would be like to have a deadened sensation of pleasant stimuli such as homemade cookies or chocolate cake, and its appeal is totally stripped away if you imagine unintentionally eating spoiled food. Our sense of smell is an important self-preservation device: "It serves as an important early warning system for the detection of fire, dangerous fumes, leaking gas, and spoiled food" (1). Smell also plays a key part in social interactions; "It enhances socialization and interpersonal relationships by protecting against objectionable body odors" (1). Smells have also been found to be a form of communication. Pheromones are chemical signals, smells that animals use to transmit messages to other organisms: "There are alarm pheromones, food trail pheromones, sex pheromones, and many others that affect behavior or physiology" (8).
Smell and taste are known as the chemical senses because they result from external chemical stimuli acting directly upon sensory neurons. Airborne chemicals, actual particles from the things being smelled, stimulate special receptors, called chemoreceptors because the chemicals interact directly with them. "These receptors are very small - there are at least 10 million of them in your nose - ...each with the ability to sense certain odor molecules. Research has shown that an odor can stimulate several different kinds of receptors. The brain interprets the combination of receptors to recognize any one of about 10,000 different smells" (3). When these receptors are stimulated, information is sent along the olfactory nerve to the olfactory bulb, the part of the brain associated with the sense of smell. "The olfactory bulb is the most rostral (forward) part of the human brain" (7) and is located just above the nasal cavity. It sends signals to other parts of the brain to help interpret the sensory information it receives and translate it into the different smells we can recognize.
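
To get a feel for how combinations of receptor types could distinguish roughly 10,000 smells, consider a minimal sketch in Python. The receptor names and odor assignments below are invented for illustration; the quoted source supplies only the rough numbers.

    # A toy model of combinatorial odor coding; illustrative only.
    # Each odor activates a subset of receptor types, and the brain
    # identifies the odor by the activation pattern, so k receptor
    # types can in principle distinguish up to 2**k odors.

    RECEPTOR_TYPES = ["R1", "R2", "R3", "R4"]  # hypothetical types

    ODOR_PATTERNS = {
        frozenset({"R1", "R3"}): "banana (invented example)",
        frozenset({"R2", "R3", "R4"}): "coffee (invented example)",
    }

    def recognize(activated):
        """Map a pattern of activated receptor types to an odor label."""
        return ODOR_PATTERNS.get(frozenset(activated), "unknown odor")

    print(recognize({"R1", "R3"}))   # -> banana (invented example)
    print(2 ** len(RECEPTOR_TYPES))  # 4 types give 16 patterns; about
                                     # 14 types would exceed 10,000

On this combinatorial logic, the brain does not need a dedicated receptor for each of 10,000 smells, only enough receptor types that their joint activation patterns exceed that number.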
Thus far, this paper has discussed anosmia, the lack of the ability to sense smells. There is, however, a wide range of other olfactory dysfunctions; "approximately two to five million American adults suffer from disorders of taste and smell" (2). There are two particularly intriguing disorders that I wish to highlight: parosmia and phantosmia. Parosmia is a distortion of the olfactory sense: people with parosmia report smelling scents that are incongruent with the olfactory stimuli presented. An "affected person reports smelling something other than the scent which is present... for example, the person sniffs a banana but it smells like rotting flesh instead of a banana" (9). Phantosmia is an olfactory hallucination: a person reports a smell (and sometimes an associated taste in the back of the mouth) for which no external stimulus can be found. "There is no odorant present, but the affected person reports smelling something, usually something unpleasant" (9). Phantosmia smells are not only noxious, they are also longer-lasting than most olfactory experiences: "When a normal person smells an unpleasant scent, sensory adaptation takes place rather quickly -- within a few minutes the scent seems to have disappeared. The unpleasant scents in parosmia and phantosmia can, however, be very long-lasting" (9).
Why do some people have these olfactory misinterpretations? What is the brain doing during olfactory hallucinations? Unfortunately, the literature on olfactory dysfunction is not very deep. There are, however, some interesting similarities between smell hallucinations and other reported hallucinations, namely the phantom limb phenomenon experienced by some amputees. Perhaps olfactory hallucinations could be understood by extrapolating from theories about phantom limbs. The phantom limb sensation is the experience of a feeling, usually pain, in a limb that has been amputated. Even though nerve connections have been severed and the nervous tissue has been taken away, many amputees report experiencing sensations in their missing limbs. In a previous web paper for our course, Christy Taylor explored the etiology of the phantom limb and explained that the phenomenon may be a product of inconsistencies between the sensory information received by the brain and the brain's corollary discharge signals, feedback from the output side of the brain about motor commands. "Corollary discharge signals are used to define expectation, and when the sensory input does not match this expectation, the nervous system sends out a signal that says 'there is something wrong' which may be felt as pain in the phantom limb" (6). Phantom limb pains, like the unpleasant scents experienced in parosmia and phantosmia, are often intense and long-lasting.
If sensations of noxious smells are the olfactory equivalent of pain, then parosmia and phantosmia seem to be olfactory equivalents of phantom limb pain. This observation may offer a useful new perspective on these olfactory dysfunctions, but it still does not explain why they occur. Why should a person receive sensory inputs from the nose that are incongruent with the corollary discharge messages? For phantom limbs this is a simple matter: there are no sensory inputs from the amputated limb, and the brain thinks there should be. For parosmia and phantosmia, however, there is no obvious reason for the signals to be incongruent, especially for a chemical sense in which sensory input results from the direct interaction of neurons with molecules from the things being smelled. And what about people with anosmia? Why don't they have phantom smells? There are still a lot of questions to explore, but one thing is clear: the olfactory sense is a lot more complicated than we may think, and we should appreciate our working senses of smell.

Bibliography

1. Anosmia Foundation. "Anosmia". http://www.anosmiafoundation.org/intro.shtml.

2. Anosmia Foundation. "Smell Disorders". http://www.anosmiafoundation.org/smell.shtml.

3. Cook, Steven P., MD, and Gavin, Mary L., MD. "What's That Smell? The Nose Knows". http://kidshealth.org/kid/body/nose_noSW.html. Updated: July 2004.

4. How Stuff Works. "The Nose". http://health.howstuffworks.com/define-nose.htm.

5. "The Sense of Smell". http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/O/Olfaction.html. Updated: Oct. 22, 2005.

6. Taylor, Christy. "Phantom Limbs". http://serendipstudio.org/bb/neuro/neuro98/202s98-paper2/Taylor2.html. Updated: 1998.

7. Wikipedia. "Olfactory Bulb". http://en.wikipedia.org/wiki/Olfactory_bulb.

8. Wikipedia. "Pheromone". http://en.wikipedia.org/wiki/Pheromone.

9. Wuensch, Karl L. Ph.D. "Parosmia and Phantosmia". http://personal.ecu.edu/wuenschk/Parosmia.htm. Updated: Jan. 15, 2005.



Full Name:  Jessica Engelman
Username:  jengelman@brynmawr.edu
Title:  Asexuality as a Human Sexual Orientation
Date:  2006-04-11 10:01:43
Message Id:  18976
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Only in the past few years has the general public accepted homosexuality and bisexuality as genuine sexual orientations (although debates over cause, morality, and status in society continue), but now another orientation is being proposed: asexuality. What is it, and is it really a sexual orientation, determined before birth as heterosexuality and homosexuality are now theorized to be? Traditionally, "asexual" referred to the reproduction of simple organisms (amoebas, primitive worms, fungi, etc.) or, in humans, to a lack of sexual organs or an inability to feel or act sexually due to disability or some other condition. The newly proposed definition, however, presents asexuality as a human sexual orientation: if heterosexuality is attraction to the opposite sex, homosexuality attraction to the same sex, and bisexuality attraction to both, then asexuality is attraction to neither. An exact definition has not been officially set, so most "experts" in the area reference AVEN (the Asexual Visibility and Education Network): an asexual is "a person who does not experience sexual attraction." This is not to be confused with chastity, which is a choice not to act upon sexual urges (for asexuality to be an orientation, it must be innate, not a choice). Even this definition is slightly incomplete; AVEN implicitly indicates that asexuality refers only to a lack of sexual attraction to another person. (1) The reason I cite a non-academic website (AVEN is actually an online community devoted to giving previously isolated asexuals opportunities to interact and to promoting awareness of asexuality) and put "experts" in quotes is that the subject of human asexuality has received almost no attention in academia, in literature, or in society at large; only in the past few months has it become a hot topic in the media. In late March and early April of 2006 alone, segments on asexuality were featured on CNN, 20/20, MSNBC, and even Fox News. This recent interest has sparked some notice from researchers, but asexuality isn't as clear-cut as the other three "recognized" orientations.

As it turns out, there are many shades of asexuality. Initially, AVEN used a classification system with the letters A, B, C, and D: type A has a sex drive (a drive for all but sex, such as kissing and stroking) but no romantic attraction, type B has romantic attraction but no sex drive, type C has both, and type D has neither. The system is no longer used, as it became too limiting, but it does highlight the possible differences between any two asexuals. Some are thoroughly repulsed by sex (having it, watching it in a movie, thinking about it, even the mention of it), while others simply find it unappealing or boring (like washing dishes: if it has to be done you put up with it, but it's something you'd rather not spend your time doing). There are those who have no interest in dating or in forming anything beyond friendship, while some asexuals date, fall in love, and even marry. Often these "romantic asexuals" do engage in activities such as kissing, cuddling, and petting. Asexuals even date and marry sexuals who are willing to abstain from sex, have less sex, or have it with another sexual. Most asexuals still have emotional needs and form relationships to satisfy them, contradicting their stereotype of being frigid or misanthropic. Romantic asexuals will often describe themselves as asexual-heterosexual, asexual-homosexual, asexual-bisexual, or asexual-asexual, indicating their romantic orientation (which gender they are non-sexually attracted to).

Many asexuals appear to be in fine physical condition, indicating that abnormal hormone levels or dysfunctional gonads are not the primary cause of the orientation. The idea is that asexuals may still experience physical arousal, but perhaps their brains somehow do not connect it to the act of sex. Because of this, some asexuals masturbate in lieu of sex with another person (although their experience of masturbation may differ from that of sexuals); this can also be classified as being autosexual. There are also some asexuals who do not have a defined gender, due to physical anomalies or discrepancies between their "physical gender" and their "mental gender." Although the label "asexuality" is fairly broad, it usually does not include bestiality, paraphilia, and other so-called "fetishes."

Why this sudden emergence of asexuality? During the Victorian era, marriage was generally expected and strongly encouraged, but with the belief that abstinence would be upheld in the relationship except for the explicit purpose of procreation. These ideals continued at least in part into the 20th century, until they were abruptly disrupted by the sexual revolutions of the 60s and 70s. This period of presumed purity worked fairly well for asexuals, especially those seeking a romantic partner, since even women were (sometimes) socially permitted to refuse their husbands' requests for sex. However, any sexual activities or preferences deemed "unnatural" were stiffly condemned (in England homosexuality was punishable by hanging, imprisonment, or, as in Oscar Wilde's case, years of hard labor), so any individuals fitting the definition of asexual would not have tried to bring attention to their "problem." From the introduction of liberal views on sex in the 60s to the modern day, the world seems obsessed with sex. To go one day in America without running into sex would require sealing oneself in a room containing only the Winnie the Pooh book series, and that is under the assumption that one would not think of sex spontaneously. When nearly every television show, movie, magazine, newspaper, and novel, most songs, and even academic texts contain some mention of sex, dating, or another form of sexual attraction, asexuals feel extremely isolated from the rest of humanity. Only recently have asexuals (along with people with sexual "abnormalities," "fetishes," etc.) been able to find others like themselves and form communities over the Internet; there are many older individuals on asexuality.org who married and/or had sex out of social obligation and simply assumed there was something wrong with them when they didn't enjoy it.

Considering that word about asexuality has gotten out to the general public only in the past few weeks, it is unsurprising that most of the population doesn't understand it, and some don't believe in it. One asexual woman compiled a list of things people have said when she told them she is asexual, including: "you hate men," "you have a hormone problem (why don't you just fix it?)," "you are afraid of getting into a relationship," "you were sexually abused as a child," "you are a lesbian," "you just haven't met the right guy," and "did you just get out of a bad relationship?" She thoroughly refutes all of these (no, she's not a lesbian; she was treated just fine as a kid; it has nothing to do with previous boyfriends or not meeting "Mr. Right"), but the list clearly portrays the stubbornness many asexuals encounter in others who aren't willing to believe that someone has no libido. (2) Although "coming out of the closet" is arguably much safer for asexuals than for homosexuals (it's unlikely they'll be fired, discriminated against, or tied to a fence and beaten to death), asexual women do face an increased threat of rape. Sexual men often tell them (jokingly or seriously) that they just "haven't had me in bed," or see asexual women as "challenges," and sometimes forcefully try to prove their point. Most asexuals, however, say their friends and family are supportive (or have at least let up on pressuring them to date or marry).

Asexuality also finds critics among psychologists, sex therapists, and even the religious (who are otherwise usually fairly accepting of asexuality due to its similarity to chastity, which is revered or at least respected in many religions). The psychoanalysts Hansen de Almeida and Brajterman Lernen state that "there is no such thing as asexuality, which is only an omnipotent fantasy to have both sexes." (3) Dr. Joy Davidson, a certified sex therapist featured on 20/20's segment on asexuality, believes asexuality is predisposed by physiological, psychological, or experiential factors leading to asexuals' "shutting down the possibility of being sexually engaged." She also worries that asexuals labeling themselves "sexually neutered" could become a self-fulfilling prophecy. (4) The specialist chosen for CNN's piece, Dr. Laura Berman, was a little more accepting of the idea of asexuality, but warned that it shouldn't be confused with intimacy or relationship issues. Since many asexuals diagnose themselves from information gathered on the Internet, it is certainly possible that a sizable portion are not truly asexual but rather have other issues with symptoms resembling asexuality. (5) A variety of diseases and physical ailments can reduce sexual drive, including spinal cord injuries, pituitary disorders, schizophrenia, and other neurological conditions. (6) If these are misdiagnosed as asexuality, warning signs such as a non-existent or low sex drive might be missed and the underlying conditions left untreated. As a general rule of thumb, AVEN recommends that newcomers at least have their hormone levels checked, just in case, especially those who once had a sex drive. Finally, Nantais and Opperman write in the Christian magazine Vision: "Question: What do you call a person who is asexual? Answer: Not a person. Asexual people do not exist. Sexuality is a gift from God and thus a fundamental part of our human identity. Those who repress their sexuality are not living as God created them to be: fully alive and well." (7) Here they clearly assume that asexuality is sexual repression, rather than an inherent and complete lack of sexual desire.

Most of what is known about asexuality is really educated guesswork and supposition; otherwise there would be some way to respond to the skeptics. So what research has been done? There are several recorded instances of animals that refuse to mate, such as lab rats. A study on Mongolian gerbils showed that some males that had developed as fetuses between two female fetuses refused to mate, but spent almost 50% more time taking care of the young than males who had been positioned between two other males in utero. They were also about 30% more likely to stay with a nest the mother had left. This suggests that, although not perpetuating their own genes, they helped perpetuate their sisters' genes, which carries evolutionary benefits for at least half of the family's genes. These "asexual" male gerbils had, on average, half the level of circulating testosterone and 50% smaller bulbocavernosus muscles compared to the gerbils that had been between two males as fetuses. Since male gerbils become violent when placed together, there was no way to tell whether these asexual gerbils weren't actually homosexual instead, but the study still indicates that there are mammals that refuse to reproduce due to natal conditions. (8) Another study, done on rams, showed that besides the population of rams readily willing to mate with females, there was a subset of rams that mounted other rams, and another subset that refused to mate at all. The asexual rams had testosterone levels comparable to those of the heterosexual rams, and exogenous testosterone treatments did not prompt them to mate, so the researchers concluded that neither hypogonadism nor basal androgen concentrations caused the asexual behavior. However, when anesthetized, the homosexual and asexual rams had higher cortisol concentrations than the heterosexual ones. The researchers noted that since "the endocrine response to anesthesia is most likely mediated through the central nervous system, the present results indicate that functional differences exist between the brains of rams that differ in sexual behavior expression and partner preference." (9) Since scientists have already noted that the brains of homosexual men are structurally different from those of heterosexual men (the cell structure of gay men's hypothalamus more closely resembles that of heterosexual women), the possibility that the asexual brain is also structurally different should not be too easily dismissed. The existence of asexual displays in animals runs contrary to any suggestion that asexuality is a problem caused by psychological issues such as fear of commitment or conscious/unconscious repression of sexuality, as animals are presumed incapable of both, although this rests on the assumption that asexuality has the same cause in humans and animals.

There have been very few studies of asexuality in humans, and most concern the stereotype that disabled people are rendered asexual by their condition. One of the only studies to treat asexuality as a possible orientation was a reexamination by Anthony F. Bogaert of a survey of 18,000 British residents about general sexuality and STDs. Of the respondents, 1.05% (roughly 190 people) reported "I have never felt sexually attracted to anyone at all," very close to the 1.11% who responded that they were homosexual or bisexual, although women were more likely to report the former and men the latter. Bogaert noted that this asexual group had poorer health, shorter stature, less body weight, higher attendance at religious services, and lower socio-economic status, and that asexual women had a later onset of menarche, all compared to sexual people. Although these are only correlations, they may help form later hypotheses about the cause of asexuality, and about whether asexuality is a valid orientation at all. Bogaert suggests some of his own: perhaps the factors affecting height growth and weight gain also affected a region of the brain vital to sexuality; perhaps education or other resources dependent on socio-economic status are somehow vital to sexual development; or maybe asexuals had fewer "sexual conditioning" experiences growing up (i.e., masturbation), which might also explain the high proportion of women and of the religious (both groups are less likely to masturbate). Youth, however, was not correlated with asexuality; asexuals actually tended to be older, indicating that these individuals were not merely "late bloomers." Major limitations of the study, besides being merely correlative and not actually about asexuality, include its high non-response rate (30%) and its face-to-face interviewing (which may have pressured individuals to alter their answers). Still, the study contains enough correlative evidence to warrant future research in the area. (6)

Whether or not asexuality is ultimately determined to be a genuine sexual orientation, it greatly alters the way scientists and the public think about sex and the sexual drive. On the social side, it shows that relationships can exist without sex and that love and sex are separable. It also changes the picture of the stereotypical "asexual." For women, this has been a chaste yet often motherly figure of purity, such as the Christian Virgin Mary or the Greek goddess Artemis, or, contradictorily, the strong-minded and masculine woman who cannot admit her sexual or emotional attractions without losing her strength, such as Joan of Arc or Utena from the Japanese television series Shoujo Kakumei Utena. For men, it has been a cold, calculating, generally emotionless or repressed, yet often resourceful and intelligent individual, such as the Vulcan Spock from Star Trek or Sherlock Holmes (who stays in character even in "A Scandal in Bohemia," his only "love affair" with a woman). (10) On the research side, future projects may consider forming an additional "asexual" category when conducting studies relating to sex (Cott et al. did this in their study of post-traumatic stress disorder and child sexual abuse and found significant differences between self-labeled asexual and sexual groups). (11) Perhaps most importantly, future research on asexuality may greatly upset the current one-dimensional continuum of sexuality, with exclusively homosexual on one end and exclusively heterosexual on the other. Psychologists are already making this continuum two-dimensional by adding level of sex drive. Even this may prove too limiting to describe the spectrum of human sexuality, requiring third or even fourth dimensions (possible, although not particularly practical), perhaps distinguishing the need for sexual connection from the need for emotional connection. In the meantime, until there is more biological and psychological information about asexuality, scientists and the public alike might ease up on limiting categories such as "gay" or "straight," and thus perhaps eliminate two categories upon which sexuality has traditionally hinged: right and wrong.

Resources

(1) Asexual Visibility and Education Network
(2) Nonsexuality Rant
(3) Gender identity: Its importance in the psychoanalytic practice. A theoretical view / Identidade de genero: Sua importancia na pratica analitica. Uma visao teorica. Hansen de Almeida, Rui, Brajterman Lernen, Rosely C., Revista Brasileira de Psicanalise. Vol 33(3) 1999, 485-494.
(4) ABC's 20/20 – March 23, 2006
(5) CNN's Showbiz Tonight – April 5, 2006
(6) Asexuality: prevalence and associated factors in a national probability sample
(7) Eight Myths about Religious Life
(8) Why some male Mongolian gerbils may help at the nest: testosterone, asexuality, and alloparenting
(9) Relationship of serum testosterone concentrations to mate preference in rams
(10) Wikipedia entry on asexuality
(11) Ethnicity and sexual orientation as PTSD mitigators in child sexual abuse survivors

Note: due to the lack of official literature on asexuality, much of the information about asexuals in this paper came from websites and forums such as AVEN's community board and Wikipedia (I personally cross-referenced information gathered there), from comments made by asexuals themselves, and from speaking with a Bryn Mawr student who identifies as asexual; hence the overall lack of direct citations in the paper.

Additional Readings and Viewings:

Feature: Glad to be Asexual
Study: One in 100 Adults Asexual
ABC's The View – January 16, 2006
MSNBC's The Situation with Tucker Carlson – March 28, 2006
Fox News's Day Side – April 3, 2006



Full Name:  Amber Hopkins
Username:  ahopkins@brynmawr.edu
Title:  Mixed Signals
Date:  2006-04-11 10:45:36
Message Id:  18978
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Throughout history, poets, composers, and artists of all types have used metaphors to create cross-sensory experiences in their works. However, rather than merely trying to create this experience in themselves and in others, many of these artists were trying to explain to the world the experiences they themselves had in response to certain stimuli. This phenomenon, known as synaesthesia (from the Greek syn = together + aisthesis = perception), is the "involuntary physical experience of a cross-modal association. That is, the stimulation of one sensory modality reliably causes a perception in one or more different senses" (1).

Synaesthesia occurs predominantly in females, in left-handed people, and within families. The pattern in which synaesthesia occurs is consistent with X-linked or autosomal dominant transmission, so either parent is capable of passing the trait to their offspring (1). The criteria for full synaesthesia are as follows: one stimulus always evokes a certain perception; the perception occurs involuntarily; the perceptions are individual, every synaesthete having his or her "own" colors and shapes; the perceptions are irreversible, so that a 7 might evoke the color blue, but the color blue doesn't evoke a 7; and the perceptions are permanent, beginning in childhood and remaining unchanged throughout life (2).
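
Since the transmission pattern is described as consistent with X-linked or autosomal dominant inheritance, a toy Mendelian sketch in Python may help show why either parent can pass the trait on, and why an X-linked dominant gene would predict more affected females. This is standard textbook genetics applied for illustration, not a claim about an identified synaesthesia gene.

    # Toy Mendelian model contrasting autosomal dominant and X-linked
    # dominant transmission; illustrative only.
    import random

    def autosomal_dominant(parent_is_heterozygous):
        """One affected heterozygous parent: each child has a 1/2 chance."""
        return parent_is_heterozygous and random.random() < 0.5

    def x_linked_dominant(affected_parent, child_sex):
        """An affected father passes his X to all daughters and no sons;
        a heterozygous affected mother passes it to half her children."""
        if affected_parent == "father":
            return child_sex == "daughter"
        return random.random() < 0.5

    # An affected father's daughters all inherit the allele in the
    # X-linked case, one way such a gene could skew the trait toward
    # females, consistent with the female predominance noted above.
    daughters = sum(x_linked_dominant("father", "daughter")
                    for _ in range(10000))
    print(daughters)  # 10000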

There are various forms of synaesthesia, with different pairings of the senses involved in each particular case. The most common form is grapheme-color synaesthesia: seeing colors in response to hearing or reading a letter or number. Other forms include seeing colors in response to sounds, smells, tastes, or pain; tasting sounds or smells; smelling sights; and so on (3). In grapheme-color synaesthesia, there is no correlation between the colors that different synaesthetes (people exhibiting synaesthesia) experience for a given letter; rather, each tends to associate different colors with each letter. Surveys do show, though, that some letters have a tendency toward one color over others: O is very often perceived as white, and A is often perceived as red. These trends are much more prevalent in vowels than in consonants (4). An interesting suggestion was posed, however, by one synaesthete's memory of an experience she shared with her father, also a synaesthete: "Sitting at dinner with my family one evening, I commented that 'The color five is yellow.' There was a pause, and my father said, 'No, it's yellow-ochre.' And my mother and my brother looked at us like, 'this is a new game, would you share the rules with us?' And I was dumbfounded. So I thought, 'Well.' At that time in my life I was having trouble deciding whether the number two was green and the number six blue, or just the other way around. And I said to my father, 'Is the number two green?' and he said, 'Yes, definitely. It's green.' And then he took a long look at my mother and my brother and became very quiet. Thirty years after that, he came to my loft in Manhattan and he said, 'you know, the number four *is* red, and the number zero is white. And,' he said, 'the number nine is green.' I said, 'Well, I agree with you about the four and the zero, but nine is definitely not green!'" (5). This implies that, perhaps to some extent, the particular colors a synaesthete experiences in response to certain stimuli might also be determined genetically.

Another question posed by the observation that synaesthesia is genetically determined lies in its evolution. First, when a synaesthete is observed experiencing synaesthesia, we would typically expect some cortical area(s) to "light up." Richard Cytowic observed, however, that "cortical metabolism actually plummets during synaesthesia. Such a decrease is impossible to obtain in a normal person with, for example, a drug. Even during an activation trial with amyl nitrate, which subjectively intensifies the synaesthetic experience, he observed that the patient's regional blood flows are decreased compared to baseline. Normally, any physical or mental task, or any activation procedure (e.g., drug administration, carbon dioxide or oxygen inhalation), increases blood flow by five to ten percent." This led Cytowic to the conclusion that, because the patient's thinking and neurological exam were unimpaired, the area of the brain most involved in synaesthetic experiences was not the cortex but rather the limbic system, which is more primarily involved in emotion, memory, and attention (1). It is interesting to note, however, that only in mammals is the limbic system seen in its most developed form. Are animals below the level of mammals therefore incapable of the synaesthetic experience?

Further exploration into the evolution of synaesthesia raises the question of why it evolved at all. The symptoms of synaesthesia do not appear to hinder the development or existence of those experiencing them; in many cases, rather the opposite! One synaesthete said, for example, "What synaesthesia is about for me is an extra way of perceiving the world. Because of that additional dimension, the parts of the world that I perceive in this special way are parts I hold most dear...Equally important, however, is the idea that the creative person is able to use her unique abilities, ridiculed though they may be, to make not only a living but also a significant contribution to the world" (6). Why, then, is synaesthesia not more common? To answer this question, we must look at a more extreme case, where the synaesthete "not only sees colours when she hears sounds, but suffers from the reverse: she hears sounds whenever she sees colours. Here, the word "suffers" is used advisedly, as this form of synaesthesia leads to massive interference, stress, dizziness, a feeling of information overload, and a need to avoid those situations that are either too noisy or too colourful...Here then, we have a clear case of synaesthesia leading to social withdrawal, and interference with ordinary life" (7).

The most recent views of synaesthesia propose that, in reality, everyone is a synaesthete. Looking at the development of the brain, evidence suggests that "babies experience sensory input in an undifferentiated way. Sounds trigger both auditory and visual and tactile experiences. A truly psychedelic state, and all natural - no illegal substances play a role. ...suggests that this results in a sensory confusion for the infant... The notion would be that following an early initial phase of normal synaesthesia, the different sensory modalities become increasing modular (Fodor, 1983), presumably because modularity leads to more rapid and efficient information processing, and is therefore highly adaptive" (7). Adult synaesthetes, then, may not have gone through this process of modularization, causing them to continue to experience sensory input in an undifferentiated way. Cytowic puts it in this perspective: "Do the elemental qualities of synaesthesia, as partially represented by the form constants, represent "building blocks" or "modules" of cognitive science in which a perception is assembled like modeling a statue from bits of clay? Or is perception holistic, constrained by sensation as it unfolds from within? If so, then perception is like sculpting from a block of marble, exposing the statue within it by removing extraneous bits. In this view, synaesthesia is the conscious awareness of a normally holistic process of perception that is prematurely displayed. That is, it is awareness before the terminal target, before the final stage of neural transformation and mental mediation. If this is correct, then we are all unknowingly synesthetic" (1).

1) Synesthesia: Phenomenology And Neuropsychology, A review of current knowledge
2) Wikipedia, article on Synaesthesia
3) Types of Synesthesia, Analysis of 778 case reports
4) Trends in Colored Letter Synesthesia, Analysis of color frequencies
5) Synesthetes Perspectives, Carol's thoughts on her synaesthesia
6) Colored Letters, Demonstration of aspects of synaesthesia
7) Is There a Normal Phase of Synaesthesia in Development?, Analysis of synaesthesia from infancy into adulthood



Full Name:  Brooks Ambrose
Username:  bambrose@haverford.edu
Title:  Teleodynamics: A theoretical romp
Date:  2006-04-11 12:36:05
Message Id:  18979
Paper Text:
<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

This paper discusses the difference between the methods of reduction and emergence with a thought toward grounding the paradigmatic uniqueness of the analysis of living systems. It introduces the concept of information and gives a preliminary characterization in a functional frame of reference. This paper makes no argument other than those implicit in the theoretical discussions.

Reduction is a method of scientific inquiry in which researchers reduce big things to their "nothing buts"; for instance, one may, by decomposing each level of organization one encounters, reduce the muscle of an animal through stages to the atoms that comprise it (and even further if one desires). This methodology is responsible for many of the scientific advances achieved in modern times. However, on the heels of the success of reduction, a complementary (yet critical) paradigm of inquiry is gaining prominence. Called "emergence," this methodology asks what is lost at every reduction; that is, it identifies the properties of structures that are defined by particular configurations of component parts, properties that disappear when the components are analyzed in isolation.

Emergence is "something more from nothing buts" (1). For example, when reduced, water and ice are both described in terms of the same molecule. However, the property of the buoyancy of ice in water is emergent; it depends on differences in relationships among H2O molecules that vary according to thermodynamic conditions.
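The buoyancy example can even be made quantitative. The sketch below (assuming standard textbook density values) computes the submerged fraction of floating ice from Archimedes' principle; the key point is that density is a bulk, relational property of many interacting H2O molecules, not something carried by any single molecule.

    # Approximate bulk densities near 0 degrees C, in kg/m^3.
    RHO_ICE = 917.0
    RHO_WATER = 999.8

    # By Archimedes' principle, floating ice displaces its own weight of
    # water, so the submerged fraction equals the density ratio.
    submerged_fraction = RHO_ICE / RHO_WATER
    print(f"About {submerged_fraction:.0%} of floating ice sits below the waterline.")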

In general, emergence is concerned with explaining the hows and whys of structural organization. The reader may have noticed that the difference between water and ice was discovered by process of reduction, by understanding the properties of H2O as an individual molecule. A more appropriate example of emergent methods is explaining why one snowflake differs in structure from another. Researchers suggest that the structure of snowflakes is path dependent; it depends on, among other things, the pattern of temperature change through which the snowflake travels as it falls, with different paths yielding different structures (2).

The careful reader will note that, insofar as they apply to understanding H2O, reduction and emergence are methodological techniques, not theoretical paradigms. At the end of the day, buoyancy and snowflake structure are both explained by theoretical models based on the same thermodynamic (concerning energy) and morphodynamic (concerning matter) principles. Once the correct model is discovered, it makes little difference whether one thinks of the relevant level of organization as emergent or reduced; emergence may be treated as the inverse of reduction and vice versa.

Though the method of reduction has been fruitfully applied to living systems, a serious problem seems to arise when a researcher attempts to achieve an emergent explanation by inverting a reductive biological explanation. To illustrate this point I will rely on a talk given by Ursula Goodenough at a recent forum (1). In her discussion, Dr. Goodenough proceeded, in a stepwise fashion, to reduce a big thing called "muscle-based motility" (the type of locomotion found in animals) to the property of muscle contraction, then to the structure of muscle fibers and of the fibrils within those fibers, then to the molecular mechanisms of kinesin and myosin motors (3), and on until arriving at the atomic makeup of the relevant protein molecules, and finally at the origin of that atomic material in the death of stars (stopping short of a foray into subatomic particles). Then, to illustrate emergent biological explanations, Goodenough "played the movie backwards," showing how atoms arrange into molecules that arrange into proteins, and then how proteins arrange into molecular motors that allow the contraction of fibers that can be arranged in a series of bundles to produce the muscles that allow animals who possess them to achieve muscle-based motility.

As with buoyancy and snowflake structure, the reductive method yields biological explanations of emergent properties based in thermodynamic and morphodynamic models. Unfortunately, unlike in the cases of buoyancy and snowflake structure, this particular emergent path, even if it provides an excellent understanding of how muscles work, does not seem to exhaust all of the hows and whys about the structural organization of muscle-based motility. While the properties of matter and energy sometimes produce highly organized structures like snowflakes, biological systems cannot be organized from their components so spontaneously. While it is plausible that certain basic structures of muscle organization are capable of self-assembly (for example, the actin-based myosin motors), at a certain level of organization the muscle (not to mention the whole organism) exhibits a structure that requires additional explanation. Why, for instance, does an animal develop a particular pattern of muscles instead of another? How is the movement of many muscles coordinated to achieve movement of the whole body, and why does motility tend to move an animal toward sources of food and away from danger? The origin and functioning of the structural patterns implicit in these questions do not seem to be fully explainable in terms of the matter and energy constitutive of an organism.

The conclusion that some thinkers have come to is that the improbable structures and functioning of living systems cannot be explained without recourse to the concept of information, another unit of physical reality interdependent with matter and energy (4). The interdependency of this triadic formulation must be appreciated. Information does not exist outside of matter or outside of systems that depend on energy. On the other hand, the majority of matter and energy in the universe exists without information. As a consequence, we must allow two paradigms of physical inquiry: the two-dimensional paradigm of thermodynamics and morphodynamics, and the three-dimensional paradigm that incorporates information. However, as the buoyancy and snowflake examples make clear, information should not be identified with all improbable structures in the world, only with a particular type of improbable structure, a living system. Even with reference to its native context, life, information should be treated as the phenomenon that accounts for much, but not all, of the organized structure of living things (remember, much of living structure is still two-dimensional in origin).

The pervasive quality of life that most confounds analysis is the complex interconnectivity between its own components and with itself as a system in the context of an environment. The necessity of treating living systems as wholes requires that they be categorized in a functional frame of reference. Functional propositions require that discrete units be understood in terms of their consequences for the system as a whole. Goodenough uses the term "teleodynamics" to recognize the logical domain of functional inquiry. Information is assumed to be the unit that combines with matter and energy to allow for living systems. While the autonomy and independent variability of information is important, within a functional frame of reference we must emphasize the role of information in the context of the whole organism.

Please forgive the cursory nature of this discussion, but I tend to think that the concept of information is easily reified if a theorist is not careful. Without much reflection, I treat information as any structural pattern that is a component of a pattern cascade that resonates through stages of material structures in or adjacent to an organism. As a general property, each stage in a cascade involves the use of energy to restructure matter for the purpose of either maintaining continuity in a pattern or of achieving a new level of organization dependent on a coding structure. During this process, patterns tend to decrease in mass or energy during encoding stages and tend to increase in mass or energy during decoding stages. We tend to refer only to the smallest, most densely organized pieces of matter as information, for instance, coded messages, perhaps because they tend to have the special properties of mobility and storability that allow us to separate them in our minds from the totality of the system. We tend not to refer to the message transmitter or to the decoder as information structures in their own right. Moreover, we do not tend to consider the structural consequences of a decoded message to be an example of information. To use the example of a locked door, we tend to see the pattern of the key as information, but we ignore the patterns embedded in the lock, the position of the deadbolt and the door handle, the actual open or closed pattern of the entryway itself, or the fact that the whole system seems to have been suspiciously designed to accommodate an organism about as tall as your average human adult. The organization embedded in every systematic component of a pattern cascade falls under the special phenomenological domain of teleodynamics. We should not confuse the resonant movement of a pattern through the matter of an organism, using the energy of the organism, with a particular manifestation of that pattern in a given location and time.
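As a loose analogy for the encode/decode asymmetry described above, the sketch below (Python, using ordinary data compression as a stand-in for an encoding stage; the analogy is mine, not the author's) shows a redundant pattern shrinking into a denser coded form and re-expanding when decoded, while the machinery that does the encoding and decoding never appears in what we casually call "the information."

    import zlib

    pattern = b"open the door " * 50      # a redundant structural pattern
    encoded = zlib.compress(pattern)      # encoding: a smaller, denser "message"
    decoded = zlib.decompress(encoded)    # decoding: the pattern re-expands

    print(len(pattern), len(encoded), len(decoded))
    assert decoded == pattern             # continuity of the pattern is preserved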

The implication of the above caveat is that at a certain point the pattern cascade appears to leave the organism; that is, there is a fundamental point where pattern movement takes the form of the organism restructuring the environment at the boundary of the organism. We generally do not refer to the patterns that emerge at this moment as information. However, nowhere in the space between the gathering of environmental information and the execution of an information-based command is it obvious where to draw the line between information and energy. It is in this sense that information is a relational unit rather than an essential unit of living organization (5). The significance of these relativistic formulations is to focus our attention on the idiosyncratic nature of the empirical structures under examination.

1) Goodenough, Ursula. "Emergence: Nature's mode of creativity." Talk delivered on April 8, 2006 at the Global Philosophy Forum at Haverford College. A recording of the speech should soon be available in the Haverford library.

2) http://www.its.caltech.edu/~atomic/snowcrystals/faqs/faqs.htm

3) http://valelab.ucsf.edu/publications/2000valescience.pdf

4) For a discussion of the pitfalls of reductionism and an alternative paradigm for biological theory, see: Grobstein, P. 1988. From the head to the heart: Some thoughts on similarities between brain function and morphogenesis, and on their significance for research methodology and biological theory. Experientia 44: 960-971. Available online at http://serendipstudio.org/complexity/hth.html

5) http://serendipstudio.org/local/scisoc/information/grobstein25may04.html



Full Name:  Gray Vargas
Username:  gvargasr@haverford.edu
Title:  Crawling to New Heights: Needing Locomotion for Better Perception
Date:  2006-04-11 12:39:11
Message Id:  18980
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

What sparked my interest in this topic was an article stating that only babies who can crawl are afraid of heights. This is true even if the babies have normal depth perception (1). It was interesting to me 1) that how we move through our environment affects how we perceive it, and 2) that the fear of heights is not present immediately at birth, indicating that some "instincts" depend on experience.

Campos et al. came to this conclusion using what is called a visual cliff, in which a flat piece of clear plexiglass covers a ledge with a checkerboard pattern that drops off several feet, with the same pattern visible below. Usually babies are placed at the edge of the cliff and their mothers call to them from the direction of the deep side to see if they will crawl across the plexiglass over the apparent drop-off (2). Alternatively, in this study, babies were lowered onto the plexiglass on the deep side of the visual cliff while their heart rate was being measured. Crawling babies (or pre-crawling babies who had significant experience with a baby walker) showed a heightened heartbeat (a fear response), while pre-crawling babies did not (1).
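A minimal sketch of how such heart-rate data might be compared (the numbers and group sizes below are invented for illustration and are not Campos et al.'s data):

    import statistics

    # Hypothetical changes in heart rate (beats per minute) when lowered
    # over the deep side of the visual cliff.
    crawling = [9.0, 12.5, 7.8, 11.2, 10.4, 8.9]      # locomotor experience
    pre_crawling = [0.5, -1.2, 1.8, 0.3, -0.7, 1.1]   # no locomotor experience

    for label, group in [("crawling", crawling), ("pre-crawling", pre_crawling)]:
        print(f"{label}: mean change = {statistics.mean(group):+.1f} bpm "
              f"(sd = {statistics.stdev(group):.1f})")

    # A clearly positive mean change in the crawling group is read as a fear
    # response; a mean near zero in the pre-crawling group suggests none.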

This intrigued me. Somehow, once they had learned to travel independently through their environment, babies were able to learn more about the characteristics of that environment and the possible consequences of moving through it (such as falling off of a ledge). More sophisticated and coordinated activity in their motor neurons, accompanied by sensory feedback and new proprioception signals, was altering the input received by their sensory neurons, or the interpretation of that input. In other words, a new, more informative feedback loop is put in place for crawling babies that lets them learn more about their relationship to the environment and how it changes. This feedback loop means that the nervous system's (NS) output is affecting its input, an important example of experience affecting the brain. But it is still unclear how exactly this happens, and what causes what. And there are so many cognitive and behavioral changes occurring during this stage of development (around 6-8 months for the start of crawling) that it is hard to discern which development leads to which.

Research being done at the Infant Studies Center at Berkeley has found that it is not as simple as pre- and post-crawling; rather, there is a delay of a few weeks between the time babies start to crawl and when they show a fear of heights. This supports the idea that it is crawling experience, not simply an immediate side effect of development at this stage, that causes the fear to emerge. It also fits with anecdotal evidence of babies crawling off the edge of beds or changing tables, or even down the stairs, when no one is watching (3).

However, one study, which is in the minority, goes against the experience hypothesis. Richards & Rader (1991) found that it was the age of crawling onset, not crawling experience, that predicted behavior on the visual cliff (4). Contrary to what would be expected, those babies with an earlier crawling-onset age (hence more crawling experience at the time of testing) performed the worst; that is, they did not avoid the drop-off as much as later crawling-onset babies did. Testing age did not predict performance. These researchers explained their finding by saying that early crawling onset "during the tactile phase of infancy interferes with later visual control of locomotion" (4). These findings have not been supported by other studies, but they do raise a red flag about assuming that experience leads to more of a fear of heights.

Regardless of whether crawling experience or age of crawling onset has more of an effect on fear of heights, all of this evidence shows that the fear is not completely innate in humans; we have to either grow into or learn some of what we think of as our "instincts." This is obviously not adaptive for the species, since it means there is a time when babies are independently mobile yet do not yet know to avoid heights, making them vulnerable to injury or even death at a young age (5). While there is no real need to be afraid of heights before you can move significant distances independently, there is a need once you can. But how are animals supposed to develop this fear before they have experience moving on their own (which would be necessary to eliminate the vulnerability)? The research suggests that babies cannot, and that this is just another one of the many things babies are not capable of immediately after being born. Are all of these many developmental delays simply due to the fact that we are born premature compared to other animals, most of whom can move independently right after birth? Are they all just unfortunate evolutionary side effects that mean we need to grow, have experiences, and learn just to get to the developmental stage that so many other animals are born into?

In fact, scientists have studied animals on the visual cliff and have found that animals that rely on visual cues pass the visual cliff (meaning they walk only toward the shallow side) immediately after birth; this is true of chicks less than 24 hours old, and of kids and lambs as soon as they can stand, around a day after birth. Researchers state that "a seeing animal will be able to discriminate depth when its locomotion is adequate, even when locomotion begins at birth" (5).

This brings up the question of whether fear of heights should be considered an "instinct," and what the definition of "innate" should be. Can something still be an instinct if you are not born with it? And can instincts exist on a continuum? For example, while all developmentally normal adults know that heights are dangerous, not all of them will show the same increase in anxiety symptoms when faced with them (or feel any anxiety at all). Did something go wrong in the development of adults who are not afraid of heights? Along this continuum, the fear can also become excessive and unreasonable in 2-5% of the population, becoming a diagnosable phobia called acrophobia (classified under anxiety disorders in the DSM-IV).

Other developmental examples support the idea that motor experience can lead to improvements in perception. Babies who have learned to crawl are better at finding hidden objects, and the longer a baby crawls in one session, the better spatial cognition they display during that session (6). Besides just crawling, it has also been shown that as babies get better at using their hands and fingers, they also get better at discriminating different properties of objects, like size, weight, texture, and temperature (7). Finally, as babies learn to control their heads, they show vast improvements in localizing a sound source. These studies show us that more complicated movement can improve perception in various instances and modalities.

This made me wonder about the sensory experiences of individuals who never gain independent control of their movement, or those who lose the ability later in life. Do they have deficits in their perceptual systems? I also wondered whether therapies involving movement could help improve perception in those same people. There are books sold online with physical activities that their authors claim will help keep your child from developing learning and behavioral problems (sometimes called "special physical education"). Another site claimed that 70% of kids with learning disabilities did not crawl but immediately learned to walk (8). Is it a two-way street, so that damaging motion always damages sensation and vice versa? Or, even, does improving one improve the other? This seems unlikely, since there are separate tracts in the spinal cord dedicated to sensory and motor functions.

In tackling this question of locomotion improving perception, we have seen that babies do not show a fear of heights until several weeks after they learn to crawl (or with experience moving with a baby walker) and there are also hints that crawling-onset age could be a factor. We also saw that in most cases, animals show fear of heights as soon as they gain locomotion, which is usually within one day. The concept of an instinct was discussed, and it appeared that the delay in this behavior in humans was due to how premature we are at birth relative to other animals, which results in a period of vulnerability for babies where they are able to crawl but not yet afraid of heights.

1) Berenthal, B. (1996) Origins and Early Development of Perception, Action, and Representation, Annual Review of Psychology, 47: 431-459, citing Campos et al. (1992)
2) Berkeley Infant Studies video, a video of a baby being tested on a visual cliff
3) Berkeley Infant Studies Lab, the lab's website with facts about the visual cliff
4) Richards & Rader (1991)
5) An article about the visual cliff, a good article about visual cliff research at Cornell on babies and animals
6) Berenthal, B. (1996) Origins and Early Development of Perception, Action, and Representation, Annual Review of Psychology, 47: 431-459, citing Kermoian & Campos (1988)
7) Berenthal, B. (1996) Origins and Early Development of Perception, Action, and Representation, Annual Review of Psychology, 47: 431-459
8) Crawlies website, an article on the benefits of crawling



Full Name:  Anne-Marie Schmid
Username:  aschmid@brynmawr.edu
Title:  Afterimages
Date:  2006-04-11 13:37:24
Message Id:  18983
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

When an image is looked at for a length of time (usually around 30 seconds) and then replaced with a white field, one type of effect called an afterimage can be seen. This type of afterimage is usually reported as being the negative of the image seen earlier; that is, all of the colors in the image have been replaced by their complementary colors in the afterimage (4). While the method referenced above is the easiest way to see an afterimage, afterimages can occur from a variety of stimuli, including looking at an object and then closing one's eyes.
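The "negative" relationship between an image and its afterimage can be illustrated with a simple color complement. The sketch below is a toy model of the perceptual report in RGB terms, not of retinal chemistry, and it simply inverts each color channel:

    # A negative afterimage replaces each color with its complement.
    # In an 8-bit RGB model, the complement inverts each channel.
    def complement(rgb):
        r, g, b = rgb
        return (255 - r, 255 - g, 255 - b)

    print(complement((255, 0, 0)))      # red    -> (0, 255, 255), cyan
    print(complement((255, 255, 0)))    # yellow -> (0, 0, 255), blue
    print(complement((0, 255, 0)))      # green  -> (255, 0, 255), magenta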
The common explanation given for an afterimage is that the photoreceptors (rods and cones) in the eye become "fatigued" and do not work as well as those photoreceptors that were not affected (the "fatigue" is actually caused by the temporary bleaching of the light-sensitive pigments contained within the photoreceptors) (1, 2, 4). This results in the information provided by the photoreceptors not being in balance, causing the afterimages to appear. As the photoreceptors become less "fatigued," which takes between ten and thirty seconds, the balance is recovered and the afterimage disappears (2).
While the explanation above may account for some afterimages, it should not be used to explain all of them. If the brain is the main way through which we view our world, then wouldn't it be just as responsible for afterimages as "fatigued" photoreceptors? Perhaps, instead of the cause being just a photoreceptor that has been temporarily bleached and so cannot react as quickly as the other receptors around it, the portion of the brain responsible for vision had adjusted to the image being focused upon and was expecting the image not to change, so that when the change did occur, the old image was still expected for the first few seconds, and thus the afterimage was formed (3, 6).
The involvement of the brain in the creation of afterimages would explain why there is such variety in the images reported: both positive and negative afterimages, afterimages that occur in a sequence (an alternation between positive and negative afterimages from a single stimulus), afterimages that change color, and afterimages that occur without a visible stimulus (4, 6). It is the afterimages that occur without a visible stimulus that provide the most convincing argument for the brain's involvement in the creation of afterimages.
When a subject is exposed to light from the green range of the visible spectrum, red afterimages have been reported even when the subject is unaware of the stimulus (such as when the subject's eyes are closed). This suggests the involvement of the portion of the brain responsible for vision in their creation, as the afterimages, when reported, are distinct, whereas the original stimulus was not distinct enough to be noticed by the subject (3, 5). Interestingly, when the experiment was carried out on subjects who were unable to perceive certain colors, such as those who were colorblind, a pupillary response to the aftereffect was generated, in the form of contractions. The subjects, however, reported seeing neither the stimulus nor the aftereffect. This suggests that both the photoreceptors and the brain play a part in the generation of an aftereffect (3, 5).
Unfortunately, the process by which an aftereffect occurs is not fully understood. While the common explanation given for the phenomenon is that it is created by "fatigued" photoreceptors, there is mounting evidence that these are not the only part of the body involved in the creation of afterimages, given the variety of types of afterimages that can occur from a single stimulus. While the involvement of the brain in the creation of afterimages is suspected, the exact nature of that involvement is currently unknown. Hopefully, further investigation into the nature of the brain's involvement in the creation of these images will occur, as it could lead to other breakthroughs in our understanding of how we perceive the world and ourselves.
Sources

1) "How Do We See Colors?" - an explanation of the different photoreceptors.
2) "Afterimages" - an example of a negative afterimage, as well as an explanation as to their occurrence.
3)"The unseen color aftereffect of an unseen stimulus: Insight from blindsight into mechanisms of color afterimages."
4) "Retinal Vision" - an explanation of different types of afterimages.
5)Sperling, G. "Negative Afterimage without Prior Positive Image." Science, New Series, Vol. 131, No. 3413. (May 27, 1960), pp. 1613-1614.
6) "Visual Illusions and Neurobiology" - explains how the brain may be responsible for certain afterimages.



Full Name:  Faiza Mahmood
Username:  Fmahmood@brynmawr.edu
Title:  Religion and the Brain
Date:  2006-04-11 14:07:30
Message Id:  18984
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Traditionally, modern science has treated religion with an attitude of apathy and indifference. In direct contrast to the reason and logic underlying our understanding of modern science, the foundation of religious and spiritual sciences lies in the existence of a reality beyond the comprehension of our senses. The nature of spirituality remains consistent, free from the constraints of time, faith, and culture. As a result, it has been suggested that such commonality of experience may in fact reflect "a common core that is likely a reflection of structures and processes in the human brain" (1).

In addition, researchers have posed the question of whether the human brain is programmed to believe in a higher power, and whether faith is an innate mental faculty or one that humans have developed (2). Eager to find answers to such questions, researchers in recent years have developed the field of neurotheology, the study of the neurobiological foundations of religion and spirituality.

Adherents of the world's many faiths share a common goal in their search for the existence of an entity greater than the self (2). Furthermore, the existence of such an entity implies a denial of any great significance attributed to the self. Indeed, the goal of many spiritual beings is to achieve a sense of oneness with such a higher being. At the center of most spiritual experiences is a heightened emotional state in combination with a finely honed sense of focus, free from extraneous sensory stimuli (1).

Such conditions lend themselves to a state in which one becomes detached from time, fear, and sense of self, only to be united with a greater entity (1). The desire for such an experience is prominent in almost all human societies; in fact, research has shown that such religious experiences act directly on the frontal lobes of the brain to promote optimism as well as creativity (1). Indeed, there is evidence to suggest that religious people are generally healthier and live longer lives. Research even suggests that regular prayer or meditation lowers blood pressure and heart rate, and decreases depression and anxiety (2).

In the 1950s, basic studies of the human brain focused on the brain's electrical activity; subsequently, researchers were able to appreciate a relationship between meditation and changes in brainwave activity. However, these early investigations were of limited capacity and were unable to provide information on the exact areas of the brain that were affected and, more importantly, why those changes occurred. Advancements in technology, including the availability of SPECT (single photon emission computed tomography) machines, have allowed researchers to examine live, functioning brains. In recent years, the field of neurotheology has used such technology to pinpoint which brain circuits become active during a spiritual experience, whether through a divine encounter, intense prayer, or sacred music (1).
At the University of Pennsylvania, Dr. Andrew Newberg and Eugene d'Aquili sought to identify the spirituality circuit of the brain by collecting data from Tibetan Buddhists and Franciscan nuns. Newberg and d'Aquili imaged the brains of their subjects while they were deep in prayer or meditation. For example, a Buddhist subject would typically be made comfortable on the floor of a room lit by a few candles. A string of twine would be placed next to the subject, and he would then proceed to focus his mind until he felt that his true inner self had surfaced, to the point where he felt "timeless and infinite" (1). Finally, when the intensity of the experience peaked, the subject would tug on the string, at which point a radioactive tracer would be injected into the subject, followed by a SPECT study. The SPECT machine detects the location of the tracer as it travels in the blood; as such, SPECT is able to track the flow of blood in the brain, where increased flow correlates with increased neuronal activity. Multiple trials on several subjects allowed the researchers to pinpoint which areas of the brain were being utilized, which has since enabled them to better explain how various religious rituals affect human beings (1).

Not surprisingly, the prefrontal cortex (an area of concentration and attention) was repeatedly lit up in all the SPECT images. However, what was impressive was a quieting of activity in the superior parietal lobe, location of the orientation association area (OAA) (3). This area is known to deal with the body's spatial orientation, as well as its perception of space and time. In particular, the left OAA produces a sense of the body's physical delimitation, and the right OAA informs the self with regard to the physical space occupied by the body (1). These areas are crucial to our spatial understanding, such that lesions in these areas can result in an inability to move from one corner of a room to another. The lack of activity in the OAA can probably be attributed to the lack of sensory input, which may be secondary to the intensity of focus required for meditation (1). Thus, the brain is unable to form boundaries between self and non-self and is given to "perceive the self as endless and intimately interwoven with everyone and everything" (4).

Additional research revolves around temporal lobe epilepsy. Studies have shown that certain religious experiences, specifically visions, can be mimicked by electrical stimulation of the temporal lobes. Temporal lobe epilepsy is the result of abnormal bursts of electrical activity in that area, resulting in extremely vivid visions and voices. Such epilepsy is rare, but researchers have suggested that transient, focused bursts of energy may be responsible for certain mystical experiences (1). To test this theory, Michael Persinger of Laurentian University fitted his subjects with helmets that created a magnetic field triggering electrical activity in the temporal lobes, resulting in what his subjects described as "an out-of-body experience, a sense of the divine" (1). As such, Persinger believes that religious experiences are the result of so-called "mini electrical storms" in the temporal lobes, storms that may be triggered by anxiety, hypoglycemia, fatigue, and personal crisis. Indeed, this provides an interesting explanation for the circumstances that often lead people to "find God" (1).

In light of such research, one may start to question whether religious experiences are merely a consequence of brain activity without any independent reality. However, Newberg insists that "it's no safer to say that spiritual urges and sensations are caused by brain activity than it is to say that the neurological changes through which we experience the pleasure of eating an apple cause the apple to exist." Therefore, one still cannot be sure whether spiritual experiences are a product of the brain, or whether the brain is just experiencing a spiritual reality. It seems that, even in light of the experiments carried out, the question of whether our brain creates God, or whether God has created our brain, is one debate that will not be settled anytime soon, if ever. What you believe is, in the end, a matter of faith.


References


1) Begley, Sharon. Religion and The Brain. Newsweek International, US Edition, May 7, 2001.

2) God on the Brain - Programme Summary

3) Exploring the Biology of Religious Experience

4) Newberg, Andrew and Eugene d'Aquili. Why God Won't Go Away. New York: Ballantine Books, 2001.



Full Name:  Beatrice Johnson
Username:  BESAIR@aol.com
Title:  THINKING and SEEING
Date:  2006-04-11 19:20:21
Message Id:  18986
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

THINKING and SEEING


Beatrice Johnson

What pulls us away from our thinking? What pulls us away from our seeing? Do we see our thinking? Do we think our seeing? Of course there are answers to these questions, but what about the thinking that we don't see, and the seeing that we don't think? The thinking that interferes with our thinking, the seeing that interferes with our seeing.

Thinking, or thought, is said to be the action of using one's mind to produce thoughts, or to convert symbolic responses to stimuli (1). In psychology, it is the process of forming mental connections or bonds between sensations, ideas, and memories (2). But I am not speaking of either of these two definitions. There is a thinking that goes beyond both of these limitations: the type of thinking that comes not as a process, but as a finished product, and it is welcomed. It doesn't come often, but when it does, it is welcomed. Like something no one else has thought of, and you have that feeling.
I read something of that nature in one of your papers, when you wrote:
What I REALLY like is the light that comes into students' eyes when they suddenly realize that something we're talking about makes a lot of sense, to them, and not only about whatever it is we're talking about, but also about a lot of other things they've wondered about and never quite understood. One reason I like that light so much is that I identify with it: I really get a kick out of that feeling when it happens to me, and sometimes it does, because something I've said to a student makes the student think of something I've never thought of before. A bigger reason, though, why I like that light is that it means those students are THINKING. (3)

I must agree that is the way I feel. But I never connected it with anyone else. And I never thought of it as a process. I always thought of it as something more inside of me. Maybe like a gift.

Seeing can be seen in the same way. Sometimes you can look at something, and see what it is and know what it is, but you also see more of what it is. I don't know if that makes sense, but it's what I mean. It's not like when two people look at the same thing and see something different; no, this is completely different within that one person. They see what they see, but they see more. I guess what comes into play here is the memory, which takes me back to point (1). But if it is something or someone I have never seen before, would the memory still play a part? In that case my memory would play a part in what I am seeing, along with the retina. The more I think about this, the more I try to write and still make sense of what I am thinking, in a way that I can be understood. I may not be writing what I am thinking. Thinking tends to get past me before I think it, but evidently not. I just think fast, which hinders my writing and my being understood.
I sat down to write about thinking and seeing, and it seems like I am doing both of them. Does thinking make me see better, or does seeing make me think better? I can't get around this. It is not just physical seeing that I am talking about, or is it? It's the separating of the two. I close my eyes and I still see; if I close them real tight, I see the universe. See, there I go thinking again. I even tried to find a meaning for seeing, which would have been similar to the two I found on thinking, but what good is the meaning if I don't understand what I am looking up, seeing? The two on thinking I could comprehend, and they made sense to me. But the process of seeing is a little more complex, or maybe it's not; I just don't have the knowledge of it.
So what is it that pulls us away from thinking? Is it other thinking, or just some distraction? What is it that pulls us away from seeing? Is it more seeing, or just some distraction? Are thinking and seeing distractions in themselves? A blind man does not need to see to think, and a seeing man does always think.
Sources
1. Britannica.com
2. Britannica.com
3. serendip.brynmawr.edu/sci_edu/problem.html



Full Name:  Andrew Garza
Username:  Agarza@haverford.edu
Title:  How can personality be classified?
Date:  2006-04-11 21:14:52
Message Id:  18989
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Many people think of personality as the combination of qualities that make people who they are. Since the beginning of the earliest large human civilizations, people have been hypothesizing about what factors influence personality and how to categorize different personalities (1). Not only have these understandings about personality helped satiate people's curiosity about how the world works, but they have also served to help people predict others' behavior. Modern science has addressed some of the ancient personality-related riddles by showing us that both past experiences and hereditary genetic factors play highly important roles in shaping our personalities (2). Now the challenge has become to find out which particular factors influence the formation of different personalities and how these factors interact with each other.

One of the most common popular assumptions about personality seems to be that it is stable. If someone is considered to be a nice guy, he can be counted on to treat others respectfully and to not be rude, right? People who understand personality through the lens of the trait approach would strongly agree with the assumption about the stability of people's personalities. The trait approach to understanding and classifying personality basically says that human personalities are composed of a set number of traits that become the "ingredients" of personality. Everyone exhibits the individual traits to varying degrees, and the combinations of these characteristics form our personalities. Within the trait approach there are a variety of hypotheses that try to explain personality, but the framework that is supported by the widest body of evidence is called the Big Five (3). The Big Five is a taxonomy that classifies personality types based on the extent to which people exhibit each of the following five factors: extroversion, neuroticism, agreeableness, conscientiousness, and openness to experience. Proponents of the Big Five argue that the traits are so universal that a study by Weiss, King and Figueredo has even shown them to influence the personalities of non-human animals like chimpanzees (3).
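A Big Five profile is easy to picture as a simple data structure: five numbers, one per factor. The sketch below (scores invented, scale arbitrary) represents two hypothetical profiles and measures how far apart they are, which, under the trait approach, is all that distinguishes one personality from another.

    from dataclasses import dataclass

    @dataclass
    class BigFive:
        extroversion: float
        neuroticism: float
        agreeableness: float
        conscientiousness: float
        openness: float

    # Invented profiles on an arbitrary 0-1 scale.
    alice = BigFive(0.8, 0.3, 0.7, 0.6, 0.9)
    bob = BigFive(0.2, 0.6, 0.5, 0.9, 0.4)

    # Under the trait approach, two personalities differ only in where they
    # sit along these five dimensions.
    def distance(a: BigFive, b: BigFive) -> float:
        pairs = zip(vars(a).values(), vars(b).values())
        return sum((x - y) ** 2 for x, y in pairs) ** 0.5

    print(f"profile distance: {distance(alice, bob):.2f}")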

However, one of the major limitations of the Big Five and other prominent trait approach evaluation methods is that they have limited power to actually predict people's behavior in specific circumstances. It has been shown that predictions of people's behavior based on Big Five evaluations of their personalities correlate with their actual behavior only 30% to 40% of the time (3). Although this range does show that the model has some power to predict behavior, it can't be considered to anticipate behavior very strongly. McAdams (4) suggests that one reason why the model doesn't have strong predictive power is that the categories are so broad that they can't account for all the important nuances of personality, and thus they can't predict behavior.

On the other hand, Mischel explains the low predictive power by emphasizing the importance of another factor that I think is often missing from the popular conception – and certainly to an extent the academic conception as well – of what personality is. He states that the main explanation for why the Big Five model fails to account for so much behavior is that behavior isn't just determined by people's traits, but rather also by how people react to the particular moment-by-moment situations in which they find themselves (3). One experiment by Hartshorne and May (5) demonstrates that at least some aspects of personality are much less fixed than we may have originally thought. The researchers gave tests designed to measure honesty to about 11,000 school children between the ages of eight and 16. The tests – which focused on sentence completion and math – were administered to the children in a variety of settings. Sometimes the kids took them in supervised classrooms, while other times they were given the answer key and asked to grade their own tests under minimal supervision. Still other times, they were given the test to take at home and bring back the next day. The idea behind the study was that the researchers could measure the amount of cheating that was taking place by comparing the test scores of exams that were strictly administered (with little room for cheating) versus tests taken in situations in which children had ample opportunity to cheat. It turned out that the scores in situations where cheating was easy were on average about 50% higher than the results produced in situations where the kids were strictly supervised. The researchers' fascinating analysis of the outcome reveals that there weren't clearly defined groups of cheaters and honest students. Many students cheated at least a little bit, but they often did it in different circumstances. For instance, some were more likely to cheat on math tests taken at home than on similar tests taken under loose supervision at school, while the reverse was true for other students. So the way a student acted under one set of circumstances wasn't necessarily an accurate predictor of how he or she would act in another context (5). The key points here are 1) that personality is not quite as stable across a variety of situations as many of us might think it is and 2) that personality is not only determined by past experience – which is what I think people often mean when they use the term "experience" – and genes, but also certainly influenced by situational conditions. It is more than possible for the same person to exhibit two totally different behaviors when that person is faced with different situations.
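The logic of the Hartshorne and May comparison can be sketched in a few lines. The scores below are invented (they are not the study's data) but are arranged so that take-home scores run about 50% above supervised ones while each student's degree of inflation varies by context, which is the pattern the researchers reported. (statistics.correlation requires Python 3.10 or later.)

    import statistics

    # Hypothetical scores for five students under strict supervision and in
    # two low-supervision contexts. All numbers are invented for illustration.
    supervised = [40, 55, 35, 60, 50]
    take_home = [62, 80, 50, 92, 77]
    loose_school = [52, 80, 39, 67, 70]

    # How much higher do scores run when cheating is easy?
    inflation_home = [th / s - 1 for s, th in zip(supervised, take_home)]
    print(f"mean take-home inflation: {statistics.mean(inflation_home):.0%}")

    # Cross-situation consistency: does the student who inflates most at home
    # also inflate most at school? A correlation near zero means behavior in
    # one context predicts behavior in another poorly.
    inflation_school = [ls / s - 1 for s, ls in zip(supervised, loose_school)]
    r = statistics.correlation(inflation_home, inflation_school)
    print(f"cross-situation correlation of cheating: r = {r:.2f}")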

One reason why people might be naturally inclined to think about personality through the lens of the trait approach rather than giving strong weight to situational factors as well is that people often make the Fundamental Attribution Error (FAE) (5). The FAE says that people will generally overestimate the importance of character traits in explaining other people's behavior and underestimate the importance of situational factors. At the same time, we also have the tendency to attribute our own actions, especially if they are negative, more to situational factors than to personal traits. So for instance, if you are late to an important committee meeting you would probably attribute that tardiness to traffic or something external, while people who see you at the meeting would be more likely to think, "Wow, he's less responsible than I had thought". But if you were the person sitting in that meeting when somebody else came in late, it is likely that you would make the same assumption – "Gosh, that's a tardy person."

Given the fact that situational factors are crucial in predicting how a person will act on a certain occasion, it is fundamental that situational influences be better integrated into a personality model that also includes traits. Mischel and other researchers have noticed that, although people do act differently in various situations, there are also patterns for how people act in certain kinds of situations (3). In order for a truly comprehensive theory of personality to emerge, more research will have to be done to determine which combinations of attributes are likely to make people act a certain way in a given situation. Clearly it would not be possible to create a taxonomy that shows how people with different combinations of Big Five traits would act in every possible situation, but researchers could try to find patterns of behavior across various situations and group some of these situations into larger categories. For instance, openness and conscientiousness are more important than extroversion, agreeableness and neuroticism when trying to predict how neat someone's room and work area will be (6). The ideal goal would be to reach the point where situational and trait factors are so well understood that people's dispositions to act a certain way in given situations can be understood through the framework of traits. Research of this nature will undoubtedly also shed light onto other important issues related to personality like the extent to which the I-Function operates similarly and differently for people with various combinations of traits.

Bibliography

1) Survey of Beliefs about Personality, Ellen Whyte describes some of the past and present beliefs about how personality works.
2) Web Paper, This paper gives good examples of beliefs about personality that rely on complete environmental causality and total genetic determinism. It also cites an interesting and solid study that supports the notion that personality is determined by a mix of experience and genes.
3) Gleitman, Fridlund & Reisberg. "Personality." Psychology. Sixth Edition. New York: W.W. Norton & Company, 2004. This psychology textbook offers an excellent overview of a variety of personality theories, including the Big Five system.
4) Academic Critiques of Big Five, Several analyses of the Big Five.
5) Gladwell, Malcolm. The Tipping Point: How Little Things Can Make a Big Difference. New York: First Back Bay, 2002. This book offers several good examples of how situational factors play a large role in influencing how we act.
6) APA Article, Beth Azar addresses the interesting relationship between the tidiness of bedrooms and certain characteristics of people.



Full Name:  Brittany Peterson
Username:  bpeterso@brynmawr.edu
Title:  Why Is Autism on the Rise?
Date:  2006-04-12 00:47:40
Message Id:  18993
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

I have been interested in the topic of autism since the first time I heard of it, but my interest became more personal when my younger cousin Ethan was diagnosed at the age of two. Since then, the whole family has become interested in the topic and we have all tried to familiarize ourselves with the causes and treatments so that we can give Ethan the tools he needs to live up to his full potential.

Autistic individuals usually display delays or permanent problems with social interaction, specifically interactions such as eye contact, emotional communication, and playful interaction and conversation. Individuals suffering from autism may also display repetitive language and behavior, while remaining unresponsive to other stimuli in their environments. They may be aggressive and may react badly to changes in their routines. (3)

Recently, there has been a lot of controversy regarding the possible causes of autism and the rise in autism diagnoses. It is my belief that the main cause of the recent sharp rise in autism cases is the presence of mercury in the environment, in tandem with certain genetic anomalies. Some of the mercury to which humans are exposed is present in fish or in emissions in the air. Recently, however, it has come to light that many vaccines given mainly to children contain mercury as well, and that the levels of mercury to which many children are routinely exposed through vaccination regimens greatly exceed safe levels. A wealth of evidence indicates that it is this form of exposure to mercury that has in large part caused the recent rise of autism.

Many vaccines contain a preservative called thimerosal, which is 49.6% mercury and which becomes ethylmercury in the human bloodstream. Ethylmercury then crosses the blood-brain barrier and causes neurological damage. (5) Mercury often has a stronger effect on children than on adults; the most common effects of mercury exposure in children include damage to the digestive and neurological systems, and to the kidneys. (10) Vaccines of this type have been used since the 1930s, (2) despite the fact that at least one major drug company, Eli Lilly, has known that vaccines containing thimerosal have harmful effects since the early 1940s. (4)

One vaccine that has in the past contained thimerosal is the measles-mumps-rubella vaccine given to small children. A study in Japan (Kawashima et al.) examined the peripheral mononuclear cells of three autistic patients; there was evidence of measles virus in these cells, and that virus was found to be from a vaccine and not from the disease itself. There was no evidence of the same virus in the peripheral mononuclear cells of non-autistic control patients. Furthermore, an American study (Wakefield et al.) reported to Congress in April 2000 that 24 out of the 25 autistic children studied had measles virus identified in the cells of their gut, while only 1 out of 15 of the control, non-autistic group of children had the same result. (7)

There are other autism-related figures that are even more striking. Some of the best data on autism cases comes from California; this state has kept excellent records of autism cases and of how the numbers have changed over time. Between 1987 and 1998, the number of autism cases rose by 273 percent. (6) In the late 1980s and early 1990s, the FDA and CDC began to investigate possible links between thimerosal-containing vaccines and autism. Eventually, they began phasing in new rules requiring the removal of thimerosal from vaccines. (1) The number of new cases of autism reported dropped by about 100 each year between 2002 and 2004, and the trend seems to have continued, although full data for the most recent year are not yet available. (9)

It should be noted as well that since the United States exports vaccines to other countries, especially those in the Third World, the presence of thimerosal in our vaccines has had an effect around the world. Autism was practically unheard of in China before United States manufacturers sent thimerosal-containing vaccine there in 1999; now there are more than 1.8 million autistic individuals in that country. Enormous increases in diagnoses of autism have also been reported in countries such as Nicaragua, Argentina, and India since U.S.-manufactured vaccines containing thimerosal have been distributed there. (11)

Another case for a link between thimerosal in vaccines and an increase in autism comes from Lancaster, Pennsylvania. In the Amish community there, children rarely receive vaccinations. Statistically, there should be about 130 autistic children in this community given its size, but there are instead only four. Even more strikingly, all four of these children were exposed to mercury in some way. One child had been exposed to emissions from a power plant, one was adopted from outside the community and so had been vaccinated normally, and the other two were born in the community but their parents had for some reason had them immunized normally. (11)

Why are drug companies so attached to using thimerosal? The answer is simple: to save themselves money. The preservative allows companies to package their vaccines in larger vials that can be used for more than one dose; that is, it gives the vial enough protection against germs and aging that it can be opened and used multiple times. These larger vials are half as expensive as single-dose vials. (11)

Another interesting point to consider is that mercury poisoning and autism share many of the same symptoms. In a report to the FDA on autism in 2000, a group of parents, researchers, and doctors led by Sallie Bernard made this point very strongly. "Defining characteristics of autism - social withdrawal, OCD behaviors, and loss of or impairment in language...[and] sensory disturbances" (7) are also present in cases of mercury poisoning. This report also stated that some biological problems associated with mercury poisoning are also associated with autism, such as problems with the immune system, cerebellum, amygdala, and hippocampus, and that mercury has been found to be more toxic to males than to females. The male-to-female ratio in autism is extremely high, about four males for every one female. The report went on to note that autism usually emerges in early childhood, a period throughout which many vaccines are administered. (7)

There is also evidence of a genetic component to the onset of autism and similar disorders. One function of the MTHFR gene is to affect the way the body disposes of toxic metals introduced into the bloodstream. A particular mutation in this gene, called A1298C, impairs that function, so mercury and other metals build up in the systems of individuals who carry it, making them even more likely than individuals without the mutation to suffer the harmful effects of overexposure to mercury. (8) This can cause autism and other, similar neurological disorders, and it has often been found that autistic people have a family history of similar problems. (7)

I was motivated to research and write on this topic for personal reasons, but I have found that in researching it, the analytical, scientific side of my brain played its part, putting the facts I gathered into a cohesive whole. Viewing all of this information together, from the standpoint of a biologist, a chemist, and finally a logical human being, I cannot honestly come to any conclusion except that the rise in autism in recent years is, most likely, very tightly linked to the presence of mercury in thimerosal-containing vaccines. Thankfully, as I have stated, progress has already been made in removing this preservative from vaccines, and the results of this change have already been seen. However, there is a long way to go in eradicating this threat. I hope that in researching this topic I have furthered that goal.

Works Cited

1)"A Timeline of the Thimerosal Controversy", from Mother Jones. Outlines events surrounding the controversy, from the first warnings of trouble to restrictions placed on thimerosal by government agencies.

2)"Comparison of Blood and Brain Mercury Levels in Infant Monkeys Exposed to Methylmercury or Vaccines Containing Thimerosal", from Safe Minds.
Outlines a study comparing two types of mercury exposure in monkeys and exploring the particular effects of the type of mercury found in thimerosal.

3)"Defining Autism", from The Autism Society of America Discusses common symptoms of autism.

4)"Frist, Hastert Pull Last Minute Maneuver to Protect Vaccine Manufacturers From Liability", from Safe Minds. Discusses efforts by pharmaceutical companies and sympathetic politicians to protect themselves from this controversy.


5)"Health Effects of Mercury", from Mercury-Free Minnesota. Health effects of mercury exposure.

6)"Is autism on the rise?" Inside Autism. Some statistics regarding the recent rise in autism and some discussion of the ways it is diagnosed.

7)"Measles virus isolated in autistic children". Autism Research Review International, published by the Autism Research Institute Summarizes report by the group led by Sallie Bernard, as well as studies done which found measles vaccine virus in autistic children.

8)"Program for Methylation Support", from the Neurological Research Institute. Information on genetic mutations affecting the processing of toxic metals such as mercury and some of the potential outcomes of having this mutation.

9)"State Shows Decline in Autism Diagnosis Rates, Possible Sign of Nationwide Drop", from California Healthline. Article discussing the recent drop in autism in California.

10)"ToxFAQs for Mercury", from the Agency For Toxic Substances and Disease Registry Explains why and how mercury is toxic to humans.

11)"Deadly Immunity", from the Common Dreams News Center. On the harmful effects seen in areas where thimerosal-containing vaccines are administered as well as the reasons drug companies want to use thimerosal in the first place.



Full Name:  Liz Paterek
Username:  epaterek@brynmawr.edu
Title:  Bipolar Disorder and the Creative Mind
Date:  2006-04-12 16:20:13
Message Id:  19000
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

There is an old stereotype that artists are moody individuals prone to fits of depression and madness. Is this little more than an old wives' tale? Many artists and writers speak of periods of increased mental fluidity and lifted mood (4). Poets such as Edgar Allan Poe and Emily Dickinson, novelists such as Mary Shelley and Leo Tolstoy, and artists such as Michelangelo and Vincent Van Gogh have all been reported to show signs of mental instability (2). How common is depression in artists compared to other creative professions? If there is a trend, is it because this bipolar nature generates a new way of seeing the world? Are the arts a refuge for the mentally unstable? Is artistic genius linked with madness?

Major depression strikes as many as 5% of the general population, often later in life, and is more common in women. Bipolar affective disorder, which involves phases of mania and depression, is known to strike 1% of the population, with similar numbers of men and women (1b). Mania is expressed in periods of extreme productivity, grandiosity, hyperactivity, and irritability lasting for at least a week. Hypomania is a less severe form of mania that is also involved in manic depression. Major depression must last for periods of at least 4 weeks and is characterized by inability to concentrate, feelings of worthlessness, and fatigue (5), (3).

Memory and creativity are related to mania. Clinical studies have shown that those in a manic state rhyme, find synonyms, and use alliteration more than controls. This mental fluidity could contribute to an increase in creativity. Moreover, mania brings increases in productivity and energy. Those in a manic state are more emotionally sensitive and show less inhibition about their attitudes, which could foster greater expression (3). Studies performed at Harvard looked into the amount of original thinking used in solving creative tasks. Bipolar individuals whose disorder was not severe tended to show greater degrees of creativity (5).

Bipolar disorder is not the first condition to be linked to creativity. During the 1960s, it was alcoholism. Before that, many artists, including Keats, Shelley, and Poe, were thought to have fatal diseases such as tuberculosis. However, these diseases are all linked by their symptoms. Tuberculosis has manic and depressive phases, which gives credence to the idea that artists experience mood swings (2). Alcoholism is likewise linked to mania and depression (3).

There have been studies pointing to a link between manic depression and left-brained talents. When Nancy C. Andreasen of the University of Iowa questioned 30 writers, she found that at least 80% had had at least one episode of major depression, mania, or hypomania, compared to 30% of controls (2), (5). Another researcher, Rothenberg, who has spent 30 years studying creative individuals, objects to her control groups and her methods (2). Later, when Kay Redfield Jamison studied 47 writers, painters, and sculptors, she found that 30% had been treated for bipolar disorder (2), (5). Half of the poets studied had been treated for bipolar disorder (5).
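
Andreasen's 80-percent-versus-30-percent gap can itself be checked for statistical significance. A minimal sketch in Python, assuming, purely for illustration, a control group of 30 matched to the 30 writers (the control group's size is not stated here):

# Illustration only: the control group size (30) is an assumption.
from scipy.stats import fisher_exact

# 2x2 table: rows are writers/controls, columns are affected/unaffected
table = [[24, 6],    # 80% of 30 writers had at least one mood episode
         [9, 21]]    # 30% of an assumed 30 controls did
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.4f}")
# A p-value well below 0.05 makes chance an unlikely explanation, though
# it cannot answer Rothenberg's methodological objections.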

While these samples are small, and it is difficult to judge the prominence of many living individuals, there is a trend. Retrospective diagnosis of figures from the past has been attempted to help confirm these data. Such diagnosis is based on secondhand information and has its flaws (2). The way individuals are portrayed by others will be scattered, because no observer knows all aspects of a person's life. All people have quirks; if one wanted to see insanity, it would be easy to exaggerate them. However, to the extent these data are supportive, they cement the trend.

Artists of past generations have been shown to have suicide rates 10-20 times higher than the general population and higher-than-average rates of hospitalization for depression (1b). Another researcher, Ludwig, delved into depression in prominent 20th-century individuals on the basis of 2,200 biographies of 1,004 individuals. He showed that while 11% of creative individuals suffered from mania, only 1% of the general population did. He also showed that 46 to 77% suffered from depression, almost twice the rate in the general population (1c). He found that accomplished individuals in other fields, including science, had only a 3% rate of depression. He believed that biographers were less likely than psychiatrists to conclude that a person had a mental disorder, and that clinical histories, being essentially autobiographies, are the most inaccurate way of understanding a person (2). Despite any flaws in how these studies were performed, the trend persists. Therefore it is important to ask why this trend exists.

While Jamison and Andreasen argued that bipolar disorder enhances creativity, Ludwig argued that individuals who are creative but manic are more likely to find a home in art than in other fields (2). According to Ludwig, the sciences require organization, preparedness, and levelheadedness. An artist could draw on the lack of these traits for inspiration; a scientist could not (2).

There is also the question of different forms of intelligence. A scientist may not be an outstanding poet, nor is an outstanding poet likely to be great at physics. At an elite level, talents are often focused in a specific field. This is at least somewhat suggestive that different brain connections may create different talents. Therefore a genius may not be able to choose to move between fields on the basis of personality, as Ludwig suggests.

The Jamison and Andreasen argument is shaken by the fact that around half of all great creative minds have not been bipolar; therefore manic phases cannot be what caused their creativity. They have no data to suggest the directionality of this link. It could just as easily be that artistic talents generate a predisposition to bipolar symptoms as that being bipolar generates artistic abilities. The latter makes less sense, both because fewer bipolar individuals are artistic than artists are bipolar, and because those with severe mania are less creative than those with mild forms. While drug studies would seem to support the idea that creativity decreases without manic phases, it is better to recognize that these drugs have broad effects, not all of which may be directly related (2), (5).

It is clear that being bipolar does not mean that one will necessarily be creative. It is also clear that being bipolar is not a requisite for genius. However, Hagop S. Akiskal found that 9-10 percent of the bipolar patients he studied with less severe symptoms were artists and writers (1a). The mind of a left-brained genius could be more vulnerable to mood swings, which manifest similarly to ordinary bipolar symptoms. The symptoms would therefore exist not in all geniuses but in many. The connections in the brain that produce this genius may differ from those that produce right-brained talents. This would explain why geniuses in other fields do not show the same symptoms. It would also preserve the link between mania and creativity that Ludwig's argument does not.

Left-brained creativity could be a vulnerability factor for developing symptoms of bipolar disorder. Studies present a seemingly clear link between bipolar disorder and artistic creativity, and this would account for why individuals in other creative fields, such as science, do not show the same results. Because talent is often focused, it is unlikely that a manic individual simply chooses art. Because not all bipolar minds are creative but many creative minds are bipolar, it seems likely that left-brained creativity generates a vulnerability to bipolar symptoms, rather than the reverse. Because it is only a vulnerability factor, many people will have the talent without ever suffering the symptoms.

Works Cited:
1a) Creativity and the Troubled Mind

1b) Manic-Depressive Illness and Creativity

1c) Moods and the muse

2) That fine madness - manic depression is latest mental illness popularly linked to artistic genius - special issue: The Science of Creativity

3) Analysis of Relationship Between Manic Depression and Creativity

4) The Link Between Mental Illness and Creativity

5) Bipolar Disorder and the Creative Genius



Full Name:  Stefanie Fedak
Username:  sfedak@brynmawr.edu
Title:  Tom Cruise Weighs in on Pregnancy, Scientology, and Pseudo-Sciences
Date:  2006-04-14 21:38:42
Message Id:  19040
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

"When you talk about postpartum, you can take people today, women, and what you do is you use vitamins. There is a hormonal thing that is going on, scientifically, you can prove that. But when you talk about emotional, chemical imbalances in people, there is no science behind that. You can use vitamins to help a woman through those things."

-Tom Cruise on Brooke Shields' Use of Paxil for her Postpartum Depression (1)

Tom Cruise most recently starred in the Summer 2005 thriller "War of the Worlds," but on his barrage of press junkets he engaged in a war of words regarding Brooke Shields' use of Paxil for her Postpartum Depression. Shields, in a July 2005 "New York Times" Op-Ed contribution aptly titled "War of Words," responded publicly to Cruise's assertions that women can combat Postpartum Depression by using vitamins and that psychiatry is nothing more than a "pseudo-science". (1)

Cruise famously attacked Shields, who authored a book on Postpartum Depression titled "Down Came the Rain: My Journey Through Postpartum Depression," and "Today" show host Matt Lauer for supporting the use of doctor-prescribed medications for the treatment of depression and ADD. (2) Cruise, a very vocal Scientologist, and fiancée Katie Holmes are now expecting a child of their own, and I have heard innumerable gossip column watchers, Page Six readers, and casual observers exclaim, "I hope Katie Holmes gets Postpartum Depression, and then Tom Cruise will understand!" While I wouldn't wish a serious disorder like Postpartum Depression on anyone, the question begs to be asked: if Katie Holmes, or any pregnant woman, were to suffer Postpartum Depression, would Tom Cruise's recommendation that she use vitamins to resolve the issue be correct?

There are five recognized mood disorders that pregnant women and new mothers may experience. These disorders fall on a spectrum of severity, with the baby blues being the least severe and postpartum psychosis the most acute. Roughly 80% of women report experiencing the baby blues, with onset generally 1 to 3 weeks after birth. The symptoms include weeping, anxiety, feelings of dependency, and general mood instability. (3) Birth is both a physically and emotionally taxing ordeal, and the weeks that follow are often a difficult period of transition for the new mother and her spouse or partner. As a woman's hormones rebalance, the baby blues will fade. Socializing with other moms, setting aside personal time, and receiving adequate support from family and friends will help women struggling postpartum, without any need for medical intervention. (3)

Women may also experience depression or increased anxiety, obsessive-compulsive disorder, and panic disorders during or after pregnancy. Studies show that 15% to 20% of mothers suffer from depression or anxiety, much like Brooke Shields. The onset of symptoms may be rapid, but generally it is gradual, and the common course of treatment is psychotherapy combined with medication and the help of support groups and family. (3) Obsessive-compulsive symptoms and panic disorders are slightly less common, occurring in anywhere between 3% and 5% of new mothers. The course of action for these illnesses is much the same: a combination of psychotherapy and medication in closely monitored dosages. (3) But what about vitamins, as Tom Cruise suggests? Is there any evidence showing that all-natural treatments are effective against Postpartum Depression?

Studies conducted on the use of vitamins and minerals in the treatment of stress, depression, and general anxiety show that there is some credibility to Cruise's assertion that vitamins can be beneficial in the treatment of mood disorders. In a number of publications, B vitamins have been repeatedly singled out as a natural alternative to prescription medications. B vitamins facilitate the function of neurotransmitters, much as antidepressants do, allowing for regulation of moods and emotions. (5) B vitamins help the body produce a naturally occurring compound known as SAM-e (S-adenosylmethionine), which in turn helps produce serotonin, dopamine, and norepinephrine. Increased levels of B6, B12, and Folic Acid counteract deficiencies in the neurotransmitters that regulate mood, improving mood and productivity. (4) However, alterations in diet and in vitamin and mineral intake are typically not sufficient for all patients experiencing depression, and more rigorous, traditional courses of medication should be followed. (5)

As we have discussed in class, there are no conclusions or right answers in science, only ways of getting closer to the truth. Perhaps vitamins do have beneficial effects for some depressed individuals; however, firsthand accounts and detailed scientific studies tell tales of women who desperately need psychiatric intervention and without it put themselves and their children in danger. In Shields' "New York Times" Op-Ed she recalls, "I wasn't thrilled to be taking drugs. In fact, I prematurely stopped taking them and had a relapse that almost led me to drive my car into a wall with [my daughter] Rowan in the backseat." (1) In the most serious cases of Postpartum Depression, a woman may experience psychosis, as Andrea Yates did when she killed her five children in 2001. Andrea had been taking antidepressants, and her psychiatrist suspended their use just days before she drowned each of her children in the family's bathtub. (4)

Would Andrea Yates have killed her children if she had continued taking antidepressants? Could Brooke Shields have avoided the pitfalls of Postpartum Depression through increased vitamin and mineral intake? These questions will likely never be answered. What is known, however, is that antidepressants appear to be the most effective treatment available to women who struggle through Postpartum Depression. Frankly, if I were Katie Holmes, until clinical trials and substantial evidence suggest otherwise, I would put my faith in medicine and not seek help from an increase in Folic Acid intake.

1) Brooke Shields' Op-Ed from The New York Times, published July 2005
2) Transcript from NBC's "Today" Show with Matt Lauer, aired June 2005
3) A Brief Introduction to Postpartum Illness, authored by Shoshana S. Bennett PhD., 2003
4) Synopsis of the Andrea Yates Case, courtesy of the Court TV Crime Library
5) BioNeurix review of Amoryn, an all-natural treatment for depression
6) "Altering the Brain's Chemistry to Elevate Mood", an article by Brown, Gaby, and Reichert



Full Name:  courtney moore
Username:  cmoore
Title:  Weighing In: Costs and Benefits of Appetite Suppressants
Date:  2006-04-18 10:28:50
Message Id:  19083
Paper Text:
<mytitle>

Biology 202
2006 Second Web Paper
On Serendip




The adage "You can never be to rich or too thin" seemed to go out of style in the 1960's. A generation of feminist bra-burners tried to unseat the oppressive standards established by a patriarchal domination, and civil rights activists fought for racial and economic equality in the face of cultural hegemony and social stratification. However, despite the fight to divert focus from the homogenizing standards of aristocratic white culture, the pressure to be thin is as prevalent as ever in today's society. At the same time, obesity rates continue to rise, earning America international accolades as "the world's fattest nation." As a deluge of media messages equating thinness with beauty and success amplifies the desire to fit a standard body shape and size, consumers turn to medical technology for help losing weight.

After fad diets and ambitious exercise plans fall through, more and more Americans choose pharmaceutical appetite suppressants to shed ten, twenty, fifty, or up to hundreds of pounds. These drugs are clinically prescribed for patients suffering from "obesity," but many are also available over-the-counter for general weight loss. However, many critics claim, "Nowhere has the question of risk versus benefit come under greater scrutiny than with anorectics," (1) as the potential side effects of anorectics may pose dangerous and even fatal health risks. How much will Americans pay for a size six figure?

Anorectics, also known as anorexigenics or appetite suppressants, are substances that reduce the desire to eat. These medications are generally stimulants of the phenylamine family, which work by increasing the neurochemicals that affect mood and appetite: serotonin and the catecholamines. (2) The most notorious phenylamine is amphetamine, or speed, which was sold commercially as an appetite suppressant until the late 1950s, when it was outlawed in most parts of the world due to concerns regarding drug abuse. (1) Physicians no longer prescribe amphetamine, but some of its derivatives, also classified as anorectics, remain on the market. (4)

The multi-purpose nature of commercial anorectics complicates questions of efficacy and desirability. Appetite suppressants can be prescribed for clinical obesity, which is often attributed to genetic factors, or for casual weight loss, instigated by a personal decision on the part of the patient; some are even prescribed for mental health problems. For example, sibutramine inhibits the reuptake of monoamine neurochemicals such as noradrenaline, serotonin, and dopamine, performing the same function as many current antidepressants. Furthermore, "in addition to its appetite suppressant effect, sibutramine increases thermogenesis and fatty acid catabolism," (1) fighting obesity by jointly increasing metabolic functions and reducing appetite. While these functions can certainly induce weight loss, they are accompanied by a range of negative side effects.

The use of prescription anorectics decreased during the 1970s and 1980s as the public became more aware of the very real dangers these medications pose. However, prescriptions for phentermine and fenfluramine ("phen-fen") skyrocketed in the 1990s, after a small but well-timed study demonstrated phen-fen's efficacy in treating 121 obese individuals. (1) Although fewer than one third of the patients completed this study, and most regained weight during its latter stages, the findings paralleled a growing concern with American obesity and fueled an anorectic frenzy.

Despite the recent popularity anorectics enjoy, the purported benefits come at a high cost. Principal adverse effects include increased heart rate, increased blood pressure, sweating, constipation, insomnia, excessive thirst, lightheadedness, drowsiness, stuffy nose, headache, anxiety, and dry mouth. (2) This extensive list is complemented by the grave possibility of drug addiction during treatment and a similarly somber likelihood of depressive tendencies upon discontinuation. (4) While weight loss is often linked with lower cholesterol, such a boon must be weighed against disadvantages such as pulmonary hypertension and valve defects. (1) Many patients using anorectics experience a racing heartbeat or related cardiac complications, again detracting from the appeal anorectics initially present.

Furthermore, many studies show that the benefits anorectics offer may not be as dramatic or lasting as drug manufacturers claim. Patients often experience a weight-loss plateau after 6 months of taking a weight-loss medication, and many regain significant amounts of weight upon terminating treatment. (2) It is unclear whether this leveling off is due to developed tolerance or to the medication reaching its limit of effectiveness; in any case, most anorectics are intended for use as a short-term treatment for people with obesity. "Appetite suppressants can help you to lose weight while you are learning new ways to eat and to exercise," (5) but changes in eating habits and activity level must be developed and continued to ensure long-term weight loss.

Critics of the pharmacomania accompanying commercial anorectics caution, "appetite suppressants typically affect hunger control centers in the brain. However, hunger is not the only trigger for eating." (6) Our instant-gratification society relies on technology for quick-fix solutions to deeper individual and societal problems. Weighing costs and benefits, it seems that anorectics fail to provide the ticket to health and happiness initially promised. The prevalence of appetite suppressants thus ought to be accompanied by a close examination of societal standards of both body weight and general health. The polarized division between obesity and obsessive eating habits indicates a fundamental problem in the way Americans view diet and exercise, a conundrum evidenced by a willingness to sacrifice long-term health for quick and easy weight loss.

1) Anorectics on Trial: A Half Century of Federal Regulation of Prescription Appetite Suppressants -- Colman 143 (5): 380 -- Annals of Internal Medicine , Information about the history of anorectics in American medicine

2) WebMD with AOL Health - Prescription Weight Loss Medicine, A physician's interpretation of anorectics

3) Information about Diet Pills (Appetite Suppressants), Advice on choosing and utilizing anorectics

4) Pharmacorama - CNS stimulants and anorectics, Information on the biochemical functions of anorectics

5) MedlinePlus Drug Information: Appetite Suppressants, Sympathomimetic (Systemic), Anorectics and the central nervous system

6) Appetite Suppressants to Reduce Hunger and Help Weight Reduction , Information on common functions and usages of anorectics


Full Name:  Claude Heffron
Username:  cheffron@brynmawr.edu
Title:  Making Sense of the Salem Witch Trials Through Ergot Poisoning and LSD
Date:  2006-04-27 22:10:09
Message Id:  19168
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Science is generally thought of as a distinctive discipline with little relation to other areas of academia. Over the course of time, however, science has increasingly crossed over into other areas of study and has been extremely useful in explaining certain historical phenomena because, as Abdus Salam puts it, "scientific thought is the common heritage of mankind" (6). In the year 1692 in the village of Salem, Massachusetts, young girls began to exhibit strange behaviors such as seizures and convulsions, as well as physical symptoms of illness, fever, for example. This strange phenomenon was attributed to witchcraft and started a craze that is now known as the Salem Witch Trials, which culminated in the hanging of 19 supposed witches, both men and women. One particularly interesting explanation for the bizarre behavior the bewitched exhibited was put forth by Linnda Caporael in 1976, proposing that the supposedly possessed people may have contracted convulsive ergotism, a disease caused by ingesting ergot (3).

Ergot is a type of fungus, from which the drug LSD is derived, that can develop on cereal grains, often rye, particularly in warm or damp conditions. Consuming breads contaminated with this fungus can cause the symptoms the bewitched exhibited, among other signs (3). The two major kinds of ergot poisoning are gangrenous ergotism and convulsive ergotism. In gangrenous ergotism the afflicted person's limbs come to look mummified: black, dry, and prone to breaking spontaneously (7). Those with convulsive ergotism tend to bite their tongues, develop strange breathing patterns, and have seizures much like epileptic seizures. People who have ergotism often feel their nervous system briefly stimulated and then fall into paralysis, which can also lead to contractions in the respiratory and cardiac regions of the body (7). Ergotism victims' muscles twitch involuntarily, and they may have recurring convulsions that affect their entire body. Arteries contract, and there is debate as to whether veins contract as well or are actually dilated in people with the disease. When women have ergotism, the uterus may contract violently and sometimes cause abortion (7). The digestive effects of ergotism include vomiting, excess salivation, diarrhea, increased peristalsis, and retching. Skin temperature may even drop several degrees as a result of changes in the cardiovascular system (7). The many symptoms of ergotism are well known, but much more has yet to be discovered about the disease.

Scientists have yet to discover just how ergotism impacts the nervous system, but as Caporael suggests, it most likely acts similarly to LSD, since the two substances are very much alike (3). LSD was discovered accidentally by the Swiss chemist Albert Hofmann while he was attempting to isolate a compound in ergot that he believed to be a circulatory stimulant. Hofmann had set the work aside after many failures and discovered LSD upon resuming it, when he unintentionally absorbed some of the chemical through his skin and experienced the drug's multitude of effects (4). LSD is so potent that Hofmann was affected by the minute amount he accidentally absorbed; it is too powerful to be dosed in milligrams like most drugs and must instead be measured in an even smaller unit, the microgram. The chemical difference between ergot and LSD is that LSD contains an additional diethylamide group which ergot lacks (5).

Using this similarity as a starting point, we can now examine how LSD affects the nervous system to show how a closely related substance, ergot, may similarly affect the body, perhaps causing so many Salem residents to appear to be possessed. LSD is very similar in structure to the neurotransmitter serotonin, which it is known to affect. Serotonin mainly exists in the upper region of the brain stem (1), through which most of the body's motor systems pass (2). The raphe nuclei are known to be influenced by LSD and are seen as crucial to finding out more about how LSD works, since this region contains most of the brain's serotonergic cells and is thought to be involved in preventing sensory overload (1). The fact that this region of the brain contains so many serotonergic cells may prove useful in explaining how and why people experience hallucinations, a form of sensory overload, while they are on LSD. There is still much debate in the research as to whether LSD inhibits or excites serotonin reuptake (1). More research on how LSD works on the brain is necessary before definitive conclusions can be drawn about the drug.

Scientific evidence on how LSD affects the nervous system is still quite incomplete, and the link between LSD's impact and that of ergot is certainly questionable, but Caporael puts the medical symptoms into a historical context, yielding a believable theory. Wild rye, a host plant for ergot, commonly grew along the coast of the Northern and Mid-Atlantic states and supposedly caused sickness in the Massachusetts colonists' cattle (3). The grain was generally harvested in August and stored until the weather turned cold around November, which is consistent with the first girls exhibiting symptoms of ergot poisoning in mid-December (3). The year 1691 was a wet one, which tends to stimulate ergot growth; the next year, when conditions were unusually dry, no more witches were discovered. Reports from some of the bewitched indicate that many felt as though they were being choked and that someone was biting, pinching, and pricking them, in addition to feeling as though their bowels were being pulled out, all of which could be their interpretation of the tingling sensations and involuntary muscle contractions caused by ergot poisoning (3). There is no sure way of knowing what actually caused the Witch Hunt in Salem over three hundred years ago, but the medical symptoms of ergotism are certainly present in the case, as are appropriate growing conditions for ergot, so ergot poisoning is certainly a possible explanation.

There is no solid evidence on exactly how LSD, a widely studied derivative of ergot, affects the brain, or how similar its impact is to that of ergot. The link between ergot, LSD, and ergot poisoning as a potential cause of the Witch Trials is a speculative one, but it is certainly worthy of further consideration. With this in mind, it is impossible to draw firm conclusions about the plausibility of Caporael's theory that convulsive ergotism was the basis of supposed witchcraft in Salem. In my mind, this theory, accurate or not, is of great importance in that it seeks to explain the Witch Trials, which are typically viewed as a hoax, as a serious historical event that can be explained scientifically, helping to bridge the gap between science and other areas of academia.

Works Cited

1)The Effect of LSD on the Human Brain

2)Brain Stem

3)Ergotism: The Satan Loosed in Salem?

4)LSD Research: An Overview

5)The Discovery of LSD and Its Psychedelic Effects

6)Science

7)Secale Core: A New Look



Full Name:  Bethany Keffala
Username:  bkeffala@bmc
Title:  Observation and Action Reflect in the Same Mirror Neuron System
Date:  2006-04-28 12:40:22
Message Id:  19176
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


Mirror neurons were first discovered in area F5, in the rostral premotor cortex of the monkey. They are called mirror neurons because they fire not only when the monkey performs an action, but also when it sees that same action being performed by an experimenter. It was subsequently found that mirror neurons are also present in certain areas of the human brain. The existence and function of mirror neurons bring with them important implications and questions regarding different types of human learning.

When a monkey (or a human) watches an action, a circuit is triggered that is not the same overall pattern of neurons as the one that fires when that monkey (or human) performs the action itself. Instead, there is a partial mapping: specific groups of neurons (the mirror neurons) in specific areas of the brain fire in the same pattern during both action and observation. This has led researchers to believe that mirror neurons are a missing link that helps explain how humans (and other animals) understand each other's actions.
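
One can picture this partial mapping with a toy model: treat the execution circuit and the observation circuit as two sets of neurons whose intersection is the mirror-neuron population. The labels below are invented for illustration and do not name real recording sites:

# Toy illustration of "partial mapping"; neuron labels are invented.
execution_circuit = {"m1_a", "m1_b", "f5_1", "f5_2", "spinal_x"}
observation_circuit = {"v1_a", "sts_a", "f5_1", "f5_2"}

mirror_neurons = execution_circuit & observation_circuit
print(mirror_neurons)  # {'f5_1', 'f5_2'}: the units that fire in both conditions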

There are also different mirror neurons, or groupings of mirror neurons, for different actions. There are specific circuits, for instance, for using the same hand in different types of grasping, such as grasping a peanut versus grasping a mug. (2) Interestingly, if a monkey watches an experimenter perform an action using her/his right hand, mirror neurons for the corresponding hand in the monkey will fire, even though the monkey and the experimenter are facing each other. (3)

Watching someone else perform an action is like simulating the action ourselves without physically carrying it out. It has been proposed that the mirror neuron system acts as a system of templates, a link between action and observation. Mirror neurons have been found in humans in "...several areas of the brain - including the premotor cortex, the posterior parietal lobe, the superior temporal sulcus and the insula - they fire in response to chains of actions linked to intentions" (1). Included in these locations is Broca's area, which is frequently thought of as the human homologue of the F5 area in non-human primates. Broca's area in humans is strongly associated with language production and comprehension, and the presence of mirror neurons in this area of the brain may help to shed new light on language use and acquisition.

Aside from their implications for language, mirror neurons may also help us to understand learning in general in a new way. In the broader scheme of things, they might help us understand how humans and animals relate to each other, and understand each other's behaviors and actions. "In humans, mirror neurons are much smarter, more flexible and more highly evolved than in monkeys, scientists have found, and they appear to be involved not only in actions but in intentions and emotions—the things that make humans social animals. When a person watches someone else perform an action—say a kick—mirror neurons in the brain simulate the action and provide a template for anticipating what will happen next" (1). The Mirror Neuron System could be a huge help in explaining our facility in learning and understanding, as well as imitation, and interaction in general. Instead of having two separate circuits for action and observation, we have one which goes both ways.

Mirror neurons seem to be very important for our understanding of interaction, but what happens when different species interact? We know from experimentation that a monkey's mirror neurons will fire when it watches a human experimenter carry out an action. But would a human's (or a monkey's) mirror neuron system be activated, for instance, on hearing a cat meow or watching a dog stretch? The system might even be activated while watching non-living entities. Do we simulate, for example, when we watch cartoon characters? Do monkeys simulate while watching cartoons? What happens when a person without a hand watches someone with a hand performing some action with that appendage? It seems that a mirror neuron system that is more open to being activated by entities that differ greatly from the host, that is, one with more flexible activation for simulation, would confer more of an evolutionary edge. If the firing of mirror neurons facilitates comprehension and interaction, then being able to comprehend a variety of actions and to interact with a variety of creatures would make for a more flexible and adaptable organism.

However, the question must be asked: what do animals without mirror neurons do? If we equate having a mirror neuron system with the ability to comprehend another's actions, then animals without this system, or with an impoverished one, would have great difficulty interacting at all. So the question becomes: how do mirror neurons interact with other systems that facilitate comprehension of what is going on around us, and what are those other systems? Humans, as far as we know, have the most developed mirror neuron system in the animal kingdom, followed by non-human primates. There does not seem to be much research exploring mirror neurons in other animals, though the literature suggests that there are animals without this system.

It would be useful to look at which animals have mirror neurons and which do not. I propose that the level of advancement of the mirror neuron system lies on a continuum, ranging from organisms with no brain (such as jellyfish) to the system found in humans. In all probability, organisms that developed later would have a more highly developed mirror neuron system, as this feature seems to be favored by natural selection. The systems most likely differ in the number of mirror neurons as well as in neuronal arrangements. Mirror neurons could help us learn how and why, in part, we are so quick to acquire things like culture and language, but there are still many gaps that need to be addressed.




Sources:

1) Blakeslee, Sandra. "Cells that Read Minds." New York Times. 10 Jan. 2006. Science Times.

2) Rizzolatti, Giacomo, and Michael A. Arbib. "Language Within our Grasp." TINS Vol. 21, No. 5. pp. 188-194. Elsevier Science Ltd., 1998.

3) Stamenov, Maxim I., and Vittorio Gallese. Mirror Neurons and the Evolution of Brain and Language. Amsterdam: John Benjamins Publishing Company, 2002.



Full Name:  Trinh Truong
Username:  ttruong@brynmawr.edu
Title:  Liar, Liar, Brain on Fire
Date:  2006-05-02 23:41:40
Message Id:  19207
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

How do you know if someone is lying? Some people say signs such as aversion to eye contact, fidgety behavior, and nervousness can help you discern a liar. However, some people are excellent at lying and exhibit no such signs. Even the polygraph, which detects and measures changes in heart rate, stress, and blood pressure, fails to expose some liars, such as former CIA agent Aldrich Ames, who was actually a Russian spy. (1)

Brain imaging has identified the part of the brain that is active when a person lies: the prefrontal cortex, the region that enables people to feel remorse, learn morals, and exercise self-restraint and social sensitivity. (4) Within the prefrontal cortex, the researchers Yaling Yang and Adrian Raine have found that the amounts of gray matter and white matter differ between habitual liars and normal people. In their experiment they took 108 volunteers and, through a series of psychological tests and interviews, categorized them as pathological liars, antisocial but not pathological, or normal. In the brains of pathological liars, white matter (the axons that connect nerve cells to one another) in the prefrontal cortex was found to be 22% greater than in normal individuals and 25.7% greater than in the antisocial group. The researchers believe this additional white matter better equips liars in the art of deception, with skills such as planning ahead, multitasking, manipulation, and the mind reading needed to track what the other person does or does not already know about the situation. (2)

Besides differing in white matter, habitual liars also differ in gray matter, having 14% less of it than normal people. Since gray matter is made up of the nerve cell bodies that process moral issues in the prefrontal cortex, which is active in MRI scans when people are asked to discuss moral issues, it is likely that people with less gray matter are less troubled by the moral qualms of lying. Because of their deficit in gray matter and surplus in white matter, pathological liars' brains show less inhibition in deciding to lie and have more mental tools at their disposal in the process of fabricating lies. (6)

The insight this research provided into how the brain's activity differs while lying versus telling the truth has inspired further research into the brain's processes during a lie using fMRI. (3) With this new lie detector, scientists observed that when someone is telling a lie, more areas of his brain are active than when he is telling the truth, because most of the time it takes more effort to lie than to tell the truth. Areas such as the amygdala, rostral cingulate, caudate, and thalamus, which are connected to emotion, conflict, and cognitive control, become highly active when a lie is told and mostly inactive when the truth is told. For example, the caudate, the area that manages the conflict in the decision to tell the truth or generate a lie, is bright and hot when a lie is told and inactive when the truth is told. The psychiatrist Daniel Langleben of the University of Pennsylvania suggests that before a person lies, the brain must first restrain itself from blurting out the truth, the more natural impulse, and then formulate a lie. These operations involved in lying are revealed through functional imaging, which traces the activity of the regions of the brain associated with them. (1)

Many legal experts and scientists hope to use fMRI scanning in the courtroom as a lie detector. The method works by comparing the brain's behavior when it is truthfully responding to mundane questions, such as those about the person's name, age, and so on, with its behavior when responding to the questions of judicial concern. Responses whose brain images differ markedly from the activity maps of the truthful answers are taken to convey deceit. Though lie detection by fMRI mapping is more sophisticated and less error-prone than any previous instrument of its kind, because it examines directly the source producing the lies rather than indirect side effects, there are still accuracy concerns about its judicial use. The simple, clear images the procedure produces may hide real complexities, such as those in the presumed direct relationship between certain brain activities and the operations involved in lying. Certain memories can be unclear, confused, or remembered incorrectly while others are more accurate, and it is not certain that the brain images can convey this distinction. There could also be loopholes, because people can be misleading without actually lying. (5)

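The comparison logic can be sketched in a few lines. This is a toy illustration of the baseline-versus-probe idea described above, not any lab's actual analysis pipeline; the region, activation values, and threshold are all simulated assumptions:

# Toy sketch: simulated activation in one region of interest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.2, size=20)  # truthful answers (name, age, etc.)
probe = rng.normal(1.5, 0.2, size=20)     # answers to the disputed questions

t, p = stats.ttest_ind(probe, baseline)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05 and probe.mean() > baseline.mean():
    # The method described above would flag this as possible deceit.
    print("Probe activation differs markedly from the truthful baseline.")
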
Furthermore, the ethical implications of so invasive an instrument, reaching into the most private source of our being, may become more controversial and more consequential as we implement this new technology more widely. Professor Hank Greely of Stanford University believes that "to invade what has been the last untouchable sanctuary, the contents of your own mind" is a "significant change in our ability" and that "it should make us stop and think to what extent we should allow this to be done." (5) In this age of terrorism, perhaps some personal privacy can be sacrificed in order to save lives. Besides, this new instrument can benefit the innocent, since it can be utilized in their exoneration.

The thought that a machine can, to some extent, read our minds and reveal our secrets is both terrifying and exciting. However, it is only terrifying for those who indeed have something to hide. As long as the use of this device is limited to judicial purposes, I believe its advantages will far outweigh the ethical difficulties it may generate. Still, its infallibility cannot yet be ascertained, and we should not rely entirely on it. The brain is complicated, and the processes of lying may not always correlate directly with the same exact groups of regions in the brain.

Web References

1) Don't Even Think About Lying
2) First Evidence of Brain Abnormalities Found in Pathological Liars
3) Brain Fingerprinting
4) Size of Brain Linked to Violence
5) Are You Lying?
6) Liars' Brain Makes Lying Come Naturally?



Full Name:  Astra Bryant
Username:  abryant@brynmawr.edu
Title:  Cough. Squeeze. The Anal-Cough Reflex
Date:  2006-05-03 14:43:28
Message Id:  19211
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Warning: This web-paper should not be read by those offended by frank discussions of personal waste management.


Let's try an experiment. Take a deep breath in. Cough.


Now cough again, but when you do, pay attention to your anal sphincter.


If you participated in the above experiment, you should have noticed that when you coughed, you felt a brief contraction of your sphincter. If you were to sneeze or stand, you would experience the same sensation. What causes this contraction? This muscular action is, like all other actions, controlled by the nervous system. Specifically, the contraction you felt is a reflex: an unconscious, non-voluntary bodily reaction to a specific stimulus. In the case of this particular muscle contraction, the stimulus is a cough. The motor output is the brief contraction of the anal sphincter. This response is called the Anal-Cough Reflex. Used as a non-invasive test before and after colorectal surgery, it helps doctors assess fecal incontinence and stress urinary incontinence. But how does this reflex work?


First, let us go over some basic anatomy. The anal sphincter is actually composed of two muscles: the internal and the external sphincters. The internal sphincter (IAS) is an involuntary smooth muscle that contributes 55% of the anal resting pressure. IAS contractions are characterized as slow waves occurring 6-20 times each minute. The IAS is innervated sympathetically via the hypogastric and pelvic plexus, and parasympathetically from sacral spinal nerves 1, 2, and 3 via the pelvic plexus. There may also be some innervation from nonadrenergic, noncholinergic pelvic nerves, but this innervation has not been thoroughly characterized (3).

The external anal sphincter (EAS) surrounds the lower 2 cm of the IAS, separated from it by the inter-sphincteric groove. The EAS also connects to the puborectalis muscle and is innervated by sacral spinal nerves 2, 3, and 4 via the inferior hemorrhoidal portion of the pudendal nerve (4). The EAS exhibits both voluntary and involuntary contractions. The anal sphincter (both internal and external) sits on the pelvic floor, along with a complex set of muscles that support the urethrovesical junction, the vagina (in females), and the anorectum (7).


Now back to the Anal-Cough Reflex. During coughing, motor contractions are coordinated by a "cough center" located in the brainstem. This center causes the contraction of the abdominal diaphragm and intercostal muscles. When a voluntary cough is induced, the intercostal and rectus abdominis muscles contract 8 ms later; the EAS contracts 93.1 ms after the cough (2). This EAS contraction is the Anal-Cough Reflex (ACR).

In neurobiological terms, the difference in latency between the contraction of the abdominal muscles and that of the EAS is quite large, indicating that the two contractile events are caused by separate mechanisms. Given their short latency, the abdominal contractions are most likely initiated via a monosynaptic spinal pathway originating in the brainstem "cough center". Three main properties of the EAS contraction have been used to identify the pathway type. The much longer latency of the EAS contraction is too long for the pathway to be monosynaptic. In addition, the anal pressure caused by the contraction is greater than the pressure caused by a voluntary squeeze, evidence that the EAS contraction is an involuntary reflex response (6). And since EAS contraction still occurs despite anal anesthesia, sensory input from the anal region is not needed to activate the pathway. Thus the neural pathway that yields the EAS contraction has been identified as a polysynaptic reflex pathway. (2)
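
A back-of-the-envelope latency budget shows why 93.1 ms is considered far too long for a monosynaptic loop. Every number below is a rough textbook-style assumption for illustration, not a measurement from the cited papers:

# Illustration only: velocity, path length, and synaptic delay are assumptions.
conduction_velocity = 50.0   # m/s, assumed moderately fast myelinated axons
path_length = 0.5            # m, assumed round trip, brainstem to pelvic floor
synaptic_delay = 1.0         # ms, assumed delay per synapse

conduction_ms = path_length / conduction_velocity * 1000  # = 10 ms
for n_synapses in (1, 3, 10):
    print(f"{n_synapses} synapse(s): ~{conduction_ms + n_synapses * synaptic_delay:.0f} ms")
# Even the 10-synapse case totals only ~20 ms, so the observed 93.1 ms
# implies many synapses, slow-conducting fibers, or both.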


Although the type of nervous pathway that initiates the ACR has been experimentally described, the exact mechanism by which the pathway senses the cough is still unknown, as is the exact location of the pathway. However, there are several hypotheses currently being investigated (2). One common theme is sensory stimulation caused by the increase in intra-abdominal pressure that occurs during a cough.

The Hypotheses

1. Pelvic floor sensory receptors:
In the muscles of the pelvic floor, muscle spindles or other sensory receptors could sense the rise in intra-abdominal pressure, or the stretch of the abdominal muscles. The pelvic muscles could then cause the contraction of the EAS.

2. Pacinian corpuscles:
Pacinian corpuscles – pressure receptors connected to a sensory neuron – located in the ligaments and fascia of the pelvic floor can also cause muscular contraction, and could sense the intra-abdominal pressure rise.

Note: The above 2 mechanisms would be expected to exhibit shorter latencies than the 90 ms experimentally seen. Therefore, although they are mechanically sound, these mechanisms are unlikely to be the cause of the EAS contraction response.


3. Backfiring output from "cough center":
Efferent output from the "cough center" to the rectus abdominis could backfire, causing contractile excitation of the EAS.

4. A non-reflexive polysynaptic pathway:
While this possibility should be considered in order to cover all neurological possibilities, Chan et al. state that their electrophysiological experiments indicate that the ACR is generated by a reflex pathway whose reflexogenous stimulus is not localized to the rectal mucosa.

Note: In addition to the conclusions regarding the reflex character of the neurological mechanisms, the above 2 mechanisms cannot account for the observed relationship between EAS latency and cough force: with increasing force of cough, the 90 ms latency decreases (while the 8 ms latency of the abdominal muscles remains constant) (1) (2) (3).


5. Sensory input from the viscera of bladder:
The viscera of the bladder contain many stretch receptors that are normally involved in sensing how full the bladder is. These receptors may be able to detect mechanical stretching of the bladder as a result of the intra-abdominal pressure created by the cough.

6. Slow conducting fibers:
Slow conducting skeletal muscle afferent fibers can respond to mechanical stimulation (i.e. stretching of muscle during coughing). These fibers could conduct the signal to the EAS; the 90 ms latency is explained by the slow conduction velocity of the fibers (5).

Note: The above 2 mechanisms were mentioned by Chan et al. but not elaborated on.


7. Spinal Neuron:
Currently, the most likely neurological mechanism for the EAS response involves a spinal neuron. Spinal nerve 3, which innervates the EAS, causes the EAS to contract, when stimulated, after a latency of 90 ms, the same latency as the EAS response to coughing (2). It is possible that this S3-mediated response is responsible for the contraction seen in the ACR. However, although the two responses have very similar characteristics, they have not to date been experimentally linked, and a mechanism connecting S3 to the cough behavior has not been identified.

What is the purpose of the ACR? Even though the exact mechanism of the reflex remains a mystery, its physiological purpose has been identified. A contraction of the EAS inhibits the ability to void the rectum. Since the ACR is initiated in response to coughing, sneezing, and standing, all of which involve a distinct rise in intra-abdominal pressure that could cause movement within the digestive system, the EAS response is most likely a mechanism that prevents unwanted leakage of fecal matter during pressure changes (1). Without the reflex contraction of the EAS, any stress on the intra-abdominal space could cause involuntary release of fecal matter. Since the pressure of the intra-abdominal space is quite variable, especially in bipedal organisms, a preventive contraction is important. Imagine an organism without the ACR: it would be easy prey for any predator able to locate it by the smell of the fecal matter released whenever it experienced a rise in intra-abdominal pressure (i.e., during coughing or moving). The ability to keep fecal matter contained within the body despite internal pressure changes is important in maintaining bodily control and in the evolutionary tendency toward stealth as a means of avoiding predation.

Sources

1) Cough Anal Reflex: Strict Relationship Between Intravesical Pressure and Pelvic Floor Muscle Electromyographic Activity During Cough. , a paper describing the relationship between increasing intra-abdominal pressure and EAS contraction.

2) The Anal Reflex Elicited by Cough and Sniff: Validation of a Neglected Clinical Sign, a paper describing the Anal-Cough Reflex.

3) Anatomy and Physiology , American Society of Colon and Rectal Surgeons: gives basic anatomy and physiology of pelvic floor.

4) Chapter 1: Anatomy, Cellular and Gross. , a chapter from a book on Incontinence: describes the anatomy of the pelvic floor.

5) Sensing Vascular Distension in Skeletal Muscle by Slow Conducting Afferent Fibers: Neurophysiological Basis and Implication for Respiratory Control, a paper discussing a mechanism of sensing movement of skeletal muscle by slow conducting fibers.

6) Chapter 2: Neurophysiology and Neuropharmacology , a chapter from a book on Incontinence: describes the neurophysiology of the pelvic floor.

7)Normal Pelvic Floor Anatomy, describes general functions of structures in the pelvic floor.



Full Name:  Christin Mulligan
Username:  cmulliga@brynmawr.edu
Title:  Smelling Numbers and Tasting Colors: Synesthesia
Date:  2006-05-03 15:50:35
Message Id:  19214
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Is it possible to smell shapes, hear colors, taste numbers? It is if you have synesthesia. Famous synesthetes include Russian novelist Vladimir Nabokov and Russian composer Alexander Scriabin. Synesthesia is a Greek word (syn = together + aisthesis = perception), translated as "joined sensation." Synesthesia is accompanied by a sense of certitude, a conviction regarding the salience of these experiences. People with this condition experience consistent sensory blendings. Synesthetes involuntarily experience "cross-modal" sensations: the feeling of one sense stimulates another, enabling them to taste the number four, see music as colored lines or blobs, or smell a circle. As a result of these cross-modal associative abilities, synesthetes typically score in the superior range on the Wechsler Memory Scale. Cytowic also notes that despite "their overall high intelligence, synesthetes have uneven cognitive skills. While a minority are frankly dyscalculic, the majority may have subtle mathematical deficiencies (such as lexical-to-digit transcoding), right-left confusion (allochiria), and a poor sense of direction for vector rather than network maps." Colored hearing is the most common form of synesthesia (1).

Genetic transmission of synesthesia is either autosomal or X-linked dominant. Synesthetes are predominantly left-handed and predominantly women. Cytowic believes 1 in 25,000 people are synesthetic and found a female-to-male ratio of 3:1. Synesthesia is typically unidirectional. For example, the number two might trigger the color red, but the color red does not trigger the number two. Also, synesthetic perceptions are "durable" and "generic" rather than "elaborated" or "pictorial." Thus, these perceptions remain consistent throughout the synesthete's life. If the word "banana" is seen as yellow, it will remain yellow. In synesthesia that produces visual projections, there are four basic types of images, known as form constants. These are gratings and honeycombs, tunnels and cones, cobwebs, and spirals (1).
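
The unidirectionality and durability described above behave like a one-way lookup. As a minimal sketch in Python (the specific grapheme-color pairings here are illustrative assumptions, not data from the sources), a plain dictionary captures the asymmetry: graphemes map to colors, and no reverse mapping exists.

# A one-way dictionary as a toy model of unidirectional synesthesia:
# graphemes trigger colors, but nothing maps colors back to graphemes.
# The pairings below are illustrative assumptions, not reported data.

GRAPHEME_TO_COLOR = {
    "2": "red",          # the number two triggers red
    "banana": "yellow",  # "durable": the pairing stays fixed for life
}

def perceive(stimulus):
    """Return the color a stimulus triggers, or None if it triggers nothing.
    There is deliberately no color-to-grapheme lookup."""
    return GRAPHEME_TO_COLOR.get(stimulus)

print(perceive("2"))    # 'red'  -- the number triggers the color
print(perceive("red"))  # None   -- the color does not trigger the number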

A study of forty-three chromatic-lexical (color-word) synesthetes found no universal agreement on the color concordances of the letters of the alphabet. Certain vowels tended toward certain colors, however. For example, A was typically red; E was either white or yellow; O was white. Among the consonants, there was no significant correlation (2).

Synesthesia depends only on the left-brain hemisphere and involves large metabolic shifts away from the neocortex, which controls reason and higher analysis, towards the limbic system, which controls emotion, memory, and attention. The center of synesthetic activity is the hippocampus. Seizures in the limbic system and hippocampus can produce synesthesia in non-synesthetes. During synesthetic perception, cortical blood flow decreases so much that subjects should become blind, paralyzed, or show some other indication of a lesion. However, subjects' thinking and neurological exams remain normal. Even during trials with amyl nitrate, which increases cortical blood flow, subjects' blood flow was decreased compared to the baseline (1).

Maurer argues that all infants are synesthetic up to four months of age, when the senses begin to differentiate themselves.

During early infancy - and only during early infancy - ... evoked responses to spoken language (are recorded) not just over the temporal cortex, where one would expect to find them, but over the occipital cortex as well. There are similar reports of wide-spread cortical responses to visual stimuli during the first 2 months of life (e.g., Hoffman, 1978). Results such as these suggest that primary sensory cortex is not so specialized in the young infant as in the adult (3).

If this is the case, then why do most infants lose this ability while others remain synesthetic for their entire lives? To further test this hypothesis, Baron-Cohen suggests using neural imaging techniques, such as a functional MRI scan, to test the cortical blood flow in both the visual and auditory cortex when an infant is presented with auditory tones. He theorizes that after the initial infant phase of synesthesia, the senses become fully modularized, making for more efficient informational processing. However, as noted above, the mnemonic benefits of some forms of synesthesia are highly adaptive.

However, Baron-Cohen also studied maladaptive forms of synesthesia, such as the case of JR, a rare bi-directional synesthete who sees colors when she hears sounds, but also hears sounds when she sees colors. Situations which are either too noisy or too colorful produce confusion, stress, dizziness, and a "feeling of informational overload." This form of synesthesia led to social withdrawal and interference with normal everyday activities, supporting the evolutionary argument that natural selection favors modularized senses as opposed to joined ones (3).

WWW Sources
1) Synesthesia: Phenomenology And Neuropsychology, an article on synesthesia
2) Color Trends in Synesthesia, a list of color concordances of letters of the alphabet for chromatic-lexical synesthesia
3) Is There a Normal Phase of Synaesthesia in Development?, another article on synesthesia



Full Name:  Em Madsen
Username:  emadsen@brynmawr.edu
Title:  Perfect Pitch: It's Not Just Landing the Banjo in the Dumpster
Date:  2006-05-04 11:27:36
Message Id:  19222
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Perfect Pitch, or absolute pitch, is the ability to "identify a note by name without the benefit of a reference note, or to be able to produce a note that is the correct pitch without reference" (1). Jimi Hendrix, Mozart, and Phil Spector are all said to have had perfect pitch—Igor Stravinsky and Johannes Brahms did not. Those who are not in possession of this ability can take comfort—a successful musical career does not seem to be dependent on the ability to identify F#. Some individuals with perfect pitch have difficulty singing in groups, or listening to music that has been transposed—the changing intonation bothers them so much that they cannot function musically. However, even given all that is known about perfect pitch, neurobiologists and geneticists are still unsure as to what exactly causes perfect pitch.

In the UC Genetics of Absolute Pitch Study, scientists made the following observations about absolute pitch possessors: "most individuals with Perfect Pitch began formal musical training before age 6... Perfect Pitch aggregates in families, indicating a role for genetic components in the development of Perfect Pitch" (2). Given these observations, the study concluded "that a genetic predisposition for Perfect Pitch and musical training are both important for the development of Perfect Pitch" (2). The genetic component of Perfect Pitch gains some credibility from the finding that "among the autistic and Savant community, the incidence of Perfect Pitch rises to 1 in 20 or higher" (1), whereas among the general populace the numbers are closer to one in 10,000. However, to assume that Perfect Pitch is genetic because it aggregates in families is potentially dangerous: many of the families the UC study observed were families such as Suzuki families, environments where parents and children were fully involved in the learning and production of music, and therefore were equally exposed to methods that might be more conducive to acquiring Perfect Pitch. In order to more fully explore Perfect Pitch, I'd like to look more closely at the construction of a typical Suzuki family.

I was raised as a Suzuki kid. The Suzuki method is a way of learning an instrument that is based on the ways in which children acquire speech. The founder of the method, Shinichi Suzuki, born in 1898, based his teaching method on love and respect, and the method began with his realization that all Japanese children speak Japanese (3). Though this may seem rather obvious, Suzuki was amazed by this realization because Japanese is a complex language, and children far younger than four years old were able to learn how to speak it simply by listening to and mimicking their parents. In fact, as children get older, they gradually lose their ability to acquire language as quickly: small children brought to a new country learn the new language much faster than their parents or even their siblings who are older than the age of 12 or so. When a child studies through the Suzuki method, they start very young. In my case, I began at the age of four, though it is not unusual for students to start younger. The students listen over and over to recordings of the music they are going to learn to play: this repetition ingrains the music in the students' minds so that they are able to play the music without looking at a sheet of music (much like we all learn to speak before we learn to read words). There is a much higher incidence of Perfect Pitch among Suzuki students than among the rest of the populace.

This would seem to indicate that the acquisition of Perfect Pitch works in similar ways to the acquisition of language. Children are born with an innate ability to acquire language. Linguist Noam Chomsky refers to this as the Language Acquisition Device (LAD). While this is hard-wired into place when a child is born, it also must be activated through interactions with other humans, so that a child may learn the parameters of the language of their home and culture (4). This ability diminishes over time: the linguistic pathways become more rigid and defined, and the LAD ceases to function in its acquisitional mode. The idea that there is a time period in which the acquisition takes place is known as the "Critical Period Hypothesis," which was proposed by linguist Eric Lenneberg in 1964 (4). Adults who have learned a language well during their childhood, when the LAD is first in functional usage, will be able to acquire other languages past the 12-year mark because they have learned the environmentally acquired parameters that Chomsky discusses. Those who have no interactions with the world in terms of language in this crucial "Critical Period" have little or no success in learning syntax or communication in any language.

The Suzuki method is exactly like this. The method functions on this same idea of the "Critical Period," and the children in the Suzuki method interact with parents and teachers during this developmental period to learn the parameters of musical syntax and structure. If a child begins music lessons after age 12, it is much more difficult to acquire the skills and knowledge simply because this critical period has passed. This does not mean that children past the age of 12 cannot learn an instrument; it simply means that the musical stimulus received by their musical equivalent of Chomsky's LAD has been societally broad and not focused on specific musical acquisition in a way that parallels learning to speak. They may have learned some of the societal parameters, but they are more likely to be "listening" parameters (i.e. "I enjoy listening to this piece of music and therefore know a little bit about its structure/tonality/etc.") rather than "learning" parameters (i.e. "I have played this piece of music and have an understanding of its inherent structure from the inside out.").

It seems to me that Perfect Pitch is acquired in a similar way. No one is born with innately perfect pitch, but they are born with a pitch-equivalent of Chomsky's LAD: a Pitch Acquisition Device. Through societal interactions as a child, this PAD/LAD is developed so that the parameters of the pitch are learned and internalized. There must be quite a bit of overlap between this device and the LAD, because a lot of languages "depend heavily on pitch for meaning" (1) such as Japanese or Vietnamese. These overlaps might give some kind of clue to the biological evolution of Perfect Pitch—if communicating with someone whose pitch inflection was imperfect could mean the difference between deciding "I think there's food in that cave," or "I think there's a bear in that cave," then it seems that those who were more tonally accurate and therefore able to communicate more effectively were more likely to survive. In this sense, Perfect Pitch has very practical applications seen through the lens of language—it may not ultimately be fully related to music at all, but rather speech.

Sources Used:
1) Wikipedia's Perfect Pitch site, pretty interesting
2) UC Study, you can even see if you have Perfect Pitch. In the interests of full disclosure, I do not.
3) Suzuki Association of America Website
4) Wikipedia's Language acquisition site



Full Name:  Whitney McDonald
Username:  wmcdonal@brynmawr.edu
Title:  A Smelly Experience
Date:  2006-05-04 12:25:42
Message Id:  19225
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Memory is ignited by many triggers, but interestingly a common trigger is smell. For me, the smell of burning wood reminds me of the time I spent in the islands of Trinidad and Tobago. Smelling burning wood unfailingly causes recollection, and I get a sudden rush of happiness. However, the smell of burning wood has nothing directly to do with the islands except through my own personal experience. It is not the odor itself that changes my nervous system, making me happy, but the experience I associate with that odor. Hence, describing a smell would be nearly impossible except to the extent that I could describe my own experience of it. Olfactory (smell) stimuli not only trigger recollection and emotions but also change behavior; for example, the smell of lavender puts me to sleep and the smell of pure peppermint keeps me awake. However, it is still the past experiences I have had with these smells that make pharmacological changes in my body in reaction to them, not the basic input of the odor stimulus. On this basis, it is understandable, given how the brain is structured, that one cannot describe smell, because smell has experience attached to its "identity".

The biological process facilitating the act of smelling begins in an area covered in mucus called the olfactory epithelium; located in each nostril, it has sensory cells and receptor cells on its surface. There are proteins in the mucus acting as transporters of odorants to receptors (1). Odorant receptor neurons have cilia that extend into the mucus and axons that reach the olfactory bulb (2); there is still some debate on what exactly the bulb's function is, but it is currently known to be directly related to the sense of smell. Lastly, the entire stimulus leads to parts of the brain that account for emotion, motivation and memory (i.e., the septal nuclei, piriform cortex and the amygdala) (1). The fact that the brain's olfactory system is wired this way shows that there is really no other way to identify a smell but through experience, because the brain's wiring is directly associated with emotions and memory. Signals from an olfactory stimulus do not go to behavior centers of the brain, which would associate smell with actions, but to experience areas. Furthermore, it was found that repulsive odors are more emotionally activating in the brain than pleasant odors; differences are seen in the activity of the amygdala (3).

Although the brain creates unique signals when receiving a pleasant or unpleasant stimulus, this does not mean the brain itself characterizes signals as pleasant or unpleasant; personal experience does this characterization. A study was conducted with an EEG monitor to see the effects of aromatherapy scents on brain activity. Participants were exposed to ylang-ylang and rosemary scents. No significant correlation between brain wave activity and the use of ylang-ylang and rosemary was found (1). This is likely due to psychological factors such as lack of exposure to the scents. It seems that the smells did not create any significant changes on the EEG monitor because the subjects lacked an experience associated with those smells that could affect brain waves, so no emotional changes were created. However, others may have an emotion or memory behind the smell of rosemary or ylang-ylang and automatically have an emotional reaction, which would change the EEG data. Therefore it can possibly be inferred from this observation that smell recognition is not predetermined by the nervous system but has to be acquired.

Aside from causing emotional changes, olfactory input can alter more overt behaviors, such as seizures in epilepsy patients. This interesting finding can be attributed to the fact that the temporal lobe, where seizures are first initiated, is next to olfactory centers (1); receiving an olfactory signal can reduce the effects of a seizure or induce one completely. An experiment was done with rats in which a seizure was produced by electrical impulses to the brain. Controls were observed and levels of seizure intensity were classified; the effects on behavior were recorded 7 days after the experiment. The stimulants peppermint, white birch and bitter almond were used to test their effects on seizure intensity. White birch was found to be able to reduce the intensity of the seizure relative to the control (4). Here it is clear that the white birch had an effect on the intensity of the seizures and that overt behavior can be altered by an olfactory stimulus. What is unclear is whether the rats could have had an experience attached to the white birch smell factoring into these results, or whether it was the actual chemicals in the white birch smell.

Nevertheless, there are many other fascinating examples of olfactory stimuli altering pharmacological processes. A group of men were administered insulin "once a day for four days and their blood glucose was measured (it fell). At the same time they were exposed to a smell. On the fifth day they were given just the smell and their blood glucose fell" (1). This is a very clear example of the effects of experience on overt pharmacological processes. The subjects had an experience associated with the smell which caused changes in the body. Because the body had previous experience with the change in blood glucose associated with that particular smell, the blood sugar changed without the use of insulin. Another experiment was conducted in which female donors at different points in their menstrual cycles gave armpit swabs on cotton. Another group of women had the swab wiped on their upper lip; the results were amazing. The women who received the stimulus advanced or stopped their menstrual cycle depending on the stage the donor was in. This interference with the menstrual cycle from an olfactory stimulus is attributed to the fact that a woman's body can recognize when the menstrual cycle should end by the input of many factors; bodily scents are one of them. The body recognizes the scent through experience and is able to alter chemical processes accordingly. This is yet another example of how olfactory stimuli alter pharmacological processes, and it also shows that experience has a substantial role in behavior arising from olfactory stimuli.

Lastly, because smells are associated with emotions and experience, describing smell is nearly impossible. This inability to describe smell is associated with brain wiring. "Odors are inaccessible at the behavioral level and all odors are initially encoded as 'objects'" (3). Smell is not associated with behavior but with familiarity or novelty, like an object, and therefore cannot be described. This further points to the reason why smell recognition is associated with experience and not just the chemistry of olfactory stimulation.

Through these various experiments and findings, it is evident that olfactory input can alter not only emotions but overt actions of the body as well. The connections from the olfactory system to areas of the brain associated with emotion and memory make it clear why personal experience, and not the olfactory stimulus itself, constitutes smell recognition. Therefore, if experience identifies smell and smells change bodily processes, then it is evident that experience can change a person's overt behavior. If one can mentally trigger an emotion, one can change one's body. The mind can essentially affect the body: a mere thought can cause chemical changes, enabling a thought to take the place of a behavioral drug and fundamentally heal the body.

References:
1) Olfaction
2) Olfaction, The Leffingwell Reports.
3) Royet, Jean-Pierre, et al. "Lateralization of the olfactory Process." Chemical Senses 29 (2004): 731-745.
4) Valentine, Pamela A. et al. "Sensory stimulation reduces seizure severity but not after discharge duration of partial seizures kindled in the hippocampus and threshold intensities." Neuroscience Letters 388 (2005):33-38.



Full Name:  Christin Mulligan
Username:  cmulliga@brynmawr.edu
Title:  A Review of Touched with Fire: Manic Depressive Illness and the Artistic Temperament
Date:  2006-05-04 20:06:24
Message Id:  19230
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Kay Redfield Jamison's book, Touched with Fire: Manic-Depressive Illness and the Artistic Temperament, explores the link between the disorder and creative genius. Jamison begins her discussion by outlining the general features of manic-depressive (bipolar I) illness. Symptoms of mania include agitation, sleeplessness, paranoia, excessive cheerfulness giving way to extreme irritability, susceptibility to spending sprees and sexual indiscretions, and compulsive talkativeness. Symptoms of depression include flat affect, constant sadness and fatigue, loss of interest in normal activities, indecisiveness, sleep and appetite either deprived or in excess, and morbid or suicidal ideations. She theorizes that these same behavior cycles are present in the lives, and evidenced in the work, of many artists, composers, poets, playwrights, and novelists. The symptoms of mania tend to stimulate productivity, while depression tends to cause blocks during which a paucity of work, or none at all, is created.

Jamison presents a number of biographical studies of both historical and contemporary imaginative individuals who have shown signs of mood disorders. These include Anne Sexton, Herman Melville, Mary Shelley, Vincent Van Gogh, Sylvia Plath, Edgar Allan Poe, John Ruskin, Robert Lowell, and dozens of others. In her studies of creative output, those treated for mood disorders experienced peaks in productivity three to four months after they experienced a peak in mood. The poets and novelists reported that they produced the most work during September, October, and November, while the painters and sculptors indicated not only this autumnal peak but also one in the spring. Jamison believes that this is the result of circadian rhythms, the body's natural cycles that control everything from appetite to sleep and sex drive. The amount of light one is exposed to is particularly pivotal in controlling these rhythms and mood shifts, thereby triggering the urge to create. The rate of change in light during the spring and autumn months seems to induce mixed states where both manic and depressive symptoms are present. These periods of "maximum change, contrast, and transition" are, in their own way, "highly conducive to creative work" (Jamison 144).

Jamison believes that hypomanic thought and creative thought share fluency, rapidity, divergence (originality), and flexibility. The extreme increase in the quantity of thoughts and associations made during mild mania makes it more likely that some of these thoughts will be unique or even brilliant. Hypomania tends to increase scores on the Wechsler Adult Intelligence Scale. Manic patients also exhibit "pronounced combinatory thinking; ... the ideas formed in this way become 'loosely strung together and extravagantly combined and elaborated'" (Jamison 107). Furthermore, many patients begin spontaneously writing poetry, some without having ever written any before.

Jamison provides a number of first-person accounts from individuals describing their own struggles with the creative process. Many characterize mania and depression as the impetus for creation. Trials and tribulations inspire their art. She spends an entire chapter enumerating the difficult highs and lows experienced by famous poet Lord Byron. She also focuses on the genetic inheritability of bipolar disorder by tracing it in the genealogies of several prominent artistic families, including the Tennysons, the Woolfs, the Jameses, the Coleridges, the Schumanns, and the Hemingways.

What Jamison does not focus on is the neurological basis of the disease. She neglects to examine the neural structures involved, the differences in neural functioning, or the role of neurotransmitters, such as dopamine, serotonin, and norepinephrine. Neurotransmitters are signaling mechanisms between neurons that tell your brain how to respond to stimuli; a deficit of these chemicals is believed to cause the disorder. While she does not discuss the effects of environmental factors, such as stress, on bipolar individuals, she does address the role of alcohol and drugs used to alleviate or intensify the illness. In her view, one is a manic-depressive artist, not an artist and a manic-depressive. Since it affects all aspects of behavior and thought, the disorder is clearly part of one's "I-function," and it is impossible to separate oneself from it.

Jamison, a manic-depressive herself, is adamant about the use of drug and traditional therapy in treating bipolar disorder. Lithium, which is believed to affect the production of the aforementioned neurotransmitters, may cause personality and temperament changes. However, she contends that medication is necessary to balance the extremes of the disease and often prevents the severe consequences, such as suicide, that occur when it is left untreated. Furthermore, in the two studies that she presents, the majority of writers and artists (57%) found their artistic productivity increased or stayed the same (20%) while on lithium. For patients in whom lithium severely hinders their creativity or has other adverse side effects, she notes that now there are other options for treatment, including two anticonvulsants, carbamazepine and valproate. She reflects on the sad history of sterilization of manic-depressives and advises the medical community to be wary of future genetic therapies that could potentially eradicate the disorder, as they may also deprive the world of future brilliant, imaginative minds that are "touched by fire."

Works Cited

Jamison, Kay Redfield. Touched with Fire: Manic-Depressive Illness and the Artistic
Temperament. New York: Free Press, 1994.



Full Name:  Marissa Patterson
Username:  mpatters@brynmawr.edu
Title:  Changing Myself: the Potentials of Neurofeedback
Date:  2006-05-05 11:37:08
Message Id:  19236
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

The brain is an incredible organ. It is the source of action potentials that control movement, breathing, even thought and personality. The brain itself can be influenced by many external or unplanned factors, such as medication, surgery, accident, or stroke. But what if a person could influence their own brain? This is an almost paradoxical thought: how can you influence the part of the body that generates behavior and allows you to act upon your desires? What would that say about the location and ability of the "self?" The process of neurofeedback distinctly raises these questions while revealing itself to be a solution to conditions such as attention-deficit disorder, epilepsy, and depression (1) by training someone to control and adjust their own brain waves. Patients using this treatment are hooked up to a machine that measures brain waves through electrodes stuck to the scalp, which are in turn connected to a computer. When the patient properly emits the desired type of brain waves, an action occurs on the screen, such as a bike racing faster, a plane ascending in the sky, or a musical symphony playing (2). Brain waves that do not differ significantly from the patient's flawed baseline do not produce the desired reward.

Can the brain be trained to change itself? This would require an immense interconnectedness in the brain, between the "I-function" of the neocortex that controls voluntary actions and the neurons within the brain that control the action potentials of the brain itself. Not only would that section of the brain interpret and react to signals coming from the external world, but it would also be able to adapt to internal signals, decreasing or increasing its own functioning in response to its own desires and thus increasing attention or performance.

Neurofeedback is based upon the growing concept that mind and body are connected and can influence each other. As early as the 1960s, scientists were discovering that research participants could modify the strength and velocity of their brainwaves if given feedback, such as pleasant tones, that rewarded them when they were "doing it right" (2). Only very recently, however, have the resources existed to properly show a concrete effect of this type of training. The initial research in this field showed that patients with epilepsy could reduce seizure risk by two-thirds if they learned to heighten a special type of brain wave known as the sensorimotor rhythm (3).

A much more popular current option focuses on the 10 percent of the population with ADD or ADHD. It has been hypothesized that these children do not have the proper concept of what it means to "concentrate," and through neurofeedback, they are able to "learn" what paying attention feels like (3). Data has shown that patients with ADD have greater amounts of slow-wave activity, called theta, and less fast-wave beta activity. The machinery used with these patients rewards high beta waves and low theta waves, which signal that the brain is concentrating on a task (3). This provides a very concrete signal to the patient that this is indeed what "attention" is and what it is supposed to feel like when he or she concentrates (2).
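
To make the reward logic concrete, here is a minimal Python sketch of a theta/beta feedback rule. The band edges, sampling rate, and ratio threshold are illustrative assumptions, not the clinical parameters used in the studies cited; a real system would also filter artifacts and calibrate the threshold for each patient.

import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power in a frequency band from a periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= low) & (freqs < high)
    return psd[band].sum()

def reward(signal, fs=256, ratio_threshold=2.0):
    """Return True (advance the game) when the theta/beta power ratio
    drops below the threshold. Bands and threshold are illustrative."""
    theta = band_power(signal, fs, 4.0, 8.0)   # slow-wave theta band
    beta = band_power(signal, fs, 13.0, 30.0)  # fast-wave beta band
    return (theta / beta) < ratio_threshold

# One second of synthetic "EEG": weak 6 Hz theta plus strong 20 Hz beta.
fs = 256
t = np.arange(fs) / fs
eeg = 0.5 * np.sin(2 * np.pi * 6 * t) + 1.5 * np.sin(2 * np.pi * 20 * t)
print(reward(eeg, fs))  # True: beta dominates, so the display rewards

Each time the rule fires, the display would advance the bike, plane, or symphony, rewarding the patient for sustaining a beta-dominated state.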

It became apparent in many of these studies that neurofeedback provided a more permanent solution to attention problems, and it even showed long-term, stable changes in EEG measurements of brain waves that can make medications like Ritalin unnecessary in 60-80 percent of patients (3). It also offers a highly effective course of treatment for the over 200,000 children who are not helped by standard medications or who are heavily impacted by side effects of the drugs (4). The increasing transition to this new type of treatment showcases changing concepts of the brain and how it works. No longer satisfied with dousing the brain in chemical stimulants, practitioners and families now search for more specific treatments that help to create a life-altering change instead of a temporary, unstable fix. These changes brought about by neurofeedback demonstrate a seemingly permanent change in brain function, a way of providing a lasting behavioral modification.

The benefits of neurofeedback are not limited to difficulties that had often been thought to have a solely chemical basis. Closed head injuries, such as concussions, have been treated with neurofeedback, with large improvements in symptoms such as headaches and blurred vision (5). Even more incredibly, neurofeedback has been used to help dancers and musicians increase levels of brain waves that are associated with relaxation and creativity. For example, at the Royal College of Music in Britain, musical performance test scores have increased by 17-50 percent with neurofeedback, and students say they feel more expressive (6). It is also being used by NASA to increase concentration in pilots and to increase memory and cognitive functioning (1).

These examples suggest that the brain is much more pliable than initially believed. It is possible not only to change and improve faulty functioning but to improve the general functioning of the brain so that it is more efficient. This could only occur through this type of treatment if the brain is indeed a set of interconnected neurons that affect each other in complicated ways, rather than a group of cells that are merely impacted by doses of chemicals. It challenges the often accepted notion that the only ways to affect the way the brain works are to surgically remove a section or to bombard it with drugs. The involvement of the I-function in this treatment allows the patient to take control of their illness and take advantage of this more fluid concept of the brain to improve function.

The fact that neurofeedback is also being used to improve creativity and mental functioning suggests that characteristics of personality and self are not as inherent and permanent as they are often thought to be. If you can modify creativity, what are the limits on altering optimism, aggression, or a multitude of other emotions and behaviors? If neurofeedback has already been shown to decrease the symptoms of depression (1), then with more research and study there seems to be an endless range of possible applications. This flexibility of self leaves personality wide open. Is there anything the mind cannot do?

1) Train Your Brain, a 2006 Scientific American Mind article

2) Your Brain and Neurofeedback: A Beginner's Manual, A good resource on biofeedback

3) Biofeedback trains your brain to treat disease, A 2000 article from WebMD

4) Neurofeedback: An alternative and efficacious treatment for Attention Deficit Disorder, A 2005 article from the Journal of Applied Psychophysiology and Biofeedback

5) Biofeedback Offers Help to Hyperactive Children, A New York Times article

6) "Music on the Brain" , A BBC radio broadcast



Full Name:  Jen Lam
Username:  jlam@brynmawr.edu
Title:  Mixed Signals: Decoding Emotional Body Language
Date:  2006-05-05 15:44:41
Message Id:  19253
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


Don't underestimate the importance of body language.
-Ursula, Disney's The Little Mermaid

How many times have you watched the evening news and heard of tragedies of people being injured by a panicking crowd? It's a common occurrence, from raves to riots. Anytime a large group of people gathers, it seems as though fear spreads like wildfire regardless of its origin. The speed at which fear disperses throughout crowds is astonishing. Before people are even aware of what is happening around them, they are reacting to other people's fear. It's as though we automatically recognize fearful bodily expression and respond to it, which implies that there is a certain amount of subconscious recognition and processing of body language. Fear runs through us all, whether we see it as a weakness or just a primordial emotion evolved to save our lives. It's an innate emotion that seems to sharpen our senses, preparing us to react to anything that may happen. But what is it about large groups of people that makes fear so contagious? Or rather, what is it about ourselves that makes us hypersensitive to stimuli when we sense fear?

The grandfather of the theory of evolution, Charles Darwin, was the first to theorize that animal and human physical expressions of an emotion are adaptive in that they help the organism with survival (1); this is a far cry from the Cartesian idea that emotions are "private mental episodes" (2). Emotions expressed physically can be seen as a universal, primordial language, so to speak, in that all humans and some animals have a basic, innate understanding of it. It is likely that, before the use of verbal languages, body language communicated the essential emotions, such as fear, necessary for survival. Facial and bodily expressions can transcend verbal language barriers. Although society and culture have tweaked how we interpret some physical expressions, we are still able to recognize basic emotions, such as fear and happiness, despite cultural differences. This shows the importance of body language in communication, and that its power should not be underestimated. For example, in a recent article published by BBC News, Paul Rincon reports that the US military developed a computer game to teach its troops Iraqi gestures so as to facilitate communication and establish trust between military personnel and Iraqi citizens (3).

Faces fascinate us; the diversity of faces existing on this planet captures our attention and imaginations. Therefore, it is not surprising that we tend to think that facial expression is the most prominent physical expression communicating emotion (4). However, recent research has shown that emotional bodily cues affect how people interpret facial expressions (1). At first, this seems like nothing new, since we usually perceive bodies and faces as an integrated whole, but what happens when there is conflicting emotional information being portrayed by the body and face?

On the forefront of this investigation is a Harvard researcher, Beatrice de Gelder, who has conducted experiments using pictures with mismatched facial and body expressions in order to see how important body language is in interpreting physical displays of emotions (5). These congruent and incongruent pictures were shown to participants, who were hooked up to an electroencephalogram (EEG) to observe their brain activity. Although participants were told to focus only on the face, when incongruent pictures were placed in front of them, the EEG registered a different pattern of brain activity compared with that associated with congruent pictures (5). When asked to identify the emotion being conveyed by the picture, participants correctly chose the emotion associated with harmonious body and face expressions, but chose the emotion associated with the body when shown the incongruent picture with conflicting face and body expressions (1). In all, these results show that, even though our I-functions are focused on faces and facial expression, we are subconsciously aware of bodily expressions to the point where they influence how we interpret the physical display of emotions.

According to de Gelder, emotional body language (EBL), as it is called, is all one needs to know how to react to a certain situation, especially one that is associated with fear (2). Compared to facial expressions, body postures relay more information to the onlooker: they signal an emotion as well as identify the action necessary to escape from a fearful event, in some cases running away (2). De Gelder and her fellow researchers hypothesized that interpretation of integrated facial and bodily expressions is an automatic, subconscious process that extracts relevant biological information in order to prepare us for action (1). Furthermore, they suspect that this rapid process takes place before conscious awareness or identification of an emotion (1). Evolutionarily, this makes sense. Processing information through our I-function takes time, for humans and possibly other higher-order animals think about the different paths of action to take and the consequences of each. However, when faced with a fight-or-flight situation that is potentially life threatening, animals must react almost instantaneously to the stimulus, for a split-second hesitation could mean death.

Using EEGs to record brain activity, de Gelder shows that emotional bodily expressions excite areas of the brain that are associated with facial recognition, emotion, and motor activity (5). This helps explain why fear is contagious: we perceive the input stimulus and react to it accordingly, an output result. Interestingly, activity was also observed in other areas of the brain correlated with goal-directed actions, which implies some I-function control (2). This study may imply that our reaction to EBL is not entirely a reflex, that there is a certain amount of choice we can exercise over our initial gut response. This seems to raise more questions than it answers. For example, which comes first, perception of emotion or perception of movement? Or is it neither; do they occur simultaneously? Furthermore, another question stems from de Gelder's observation that fearful still body images excite motor areas of the brain more than neutral stances do (2). Why would still images activate motor areas of the brain unless it was to prime us for action? Could the brain be filling in the blanks to provide an explanation for the fearful still body image so as to anticipate motor activity? If this were the case, it would not be surprising, since we have learned that it is not unusual for the brain to make up stories to help explain and make sense of the world around it.


References:
1) Rapid Perceptual Integration of Facial Expression and Emotional Body Language


2) Towards the Neurobiology of Emotional Body Language

3) US troops taught Iraqi gestures

4) Body Language Fuels the Spread of Fear

5) Read My Gestures: Body Language Can Trump Facial Expression



Full Name:  Liz Paterek
Username:  epaterek@brynmawr.edu
Title:  A Review of Listening to Prozac by Peter Kramer
Date:  2006-05-05 16:44:50
Message Id:  19257
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Prozac is changing the way physicians think about diagnosing mental disorders. It is able to cure such a wide range of illnesses that physicians are beginning to create new relationships between disorders. Peter Kramer, the author, did not have a single message with this book. He openly discussed his experiences with Prozac and other antidepressants and his patients' reactions to them. His objectivity allowed me to question his choices and to think about the reactions of his patients, which interested me the most. This book forced me to think about my discomfort with altering our minds to remake the self, and my fear of purposely altering the I-function.

There were many questions brought up in class about where the sense of self resides and its interactions with the outside world. Are we merely our I-function, the self that we are aware of, or are we the sum of everything, even that which our I-function does not comprehend? I continue to feel that, since the I-function is all we are aware of, it is central to the sense of self. It grants the ability to have an identity even if other factors impact that identity. As such, I view tampering with outside factors in order to mold the I-function as an attack on identity and individuality.

Prozac makes people "better than well". It removes social anxiety, creating a more outgoing personality. It has minimal side effects and does not create the feeling of being drugged. Patients' whole lives seem to be different: they have better relationships, are not jealous, and are more assertive. Most patients see the new drugged self as the real self. They say this new person is the person they always were, trapped underneath. To the outside world they would seem to be an entirely different person.

It is surprising to me that people can completely change their I-function overnight and perceive the change as the true self. I wondered why. It could be that without the feeling of being drugged, the I-function is tricked. The new self is rationalized by the I-function as the real identity that was always there. Perhaps the I-function is willing to accept the new persona because it has been given this identity as a goal and the individual has been working towards it in psychotherapy, something the author mentioned was important for taking Prozac. It could be that this ideal personality has been deeply internalized because society tells us that behaving a certain way is good and when we begin to behave that way, we see ourselves as better people.

I was bothered by most people's reactions to the drug, until the author mentioned one young college student. He felt better on the drugs and despised the change because he was not himself. The author wrote the young student's reaction off as the result of a lack of psychotherapy. However, I wondered what else could have made this boy different. After all, there were other patients who did not go through therapy. I wondered if his I-function put a higher value on its individuality. I wondered if his I-function was more sensitive to change and thus felt drugged. I wondered if he had not internalized the cultural ideal, so that when the rapid change was forced upon him he disliked it.

The author used a lot of analogies to explain the way Prozac worked, and I began to think of Prozac as plastic surgery for the mind. There is an ideal type of personality, just as there is an ideal face and body. It is possessed by a minority of individuals; all those who do not have it are viewed negatively. With physical appearance, I think that if a person awoke one day looking closer to the ideal, they would probably identify with the change. A minority would vehemently fight for their old self. Is this what is happening with Prozac? I wondered if patients just preferred the new self to the old; sometimes it sounded that way.

This plastic surgery for the mind may create pressure to change to become the ideal. I find myself uncomfortable with the idea that seemingly healthy individuals should change their personality to change their lives. I find it disconcerting that we are building a conformist mass mindset, just as we have for physical appearance. Different minds work differently. If this plastic surgery for the mind continues, it may very well force people to change who they are mentally in order to succeed in business, school, and life because of the advantages created by being drugged.

One question that the author raised is whether or not we are even at a stage yet where we can diagnose what is healthy and what is ill. The lines often seem arbitrary. This reminds me of the debate over the human diet. A few years ago all fats were evil. Now we know certain fats are necessary. Similarly, we do not know that many of these people are unhealthy. They are just different from the current ideal.

One thing that lowered my confidence in drug-based diagnosis is that the diagnosis is made based on treatment response. The author seems to feel uncomfortable with the way he is grouping and diagnosing as well. However, because of the lack of side effects of Prozac, he seems to feel that the help it can provide justifies this type of treatment. To me, it feels like poking around in the dark. It was clear in class that our understanding of the nervous system is leagues behind our understanding of other areas of the body. I just look back at all the mistakes in medical history that scream, "Wait until we understand the process better." We should show restraint in using certain drugs until we know more about the nervous system.

There is always the question of whether the quick fix is the best fix. While these patients received psychotherapy, their growth mainly resulted from drug treatment. In some cases patients only needed the drugs for a brief time and were then able to pull themselves out of depression. I see this positively: the drugs were a window. Most, however, seemed to enjoy the feeling of the drugs but lost it quickly after treatment ended. They are dependent on this drug. I believe in repairing and working for what we want to change, but rapid changes that require a constant external stimulus seem dangerous.

This book gave me a window into the world of psychotherapy. I left with more questions than answers from both the class and this book. I do not like prescribing Prozac in cases where people can still interact with the world. I fear we do not know all the implications of our actions. I fear that changing personality towards an ideal could create a cultural conformity and homogeneity that I do not want to be a part of. I fear that not changing your personality may one day be a disadvantage in the material world. I fear that we are moving into another world of cosmetic augmentation that threatens the individuality of the self. Who we are and what we are seems to come from the I-function; we should not be so quick to try to "fix" it.



Full Name:  Liz Paterek
Username:  epaterek@brynmawr.edu
Title:  The Invisible Man: Prosopagnosia and the Inability to See Faces
Date:  2006-05-05 17:05:59
Message Id:  19258
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Most people believe that we see faces; however, this is not the case. In the same way that we read in words and not letters, we see faces as a whole pattern, not parts. There are individuals with the inability to create the pattern of the face. They can describe eye color, moles, nose shape, etc., but they cannot integrate them to see emotion or identity (2). This inability to recognize faces is called prosopagnosia. It results from brain damage or genetics (3).

One of the best models to explain the process of facial recognition was created by Vicki Bruce and Andrew Young. Their model states that recognition is processed in three stages (6). First, the brain encodes the visual information, or "raw data". Then this data is evaluated in a binary manner: familiar or not familiar. If it is familiar, it stimulates a third portion of the brain that accesses the person's biographical information. A breakdown at any stage in the recognition process, as well as the degree of brain damage, changes the perception of the individual (6).
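
As a minimal sketch, the staged character of the model can be written as a Python pipeline. The toy face codes, the stored entries, and the failure notes in the comments are illustrative assumptions, not part of the published model.

# A toy pipeline for the Bruce-Young three-stage account of face
# recognition. The data structures here are illustrative assumptions.

KNOWN_FACES = {
    "code:andrew": {"name": "Andrew", "note": "sits next to me in class"},
}

def encode(image):
    """Stage 1: structural encoding of the raw visual input.
    A breakdown here leaves every face a featureless blob."""
    return "code:" + image.lower()

def is_familiar(face_code):
    """Stage 2: a binary familiar / not-familiar decision.
    A breakdown here can mark every face as familiar (or none)."""
    return face_code in KNOWN_FACES

def biography(face_code):
    """Stage 3: retrieval of person-specific biographical information.
    A breakdown here yields a familiar face with no name or memory."""
    return KNOWN_FACES[face_code]

def recognize(image):
    code = encode(image)           # stage 1
    if not is_familiar(code):      # stage 2
        return "unfamiliar face"
    return biography(code)         # stage 3

print(recognize("Andrew"))    # {'name': 'Andrew', ...}
print(recognize("Stranger"))  # 'unfamiliar face'

Each form of prosopagnosia described next corresponds to a breakdown in one of these three stages.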

There are many different forms of prosopagnosia, which are explained by the Bruce-Young model. If the model breaks down at the encoding phase, all faces are seen as featureless blobs (6). When there is a breakdown at the second stage, one possible result is that all faces are recognized as familiar, and individuals will report seeing strangers turn into those they are familiar with (6). A breakdown at the final stage commonly means that the face is seen as its features, but they do not trigger biographical information or memory (3), (6).

Because prosopagnosic individuals often have trouble with other categorical analysis besides faces, it was believed that prosopagnosia was the result of a breakdown in a more general pathway that analyzes groups and patterns (5), (3), (4). However, others argued that because of the importance of facial recognition, it had a more specific pathway (4), (3). In order to decide this, researchers had to find a "pure" case of prosopagnosia (4), (3). Two candidates were tested by different groups. The first man did not have trouble navigating, distinguishing between most other categories, or even distinguishing between faces when viewed at the same angles. He was able to generate patterns from broken images at normal or above-normal levels. This suggests his disorder primarily relates to faces (3). The other patient had no trouble identifying his own possessions when mixed with others from the same category (e.g., razor, wallet). He could distinguish his handwriting from others', and could sort coins based on country of origin. He appeared to be agnosic only for faces (4). Together these provide evidence for the theory of a specific mechanism.

There is evidence within the brain that the regions dealing with facial and object recognition overlap. The fusiform face area in the right hemisphere of the brain is important in facial recognition as well as in recognition of objects in categories. In areas where individuals show expertise, the fusiform face area is most active (5). For example, it would be more active for a bird watcher viewing pictures of birds than pictures of cars. It is constantly very active for faces (5). This is a possible explanation of why agnosia and prosopagnosia are often related.

There are other pathways active in recognition, perhaps acting as backup mechanisms. Those with prosopagnosia can recognize upside-down faces as well as normal individuals can, while normal individuals recognize inverted faces less easily than upright faces (2). Since focusing on actively remembering the features of a face can aid in the recollection process, conscious memory may use a different pathway than subconscious recollection and may not be affected in all cases of prosopagnosia (1).

All humans rely on non-facial features to distinguish individuals. An example of this would be having difficulty recognizing someone after a haircut. These features can therefore bring us to the third stage of recognition. Individuals with prosopagnosia, however, rely more heavily on them (1).

People with prosopagnosia are capable of distinguishing individuals; however, this tends to be slower and less accurate, especially outside of a very specific group (2). Face-blind individuals use both general and specific modes of recognition. Specific modes create recognition by using person-specific information: for example, Andrew is sitting next to me in class because that is his seat. Hair, gait, voice and clothes are often used in a more general sense to trigger identity (2). Interestingly, prosopagnosics have reported that while they cannot integrate facial features with each other, they can integrate them with the hair surrounding the face. This triggers identity and allows them to see emotion (2). This suggests that the facial recognition pathway is only for the face itself.

Those with the disorder often choose identity triggers that allow them to recognize a specific group. One prosopagnosic man believes he set his identity triggers in early childhood and can no longer change them. He created identity based on things that were naturally easy for him to identify. He is extremely adept at recognizing men who have long hair and dress casually in blue jeans. These people are "his type" (2). Outside of this type, he and others describe recognition of the human face as similar to recognizing an inanimate object (2), (1). Another man can identify long-haired females with normal accuracy, but not males. He uses only shapes of features, voice and clothes for males. He also believes his triggers were set when he was young. It is interesting to note that the first man is gay and the second is straight (2).

The emotional recognition pathway is strongly linked to the facial recognition pathway. Russell Bauer found that even when prosopagnosic patients could not consciously recognize faces, their skin conductivity revealed that they unconsciously recognized the individual. He suggested that information is processed along two parallel pathways in the brain. One causes conscious recognition; this is lost in prosopagnosia. The other route has steps that generate an emotional response (6). Dysfunctions in this pathway can cause disorders related to prosopagnosia. One such case makes people believe that those they know have been possessed: although they visually recognize familiar individuals, they feel no emotion. This is rationalized by the brain as it not being the person they know, only the face (6). Perhaps this emotional recognition is one of the reasons that the two men can more accurately recognize the sex they are attracted to.

Prosopagnosia is an uncommon disorder that prevents a person from recognizing the pattern forming the human face. While we take for granted the idea that two eyes, a nose and a mouth make a face, it is clear that integration in our brain generates the big picture. There appear to be many pathways for recognition, and it is interesting to think that alterations to any one may completely change the way one sees patterns in the world. Facial recognition is vital to group life; therefore, it is not surprising these various pathways for recognition exist. We take our brain's integration of information for granted and forget that the brain, as well as the eyes, can completely alter the image in our heads.

Sources:
1) Burman, C. Prosopagnosia: Face Blindness. Last updated 2002.

2) Choisser, B. Face Blind!. Updated Jan 2002.

3) Duchaine, B. Developmental prosopagnosia with normal configural processing. Cognitive and Behavioral Neuroscience, Vol. 11, No. 1, Jan 2000, pp. 79-83 (online journal).

4) Ellis, H.D. and Young, A.W. Faces in their social and biological context. Prosopagnosia, pp. 81-88.

5) Reid, E. Putting a new face on visual recognition. Jan 2000.

6) Szpir, M. Accustomed to your face. American Scientist, Vol. 80, pp. 537-539, Nov-Dec 1992.



Full Name:  Suzanne Landi
Username:  slandi@brynmawr.edu
Title:  Animals and Language
Date:  2006-05-09 21:35:06
Message Id:  19297
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


Even after years of research and speculation, the question remains: do animals have language? For some people, this question is easily answered; of course animals have language. After all, a dog can bark and convey the message that it is hungry. Likewise other animals grunt, snarl and whimper to effectively communicate within their species. To an observer, this interaction doesn't look much different from a conversation between two humans, but linguists have defined language in a way that excludes these animal forms of communication. Languages are made up of advanced symbols and grammars that animals seem incapable of learning; this doesn't mean we haven't spent a significant amount of money and time trying to make it happen.

There's a rich body of animal language research done on primates, most notably in the case of Koko the gorilla. Koko's researchers claim that she has learned over 1,000 signs from American Sign Language and that she can convey her emotions by using these signs. The story has gotten a lot of media attention and was featured on PBS and in The New York Times. Despite this exposure, not much is known about the conditions under which Koko supposedly learns (1). Her researchers are reluctant to release detailed videos of interactions, and it's difficult to tell if she's being cued. Similar primate research on a chimpanzee playfully named Nim Chimpsky revealed that although it appeared as though Nim had learned language, he was merely responding to prompts given to him by volunteers (2). These prompts were probably unintentional, but this demonstrates a common problem with animal research. Another such case was exhibited by Clever Hans, a horse who, it seemed, could count. He would tap his foot in response to math questions. Although he made some errors, he had a very high success rate and appeared very intelligent. However, it turned out he was relying on visual cues from his audience to know when to stop tapping. Hans would notice the slightest tension in their muscles, a raised eyebrow or an apprehensive face: as he approached the answer, the audience would react in anticipation, and he responded by stopping (3). The cuing was unintentional but had a profound impact on Hans' behavior, not unlike Nim's.

Recent findings might start to change the way we think about the relationship between animals and language. Although much testing has been done on monkeys and other species closely related to humans, birds have also been a popular resource. For example, Alex is a famous grey parrot who presumably has abilities such as object recognition and a memory that seem to resemble human cognition (4) (though skeptics may point out cuing here, too). In a study by Timothy Gentner at the University of California, San Diego, starlings were used to test for a ceiling on their ability to recognize patterns in sounds. The songs of male starlings are composed of warbles, rattles, whistles and other sounds collectively known as "motifs." A starling learns a new motif and embeds it in its songs; starlings can recognize other individuals by these unique motifs. With three other psychologists from the University of Chicago, Gentner created an artificial language composed of warbles and rattles. The patterns were based on an experiment by Marc D. Hauser of Harvard University and W. Tecumseh Fitch of the University of St. Andrews in Scotland, who constructed artificial languages made up of short sounds made by men and the same sounds made by women. Male and female sounds formed two different categories, distinguishable by their difference in pitch. Fitch and Hauser combined the sounds according to two rules, one of which required that a female sound be followed by a male sound. They built sentences that embedded a female-male sound pair within another pair, labeling the two kinds of sounds "A" and "B." Thus, the simple rule would produce sentences like ABAB, and the complex rule AABB. Gentner used these patterns to create sequences of warbles and rattles for the starlings. The birds were trained by listening to the songs and given the task of pecking at a hole if a song had the right pattern and doing nothing if it did not. They received positive reinforcement in the form of food if they chose correctly, and if they were wrong the lights went out briefly. The majority of the starlings learned the pattern, as did all of the humans who participated in the original experiment. A similar experiment with cotton-top tamarin monkeys found that they noticed when the ABAB rule was violated but did not recognize patterns following the AABB rule (5). The research is recent but has already drawn comments from linguists. Geoffrey Pullum at the University of California, Santa Cruz does not believe it presents evidence of how language evolves or how animals learn language, stating simply, "I'm not buying it." He also comments, "It's purely about bird abilities, I think, and not about the foundations of human abilities" (5). Even Noam Chomsky has weighed in, claiming that the starlings are most likely counting rattles and warbles, and that this has more to do with short-term memory than with language itself (5). Gentner obviously disagrees, but given the history of animal research, Pullum and Chomsky may be right. We aren't sure what processes make this possible, but chalking it up to memory isn't farfetched. This might say something about similarities in cognition between humans and starlings, but the connections are not strong enough to conclude that starlings can learn language.
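The two rules at the heart of these experiments are easy to make concrete. As a rough illustration (this is not the researchers' code; "A" and "B" simply stand in for the two sound categories), the simple rule generates strings of the form (AB)^n, which can be checked left to right with almost no memory, while the complex rule generates A^n B^n, which requires tracking how many As opened the string, or, as Chomsky suggests, simply counting:

```python
# Illustrative sketch only: "A" and "B" stand in for the two sound
# categories (rattles and warbles for the starlings; female and male
# voices in the Fitch/Hauser version). Not the researchers' actual code.

def simple_pattern(n):
    """The (AB)^n rule, e.g. ABAB for n = 2; verifiable left to right
    with no memory beyond the previous symbol."""
    return "AB" * n

def complex_pattern(n):
    """The A^n B^n rule, e.g. AABB for n = 2."""
    return "A" * n + "B" * n

def matches_complex_rule(song):
    """True if song is A^n B^n for some n >= 1. Equivalent to counting:
    equal numbers of As and Bs, with every A before every B."""
    n = len(song) // 2
    return len(song) >= 2 and len(song) % 2 == 0 and song == "A" * n + "B" * n

for song in [simple_pattern(2), complex_pattern(2), "AAABBB", "ABBA"]:
    print(song, matches_complex_rule(song))  # ABAB False, AABB True, AAABBB True, ABBA False
```

Note that a listener who merely counts As and Bs and demands that all As come first would pass every AABB-style stimulus without any grammar at all, which is exactly the deflationary reading Chomsky offers.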

The animal research may not be conclusive enough to convince us, but it does force us to change the questions we think we've answered. To ask whether animals have language is too broad a question to answer accurately. We cannot gain an emic perspective on the animal world, and therefore our studies will never be complete. However, these experiments teach us to consider the results and revisit the scientific method, hypothesizing anew based on the outcomes. As the starlings learn patterns and gorillas learn vocabulary, we learn which components of language can be learned by certain animals, and under very strict conditions. Questions that we can actually answer include: what signs can a gorilla learn? What kind of environment is required to teach starlings a pattern, and how are animals similar to humans in terms of conditioned learning? But answering the broad question of whether animals have language is going to take more time and research.

WWW Sources
1) Koko the Gorilla. The PBS page for Koko the gorilla.
2) Inside the Minds of Animals – Page 3. A brief guide to animal research.
3) Clever Hans. Information on Clever Hans, the horse.
4) Alex the Parrot. Alex the Parrot's official webpage, the Alex Foundation.
5) Starlings' Listening Skills May Shed Light on Language Evolution. An article on recent starling experiments.



Full Name:  Stephanie Pollack
Username:  spollack@brynmawr.edu
Title:  Post-Traumatic Stress Disorder: Coping with Tragedy
Date:  2006-05-11 10:32:28
Message Id:  19312
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

As the granddaughter of Holocaust survivors, I have always been interested in learning about Post-Traumatic Stress Disorder. Post-Traumatic Stress Disorder, or PTSD, is defined by the American Psychiatric Association as a condition of varying severity that results from experiencing a traumatic event, or "an event outside the range of usual human experience" (2). Such traumatic events may include war, physical and/or sexual abuse, rape, motor vehicle accidents, terrorist attacks and natural disasters (6), (8). PTSD can manifest itself through nightmares, flashbacks, insomnia, and withdrawal from the outside world (2). Additionally, PTSD sufferers often develop depression and anxiety, and are at a higher risk of suicide (8). Although my grandparents have not experienced PTSD in the conventional sense, one can assume it is nearly impossible to eradicate the memories of the atrocities they witnessed 60 years ago. After exposure to humanity at its worst, it is naïve to think that one can relinquish such horrific memories completely. So, how are my grandparents able to live happy and meaningful lives after experiencing such a tragic loss? What is it about the neurobiology behind PTSD that makes some people more susceptible than others?


On a molecular level, stress can trigger damaging effects on the brain. Increased stress levels elevate the release of the adrenal steroid hormone glucocorticoid (GC) (7), which is thought to be involved in the deterioration of hippocampal neurons (3). In a study of Vietnam War veterans, the MRIs of PTSD sufferers indicated significant hippocampal atrophy compared to non-PTSD veterans (3). These results imply a connection between GC and hippocampal erosion, but fail to establish whether the atrophy is linked to the trauma itself (the event that elicited the PTSD) or to experiencing PTSD (3). In other words, settling this question would determine whether hippocampal atrophy should be labeled a cause of PTSD or a consequence of it.


Time may also play a role in how severely the brain is impacted by hippocampal atrophy. Studies have found that veterans both with and without PTSD had greater hippocampal atrophy the longer they were at war (3). Therefore, even though some veterans did not experience PTSD, they did, in fact, undergo deterioration in their hippocampi, which suggests that hippocampal atrophy cannot be the sole factor in determining PTSD. New evidence suggests that, together with a reduction in hippocampal volume, a decrease in the size of the anterior cingulate cortex (ACC) may contribute to the development of PTSD (6). It is believed that the ACC is involved in regulating emotion, specifically fear, and that such an alteration in this portion of the brain may increase susceptibility to PTSD (6). Neuronal loss in this region may contribute to an inability to control fear in patients with PTSD (6), giving biological credence to the recurrent nightmares and flashbacks associated with the disorder. In this vein, people with smaller ACC volume in general can be considered at higher risk of developing PTSD after experiencing a traumatic event (6). Additionally, on a more qualitative level, people with preexisting psychological problems are likely to be less capable of dealing with a traumatic experience, and are subsequently more likely to develop PTSD (1).


PTSD is a difficult disorder to diagnose, and diagnosis involves matching current symptoms with the trauma-inducing event (1). This difficulty in detection is often exacerbated because the disorder is frequently accompanied by an unwillingness to talk about the traumatic memories (5). Unfortunately, PTSD sufferers commonly find solace in alcohol and drug use to drown their depression and recurring nightmares. Along with varying in severity across individuals, PTSD can last for different lengths of time; some patients have relatively short periods of PTSD, most likely soon after the traumatizing event, while in others, PTSD can last a lifetime (1).


As evidenced by the recent nationwide response to both the September 11th attacks and Hurricane Katrina, the most effective way to combat psychological trauma is to build "a sense of community and social cohesion" (4). Granted, many of these PTSD sufferers were not on the front lines of a war, like many Vietnam veterans. In the case of September 11th, many people experienced the manifestation of a general sense of fear, anxiety, and vulnerability that typically follows a terrorist attack (4). This widespread psychological shock was exactly what the terrorists intended; terrorism and war have both physical and psychological casualties (1).


A sense of community and shared hardship is often lacking in those who have been exposed to trauma in the war zone. There are three types of wartime experiences that largely influence the surfacing of PTSD: "moderate to intense combat, loss of buddies, and witnessing or participating in abusive violence or atrocities" (1). To alleviate PTSD among war veterans, the government would be wise to institute group therapy to relieve the symptoms of the disorder (9). By talking openly about the experience, the patient is able to re-live it in a safe and controlled way and, eventually, come to terms with the past.


Post-Traumatic Stress Disorder is a major psychological concern and has a biological basis in both its initiation and its severity. Pierre Janet and Sigmund Freud believed that "the human mind contained an unconscious domain composed of disturbing experiences hidden from the conscious self" (5). In other words, how well we "hide" our unconscious fears from our conscious thought varies from individual to individual, and is especially relevant in the context of Post-Traumatic Stress Disorder. It is hard to know how my grandparents avoided PTSD. Perhaps they are biologically "better equipped" to deal with the trauma (i.e. less susceptible to hippocampal atrophy, etc.). But, just as importantly, it is evident that through sharing their stories and talking about what they experienced, they have been able to overcome the negative and appreciate the positive in the present.


References

1) Vietnam's Psychological Toll

2) Study Raises Estimate of Vietnam War Stress

3) Why Stress Is Bad for Your Brain

4) Combating the Terror of Terrorism

5) The Harmony of Illusions: Inventing Post-Traumatic Stress

6) Voxel-Based Analysis of MRI Reveals Anterior Cingulate Gray-Matter Volume Reduction in Posttraumatic Stress Disorder Due to Terrorism

7) Stress and Glucocorticoid

8) Post-Traumatic Stress Disorder: Topic Overview

9) Relieving Trauma: Post-Traumatic Stress Disorder



Full Name:  Stephanie Pollack
Username:  spollack@brynmawr.edu
Title:  Book Review of Malcolm Gladwell's Blink: A Journey into Human Decision Making
Date:  2006-05-11 10:39:04
Message Id:  19313
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Malcolm Gladwell's book Blink explores the meaning of snap judgments and first impressions. Gladwell writes Blink as a compilation of short stories and anecdotes that reflect on the human ability to assess a situation in a matter of seconds. Among the situations addressed in the book are speed dating, the election of President Warren Harding, diagnosing a heart attack in the emergency room, blind taste tests of new food products, and the fatal police shooting of Amadou Diallo. These topics are undeniably unrelated, yet Gladwell skillfully ties them together to build his case on the rapid nature of decision making.


A recurring term in Blink is "thin slicing," which Gladwell defines as "the ability of our unconscious to find patterns in situations and behavior based on very narrow slices of experience" (23). In other words, Gladwell insists that we all instinctively look for meaning and judgment even when we are presented with something for a very short period of time. Gladwell's first example of this brief yet dead-on assessment comes in the introduction of the book, when a group of art experts instinctively knew that a statue brought to the J. Paul Getty Museum in California was a fake. Although they could not identify what it was about the statue that made them believe it was fake, their immediate "gut feeling" denied its authenticity. It is not until later in the book that the reader is told why the art experts had this reaction to the statue. Gladwell makes a clear distinction between the snap judgments of experts and non-experts. He explains that "the first impressions of experts are different [from the rest of us because] when we become expert in something, our tastes grow more esoteric and complex" (179).


This difference in the judgments of experts is especially relevant in the chapter on blind taste tests. Gladwell discusses how professional taste testers are able to "rank forty-four different brands of strawberry jam from top to bottom according to very specific measures of textures and taste," something the average person would not be able to do (180). In this vein, Gladwell writes that the average consumer's perception of a product's taste is highly influenced by its packaging. For example, taste testers claimed that identical 7-Up had more lemon flavor when it was poured from a can with a very yellow label and more lime flavor when it came from a can with a very green label. Nothing had changed about the drink except the outward appearance of its container. Here, Gladwell concedes, our immediate judgments are not accurate, and we are greatly swayed by the design and attractiveness of the product. Only an expert would be able to provide an accurate assessment of food under such conditions.


Gladwell also cites the quick decision making inherent in the jobs of Wall Street stock traders, army intelligence, and doctors in the emergency room. In all of these positions, one must be able "to make decisive, rapid-fire decisions under conditions of high pressure and with limited information" (108). This type of decision making obviously becomes easier with time because it relies heavily on prior experience. When diagnosing a heart attack, one may assume that the doctor must take into account a multitude of information in order to correctly determine whether the patient is, in fact, having a heart attack. However, Gladwell points out that when diagnosing a patient, or solving any other problem for that matter, the more information one is presented with, the harder it is to make a definitive decision. In other words, too much information can throw off your judgment. When reading Gladwell's argument, I couldn't help thinking it counterintuitive for doctors to know less about their patient rather than more when diagnosing an ailment. Considering that Gladwell is not himself a doctor, and would probably not feel secure if his own doctor spent only the bare minimum of time examining him, his conclusion seems overstated. However, by and large, Gladwell's stance probably holds true; most patients are probably diagnosed with what the doctor first thought was wrong.


Throughout Blink, Gladwell attributes the art of "thin slicing" chiefly to experts. So, where does that leave the non-experts? Gladwell attempts to appease them in the realm of love. He explored the world of speed dating, and writes that "when it comes to thin slicing potential dates, pretty much everyone is smart" (63). In speed dating, couples are paired up for several minutes, during which they decide whether or not they like the other person. Judgments and impressions are crucial and brief, and, in this situation, everyone is an "expert." The non-experts also decide who becomes elected President. Gladwell ascribes the election of President Warren Harding to "the dark side of rapid cognition," labeling it the Warren Harding error (76). The public was smitten by Harding's good looks and powerful voice, and he is a prime example of how incompetent people can achieve high positions of power simply through their ability to make a good first impression. Interestingly, Gladwell cites the fact that the majority of the CEOs of Fortune 500 companies are above six feet tall and "look" qualified to be in a position of power (86). Here, as in the case of Warren Harding, looks play a major role in the assumption of authority and influence, or as Gladwell writes, "we see a tall person and we swoon" (88).


Gladwell ends Blink with the heart-wrenching story of Amadou Diallo. Diallo was an immigrant from Guinea living in the South Bronx who was fatally shot by police officers who thought he was carrying a gun. He was actually pulling his wallet out of his pocket to show them identification. The Diallo killing epitomizes thin slicing at its worst. The police were acting in what they thought was a life-or-death moment and failed to pick up on cues that Diallo was, in fact, harmless. Assumptions were made about the color of Diallo's skin and the neighborhood they were patrolling, and the encounter ultimately ended in tragedy. Gladwell states that "our unconscious thinking is, in one critical respect, no different from our conscious thinking: in both we are able to develop our rapid decision making with training and experience" (237). In the case of Amadou Diallo, the police officers were not thinking consciously. They entered what Gladwell calls "extreme arousal and mind-blindness" (237). In this altered state, our stress level is so high that our mind begins to play tricks on us. Things seem to be "moving in slow motion" and our "complex motor skills start to break down" (225). All in all, during this state we are not at our highest level of rational thinking, and "when we make a split second decision, we are really vulnerable to being guided by our stereotypes and prejudices, even ones we may not necessarily endorse or believe" (232).


Overall, Malcolm Gladwell presents the reader with a host of everyday situations that we "thin slice" our way through. Gladwell describes thin slicing as an unconscious act that occurs "behind a locked door." Sometimes our first impressions are correct; sometimes we are way off. And, just as we attempt to do in class, Gladwell is simply trying to get things "less wrong." He firmly believes in the expertise of professionals and the value of prior experience, a recurring theme in the book. And, even though Blink has left me uncertain about whether rapid judgments hold as much value as measured ones, I enjoyed how Gladwell presented his case and demonstrated the power of thin slicing in our decision making.

Reference
Gladwell, Malcolm. Blink. New York: Little, Brown and Company, 2005.



Full Name:  Danielle Marck
Username:  dmarck@brynmawr.edu
Title:  ADD/ADHD: The Untamed Mind
Date:  2006-05-11 14:17:34
Message Id:  19320
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


After being dropped off at school, a young child enters a new environment filled with commotion, action, and adventure. Children scurry around the classroom, busily engaging in social interactions and conversations while the teacher prepares the morning's activities. One child among the group of twenty six-year-olds experiences this morning commotion differently. He enters the class and becomes overwhelmed, confused, withdrawn, and unable to process the disorder around him. When class begins the children take their seats and the child begins to fidget. Sounds from the heating system, whistling wind through the window left ajar, bird songs, the colorful flowers blooming on the tree outside, and whispers of the girls in the back row all catch the child's attention. The numbers on the blackboard are meaningless and look like a stream of white across a forest green board decorated with animal caricatures of the alphabet along the periphery. The child switches from processing visual to auditory to sensory input, as the hard wooden chair underneath becomes unbearable. The child stands and wanders about the room; the teacher sternly asks him to return to his seat, and the child is defiant but returns with the help of the teacher's forceful grip. The teacher continues her lesson. The child attempts to focus on the board and its contents but soon switches from the numbers on the blackboard to counting the polka dots on Mary Lou's wool knit sweater. The class has been meaningless to the child, and he has gained nothing but the frustration of falling behind and being unable to focus on the lesson.

Children with attention deficit disorder (ADD) and attention deficit hyperactivity disorder (ADHD) live in a high-speed world that shifts between sight, sound, and thought. Important tasks, activities, and directions cannot be filtered out of the chaotic mayhem they live in. Parents rely on the observations made by teachers in the school environment to diagnose their child when social interactions and homework become unbearable and frustrating tasks. Diagnosis of these disorders relies on the child's interaction with the environment and response to certain environmental stimuli, whether in conversation or in interacting with peers in a social setting. The inattention, hyperactivity and impulsivity the child experiences can be observed in his/her interactions with other children, most notably in the school environment. The diagnosis of these cognitive disorders focuses on the interactions between the child (assumed to have ADD/ADHD) and those around him/her, but fails to delve deeper to the source of the medical problem. Most of the psychiatric community diagnoses psychosis and learning disabilities (which can be seen in ADD/ADHD patients) through detailed analysis of the patient's interaction with the surrounding world, and does not take note of the internal or biological environment of the patient. The environment presents only one aspect of the problems encountered by ADD/ADHD patients. Could the disorder have more internal implications, involving heightened sensory intake and an inability to organize this flood of inputs? A sensory integration dysfunction closely parallels the environmental symptoms used in the diagnosis of ADD/ADHD. Does a child with ADD/ADHD lack the ability to organize or appropriately integrate sensory input within the brain? (1) Does this imply misdiagnosis, or the failure of the psychiatric field to focus on the internal implications?

With 7.8 percent of all school-age children diagnosed with ADD/ADHD, and approximately 2.6 million children ages four to seventeen taking the stimulant drug Ritalin to control symptoms, the disorder has become an overused diagnosis for many children with learning difficulties and social challenges. (2) A child must exhibit a series of specified characteristics to be diagnosed with ADD, and these fall into several distinct categories: severity, onset, duration, impact and setting. Symptoms must be abnormally severe, be present for at least six months, and negatively influence social, occupational, and academic areas. The symptoms of ADD must be exhibited in various settings: the home, school, or workplace. (3) The diagnosis of ADHD, according to the Diagnostic and Statistical Manual of Mental Disorders, requires that the child exhibit at least eight of a series of characteristics for at least six months, with onset before the age of six. Diagnosis is based upon the following criteria: fidgets, difficulty being seated, easily distracted, difficulty awaiting turn, blurts out, does not follow instruction, difficulty sustaining attention, shifts tasks, talks excessively and/or interrupts. (4) The diagnosis of ADD/ADHD has become the "solution" for many children with learning disabilities, and is made according to a combination of symptoms drawn from both the ADD and ADHD criteria; a sketch of the bare screening logic appears below. The diagnosis itself deals primarily with the child interacting within an environment, either school or home, and focuses on how the child responds to environmental stimuli. Psychiatrists and learning specialists use these symptoms as a means of diagnosing a child with ADD/ADHD. However, are these truly symptoms of the child's illness, or merely postulations built on observable behavior without backing from a scientifically verified mechanism? Is the child's disorder the result of an internal dysfunction? The diagnosis seems to focus on problems experienced primarily by the people around the child and ignores the true source of the problem within the child.
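The screening logic just described can be reduced to a few lines. The behavior list and the thresholds (at least eight behaviors, present for at least six months, onset before age six) come from the criteria above; the function names and example data are purely illustrative, and this is of course a toy sketch, not a clinical instrument:

```python
# Toy sketch of the screening thresholds described above; the behavior
# list comes from the text, everything else is illustrative. This is
# not a diagnostic tool.

DSM_BEHAVIORS = {
    "fidgets", "difficulty being seated", "easily distracted",
    "difficulty awaiting turn", "blurts out", "does not follow instruction",
    "difficulty sustaining attention", "shifts tasks",
    "talks excessively", "interrupts",
}

def meets_screening_criteria(observed, months_present, onset_age):
    """All three thresholds from the text must hold at once."""
    matched = DSM_BEHAVIORS & set(observed)
    return len(matched) >= 8 and months_present >= 6 and onset_age < 6

# Seven matched behaviors is not enough, however long-standing:
print(meets_screening_criteria(
    ["fidgets", "easily distracted", "blurts out", "shifts tasks",
     "interrupts", "talks excessively", "difficulty awaiting turn"],
    months_present=12, onset_age=5))  # -> False
```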

While the scientific community's search for the true biological source of ADD/ADHD has remained inconclusive and vague, the psychiatric world has continued to base diagnosis on environmental, symptomatic responses. Doctors have found links to low dopamine levels in children with ADD/ADHD, which result in decreased brain functions such as the storage and recollection of information relayed to physical actions. (5) With supporting studies showing that 60% of all ADD/ADHD patients express dopamine level abnormalities, treatment has centered on balancing dopamine levels. (5) Ritalin and other stimulant medications cause the brain to either increase or decrease dopamine transport. Currently, scientists and the medical community base their model of ADD/ADHD etiology on dopamine levels in the brain; but could there be more to the biological picture? More than 25% of children diagnosed with abnormal dopamine levels fail to improve with stimulant medications, including Ritalin. (6) This suggests that abnormal dopamine levels alone are not the source of the disability. Once again, the true source of the problem has not been identified biologically within the child, and diagnosis relies on environmental interactions.

Children with presumptive ADD/ADHD are left to rely on a series of observations made by parents, teachers, and psychiatrists to identify and diagnose a learning and social disability. Many of the symptoms associated with an ADD/ADHD diagnosis closely parallel symptoms associated with visual or sensory integration dysfunctions. Interaction between the child and the environment relies on a series of sensory inputs into the brain. Could children with ADD/ADHD have a heightened affinity for sensory inputs such as sight, sound, and touch? Integration of the sensory system starts before birth and continues throughout a person's life as they interact with their environment. (7) Thus, as a young child begins to fully interact at home and in school, these sensory connections develop, but in children with ADD/ADHD these sensory connections may develop differently. A sensory integration dysfunction closely parallels many of the symptoms observed in children with ADD/ADHD, and includes deficits in the tactile, vestibular, and proprioceptive senses. The interaction among these three senses allows a person to experience, interpret, and respond to environmental stimuli. (7)

The tactile system involves nerves under the skin surface that relay information to the brain; dysfunction can result in heightened sensitivity to light touch, pain, temperature, and pressure. A child with such sensitivity exhibits negative behaviors toward the environment, such as withdrawing when touched, complaining about face washing or bathing, general irritability and hyperactivity. (7) While many consider these behaviors signs of ADD/ADHD, perhaps the problem lies in the child's misinterpretation of the environment as hostile and aggressive due to heightened tactile responses. Also, with improper functioning of the nervous system, the brain is forced to absorb a multitude of inputs that interfere with other brain processes. The child in turn may find focusing on the blackboard, participating in discussion, sustaining attention, and organizing quite difficult, because the brain is processing such a multitude of stimuli. Thus, with overstimulation of the brain, a child can exhibit difficulty organizing and controlling behavior.

Similar to the tactile system, the vestibular system centers on the inner ear and the structures that detect movement and changes in the environment. The vestibular system is responsible for keeping the head upright, even when the eyes are closed. (7) A dysfunctional vestibular system can result in a child's difficulty ascending or descending a play structure, for example, and the child may show apprehension about walking, crawling or being placed on unstable surfaces. (7) On the other hand, a child might be overly active and seek the thrills of an intense sensory experience. Overall, a child's negative response to these activities, or frenetic hyperactivity, resembles the uncontrollable actions noted in children with ADD/ADHD. Children with ADD/ADHD tend to exhibit brash actions that involve sudden quick movements. Many children who engage in these uncontrollable actions express a joyful euphoria through laughter or giggling, indicating a complete loss of control.

While dysfunction in the vestibular system might explain a child's hyperactivity, errors in the proprioceptive system would also generate symptoms that closely parallel the characteristics of ADD/ADHD. The proprioceptive system allows a person to use and manipulate the body to perform fine movements, as in writing and buttoning a shirt. (7) Children with ADD/ADHD often show difficulty with small tasks such as buttoning a shirt or eating, which could be the result of a proprioceptive problem. The child becomes frustrated at this lack of achievement and is distracted by other activities. Schoolwork presents difficulties because of the fine motor control necessary to use a pencil when writing the nightly homework and participating in class exercises.

The scientific community has not yet identified the source or mechanism of ADD/ADHD. Parents of young children are at the mercy of teachers and psychiatrists who diagnose the disorder based on postulates, without a conclusive test for ADD/ADHD. Children cannot verbalize or understand the abnormalities they experience with ADD/ADHD. Psychiatrists focus only on how the child interacts with the physical and social environment around them, but fail to investigate the child's internal environment.


Works Cited

1) CNN.com, helping children with ADHD

2) CDC: Centers for Disease Control and Prevention, overview of ADD and ADHD, and statistics

3) Helpguide: Mental Health Issues, understanding ADD and ADHD

4) Kidsource Online, teaching children with ADD and ADHD

5) Wikipedia Encyclopedia, overview of ADD and ADHD

6) Native Remedies, treatments and information on ADD and ADHD diagnosis

7) Sensory Integration, information about sensory integration



Full Name:  Trinh Truong
Username:  ttruong@brynmawr.edu
Title:  Emotional Effect on Memory
Date:  2006-05-11 21:19:14
Message Id:  19332
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Have you ever wondered why there are certain events that took place long ago in your life, such as your first day on a job or in kindergarten, that you remember more vividly and distinctly than events that happened just last week? Some researchers ascribe this to the influence on long-term memory of the emotional impact one experiences during such events. They believe that emotions, both positive and negative, can enhance the brain's ability to store details of an event affiliated with those strong emotions. (1)

To test this hypothesis, Florin Dolcos of Duke University conducted a study in which nine young women, averaging 26 years of age, were shown 180 pictures evenly divided into three categories: pleasant, unpleasant, and neutral. While the women viewed the photos and sorted them into those three categories, brain scans were performed on them. A year later, a follow-up test was conducted to test their memories of the photos. Once again brain scans were performed while the women viewed the 180 old photos along with 90 new photos, also divided into the three categories, and responded whether they remembered a photo, found it familiar, or judged it totally new. The results show that photos eliciting emotions, both pleasant and unpleasant, were recalled better than those that were neutral. (1)
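For concreteness, here is a minimal sketch of how recall in such a follow-up test can be scored by category. Only the three categories and the old/new design come from the study; the trial data and names below are invented for illustration:

```python
# Invented trial data: each tuple is (category, was_old, judged_remembered).
# Only the three categories and the old/new design come from the study.
from collections import defaultdict

trials = [
    ("pleasant", True, True), ("unpleasant", True, True),
    ("neutral", True, False), ("neutral", True, True),
    ("pleasant", False, False), ("unpleasant", True, True),
]

hits, olds = defaultdict(int), defaultdict(int)
for category, was_old, remembered in trials:
    if was_old:                       # only year-old photos count toward recall
        olds[category] += 1
        hits[category] += remembered  # bool adds as 0 or 1

for category in sorted(olds):
    print(category, hits[category] / olds[category])  # recall rate by category
```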

The brain scans revealed that when the women successfully retrieved the memory of an emotion-inducing photo, both the hippocampus and the amygdala were highly active. This indicates that the amygdala, the prime center for emotions in the brain, may play a role in enhancing memory through its interaction with the hippocampus, the region of the brain that plays an important role in forming new memories and in memory retrieved by conscious effort. (2) Interestingly, in a similar experiment by Larry Cahill involving test subjects with damage to the amygdala, enhanced memory for emotionally arousing information was not observed. There was no difference between their memory for neutral and emotional information. (4)

In addition to studying the memory-enhancing effect of the amygdala, scientists have examined the chemical processes that occur in connection with emotions and memory. When humans and rats are emotionally aroused, norepinephrine, also a stress hormone, is readily secreted. To see if this hormone plays a role in the emotional enhancement of memory, injections of it were administered to rats, which exhibited enhanced memory as a result. (5) In another experiment, propranolol, which blocks the activity of norepinephrine, was administered to patients. Consequently, this medication took away the patients' memory advantage for emotional material, and they remembered emotionally arousing pictures with the same accuracy as they did the neutral pictures. (4) Evidently, both norepinephrine and the amygdala play a significant role in the memory-enhancing effect of emotions. A possible explanation is that information about the significance of an emotional stimulus is sent to the amygdala very early in processing and, through chemical connections involving stress hormones such as norepinephrine, enhances perception and attention. By influencing perception and attention, the amygdala can strengthen the encoding of the memory in the hippocampus. (6)

Just as emotions can increase retention of certain memories, they can also impair that of others. Emotion-enhanced memories come at a cost to the memories of events that preceded the emotion-arousing event. The same brain mechanisms that are responsible for the enhancement of one particular type of memory are also responsible for the impairment of another. To explore these two opposing effects of emotion, B. A. Strange and colleagues conducted experiments that targeted both the enhancing and the impeding effect of emotions on memory. In their experiments, test subjects were shown words characterized as aversive, pleasant, and neutral. Consistent with previous studies, subjects remembered the emotionally charged words better than the neutral ones. However, they remembered neutral words presented immediately before the emotional words significantly worse than other neutral words. More interesting still, the effects of enhancement and impairment differed significantly between women and men. In women, the emotion-linked decrement in memory was twice as large, and the coupling between increased memory for emotional words and decreased memory for the neutral words preceding them was also greater than in men. Although these diverging results between the sexes have yet to be explained, they indicate that sex may have an influence on emotional memory. (7)
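A toy scoring scheme makes the coupling easy to see: recall for neutral words is tallied separately depending on whether the word immediately preceded an emotional word. The word list and responses below are invented; only the comparison itself reflects the study's design:

```python
# Invented word list and responses; only the comparison reflects the design.
words = ["table", "chair", "murder", "cloud", "pencil", "spider", "lamp"]
emotional = {"murder", "spider"}
recalled = {"table", "murder", "pencil", "spider"}  # hypothetical responses

pre_hits = pre_total = other_hits = other_total = 0
for i, w in enumerate(words):
    if w in emotional:
        continue
    if i + 1 < len(words) and words[i + 1] in emotional:
        pre_total += 1               # neutral word right before an emotional one
        pre_hits += w in recalled
    else:
        other_total += 1
        other_hits += w in recalled

print("neutral before emotional:", pre_hits, "/", pre_total)
print("other neutral words:", other_hits, "/", other_total)
```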

There are many ways the brain might come to deem certain information worthy of being sorted into long-term memory storage. One can rehearse the same information over and over, or consciously focus on remembering it. However, a more effortless and unintentional way is through emotions, which augment the recollection of a memory when they are attached to it. It appears that evolution has used emotions as a mechanism for highlighting what is important enough to place in the long-term category of memory. Memories that are linked to emotion are likely to hold importance for survival and will be earmarked for long-term processing, remembered more easily, and recalled more quickly. (3) When the brain's mechanisms are attentive to storing emotion-related memories, they are likely less equipped to store the information encountered just beforehand.

Emotions contribute significantly to our ability to store emotionally arousing memories through the interaction between the amygdala and the hippocampus. However, there is a price to pay for this efficiency in distinguishing survival-critical information from information of less significance: what precedes the emotion-filled event gets overshadowed and forgotten in the process, even more so than emotionally neutral information that did not precede an emotional event.

Web References

1) WebMD
2) Memory Loss & the Brain
3) Emotions and Memory
4) A Primer on Emotions and Learning
5) Emotions Plus Brain Hormone May Strengthen Memory
6) Science Direct
7) PNAS



Full Name:  Tamara Tomasic
Username:  ttomasic@brynmawr.edu
Title:  Spinning in Space: Vertigo
Date:  2006-05-12 01:22:49
Message Id:  19339
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Vertigo, often thought of as a disorder, is actually a major symptom of a balance disorder and not itself a disease. It should not be confused with dizziness (confusingly, "vertigo" in Latin), which is described as "an unpleasant feeling of light-headedness, giddiness or fuzziness often accompanied by nausea" (1). Dizziness can be caused by many different factors, including spinning around rapidly in circles, but is usually not associated with a feeling of extremely rapid rotation of the world in relation to the self (unlike in dizziness induced by spinning around, the rotation of the world experienced in vertigo is not real). The light-headedness that is characteristic of dizziness is absent in vertigo, replaced instead by a disconnect between expected and actual input. This disconnect leads to nausea, a characteristic common to both vertigo and dizziness. Vertigo is also often incorrectly associated with a fear of heights, a notion made popular by the Hitchcock movie of the same name. In reality, vertigo can be either intermittent or constant in occurrence, but it is not triggered by heights or a fear thereof (2). The feeling of dizziness or light-headedness experienced by some when looking down from great heights is not the same as vertigo, as anyone who has experienced both can attest.

Experiencing vertigo is often described as the feeling of spinning where there is in fact no motion. This can be experienced in two ways: with the eyes open or closed. With the eyes open, the individual experiencing vertigo describes the sensation as one of the world spinning while their body stands still in relation to their surroundings; this is called objective vertigo. With the eyes closed, the sensation is one of the body being in motion when it is actually standing still ("falling in space"); this is known as subjective vertigo (1).

The presence of vertigo can be due to harmless causes or indicative of serious medical problems. The condition is divided into two subtypes, peripheral and central vertigo, both of which have distinct characteristics as well as similarities. These subtypes are determined by the location of the damage along the vestibular pathway and often have different causes. Peripheral vertigo is characterized by damage affecting the inner ear division of the acoustic nerve. This subtype is often felt more severely than central vertigo and is intermittent; peripheral vertigo is always associated with nystagmus (a rapid eye movement characterized by a quick movement to one side followed by a slow return to the other) and occasionally with hearing loss or a ringing in the ears. The causes of peripheral vertigo are not usually serious, the most common of them being benign paroxysmal positional vertigo (BPPV); other causes include Ménière's disease (a balance disorder of the inner ear) and acute vestibular neuronitis (1).

The damage in central vertigo involves the brainstem vestibular nerve nuclei. This subtype is typically constant in timing, coming at predictable intervals, and, as previously mentioned, is usually felt less severely than peripheral vertigo. The symptoms associated with central vertigo are motor or sensory deficits, slurred speech and loss of the ability to coordinate muscle movement. Interestingly, although peripheral vertigo is usually experienced more severely, central vertigo is usually due to more serious causes. These causes include migraines, MS, and tumors; it can also be caused by strokes, seizures, trauma and infection, but these are less common (1). In their severe manifestations, both types of vertigo are associated with nausea and difficulty standing or walking.

Vertigo can develop and manifest itself suddenly and last for a few minutes or a few days. It can then recur at constant intervals or very rarely over a lifetime. It is also possible that an individual will experience vertigo once and then never again; this is especially true of vertigo induced by inner ear infections: once the infection is treated, the vertigo will be cured as well. The physical symptoms of vertigo are nausea, vomiting, a feeling of movement or "spinning" of the surroundings in relation to the self, light-headedness, difficulty standing or walking, the sensation that the floor is moving, and the feeling of being unable to keep up with what one is looking at (related to the apparent movement of the world in relation to the self). These symptoms vary in severity and duration.

Treatment of vertigo succeeds only if the underlying cause can be determined. If the vertigo is due to a bacterial infection in the inner ear, antibiotics may clear up the symptoms. It can also be helpful to take aspirin if vertigo is due to poor circulation (2). Lying still in a quiet space may help reduce vertigo once an attack has started, but it may also induce subjective vertigo rather than alleviate the symptoms. Balance activities and eye movement exercises may also help prevent recurrent vertigo in some.

Sources:
1) Wikipedia
2) NHS Direct Online Health Encyclopaedia



Full Name:  Anna Dejdar
Username:  adejdar@brynmawr.edu
Title:  Bilingual People: Are Their Brains Different?
Date:  2006-05-12 02:46:54
Message Id:  19340
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


For as long as I can remember, my life has involved two languages, English and Czech. I was born in the United States of America, but my parents came from the Czech Republic, and as a result I was taught both English and Czech as I grew up, becoming bilingual. Throughout my life, I have been able to differentiate between the two languages consistently. Sometimes a Czech word comes to me faster during an English conversation and vice versa, but mainly I can separate them. Even when I was young, I knew that inside my house I spoke Czech with my parents, and when I left my house I spoke English with my friends or at school. My life with languages continued as I became very interested in languages in school; I studied French and Latin, and I am now studying Italian. All of the different languages were fascinating and very enjoyable to study and learn. As a result of many discussions in our class, Neurobiology and Behavior, I have been wondering whether, because of my background with two languages, my brain organization is different from that of someone who learned a second language later in life and does not know it as well as his/her native language. Furthermore, could knowing two languages make it easier for someone to learn a third language, because his/her brain is somehow organized so that it is prepared to understand and learn the new language better or faster?

In hoping to answer my questions, I first looked at the parts of the brain, and their functions, that are believed to be responsible for language. The human brain has two hemispheres, each with its own specific functions. In humans, most of the areas pertaining to language are in the left hemisphere of the brain ((1), (2)). Specifically, there are two important areas with different roles in language. The first is called Broca's area, which is in the frontal lobe; it is in charge of producing spoken language, with the function of moving the tongue, mouth, and palate ((5)), as well as signed and written language ((3)). Furthermore, this area controls the formation of sentences and the articulation of words with correct syntax ((1), (3)). One way the various functions of this area were discovered was through observing people with damage to it, a condition known as Broca's aphasia. People with Broca's aphasia show slow and slurred speech that is not grammatical, and they have a difficult time comprehending sentences where syntax is very important. They do, however, still maintain their vocabulary. These problems seem to support the idea that this area is very important in the understanding and assembly of language and of information connected to grammar ((1), (4)).

The second critical area is called Wernicke's area, part of the temporal lobe ((5)), whose function is understanding the speech of other people through recognizing and processing both spoken and written language. It is also involved in short-term memory functions ((3)). The evidence for its function also comes from looking at damage to that area, called Wernicke's aphasia. People with Wernicke's aphasia produce sentences that are grammatically correct but make no sense, and they invent words. They also have problems comprehending the speech of other people and correctly naming different objects. This evidence shows the important role that Wernicke's area must have in understanding the language of other people and in selecting information from one's memory and applying it to the right object ((1)). The findings from people with damage to either Broca's area or Wernicke's area demonstrate their significance in language, and that damage to them greatly alters the way we are accustomed to communicating and to understanding the communication of other people.

The two areas also have important, but slightly different, roles in people who have learned a second language and in people who are bilingual. In a study by Joy Hirsch and her colleagues from Cornell University, which looked at people who had a native language and a second language, it was found that there was a spatial separation between the two languages in Broca's area, but hardly any separation in Wernicke's area. From this, the researchers concluded that, given the functions of the two areas, people who learn a second language later can understand the language when it is spoken to them, reflected in their Wernicke's area, but have a difficult time putting together a sentence and communicating it in that second language, reflected in their Broca's area. From my own experience with foreign languages, I have also found that I have an easier time understanding the instructor than actually speaking the language. This is an interesting phenomenon, and, as the evidence illustrates, it could be due to the way the two areas have developed and respond to a foreign language differently than to a native language. The same researchers found no difference in the spatial separation of the two languages in either area in people who were bilingual and had learned the two languages early in life. This supports the idea that the brain's organization could be different for the two groups: for people who are bilingual, the two languages are both like native languages, and they can therefore understand and speak the two equally well ((5)). Furthermore, the researchers concluded that because there was a difference in Broca's area, this area could be set to correspond to one language early in life through constant exposure, and then cannot be adjusted to another language learned later ((6)). This would support the theory that there is a "critical period" ((6)) during which children can learn a language completely, and after which they will never be able to learn a new language in exactly the same way ((7)). However, studies by Perani et al. found no differences between bilingual people who learned at an early age and those who learned at a later age if their "level of proficiency" ((6)) was very high, suggesting that differences in brain activation might be due to the degree of fluency, not to the age at which the language was first learned. Lastly, there have also been studies of bilingual people with damage to part of the brain whose ability in one language is affected but not in the other, which seems to support the theory that the two languages are represented separately in the brains of those people. Many studies of language have been conducted; however, the exact functioning of language in the brains of people who know more than one language has not yet been completely figured out ((6)).

Lastly, in response to my question about a predisposition to learning a third language after knowing two, it appears that the answer is yes, and there are multiple explanations. One comes from research presented in "A Dynamic Model of Multilingualism (DMM)" ((8)), which discusses the term "metalinguistic awareness" ((8)). Metalinguistic awareness is especially strong in people who are multilingual, and it is believed to make the learning of new languages easier and faster. Another explanation is that because the person has had to use different strategies in learning the second language, they can then apply those same strategies more easily in studying the third. Along similar lines, there is the theory that because the person has had to use these language-learning strategies, the strategies became more cognitively advanced, so the person could deploy them more easily for quick learning of the third language. All of these explanations support the idea that once a person has learned a second language, it is easier for them to learn a third or fourth, because their brains have developed slightly differently, acquiring skills necessary for the continued study of language ((8)).

The research on language, and on the different aspects of knowing more than one language and how that could change one's brain, is fascinating. It makes sense that my brain might respond differently to the two languages I learned very early in my life than the brain of a person who learned a second language later, because our different experiences would have organized our brains differently. It is an interesting area, and continued research and improving technology keep the field making progress.

WWW References:
1) Language and the Brain
2) Handedness and Brain Lateralization
3) Broca's Area
4) The Brain and Language
5) BrainConnection.com - How the Brain Learns a Second Language - Page 3
6) Cortical Organization of Bilingualism
7) A Survey of Research in Second Language Acquisition
8) Metalinguistic Awareness in Multilinguals: Cognitive Aspects of Third Language Learning



Full Name:  Rachel Mabe
Username:  rmabe@brynmawr.edu
Title:  The Flaws of Past Theories: What Causes Eating Disorders
Date:  2006-05-12 07:25:11
Message Id:  19348
Paper Text:
<mytitle>

Biology 202

2006 Second Web Paper

On Serendip



Eating disorders are a great puzzle in the medical world. As a greater understanding of the brain's feeding mechanisms develops, theories about the etiology and development of eating disorders are being created. However, whether these theories, mostly derived from experiments, are accurate is still under debate. It has been proposed that children are likely to develop eating disorders when their mothers have one themselves. However, this observation may mean that there is a genetic trait passed down, or it may imply that the child simply learned these unhealthy eating behaviors from his/her mother. How much the brain, as opposed to environmental triggers, is responsible for the formation of eating disorders is a question that remains unanswered. In the case of eating disorders, does brain equal behavior? (1)

To discuss eating disorders, it is necessary to understand how hunger and satisfaction are registered in the body. In other words, where does the message that we are hungry get across, and how does this message become maladaptive? Current research on the brain has led to the understanding that hunger and satiety signals start in the body but reach their final destination in the brain. Through rat experiments in which the cerebrum is cut, scientists have shown that the most important area where hunger and satiety signals communicate is the forebrain, specifically the hypothalamus (2).

In the mid-1940s and '50s, the interaction between hunger and satiety was erroneously thought to take place exclusively in the ventromedial nucleus (VMN), responsible for satiety, and the lateral hypothalamic area (LHA), responsible for hunger. This theory was known as the Dual Center Theory. Although these areas are important in the eating process, it is now understood that the theory was greatly oversimplified, because it does not account for the motivation behind eating (3).

When mice with damaged VMN were put in a situation in which they could easily press a lever to obtain food, they did so more than normal mice. Only when they had to press the lever numerous times to get food was there an obvious difference between the two groups: the VMN-damaged mice appeared lazy. Similarly, a study found that obese women ate fewer peanuts when they had to peel them than when they did not. According to the Dual Center Theory, obesity would result from damage or a malfunction in the VMN or LHA, meaning that the obese person would never feel full or would always feel hungry. But if that were the case, why didn't the obese subjects always choose to eat more? Further research has found that in addition to the VMN and LHA structures, neurotransmitters (NT) have a strong effect on eating behaviors (4).
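
A toy model makes the gap in the Dual Center Theory easy to see. In a purely homeostatic account, a hunger signal alone decides whether the animal eats, so the effort required to obtain food should not matter; the lever and peanut findings say that it does. The sketch below, with entirely made-up numbers and function names, contrasts the two predictions.

```python
# Pure Dual Center account: eat whenever the hunger signal is on,
# regardless of how much work the food requires.
def dual_center_eats(weight, set_point, effort):
    return weight < set_point  # effort is deliberately ignored

# What the experiments suggest: hunger must also outweigh the effort cost.
def observed_eats(weight, set_point, effort, effort_tolerance=1.0):
    return weight < set_point and effort <= effort_tolerance

for effort in (0.5, 5.0):  # one easy lever press vs. many presses / shelling peanuts
    print(f"effort={effort}: dual center predicts {dual_center_eats(90, 100, effort)}, "
          f"observed behavior is {observed_eats(90, 100, effort)}")
```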

There are many neurotransmitters possibly involved in the cause of eating disorders; for the purpose of this paper, I will focus on only a few. One is norepinephrine (NE), which is thought to be linked to eating behavior. In studies where NE was injected into rats, eating increased, with a preference for carbohydrates. This is important for the understanding of anorexia because Kaye, Jimerson, Lake, and Ebert (1985) found that anorexics have less NE in their cerebrospinal fluid than normal eaters. Because anorexics starve themselves and steer clear of carbohydrates, it can be postulated that this lack of NE may be a cause of anorexia (AN) (5).

However, norepinephrine is just the tip of the iceberg. Another chemical messenger thought to be linked to eating disorders is cortisol, a hormone that anorexics tend to have at higher levels than the average person. Cortisol has been shown to work with NE to regulate food intake in rats, particularly inhibiting carbohydrate intake. More importantly, its effects mirror some of the behaviors common in individuals with AN. Therefore, cortisol could be a possible explanation for some of the behaviors and physiological traits of anorexics.

Another messenger, cholecystokinin (CCK), is a peptide released right after a meal that has been found to act as a satiety signal in animals. Its level is lower in bulimics than in normal eaters. This finding supports the idea that changes in the brain lead to changes in behavior, since lower levels of CCK would cause a person to eat continuously without feeling full. In addition, leptin has been shown to decrease food intake; in rodents, a lack of leptin or a problem with the function of leptin receptors results in obesity. However, a direct link between obesity and leptin deficiencies has not been found in humans. Neuropeptide Y (NPY) and peptide YY are other peptides theorized to take part in causing eating disorders. NPY is decreased in people with AN but increased in people with BN, which makes sense since this peptide induces eating (6), (2).

The point of all these hypotheses is an important one. If we were able to say that a specific neurotransmitter controlled even one specific behavior or physiological aspect of eating disorders, we would be taking a step toward finding an effective treatment. However, there are many downsides to taking such a restricted biological viewpoint.

Many of these theories are supported in animals, but there is much uncertainty about how the experiments would replicate in humans. One major doubt I have is that experiments in which animals are manipulated to eat less via a specific NT may simply be lowering the animal's set point, causing a loss of appetite. As mentioned earlier in the example of the obese subjects and the peanuts, the motivation for not eating is unclear. Individuals with AN are said not to eat because they fear food, as opposed to merely having a lower set point; they do have an appetite, evident in their obsessive behavior, yet they remain constantly below their set point. Animal models therefore do not allow scientists to understand the psychological and sociological reasons behind abnormal eating behavior (7).

An additional reason a purely biological account fails to explain eating disorders is that some NTs have a great effect on one type of eating disorder but not on others. An example is CCK being low in bulimics but normal in anorexics. It is difficult to understand how an NT that affects eating (by either increasing or decreasing consumption) could be abnormal in one kind of eating disorder yet have no effect on another. It is also unknown whether the abnormal NT came about because of the disorder or the disorder because of the abnormal NT. These are just some of the questions that should be considered before naively accepting possible explanations of abnormal eating without evaluating additional possibilities (6).

Another cause for concern about the idea that NTs or a malfunction in the brain are the sole cause of eating disorders is that some studies show these brain abnormalities decreasing once a person afflicted with the disorder achieves normal body weight. For example, in an experiment performed by Ursula F. Bailer, M.D., of the University of Pittsburgh School of Medicine, some AN patients showed an increase in serotonin once recovered (8). This would point to the idea that behavior affects the brain and not the other way around. However, Walter H. Kaye, M.D., found that the abnormal serotonin activity in the brains of BN patients persisted after recovery. Such conflicting evidence does not permit a firm conclusion in either direction (9).

Furthermore, a study of a 13-year-old girl who had a tumor on one side of her hypothalamus stands as evidence that eating disorders may be caused by behavior, the brain, or both. The girl's tumor was thought to be the cause of her anorexic behavior; however, after the tumor was removed, the symptoms of AN persisted. The results of her operation can be interpreted in two ways. The symptoms may have persisted because the behaviors were somehow conditioned, changing her brain permanently, or the symptoms may never have been a consequence of the tumor at all, arising instead from her environment. This case could support the idea that NTs or malfunctioning areas of the brain cause eating disorders, as well as the idea that a combination of biological disturbance and environmental factors is responsible. Both arguments seem equally plausible (10).

In conclusion, it is difficult to say whether eating disorders are entirely biological, completely environmental, or a combination of both; most likely, it depends on the specific case. Although eating disorders are allegedly a fairly new phenomenon, there is some evidence that they existed as far back as the 17th century in England. Since proper diagnosis was unavailable at that time, however, it is impossible to tell whether these disorders were more or less prevalent then than they are presently. Society's increasingly intense pressure on girls to be thin may in fact be a cause of eating disorders, but without knowledge of the disorders' prevalence through history, it may be impossible to find out. It is therefore important to consider the current hypotheses with their many limitations in mind (11).

Works Cited

1) Nature vs. Nurture, a website that compares biological and environmental triggers for eating disorders

2) Chapter 3: Motivation and Homeostasis, describes the importance of homeostasis for nutrition

3) The Neural Control of Eating, in-depth discussion of the Dual Center Theory

4) Wikipedia: Obesity, definition of and information on obesity

5) Conditions and Diagnosis: Anorexia Nervosa, helpful site for general information on AN

6) Eating Disorders, helpful site for general information on eating disorders

7) Digest: Breaking Down and Assimilating Eating Disorder Recovery, Popular Culture, Whatever, a blog opinion on eating disorders

8) Kaye WH, Jimerson DC, Lake CR, Ebert MH. "Altered norepinephrine metabolism following long-term weight recovery in patients with anorexia nervosa." Psychiatry Research 14(4) (1985): 333-342.

9) O'Brien, Hugo, Stapleton, & Lask. "Anorexia saved my life: Coincidental anorexia nervosa and cerebral meningioma." International Journal of Eating Disorders 30(3) (2001): 346-349.

10) History of Anorexia, a brief explanation of the history of AN


Full Name:  Anne-Marie Schmid
Username:  aschmid@brynmawr.edu
Title:  Cross Lateralization
Date:  2006-05-12 07:58:53
Message Id:  19352
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

While it is common knowledge that the majority of people prefer using one hand over the other, the idea that one may also have a preferred eye, ear, and foot is markedly less well known. This preference for one side of the body over the other is believed to be caused by lateralization of the brain: the brain is divided into two hemispheres connected by the corpus callosum, and because the brain is contralateral, each hemisphere controls the opposite side of the body. In theory, if one is right-handed, the left hemisphere should be dominant, and one should likewise show dominance in the use of the right eye, ear, et cetera. The same should hold true of those who are left-handed (2).
As with left-handedness, there appears to be a correlation between cross lateralization and certain learning and mental disorders, such as dyslexia and schizophrenia. In the case of dyslexia, at least, the problem is thought to be caused by the brain's storing information in both hemispheres rather than in just one. Storing the information in both hemispheres increases the time required to retrieve it and can result in the information being jumbled. While there is a high correlation between certain learning disorders and cross lateralization, the exact cause of these disorders has yet to be found. Even so, a number of guides published over the last few years suggest that if a child shows signs of cross lateralization, they should be corrected so as to strengthen the naturally dominant side (3). Although this may seem like a way to prevent certain disorders from developing, preventing the child from using the part of the body that is normally dominant may cause more problems. There is also evidence suggesting that increasing cross lateralization may be beneficial, as it forces different pathways in the brain to be used (4). As nothing on the subject is definite at this time, attempting to force a person to be one way or the other does not seem to be the best solution.
References
1) Handedness, Functional Cerebral Hemispheric Lateralization, and Cognition in Male-to-Female Transsexuals Receiving Cross-Sex Hormone Treatment
2) Hiser, Elizabeth. "Hemisphere lateralization differences: A cross-cultural study of Japanese and American students in Japan." Journal of Asian Pacific Communication. Volume 13, Number 2. 2003. p. 197-229(33)
3) Laterality.
4) Maruff P, Wilson PH, De Fazio J, Cerritelli B, Hedt A, Currie J. "Asymmetries between dominant and non-dominant hands in real and imagined motor task performance." Neuropsychologia. 1999 Mar. Vol. 37, Issue 3. p. 379-84.


Full Name:  Julia A Patzelt
Username:  jpatzelt@yahoo.com
Title:  Addiction Paper #3: Is "genetics v. environment" the right question, and are we ignoring more relevant behavioral patterns among generations?
Date:  2006-05-12 08:43:46
Message Id:  19357
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

In my two previous papers on drug addiction, I addressed issues of "nature v. nurture" (genetics v. environment), environmental influences, and the concept of "normal behavior." For my final paper, I will frame some of the recent research findings that draw direct relationships between a person's genetic and environmental development and their propensity to perpetuate the self-destructive cycle of addiction. I will do this by challenging the discreteness of the categories of genetics and environment.
There is strong correlational evidence linking certain neurochemical patterns to a person's tendency toward addiction: "Alcoholics and cocaine addicts often express the A1 allele of the dopamine receptor gene DRD2 and lack the serotonin receptor gene Htr1b" (5). Due to obvious ethical issues, definitive loss- and gain-of-function research cannot be done in humans. The correlational evidence has therefore gained an inordinately dominant role in the contemporary genetic explanation of addictive behavior:
Connecting the neuroscience of natural rewards to drug addiction can explain the ability of addiction to progressively hijack the brain and prevent decisions that support an individual's survival, such as eating and sleeping. "Indeed, a recurring theme in modern addiction research is the extent to which neuroadaptations responsible for various aspects of the addiction process are similar to those responsible for other forms of neural plasticity studied in cellular models of learning, such as long-term potentiation and long-term depression."
(5)

Interestingly enough, the almost afterthought correlation between the addiction and depression processes is potentially more relevant than originally presented. More recent studies on the role of family history in predicting addictive behavior have illuminated an interesting phenomenon. While a family history of explicit addiction does play a role in the behavior of the addict's offspring, other familial aspects of an individual's upbringing seem to play an even bigger role in predicting addictive behavior: "Children of alcoholics have been found to be more likely to have families characterized by less cohesion, more chaos, and more conflict than children of nonalcoholics" (4).

While it is possible that these patterns of chaos are more responsible for addiction than the genetic patterns of dopamine receptors, it is also possible that the patterns of chaos are more genetically grounded than a person's reaction to a drug such as cocaine. If chaotic familial relationships are passed on in the genome, and addiction is a secondary reaction to that environment, then the categories of environment and genetics are inextricably linked.

Following the theory that addictive behavior is not necessarily a primary learned or inherited behavior, and that non-addictive but chaotic familial situations seem to engender addiction in the offspring, one must examine the role reversal of addictive behavior and familial chaos. The influence of an addicted parent reaches beyond an offspring's tendency toward addiction to their tendency toward other psychological and emotional problems: "Along with an increased probability of developing substance abuse problems, offspring of alcoholics also have been found to be more likely to develop affective disorders such as depression and anxiety than offspring of nonalcoholics" (4). Besides reaffirming that the studies which tried to pin down generational addiction as either genetic or environmental were correlational (rather than causal), these new findings allow us to view addiction as a potentially secondary effect of, or coping mechanism for, more primary psychological issues:

Temperament and personality characteristics, such as high levels of sensation seeking, impulsivity, activity, novelty seeking, and neuroticism and low levels of reward dependence and attention-span persistence, have a greater tendency to be found among offspring of alcoholics than offspring of nonalcoholics.
(4)
Does depression or substance abuse come first, and how are both passed on through the generations? These new findings seem to conflate the categories of environment and genetics. While it is certainly possible that addiction is encouraged by certain genetic inheritances, it is important to remain flexible in our studies. Psychological trends may play a primary role in a person's development of addictive behavior, and the patterning is not necessarily exclusively genetic, exclusively environmental, or one-way through the generations.


Works Cited
1)"A Behavioral/Systems Approach to the Neuroscience of Drug Addiction.", The Journal of Neuroscience, May 1, 2002.
2)"Drug Statistics.", Drug-Rehabs.org.
3)"Drug Use Trends.", Drug Policy Information Clearing House.
4)"Predictors of Substance Abuse and Affective Diagnoses: Does
Having a Family History of Alcoholism Make a Difference?"
, Ohannessian, C. et. all. University of Connecticut Health Center. Questia Online Laboratory.
5)"Neurobiology and Behavior Papers, #1 and #2.", Patzelt, Julia. Bryn Mawr College: 2006.



Full Name:  Trinh Truong
Username:  ttruong@brynmawr.edu
Title:  Emotional Reasoning
Date:  2006-05-12 08:47:13
Message Id:  19358
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Often in philosophy, including that of Rene Descartes and, prior to reading this book, my own, reason is thought to be the antithesis of emotions and passions, since the latter are known to make the individual susceptible to bias and prejudice. When I am told to be reasonable, I am well aware that I am also implicitly being told to stop acting so emotional. All of us have been taught, more or less, that "rational processing must be unencumbered by passion." (171) However, reading Antonio Damasio's Descartes' Error, in which the relationship between reason and emotion is examined from a neurological perspective, has altered and shed new light on this prevalent philosophical (mis)conception for me. In his exploration of the interactions between reason and emotion in the brain, the rigid demarcations that differentiate reason from emotion become blurred.

Reason is an indispensable tool for human survival, enabling us to deduce effects from causes and decide on the most beneficial course of action. As Damasio puts it, "the purpose of reasoning is deciding and that the essence of deciding is selecting a response option." (165) However, reason alone cannot sustain a desirable life or select the best response option, especially when personal decisions are involved; another principal agent necessary to this goal is emotion. Through Damasio's studies of the detrimental social effects of prefrontal cortex damage in patients who retained all other cognitive faculties, such as basic attention, memory, intelligence, and language, the connection between impaired reasoning and decision making and impaired emotion was elucidated. Patients with such damage, or with damage to regions of the brain involved in emotion, such as structures of the limbic system, especially the amygdala, were no longer able to experience emotions (62). Following their selective brain damage, their lives unraveled: they lost their jobs, their savings, and their marriages, and became dependent on their families' care. On IQ tests, however, their performance revealed normal and sometimes even superior intelligence, indicating that their cognitive abilities had not been impaired. Still, the failures in their personal lives revealed a true defect in reasoning and decision making.

In an attempt to explain this discrepancy, Damasio developed the somatic marker hypothesis. He believed that the defect sets in at "the late stages of reasoning, close to or at the point at which choice making or response selection must occur." (50) The lack of emotion prevented the patients "from assigning different values to different options, and made their decision-making landscape hopelessly flat." (51) He explains that when we are confronted with a dilemma, even before doing a cost/benefit analysis to isolate the best option, we feel a bad sensation affiliated with the bad outcome of a certain option as that option pops into our mind. Sometimes the thought is too fleeting to be noticed, but the emotion remains to influence our decision; this is the somatic marker. The somatic marker draws attention to the negative consequence of a choice and serves as a warning against it, causing the individual to dispel the thought of that option and decide among the others. It enhances our efficiency in decision making by evoking emotions, previously connected to certain consequences through experience and learning, that highlight actions as favorable or harmful. (174)

With this new understanding of emotion's influence on the brain, I find myself reevaluating the conditions that define a neurological disease versus an individual weakness. Disagreeable behaviors linked to emotion are often attributed to flaws in the individual's character, for which the individual is held accountable and deemed deserving of punishment. On the other hand, similar behaviors that are manifestations of a specifically identified neurological disease are not condemned, because they are viewed as involuntary, unintentional, and innocent in will. The journey I have been led through in the mechanisms of emotion relative to reason leaves me questioning to what extent our emotional choices, or in the patients' cases the lack thereof, are truly voluntary and ill-willed. How do we really know when our undesirable actions result from a lack of will-power, discipline, or character, and when they result, indeed, from a deficiency in the brain? Like this book, the course itself has elicited many of the same questions concerning the brain and the freedom of our behavior, morality, and how much of behavior is subject to our biology rather than free will. Areas that I previously ascribed completely to personal strength of character, I now have to think about from the neurological perspective as well, for morality, in my personal opinion, does not reside in a substance independent of the brain. The world as we see it comes to us through the faculties of our senses and is interpreted by our brain. Thus events, objects, and images are not always as they appear to be; we understand the outside world and ourselves only through our brain, because it is all we have, and any technological instrument is an extension of it.

Besides being entertaining, Descartes' Error, like the course, was very insightful in examining the inextricable relationships between the various sections of the brain, such as those involved in reason and emotion. Although reason may reign unchallenged in the theoretical and computational domains of life, it cannot stand alone without the assistance of emotion in the personal domain; together they make up what we normally call "common sense." Emotion, though it can disrupt rational thinking when left unchecked, "plays a cognitive guidance role" (130) alongside reason. Emotion is no longer simply a luxury when positive and an annoyance when negative; it is essential to our survival.



Full Name:  Gillian Starkey
Username:  gstarkey@brynmawr.edu
Title:  Pheromones: The Best Aphrodisiac (Web Paper #1)
Date:  2006-05-12 09:30:14
Message Id:  19359
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

It has long been known that animals are able to select the best mates through means other than sight. Birds have calls that match up to indicate whom they should mate with, and mosquitoes respond to the humming created by each other's vibrating wings. (7) But at the basis of it all is something much more primal: pheromones.

A pheromone is defined as any chemical message transmitted between members of the same species. There are many types of pheromones; mating is not the only function they serve. (3) These chemicals were first identified in silkworm moths in 1956, when a team of German researchers attempted to find what signaled the moths to perform the "flutter dance," a mating ritual. Eventually the researchers pinpointed a single chemical secreted at the tip of the abdomen of female moths - a chemically pure pheromone. (2) Much more recently, pheromones were also located in male and female golden hamsters.

The idea that pheromones also exist in humans was first proposed by Martha McClintock in 1971. She noticed that women who lived together developed more synchronized menstrual cycles, and deduced that this had to be the result of some chemical message transmitted between the women. (2) The pheromones are thought to be produced by the apocrine glands and to bind to protein receptors in the recipient's nose. The apocrine glands develop and become functional after puberty - right around the time that humans become sexual beings. (3)

The existence of human pheromones was confirmed in 1986 when George Preti and Winnifred B. Cutler established what they described as "a link between heterosexual behavior and women's reproductive physiology." (4) They wanted to see what type and size of effect human pheromones have on other humans, so they conducted a survey with questions about women's sexual behavior and menstrual history. They found that women who had sex with men at least once per week consistently had more regular menstrual cycles and significantly fewer fertility and menopause problems than women who had sex with men rarely or in sporadic phases. Preti and Cutler interpreted this correlation as an indication that exposure to male pheromones helps women's bodies maintain optimal reproductive health. (4)

In 2002, Norma McCoy and Lisa Pitino designed a study to examine the effect of female pheromones on men. (5) They mixed extracted female pheromones into some women's perfume. The sexual behavior of these women was monitored for several weeks and compared against that of women whose perfume contained a placebo (the control group). McCoy and Pitino found that the women wearing the pheromone-laced perfume engaged in sexual behavior with heterosexual men three times as often as the women wearing the placebo perfume. (5) Although the sample size was fairly small, the result was statistically significant, indicating that female pheromones increase sexual arousal in heterosexual men.

By this time it was fairly well established that human pheromones signal (or at least highly contribute to) sexual arousal in heterosexual members of the opposite sex. But the question remained: what signals sexual arousal in homosexual humans? A recent study by Swedish researchers - Per Lindström, Ivanka Savic, and Hans Berglund - explores this issue. The researchers selected 36 healthy men and women; some of the men identified as heterosexual and some as homosexual, but all of the women identified as heterosexual. These subjects were exposed to AND (a testosterone-derived chemical in male sweat, believed to have pheromone properties), EST (an estrogen-derived chemical, also potentially a pheromone), and non-pheromone odors such as lavender and butanol. A "complex brain imaging technique" (6) was used to track the subjects' sexual arousal during exposure. The researchers found that heterosexual men experienced increased arousal when exposed to EST, while homosexual men experienced increased arousal when exposed to AND but not to EST. Heterosexual women were more sexually aroused when exposed to AND but not to EST. In other words, homosexual men had the same response to AND as heterosexual women, which was the opposite of the response of heterosexual men. (1)

This statistically significant finding indicates that responses to human pheromones are "not linked to gender but to sexual preference." (6) Because pheromones clearly influence sexuality, this study "lends more credence to the biological explanation model than to a psychological one when it comes to homosexuality." (6) Lindström, Savic, and Berglund are currently conducting a follow-up study with homosexual women to see if their responses to EST parallel those of homosexual men to AND.

Although the Swedish study does not determine cause and effect - sexuality could be a result of which pheromone a person responds to, or it could determine which pheromone a person responds to - it does provide strong backing for the idea that homosexuality is a biologically influenced characteristic. One remaining issue is the role of free will in determining our sexual attraction. Is what we refer to as our "taste in men/women" really not a cognitive process, but rather a biological attraction explained by pheromones? Is this the reason for the "chemistry" that we feel with someone we're attracted to? Some faces or bodies are more pleasing to some people than others; this new research on pheromones challenges the idea that such preferences are purely aesthetic.

1)"Pheromone attracts straight women and gay men"

2)"A Secret Sense in the Human Nose: Pheromones and Mammals"

3)Definition of Pheromone

4)"The Human Pheromone Discovery"

5)"Pheromone triples women's sexual success"

6)"Gay men 'attracted by same odours as women'"



Full Name:  Gillian Starkey
Username:  gstarkey@brynmawr.edu
Title:  Narcolepsy: Sliding Into Another Nightmare (Web Paper #2)
Date:  2006-05-12 09:50:53
Message Id:  19360
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Narcolepsy is a sleep disorder that affects approximately 1 out of every 2,500 Americans (a total of about 135,000). Characterized by excessive and constant drowsiness during the daytime paired with an inability to sleep soundly at night, narcolepsy is severely debilitating in both the social and occupational areas of life. Narcoleptics also suffer from cataplexy (a sudden loss of muscle tone, during which patients remain conscious) and hypnagogic hallucinations (vivid hallucinatory experiences occurring right before or after sleep). The disorder was originally described as "episodes of muscle weakness triggered by excitement and sleepiness." (2) The key characteristic of narcolepsy is that patients can enter REM sleep directly from a state of wakefulness. Once in REM sleep, they are not easily reawakened, which poses obvious dangers if they fall asleep in the middle of a street, behind the wheel of a car, or in any other hazardous situation.

Narcolepsy has been shown to have a fairly strong familial predisposition: a person with a narcoleptic parent or sibling is roughly 40 times as likely to develop the disorder as a member of the general population. (6) Researchers have also found one genetic marker common among nearly all narcoleptics: a human leukocyte antigen (HLA) variant. However, this marker is also present in about 30% of the non-narcoleptic population. Thus, it is not a cause of narcolepsy, but it is considered a risk factor. (1)
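
As a quick sanity check on what a 40-fold relative risk means against the 1-in-2,500 base rate cited at the start of this paper, here is a minimal sketch; the figures come from the paragraphs above, and the arithmetic is the only thing the code adds.

```python
# Base rate of narcolepsy from the opening paragraph: 1 in 2,500 Americans.
base_rate = 1 / 2500          # = 0.04% of the general population
relative_risk = 40            # first-degree relatives, per reference (6)

familial_rate = base_rate * relative_risk
print(f"general population:     {base_rate:.2%}")      # 0.04%
print(f"first-degree relative:  {familial_rate:.2%}")  # about 1.6%
```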

In recent studies of narcolepsy using dogs that suffer from the disorder, researchers discovered a mutation in the dogs' hypocretin brain receptors. Hypocretin neurons were first discovered by De Lecea et al. in 1998. (7) These neurons, also called orexin neurons, play major roles in appetite and sleep regulation in humans. When a person is awake, properly functioning hypocretin neurons (located in the hypothalamus) synthesize the neurotransmitter hypocretin. Hypocretin then stimulates the release of neurotransmitters with important roles in keeping the person awake - primarily dopamine, norepinephrine, and histamine. (4) Thus, hypocretin neurons work to stabilize the cycles of wakefulness and sleep.

Links between hypocretin neurons and narcolepsy had been established in animals, but there was still no proof that they contributed to narcolepsy in humans. Then, in 2000, Jerome Siegel found that the brains of narcoleptic humans had 85-95% fewer hypocretin neurons than the brains of non-narcoleptic humans. (1) Around the same time, a lack of hypocretin neurons was also discovered in narcoleptic mice. Together, these three lines of evidence strongly support the idea that hypocretin neurons contribute to the development of narcolepsy. (1) This makes sense given the symptoms of narcolepsy and the fact that hypocretin is a major regulator of sleep cycles.

However, while narcoleptic animals showed no total loss of hypocretin neurons (the neurons simply did not function properly because of the mutation), human narcoleptics seemed to be missing most of theirs. (1) Researchers hypothesized that narcolepsy might be caused by the loss of these hypocretin neurons after birth, and tested this hypothesis in a study of 16 human brains. They stained the hypothalamus region of each brain (4 from narcoleptic humans and 12 from non-narcoleptics) to reveal both hypocretin neurons and melanin-concentrating hormone (MCH) neurons, which develop together. (5) In the narcoleptic brains the MCH neurons were not missing, meaning that hypocretin neurons must originally have developed alongside them. (1)

Thus, the researchers concluded that narcolepsy is not the result of an inherent lack of hypocretin but rather the product of the degeneration or destruction of these neurons. Siegel proposed that the loss of hypocretin neurons might be due to an autoimmune attack by the body. (1) There is no clear evidence yet supporting this proposal, but it is still considered a "reasonable possibility." (5) Researchers also noted that the narcoleptic brains showed signs of gliosis, an inflammation of neuron groups that is typical of neurological degeneration and could have caused the destruction of the hypocretin cells. (5)

With regard to treatment, there is no "cure" for narcolepsy, but psychiatrists have experimented with several medications. The first medication used to treat narcolepsy was imipramine (a tricyclic antidepressant), which was effective in alleviating some instances of cataplexy. (2) Currently, the best-known medical treatment is a combination of stimulant and antidepressant drugs. Daytime drowsiness is treated with varying doses of stimulants that function similarly to amphetamines, and some narcoleptics are also given antidepressants to help them sleep more soundly and to decrease instances of cataplexy (antidepressants also seem to relieve patients of hypnagogic hallucinations). (3) However, these combinations of medications have many negative side effects that may make them less worthwhile to take.

It has been proposed that narcolepsy could be helped by increasing the number of hypocretin neurons in narcoleptics' brains. I think it would be interesting to see whether stem cells could be used to re-grow these hypocretin neurons, rather than inserting already-developed hypocretin neurons from other humans' brains. There is, however, a possibility that they would simply be destroyed again: researchers have not yet established whether the degeneration of hypocretin neurons stems from a problem elsewhere in the body or brain, or from some inherent problem with the neurons themselves.


1)"Narcolepsy may be Due to Loss of Brain Cells"

2)"History of Narcolepsy"

3)"Medications"

4)"Orexin"

5)"Scientists Pinpoint Possible Cause for Debilitating Sleep Disorder"

6)"Narcolepsy and the Hypocretin Receptor 2 Gene"

7)"The Sleep Disorder Canine Narcolepsy Is Caused by a Mutation in the Hypocretin (Orexin) Receptor 2 Gene"



Full Name:  Amber Hopkins
Username:  ahopkins@brynmawr.edu
Title:  Fear Itself
Date:  2006-05-12 09:59:31
Message Id:  19361
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

In his first inaugural address, Franklin Delano Roosevelt said that "the only thing we have to fear is fear itself - nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance." But what is this fear? Everyone has experienced it, in varying degrees - but why? Where do our fears come from - are we born with them, or do we learn them?

Fear is useful to us, and to all living animals, as a mechanism for survival. It helps us protect ourselves from things, people, or situations that might harm us or our existence. The fight-or-flight response, present in all animals, is our mechanism for responding to perceived threats and dangers. This response includes increased heart and breathing rates, dilated pupils, reduced saliva production, opened sweat glands, and so on. Non-visible effects occur as well: muscles tighten, fat is metabolized for energy, endorphins are released, and higher levels of judgment and thinking are abandoned in favor of more instinctive, primitive actions (1). But this is not fear - this is how we respond to fear. Fear, then, is the feeling or response we are subject to when confronted with a perceived danger or threat.

Where do we learn what to fear? How does our body know when to employ the fight-or-flight response? At some point we do seem to develop some capability of rationally inferring that "that menacing man in the dark alley is probably not waiting around for a hug" or that "the little bunny is most likely not going to attack me the second my back is turned." But how much of what we fear is based on rationally perceived threats, and how much of it is instinctive, or inherently within us? Consider the following two studies.

In an experiment done by scientists at UCLA, the researchers would play a sound to rats, immediately followed by an electric shock. It did not take long for the rats' amygdalas to pair the sound with the shock, so that the rats began bracing themselves whenever the sound was played - a response of fear. Then the researchers ran the procedure in reverse, playing the sound without the shock, thereby undoing the association between sound and shock and eliminating the fear response. This is a clear example of a fear being learned and then unlearned (2).
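
This kind of acquisition and extinction is classically modeled with the Rescorla-Wagner learning rule, in which the strength of the sound-shock association is nudged toward the outcome of each trial. The sketch below is a generic illustration of that rule, not a model fitted to the UCLA data; the learning rate and trial counts are arbitrary assumptions.

```python
# Rescorla-Wagner update: V <- V + alpha * (lambda - V), where V is the
# association strength between sound and shock, alpha is the learning rate,
# and lambda is 1.0 on shock trials and 0.0 on sound-alone trials.

def rescorla_wagner(trials, alpha=0.3):
    v, history = 0.0, []
    for shocked in trials:
        lam = 1.0 if shocked else 0.0
        v += alpha * (lam - v)   # move toward the outcome actually received
        history.append(v)
    return history

# 10 paired trials (acquisition), then 10 sound-alone trials (extinction).
curve = rescorla_wagner([True] * 10 + [False] * 10)
print(f"after acquisition: {curve[9]:.2f}")   # close to 1.0: fear learned
print(f"after extinction:  {curve[19]:.2f}")  # back near 0.0: fear unlearned
```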

Second, consider the relationship between monkeys and snakes. According to a study done by Susan Mineka in the 1980s, wild-born monkeys are completely terrified of snakes, even to the point of cowering in the corner of a cage containing a plastic snake. Captive-born monkeys, on the other hand, are not afraid of the plastic snake. If shown a recording of a wild-born monkey reacting fearfully to the snake, a captive-born monkey will also develop this fear; but if the image of the snake is superimposed with the image of a flower, so that the wild monkey appears to be afraid of the flower, the captive monkey does not develop a fear of flowers. This suggests that there is some kind of pre-programmed instinct as to what should or should not be feared, but that some of these instincts must be triggered by the environment in order to be realized (3).

It is interesting, however, to consider the biological anomaly of the Galapagos Islands. Because the animals on these islands have long been protected from humans, and because the islands have no natural predators, the animals there are not afraid of humans. If humans or some other animal did begin hurting the animals there, it seems logical that they would begin to develop a fear of that person or animal. This is consistent with the idea that fears (or the lack thereof) can be learned or unlearned. However, there is no way to determine how completely this would harmonize with the idea that some fears are innate but need to be environmentally triggered, because we are incapable of knowing how deeply rooted these fears are. Since the animals of the Galapagos have been isolated from fear-inducing animals for so long, do they still have these instincts within them, or would even a small threat prove destructive because the animals could not learn to fear it quickly enough to ensure their own survival?

The ideas that fears can be learned or unlearned, and further that many fears exist only as a result of social interaction, leave us in an interesting situation. Would it be worthwhile to work at making this world more fearless? In many situations where the fear is insignificant, or, as Franklin Delano Roosevelt would say, "unreasoning" and "unjustified" - such as a fear of public speaking or of going to the dentist - few detrimental results can be expected if the fear is unlearned. However, where the fear is more survival-oriented, such as fear of a man with a gun or of venomous snakes, it would not seem wise to "unlearn" the fear, as doing so might compromise an individual's ability to perceive threats and respond accordingly.


1) Fight/Flight Response, Psychotherapy website

2) How Fear Works, How Stuff Works

3) The Genome Changes Everything, Edge



Full Name:  Amber Hopkins
Username:  ahopkins@brynmawr.edu
Title:  Phantoms in the Brain
Date:  2006-05-12 10:03:15
Message Id:  19362
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

How does a person with an amputated limb continue to feel not only the presence of the limb, but also sensations of pain or feeling in that limb? Why does one person begin seeing hallucinations after damage to some part of the brain, while another loses sight in some region of the visual field but is still able to "sense" things there without actually seeing them? These are a few of the questions that V.S. Ramachandran sets out to answer in his fascinating book, Phantoms in the Brain. In answering them, he proposes several radical new ideas about the brain and its relation to the individual.

His first claim is that "Your own body is a phantom, one that your brain has temporarily constructed purely for convenience." Thus, he argues that pain too is a phantom created by the brain, and that it, together with the phantoms of our bodies and of sensory experience, exists only as a tool to sustain our existence. This leads him to the theory that "perhaps we are hallucinating all the time and what we call perception is arrived at by simply determining which hallucination best conforms to the current sensory input." I am not sure what viewpoint I identify with most concerning the mind/body problem, but as of now I do not completely agree with his claim that "We have given up the idea that there is a soul separate from our minds and bodies." Perhaps I am still clinging to the beliefs I was raised with, because I find security in the belief that there is more than just this life, but as of now I am not ready to accept the view that the "soul" exists only as part of the mind and body.

The most radical idea Ramachandran suggests is that "God" and religion are purely constructs of the human mind, and that a "religious circuit" in the temporal lobes of the brain is responsible for this spirituality. At first I was surprised that he would make such a controversial claim, but his logic and arguments are rather convincing. He acknowledges that "we still don't know whether these circuits evolved specifically for religion or whether they generate other emotions that are merely conducive to such beliefs," but I agree with him that the possibility of science being able to explain "God" is a very exciting and extremely fascinating concept.

Ramachandran's discussion of the Penfield homunculus was also particularly interesting to me, as he explained how the nervous system remaps itself after sustaining an injury such as the loss of a limb. The interconnectedness this suggests within the brain, and within the nervous system as a whole, could have ramifications for other physical conditions, such as paralysis or quadriplegia. It might even extend to other disciplines, perhaps to acupuncture or massage therapy.

One aspect of this book that I particularly enjoyed in relation to our Neurobiology and Behavior class was that Ramachandran went deeper into many of the topics we touched on. This gave me a better understanding of topics I had not fully grasped in class alone, such as what is going on when a patient experiences a phantom limb, the phenomenon of "blindsight," or hallucinations. I also enjoyed Ramachandran's version of the I-function as an unconscious "zombie" within ourselves guiding us through much of life, and the fact that he, too, holds that the self has both a zombie and a conscious part. Our view as a class was that conscious awareness is like a story made up of feelings, intuitions, and thoughts that may conflict and are therefore revisable; Ramachandran's view is that we are subject to these same elements, continually checked against each other and against specific parts of the brain or body (which becomes an issue in phantom limbs or when a part of the brain is damaged). The two views support each other's validity very well. Ramachandran further explains the extent to which our "selves" and personalities are malleable in his statement "that the self that almost by definition is entirely private is to a significant extent a social construct - a story you make up for others."

In Phantoms in the Brain, Ramachandran does very well at making his subject matter and ideas understandable to "normal" people who are not necessarily astute students of the brain or of psychology. At the same time, he expresses his ideas and his studies in a way that keeps the attention of those well versed in these topics, making the book accessible to readers at all levels. His use of humor, personal anecdotes, and suggestions for easily performed experiments further allows the reader to connect with the material and relate his observations to everyday life.

1) Ramachandran, V.S. and Blakeslee, Sandra. Phantoms in the Brain: Probing the Mysteries of the Human Mind. Harper Collins, 1998



Full Name:  Gillian Starkey
Username:  gstarkey@brynmawr.edu
Title:  Schizophrenia: The Biological Basis (Web Paper #3)
Date:  2006-05-12 10:32:30
Message Id:  19364
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Advances in biological research in recent years have led to the development of some highly effective medications for psychological disorders such as bipolar disorder, unipolar depression, and attention-deficit disorder. However, what is arguably the most severe, pervasive, and debilitating psychological disorder remains a mystery to biologists and psychologists everywhere - schizophrenia. For ages, hypotheses regarding the biological causes of this disorder have been left unconfirmed, supported only by data that is still full of contradictions. What are these hypotheses, what evidence lends support to them, and how could they potentially work together to create the symptoms that are characteristic of schizophrenic patients?

Schizophrenia occurs in about 1% of the adult population; this statistic is surprisingly consistent across all cultures and social groups, and there has been no substantial increase or decrease in the percentage since it was first recorded. (6) The only factor that seems to make people more inherently susceptible to developing schizophrenia is a significant familial predisposition: there is approximately a 6.6% concordance rate among first-degree relatives, and a 47% concordance rate between monozygotic (identical) twins. (1) In other words, a person is almost 50 times more likely to develop schizophrenia if they have a schizophrenic identical twin than if they do not. This implies a strong genetic component to the development of schizophrenia; chromosomes 1, 2, 4-11, 13, 15, 18, 22, and X have all been consistently marked as vulnerable in schizophrenic patients. (1) It is believed that one or more of these chromosomes is problematic and contributes to the onset of the disorder. Researchers generally agree that this genetic predisposition is not necessary for developing schizophrenia, because some cases occur without any apparent familial history, but that genes "could be responsible for triggering the necessary 'susceptibility' or 'vulnerability' of schizophrenia in a population." (1)
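
The "almost 50 times" figure follows directly from the two numbers above; here is a quick sketch of the arithmetic, using only the base rate and twin concordance already cited.

```python
base_rate = 0.01        # ~1% lifetime prevalence in the general population
mz_concordance = 0.47   # concordance between identical twins, per (1)

print(f"relative risk for an identical twin: {mz_concordance / base_rate:.0f}x")  # 47x
```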

Another biological trend in schizophrenic patients is a high rate of pre- and perinatal birth complications. Schizophrenic patients also have a higher-than-average rate of childhood-onset neurological disorders, such as epilepsy and cerebral infections. (1) Whether these problems contribute directly to the development of schizophrenia is unknown. It is possible that they occurred because of the pre-existing genetic weaknesses that are also characteristic of schizophrenia; in other words, this correlation could be inconsequential. Researchers hypothesize, however, that these "early cerebral insult[s]," in combination with genetic factors, may be responsible for an inherent "nervous system fragility in the brain." (1)

In addition to these early childhood disorders, there are three neurotransmitters in which researchers have pinpointed abnormalities in schizophrenic patients. The first (and most widely researched) is dopamine, which exists at unusually high levels in schizophrenic patients. Dopamine plays a significant role in motor actions and regulates prolactin secretion in the anterior pituitary gland. Dopamine is also associated with the pleasure system; it is released when we engage in physiologically rewarding activities such as eating and sex (our dopamine levels also increase when we take certain drugs). (7) The second problematic neurotransmitter is serotonin, which is involved in mood regulation, sexuality, appetite, and sleep patterns. (8) Finally, researchers have noted decreased numbers of GABA (gamma-aminobutyric acid) receptors in schizophrenic patients. (2) In humans, GABA controls inhibitory synapses in the central nervous system. (9) Many symptoms of schizophrenia are associated with decreased inhibition, to which a deficiency in GABA receptors could well contribute.

The most efficient way to study neurotransmitter levels in schizophrenic patients is to examine the effects that antipsychotic drugs have on patients' symptoms. The first antipsychotic ever used for schizophrenia was chlorpromazine, introduced in 1956; it has been highly effective in treating positive symptoms (not "good" symptoms, but symptoms that represent an excess of normal functioning, such as delusions, hallucinations, and grossly disordered thought). In the 50 years since the release of chlorpromazine, the number of schizophrenic patients requiring hospital treatment in the United States has dropped from around 500,000 to 100,000. (6) Considering the increase in the U.S. population since the 1950s, the per-capita rate of hospitalization has decreased by even more than the raw 80%.
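
The per-capita claim can be checked with rough census figures. The population numbers below are my own assumptions (roughly 165 million Americans in the mid-1950s and roughly 300 million in 2006), not figures from the paper's sources; the hospitalization counts come from the paragraph above.

```python
# Hospitalized schizophrenic patients, per reference (6).
then_patients, now_patients = 500_000, 100_000

# Assumed U.S. population figures (approximate census values, not from the sources).
then_pop, now_pop = 165_000_000, 300_000_000

then_rate = then_patients / then_pop   # ~0.30% of the population
now_rate = now_patients / now_pop      # ~0.03% of the population
drop = 1 - now_rate / then_rate
print(f"per-capita decrease: {drop:.0%}")  # ~89%, i.e. more than the raw 80%
```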

Chlorpromazine is structurally (although not functionally) very similar to dopamine, which could explain why it affects schizophrenic patients the way it does. Researchers hypothesize that chlorpromazine preferentially occupies dopamine receptor sites, thus decreasing the amount of dopamine signaling in the body. (3) This would help to re-balance dopamine levels in patients with schizophrenia, alleviating some of the symptoms associated with dopamine functions. This "dopamine hypothesis" outlines four possible reasons for the atypically high level of dopamine signaling found in schizophrenia (a toy sketch after the list makes each one concrete):
1. A higher-than-average amount of dopamine could be released into the body.
2. An excess of dopamine receptor sites could increase the amount of dopamine actually passed through each neural synapse.
3. Instead of having an abnormally high number of dopamine receptor sites, these receptor sites could just be unusually sensitive to dopamine, thus picking up more dopamine from the synapse.
4. The pre-synaptic neuron could reuptake an unusually low amount of dopamine, leaving more dopamine in the synapse to be taken up by the post-synaptic neuron. (6)
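
To make the four possibilities easier to compare, here is a minimal toy sketch in which the effective dopamine signal is simply the product of release, reuptake, receptor density, and receptor sensitivity. This illustrates the logic of the list and is not a model from the cited sources; all the numbers are arbitrary.

```python
# Effective dopamine signal as a crude product of the four factors above.
def dopamine_signal(release=1.0, reuptake=0.5, receptors=1.0, sensitivity=1.0):
    in_synapse = release * (1 - reuptake)        # dopamine left after reuptake
    return in_synapse * receptors * sensitivity  # what the post-synaptic side sees

baseline = dopamine_signal()
variants = {
    "1. excess release":      dict(release=1.5),
    "2. more receptor sites": dict(receptors=1.5),
    "3. oversensitive sites": dict(sensitivity=1.5),
    "4. reduced reuptake":    dict(reuptake=0.25),
}
for name, kwargs in variants.items():
    # Each mechanism alone raises the signal 1.5x over baseline here.
    print(name, round(dopamine_signal(**kwargs) / baseline, 2))
```
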
Researchers then discovered that there is no increase in the level of dopamine metabolites in schizophrenic patients, which they took to eliminate the possibilities that there are too many dopamine receptors or that the receptors are too sensitive. (3) So either an unusually high level of dopamine is released into the body to begin with, or the dopamine reuptake sites on the pre-synaptic neurons are less active than normal.

A number of other studies have produced data supporting the dopamine hypothesis (besides the efficacy of dopamine-blocking antipsychotics such as chlorpromazine). The most important of these is a study conducted with amphetamine abusers. Amphetamines cause an excess of dopamine in the bodies of people who abuse them, resulting in a condition called "amphetamine psychosis." The characteristic behaviors of people experiencing amphetamine psychosis are indistinguishable from those of schizophrenic patients. (6) This indicates that dopamine does indeed at least contribute to (if not cause) many symptoms of schizophrenia, and therefore that it is reasonable for researchers to target this neurotransmitter in further research.

Several other phenomena lend support to the idea that schizophrenia is largely caused by biological abnormalities. For example, rates of schizophrenia are significantly higher in men than in women; males also tend to experience an earlier onset of the disorder and usually receive a worse prognosis. (1) It is hypothesized that female cases may be less critical and debilitating because estrogen could act as a blocker of dopamine and serotonin receptors; in other words, women could inherently possess a natural antipsychotic within their bodies. This hypothesis is still being tested, but it is supported by another statistic: females with schizophrenia almost always have lower levels of estrogen than females without schizophrenia. (1)

Recent research has focused primarily on damage to patients' neuroanatomy and on faulty biochemical mechanisms. Studies show that patients with schizophrenia consistently have enlarged ventricles and cortical atrophy, particularly in the temporal lobes and the prefrontal cortex. The ventricles provide pathways for the circulation of cerebrospinal fluid in addition to protecting the brain from trauma. Enlarged ventricles could increase the amount of cerebrospinal fluid in circulation (5), which could potentially lead to some of the symptoms - particularly the physiological ones - characteristic of schizophrenia. Enlarged ventricles and cortical atrophy could also be less effective in protecting the brain from trauma, which could help to explain the correlation between early childhood neurological disorders and the later development of schizophrenia.

Unfortunately, there is still not enough solid evidence to confirm any of these theories about the biological contributors to schizophrenia. Schizophrenia is a difficult disorder to study because it covers a very wide spectrum of disorders with many different combinations of symptoms. Currently, the best way to treat schizophrenia is with antipsychotic drugs and the removal of environmental stressors that aggravate symptoms. All that we can know for sure at this point is that there must be some biological precursor or predisposition that makes some people react to environmental stressors by developing the disorder.

1)"Tracking Genetic And Biological Basis of Schizophrenia"

2)"Biological Basis"

3)"Crossed Lines: What can atypical antipsychotics tell us about schizophrenia?"

4)"Recent Advances in the Neurobiology of Schizophrenia"

5)"Lateral Ventricles"

6)"Neurobiology of Schizophrenia"

7)"Dopamine"

8)"Serotonin"

9)"GABA"



Full Name:  Rebecca Woodruff
Username:  rwoodruf@brynmawr.edu
Title:  Paradigms of Neurobiology
Date:  2006-05-12 10:50:46
Message Id:  19365
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

As expected of a scientific field, contemporary neurobiology operates within a reductive materialist paradigm. However, in keeping with the unexpected and even unprecedented results of Dr. Jeffrey Schwartz's research, perhaps the focus of attention within the paradigm could use some revision. In his book The Mind & The Brain, Schwartz brings to light new insights into the structure and function of the human brain through his research on Obsessive Compulsive Disorder (OCD).

Throughout his career as a neurobiologist, Schwartz described himself as unsettled by the common limiting practice of reducing a patient to his or her disease, and reducing the disease to problems at the neuronal level. In particular, he was apprehensive of the reductionism associated with the treatment of OCD as seen in the practice of Exposure Response Prevention. This treatment operated on the assumption that triggering obsessions and physically preventing the fulfillment of compulsions was the only way for patients to improve. To Schwartz, this not only seemed unethical, since it put patients in excessively stressful and sometimes dangerous situations, but also impractical when considering the unique characteristics of OCD.

OCD is an ego-dystonic anxiety disorder characterized by recurring cycles of obsessions and compulsions. On a quantitative level, Schwartz's research with fMRI technology suggests that OCD results from the brain's differential use of two neural pathways associated with the orbital frontal cortex, the anterior cingulate gyrus, and the caudate nucleus. On a more qualitative level, a crucial diagnostic characteristic of the disorder is the presence of some level of introspection and insight on the patient's part: the recognition that the obsessions are not real and that fulfilling the compulsion will not prevent something terrible from happening. Despite this realization, however, OCD patients have trouble ignoring the obsessive-compulsive cycle. This ego-dystonic element of the disorder struck Schwartz as very similar to the Buddhist meditative practice of mindfulness, a realization that spurred him to see whether the very cure for OCD lay within the power of the human brain.

What Schwartz found was startling for much of the neurobiological community. He found that entirely unmedicated patients were able to function without the constant torment caused by the obsessive-compulsive cycle. Patients reported feeling a freedom from OCD that they had never felt before by merely following a four-step program: relabeling the feelings associated with the onset of the cycle as a malfunction of the brain, reattributing the feelings to that malfunction, refocusing their attention on a productive task instead of the compulsion, and finally revaluing the entire process as neither good nor bad. Schwartz himself sums up the process: "By Refocusing attention in a mindful fashion, patients change their neurochemistry" (368). This treatment plan relies solely on intangible mental processes, or thinking, yet it produces tangible, quantifiable changes in the structure of the brain. While Schwartz was elated with the unforeseen efficacy of his new treatment plan, when he tried to describe this translation of immaterial force into material change, he found a roadblock in the current paradigm.

As a result, Schwartz proposed that a paradigm shift is imminent and essential for the understanding of his research. Instead of a traditional reductive materialist paradigm such as Functionalism or even Epiphenomenalism, Schwartz suggested that a non-dualistic, non-reductive understanding of the brain had to take effect. However, upon close examination, what Schwartz construes as remarkable results of his treatment are surprisingly unremarkable, and can be perfectly well understood within the confines of a reductive materialist worldview.

The idea that mental power exists, while perhaps considered unsophisticated by the neurobiological community, can be explained within the reductive materialist way of thinking as an emergent property. Emergence can easily be seen in the order and complex organization of the communal interactions that occur in the form of corollary discharge signals within the nervous system: neurons interact without a decided leader, and the sum of their interactions produces a level of complexity that is greater than merely the sum of its parts. As Schwartz writes, "There, I propose that the time has come for science to confront the serious implications of the fact that directed, willed mental activity can clearly and systematically alter brain function; that the exertion of willful effort generates a physical force that has the power to change how the brain works and even its physical structures" (18). Therefore, perhaps Schwartz's "mental force" is merely one example of an observed emergent property in the nervous system.

Naturally, it seems ludicrous that such an established figure would overlook this complication. Perhaps, then, Schwartz locates the necessary paradigm shift not in the overthrow of science altogether, but in the mindset of the leaders in the field. In other words, perhaps the problem lies not within reductive materialism, but in the way that neurobiologists, psychologists, and clinicians view reductive materialism. Perhaps Schwartz's paradigm shift does not refer to the reductive materialism debate at all, but to a needed change in clinicians' and neurobiologists' attitudes towards emergent products such as mental force.

Current trends in the treatment of OCD, such as psychopharmacology and physical restraint, center on manipulation at the neuronal level. While these treatment methods are effective to some degree, they have yet to produce unquestionable success rates. Schwartz's treatment differs from medication and physical restraint in that it addresses the fact that OCD results from both neurons and mental force, the emergent product of the interactions between the neurons. By addressing the disorder at a new level, Schwartz saw not only an increase in the efficacy of his treatment, but a fundamentally different model of what a treated OCD patient looks like.

Schwartz's paradigm shift looks very different given the following assumptions about the current worldview: reductive materialism takes into account the occurrence of emergent events; emergent products may be different in nature and composition from their constituent parts; and the effects of emergent products may produce real, physical changes in the environment they interact with. Perhaps, then, Schwartz's paradigm shift refers to the change that must occur at the diagnostic and therapeutic level. In order for OCD, and presumably a whole host of other disorders, to be treated properly, the intangible emergent properties that define the disorders must be taken just as seriously, and treated with the same resolve, as the tangible parts of the human body that are also involved in the disorder. Perhaps only by adopting a more holistic approach to medicine and the treatment of patients can the results of clinicians' dreams become a reality.


Schwartz, Jeffrey M. M.D. and Sharon Begley. The Mind & The Brain. New York: Harper Collins, 2002.



Full Name:  Rebecca Woodruff
Username:  rwoodruf@brynmawr.edu
Title:  The Drug Dilemma: Exploring Past and Contemporary Attitudes Towards the Treatment of Obsessive Compulsive Disorder in Adults
Date:  2006-05-12 11:10:09
Message Id:  19370
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip


Current trends in the treatment of neurobiological and psychological disorders reveal both the paradigmatic shifts in the field and the limits of our understanding of the brain at large. Trends in medicine therefore function not only in a healing role, but also as an indicator and descriptive tool of the paradigm. While this holds true for any neurobiological disorder, the treatment of Obsessive Compulsive Disorder (OCD) reflects a true transition in our understanding of the brain. Specifically, the transition from traditional psychopharmacology to much more controversial "thinking treatment" therapy calls into question the current Functionalist and Epiphenomenalist ways of viewing the brain. While the materialist paradigm remains intact, the way of viewing the paradigm seems to have shifted from one that emphasizes the reductive element to one that emphasizes the emergent properties of the brain.

Thousands of years before Descartes' separation of the mental from the material, philosophers were already suggesting that the brain fundamentally differs from the rest of the body (1). While the nuances of the hypotheses have changed, the core argument that mind and matter are two entirely different entities remained unchanged. With the advent of materialism and reductionism, however, philosophers began to question the notion that the mental state existed at all. This transition effectively reduced the mind to the sum of its constituent parts, a change that, in the West, is reflected in the transition from mystical to mechanistic medical care. Recently, though, several scientists have begun to question the reductive materialism that is decidedly in vogue in the neurobiological field. As a result, after years of very specific treatment options, new therapies that rely on an entirely different understanding of the brain are emerging. Even more surprisingly, empirical evidence is beginning to suggest that these new treatment options are more effective than their predecessors (1).

OCD is a serious anxiety disorder that affects roughly 7 million Americans (3). The disorder manifests itself as a recurring cycle of obsessions followed by the intense need to fulfill a compulsion in an attempt to break the cycle. Recent studies using fMRI technology correlate the presence of OCD with metabolic differences in the orbitofrontal and cingulate cortex and the caudate nuclei of the basal ganglia (8). Additionally, some case studies indicate that OCD can occur spontaneously with specific damage to a localized region of the brain, a mutation of a gene (4), or an autoimmune mechanism (6). However, the characteristic that truly sets OCD apart from a host of other anxiety disorders and from true psychosis is the presence of ego-dystonic insight into the validity of the obsessions and the necessity of performing the compulsions (5). According to the Diagnostic and Statistical Manual of Mental Disorders, even though OCD occurs on a spectrum of severity, this ego-dystonic insight is essential to a true OCD diagnosis (5)(7). This observation is crucial to understanding the change in treatment options, and in the understanding of the brain at large, that is currently happening within the neurobiological community.

Examination of the treatments of OCD reveals much about how the neurobiological and psychological community views the workings of the brain. While there are currently many hypotheses in circulation about the origin of the disorder, the explanation to which the majority of psychiatrists subscribe is the "Serotonin Hypothesis" (2), a reductive materialist view. Clinicians who subscribe to this view of OCD and the brain elect a treatment course consistent with its observations and implications: pharmacology. While SSRIs, tricyclic antidepressants, MAOIs, and even hormone therapy have demonstrated some degree of success in the treatment of the disorder, many clinicians are beginning to question their overall efficacy (1). Viewing the brain and OCD in a reductive materialist way reduces the disorder to the neuronal level, which gives any treatment a decidedly neuronal focus. By this way of thinking, the symptoms of OCD are metabolic or biochemical in origin, so the only logical way of treating the patient is with substances that affect the metabolism or biochemistry of the brain. However, this view of the nervous system does not take into account the distinctive qualities of the brain, and of OCD, that uniquely separate them from other body parts and other neurobiological disorders.

From a historical point of view, the Serotonin Hypothesis reflects the applied changes resulting from the explosion of the reductive materialist paradigm. In accordance with this worldview, the brain is nothing more than the sum of its parts, and no different from any other organ in the body. Treatment reflects this attitude, as the brain is washed with chemicals in the hope that gross-level changes will follow. While this option works to a limited degree, clinicians are turning to other treatment options derived from wholly different hypotheses about the brain, and they are seeing some surprising results.

Recently, several neurobiologists have pushed the limits of our understanding of the brain and have documented remarkable results. These scientists have noted the unique properties of OCD, ego-dystonic insight and a firm base in reality, and have decided to exploit them. Choosing to treat the disease not only on the neuronal level but also on the level of the emergent product, these clinicians' and researchers' work has generated much excitement and controversy within the medical community (1). The reasoning follows that if ego-dystonic rational insight separates OCD from many other neurological disorders, why not exploit that quality to the benefit of the patient? Specifically, this treatment focuses on the patient's understanding of the biological, chemical, and personal levels of the disorder.

If it were to take hold, this way of treatment, which functions independently of medication, would mark a true change in the neurobiological community's attitude towards the brain. While structural change would still occur on the neuronal level, the mechanism to bring about such change differs dramatically. The first treatment option does not recognize the differences between the brain and any other part of the body; the second works with the unique characteristics of the brain to bring about change. Though many balk at the fact that the second option deals with intangible rather than tangible elements of the brain, the empirical evidence suggests that some important change is nevertheless taking place. As to the true efficacies of these treatment options, only time and research will be able to distinguish them.


1) Schwartz, Jeffrey M., M.D., and Sharon Begley. The Mind & The Brain. New York: Harper Collins, 2002.

2) "As Good As It Gets? Exploring The Causes of and Pharmacological Treatments For Obsessive-Compulsive Disorder"

3) Medline Plus. The US National Library of Medicine and the National Institutes of Health.

4) "Gene for Obsessive Behavior"

5) "DSM-IV: Obsessive-Compulsive Disorder (OCD)"

6) Obsessive Compulsive Foundation

7) "Obsessive-Compulsive Disorder and Delusions Revisited"

8) "Error-Related Hyperactivity of the Anterior Cingulate Cortex in Obsessive-Compulsive Disorder"



Full Name:  Brittany Peterson
Username:  bpeterso@brynmawr.edu
Title:  Panic Attacks: What Are They and Why Do They Happen?
Date:  2006-05-12 14:01:45
Message Id:  19377
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

While looking for a topic to research for my final web paper, I began to worry that I would never find a topic, and thus never finish my paper, fail the course, and so on. Panic responses began gearing up in my body as this vision unfolded in my mind, and I had to force myself to calm down. "Stop having a panic attack," my calm inner voice said, and suddenly I had a topic for my paper. As I researched the subject of panic attacks, I found that considerable research had been done on the topic, more than I would have expected. Given that such an experience is intense, emotional, subjective, and difficult to describe, I supposed that it would also be difficult to research and to define, but in fact the disorder and the varied experiences that come with it are very clearly defined. While the cause has not yet been determined (probably because there is no one cause, but instead several contributing factors which interact in a given situation), the research done so far is extensive, and researchers seem to be on the verge of finding answers. Over the course of my research I had some thoughts regarding possible explanations for certain specific aspects of the disorder, which I did not see discussed in my sources. While these are only nascent, since I have not performed professional or extensive research, I have included them in the relevant sections of the paper.

A panic attack is an episode of sudden, intense and overwhelming fear, which usually happens without any apparent trigger (1). Attacks usually reach their strongest point after about ten minutes, and then begin to recede (8). Panic disorder is the constant recurrence of such attacks (2). Panic attacks are the most common type of emotional affliction faced by adults in the United States, with about a third of all American adults having at least one during the course of a given year (1).

There are numerous symptoms experienced during a panic attack, though usually only a few are experienced during any given attack. Physical symptoms include an increase in heart and breathing rate, shaking, dizziness and nausea, trouble breathing, sweating, and waves of intense heat and cold. Psychologically, many individuals experiencing an attack feel as though they are losing control or even dying. A panic attack can feel like a heart attack or like the symptoms of a serious thyroid problem, and so many people think that they are experiencing one of these ailments and seek treatment accordingly (2). Attacks usually do not last long, since the body can only remain in this state for short periods of time, but attacks can repeat at short intervals for as long as several hours (5). When the panic attack is over, feelings of fatigue often follow, due to the letdown after the rapid increase in adrenaline (2).

Two organs are involved in the process most commonly hypothesized to underlie a panic attack, a proposal called the noradrenergic hypothesis. The first step is the stimulation of a malfunctioning locus ceruleus. This is a region of the noradrenergic nervous system whose neurons contain high levels of norepinephrine. When the brain has too little of this neurotransmitter and too little serotonin, the locus ceruleus is not sufficiently inhibited, and it sets off an uncontrolled, overly strong adrenal gland response, the results of which are the symptoms of a panic attack. The dysfunction of the locus ceruleus may be exacerbated by high levels of stress over long periods of time (7). The organ that receives the signals sent off by the locus ceruleus is the amygdala, which translates those messages into signals that are sent all over the body and set off the attack. An imbalance of neurotransmitters may also result in a dysfunctioning amygdala, which only makes the problem worse (10). The amygdala lies deep within the brain, in the temporal lobe, at the end of the loop of the caudate and just in front of the hippocampus (the link in the sixth item on my list of resources below has an excellent diagram showing these structures) (6). The amygdala holds onto an individual's memories of past events and the feelings and reactions that person had during those experiences. When the amygdala perceives that one is in danger, it uses the stored memories to remind that person of the dangerous event from the past, and thus evokes similar emotional and physical responses (10). This way of envisioning the process of a panic attack was by far the most widely supported and cogent one I found during my research, and so I think it is probably the best and most likely explanation.
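
The disinhibition logic here can be made concrete with a toy calculation. The following sketch is my own illustration, not taken from any of my sources; the function names, weights, and thresholds are all invented for the example. It treats the locus ceruleus as a unit whose output is normally held down by serotonin and norepinephrine "tone," feeding a downstream amygdala/adrenal stage that amplifies whatever gets through:

def locus_ceruleus_output(drive, serotonin_tone, norepinephrine_tone):
    # Inhibition from serotonin and norepinephrine normally keeps output low.
    # Weights of 0.5 are arbitrary illustrative values.
    inhibition = 0.5 * serotonin_tone + 0.5 * norepinephrine_tone
    return max(0.0, drive - inhibition)

def alarm_response(lc_output, gain=3.0, threshold=0.5):
    # Amygdala/adrenal stage: amplifies any locus ceruleus output past a threshold.
    return gain * max(0.0, lc_output - threshold)

# Normal inhibitory tone: a stressor produces a proportionate response.
print(alarm_response(locus_ceruleus_output(2.0, 1.0, 1.0)))   # 1.5
# Depleted serotonin/norepinephrine: the same stressor produces a runaway response.
print(alarm_response(locus_ceruleus_output(2.0, 0.3, 0.3)))   # 3.6

The point of the sketch is only that the same external "drive" yields a disproportionately larger alarm when the inhibitory tone drops, which is the picture the noradrenergic hypothesis paints of a panic attack.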

Several types of panic attacks have been classified. One type is known as the "spontaneous panic attack"; this kind of attack occurs for no apparent reason and has nothing whatsoever to do with what the individual is experiencing. These attacks may even occur while a person is sleeping.

Another type of panic attack is the "specific panic attack", which does occur in relation to a specific stimulus. However, these stimuli only set off the panic response because the individual has an intense fear of the stimulus due to another disorder. For example, a person who has a bird phobia might react strongly to seeing a bird, whether or not the animal was behaving in a way that would normally evoke such a strong response.

The third and final type of panic attack is similar to a specific attack in many ways. "Situationally predisposed panic attacks" are triggered by a particular situation, but not in connection with another emotional ailment. For example, some people simply tend to experience panic attacks in a certain situation, such as while driving or exercising, but they do not have a particular fear of these situations or a traumatic memory of similar experiences (4). I think it is possible that, for a given individual, a given action may increase the levels of various chemicals in the body, thus predisposing them to an attack. This could originally have been caused by simple variation in individual body chemistry, and the correlation may then have strengthened over time as the body began to "expect" certain responses to the chemical environment created by the instigating action. It is certainly true that when one experiences intense fear, the experience remains strongly imprinted in the memory, possibly as an evolutionary mechanism to help recognize the fear-initiating situation in the future.

The symptoms described above certainly sound like what one might experience in a normal reaction to a frightening situation, and might not seem like anything unnatural. The difference between a normal fear response and a panic attack, however, is that with the latter (except in the case of specific and situationally related attacks) there is no apparent cause for the body's intense reaction, but the fight-or-flight response system is set off anyway (2). Even in the case of specific and situational attacks, the stimulus is not the sort of thing that would normally set off such a response; usually only a life-threatening situation could do that, whereas those who experience specific attacks are responding to something like a social situation or a memory (4). Panic attacks are particularly difficult to avoid because many of the processes involved are so similar to the basic animal fight-or-flight response. That response is regulated at the most basic level, and is designed to help us cope with dangerous situations without having to think consciously about what we are doing (9). This means that an individual may have very little to no control over what is occurring during an attack.

There are several conditions and circumstances associated with the occurrence of panic attacks. For example, many people who suffer from attacks have a family history of attacks or of another disorder (2). They may also themselves suffer from related problems, such as phobias, obsessive-compulsive disorder, or social anxiety disorder (4). Agoraphobia, the fear of having further panic attacks, usually develops after an individual has suffered an extremely intense and traumatizing attack. This phobia is connected with a higher incidence of attacks, because the fear itself can increase the likelihood of having a panic attack in the first place, perpetuating a vicious cycle (3). I hypothesize that the connection between anxiety disorders and the incidence of panic attacks may be due to the effect anxiety has on the body: if a person is anxious, this may push the levels of adrenaline or other chemicals closer to the threshold required for an attack. The as-yet-unidentified panic attack mechanism does the final pushing, but other disorders also contribute significantly.

Over time, panic attacks can have serious effects, both on an individual's behavior and on their body. One may avoid places and situations in which previous panic attacks have occurred, and may even stop leaving home altogether. People with panic disorder are more likely than other individuals to suffer from problems with drugs and alcohol, to commit suicide, and to avoid engaging in normal and enjoyable activities outside the home (5).

Selective Serotonin Reuptake Inhibitors (SSRIs) are among the most common drugs used to treat anxiety in general. In particular, paroxetine (Paxil) and sertraline (Zoloft), antidepressants that improve an individual's mood, are the SSRIs used to treat panic disorder. These drugs help to correct the chemical imbalances in the brain that play a large part in causing these sorts of ailments. Benzodiazepines, such as Xanax and Valium, are also used for panic attacks. These medicines take effect rapidly by enhancing the action of a chemical messenger called gamma-aminobutyric acid (GABA), which binds to receptors on nerve cells, causing channels to open and allow more chloride ions to enter the cell. The incoming chloride holds the cell's voltage away from its firing threshold and thus reduces its ability to create action potentials. This slows down the activity of the nerve cell, slowing down the panic response and lessening the effects of panic. A third type of drug that may be used is buspirone. This drug is useful because it tends not to be habit-forming, but since it takes at least a week to take effect, it is sometimes more practical to use an SSRI or a benzodiazepine (11).
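
To see why extra chloride current quiets a neuron, consider a minimal leaky integrate-and-fire simulation. This is a standard textbook toy model, sketched here by me with invented parameters (it is not drawn from the sources below): raising the GABA-gated chloride conductance pulls the membrane voltage toward the chloride reversal potential, which sits below the firing threshold, so the spike count falls.

def spike_count(g_gaba, steps=10000, dt=0.1):
    # Leaky integrate-and-fire neuron; all values are illustrative, not measured.
    v, v_rest, v_reset, v_thresh = -65.0, -65.0, -70.0, -50.0
    e_cl = -70.0                 # chloride reversal potential (assumed)
    g_leak, i_drive = 0.1, 2.0   # leak conductance, constant excitatory drive
    spikes = 0
    for _ in range(steps):
        dv = (-g_leak * (v - v_rest)      # leak current
              - g_gaba * (v - e_cl)       # GABA-gated chloride current
              + i_drive) * dt
        v += dv
        if v >= v_thresh:                 # action potential, then reset
            spikes += 1
            v = v_reset
    return spikes

# More chloride conductance (a stronger GABA effect) means fewer action potentials.
for g in (0.0, 0.02, 0.05):
    print(g, spike_count(g))

In this toy version, even a modest chloride conductance silences the cell entirely, which mirrors the text's point that benzodiazepines act quickly to damp the runaway firing behind a panic response.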

There are also methods of dealing with panic disorder that do not involve drugs. Therapy can help rework the thinking process regarding panic-inducing situations and the attacks themselves, and thereby keep attacks from occurring. Simply having information about the disorder can have a calming effect on an affected person, as can joining a support group with other sufferers. Relaxation techniques such as visualization, positive thinking, and breathing exercises can help a person prevent or lessen the severity of an attack. Finally, a technique called "interoceptive exposure" exposes a person to the symptoms of an attack in a safe place so that they can work through them without panicking further and worsening the attack; it also helps soothe fears of the attacks themselves (5). I believe that while medications are a good first step to control panic so that a person can live day-to-day life, the other forms of therapy I have described above are the best option in the long term.

Panic attacks are the result of a complicated interaction between psychology and biology. Genetic abnormalities affecting neurotransmitters like serotonin and dopamine, which regulate mood, lay the foundation for problems like panic disorder. This imbalance can be exacerbated by the abuse of drugs, caffeine, or alcohol, which leads to a vicious cycle when an individual uses such substances to cope with the intense feelings associated with panic disorder, only manages to make it worse, and turns to more "self-medication." A major life event or change, such as having a child or starting a new job, or the memory of previous emotional traumas, may increase feelings of anxiety until the individual is eventually pushed over the threshold into a full-blown panic attack (10). It seems to me that any and all of these factors may work together in myriad ways in any given individual, and that even separate panic attacks in the same individual may be caused by different combinations of factors.

Throughout the course of my research on this topic, I came across many sites set up to help individuals who suffer from panic attacks cope and find ways to lessen the severity and frequency of their attacks. Two things struck me while perusing these sites: one, how painful the experience of having this and similar ailments truly is; and two, how many sufferers said that having information about their problem was the best tool for living with it. I hope that I have presented this information clearly and concisely, and that in doing so I can help sufferers deal with this debilitating disorder.

Works Cited

1) Anxiety Panic Attack Resource Site. This site was useful mainly for basic information and statistics on panic attacks.

2) The Mayo Clinic, "Panic Attacks". This site was very helpful in distinguishing between natural fear and a panic attack, as well as listing and explaining typical symptoms and experiences during an attack.

3) Familydoctor.org, "Panic Disorder: Panic Attacks and Agoraphobia". This site elucidated the phenomenon of agoraphobia.

4) Anxiety Panic Hub, "Anxiety Disorders". Discusses the types of panic attacks and conditions associated with panic attacks.

5) The American Psychological Association, "Answers to Questions About Anxiety Disorder". A good source of general information on panic attacks.

6) Psycheducation.org, "Fear". Provided a very clear description of parts of the brain and of their location.

7) Anxiety & Stress Management Service of Australia, "Biological Causes". This site discussed the noradrenergic hypothesis clearly and in detail.

8) Anxiety Disorders Association of America, "Panic Attacks". Has information regarding the usual duration of a panic attack.

9) National Institute of Mental Health, "Facts About Panic Disorder". Discusses the need for fear responses and how they relate to panic responses like panic attacks.

10) Healthyplace.com. This was another site that really helped me to understand the noradrenergic hypothesis.

11) Drug Digest. Discusses the types of drugs commonly used to treat anxiety.





Full Name:  Brittany Peterson
Username:  bpeterso@brynmawr.edu
Title:  Touched with Fire: Manic-Depressive Illness and the Artistic Temperament
Date:  2006-05-12 14:08:48
Message Id:  19378
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

I chose to read Touched With Fire, a study of the personalities of individuals with manic-depressive (bipolar) disorder. The author, Kay Jamison, argues that there are many similarities between the behaviors of an artist in the throes of a creative episode and the behaviors of a bipolar individual, and that this may explain the high incidence of manic-depressive people entering careers in artistic fields. She provides many examples of such individuals and gives evidence for diagnosing them as bipolar. She also discusses specific instances in which their artistic endeavors were tied to episodes that could be considered manic or depressive, or to a larger pattern of the mood swings associated with the condition.

Jamison spends chapter two discussing the symptoms and experiences associated with manic-depressive disorder (pages 11-48) (1). She delineates the different forms the disorder may take, and describes the symptoms of manic and depressive episodes and the way in which an individual cycles continually between them. In order to clarify these points, she includes writings from individuals she believes suffered from the disease who seem to be describing a particular aspect of their affliction. This makes for somewhat disjointed reading, but the fact that many of these other writers are passionately emotional really drives home the intensity of their experiences in a way mere description could not.

The main argument of the book centers on the personalities and tendencies of people with manic-depressive disorder. Jamison spends much of the book, particularly the first few chapters, giving examples of writings by or about the individuals she calls bipolar and explaining why these writings are indicative of such a diagnosis. For example, she quotes a poem by Delmore Schwartz that seems to eloquently describe the rapid and shocking switches between mania and depression sufferers often experience (page 33) (1). While I can certainly see why Jamison concludes that these individuals were manic-depressive, there are few cases in which a reliable source, such as a physician, clearly states that this was the problem. More often, Jamison's diagnosis is based on conjecture, which I feel is particularly unwise given that the individuals in question were creating works of art, not moment-by-moment autobiographies: they might be describing actual events, or fantasies created for artistic effect. Jamison may very well be correct in her guess, but since she uses these writings as the basis for the entire thesis of her book (that these individuals' being bipolar is related to their having been artists), it is dangerous to do so without context or confirmation from the source. Such confirmation is in many cases impossible, since most of the individuals discussed have long been dead. I feel that this is a shaky basis for any persuasive work, and particularly for one that involves a certain amount of science.

One of the points I find most compelling in this book is the way Jamison compares the dramatic mood swings of manic-depressive illness to the changing of the seasons and other natural phenomena. While of course this is not a point one can prove, it is the sort of parallel that I found myself making many times this semester. For example, when we discussed the way neurons are set up, in smaller and smaller sets of boxes, I was reminded of other natural phenomena of an infinite nature, such as rings on trees or the infinite variability of genetics. The fact that another individual studying neurobiology saw a similar parallel makes me think that despite all of the infiniteness, there is an intrinsic similarity in the way human beings think.

Jamison includes a number of graphs in chapters three and four to prove her point (pages 49-147) (1). These center on the moods experienced by a sample of British artists during creative periods, as well as the seasons of highest productivity of a number of artists. Again, I evaluate this element of the author's argument as a scientist and find it lacking. The graphs provide evidence of correlation, but as any scientist knows, correlation does not prove causation. Also, many of the points made are simply common sense. For example, one graph indicates that many artists tend to feel "enthusiastic" and "self-confident" during artistic episodes, as opposed to "anxious" (page 79) (1). It seems that being in an energetic and motivated frame of mind would lead to more productivity regardless of whether or not one was manic-depressive.

The experience of simply reading the book is a mixed one. Jamison's writing style is a bit disjointed, most likely because she spends a great deal of time quoting from other sources. She does not delve much into the neurological basis of manic-depressive illness or the reasons why the individuals she discusses might have been affected, and I think the topic would be well served by a more detailed exploration of these concepts. One of my favorite aspects of our class this semester was the weaving together of observations about how we live and use our brains (such as the experiment showing that our brains fill in a lot of what we see) with the more fact-based exploration of why this happens (understanding the way the eye functions and how it connects to the brain). Throughout the book, Jamison continually tells us that the artistic temperament and manic-depressive illness are connected, but she cannot give an actual mechanism for this connection.

Despite the fact that her science needs work, the author's way of framing her argument within the confines of her book is elegant. She approaches the topic from several angles other than the ones I have just described, each of which sheds a different light on the disorder itself. She discusses the history of belief in a connection between disturbed individuals and high levels of creativity, reaching as far back as the ancient Greeks (pages 50-56) (1). She also looks at the genealogies of artists such as Virginia Woolf, Vincent Van Gogh, and Ernest Hemingway in order to display the genetic nature of manic-depressive illness and similar ailments (pages 192-237) (1).

My instincts and experiences tell me that Jamison is probably right to draw the conclusions she does. I have had direct experience with manic-depressive individuals within my own family. While at either of the two extremes, I have known these individuals to be capable of feats of great creativity and organization. However, in most of the cases I have observed, the feats accomplished had to do with reorganizing one's life, family, or home, or with adopting a new philosophy, rather than with creating works of art. I would have liked this book to explore the less glamorous people who are affected by this disorder, and how it changes their lives. So many people suffer from the illness that I feel the effect of the book is limited by its profiling only those who respond to the experience of being manic-depressive in a particular way.

I did not feel that we experienced similar problems involving limited viewpoints in our discussions during the class. It is true that we have many things in common: being students, wanting to learn, mostly being of the same age, living in the same community. I do not feel, however, that these factors made a great deal of difference in terms of how we expressed ourselves this semester. Our similarities gave us a unifying purpose, but what I heard during our class discussions and in the forum was a wide variety of experiences brought to bear on the same topic.

Overall, I enjoyed reading Touched With Fire. The parallels Jamison draws are intriguing, and she tries very hard to back up her ideas with evidence. The problem is that her point cannot really be proven, given that it is based on correlative evidence and, in many cases, on the assumption that an individual was indeed afflicted. If readers can look at the argument from a slightly less than scientific viewpoint and allow themselves to be convinced, the reading can be very enjoyable. The excerpts from the writings of supposedly manic-depressive individuals are compelling and often extremely beautiful. The overall framing of the argument is well done, even if it is based on several unprovable assumptions. I would be more likely to recommend this book to a person whose background or area of study runs more toward literature than science, because such a reader would be able to appreciate the work without feeling the need to be convinced by a strong scientific argument.


1) Jamison, Kay Redfield. Touched With Fire: Manic-Depressive Illness and the Artistic Temperament. New York: Free Press Paperbacks, 1993.





Full Name:  Lori Lee
Username:  llee01@brynmawr.edu
Title:  Go With Your Gut
Date:  2006-05-12 15:48:33
Message Id:  19379
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Lori Lee
Neurobiology and Behavior
Book Review of Blink, Malcolm Gladwell


Go with your gut


In Blink, Malcolm Gladwell thoroughly analyzes the split second in which experts in their respective fields make immediate, face-on decisions. Gladwell bluntly states that the aim of Blink is to show, and to convince his audience, that snap decisions and thin-slicing are just as efficient as ordinary deliberate decisions; that there are times when we should trust our instincts and times when we should be wary of them; and lastly, "most importantly" [as stated by Gladwell], that our efficiency and accuracy in thin-slicing improve, and can subsequently be controlled and regulated, through practice, use, and expertise (1). Snap judgments and decisions are what we know as "first impressions" or "instincts," frequently invoked in our popular culture: "go with your gut" is a phrase we commonly hear in reference to exams or any situation requiring difficult decision-making beyond the everyday. But snap judgment, analogous to our "gut," is in no way as simple as it may seem. Blink attempts to uncover some truth about the human unconscious by focusing on an experience that all individuals share and can reflect on, but that none can explain.


The book begins with the famous kouros purchased by the Getty Museum and the question of its authenticity. The topic of first impressions, instincts, and snap judgments opens with the exclamation of Evelyn Harrison, a renowned expert on Greek sculpture, when she first learned of the museum's purchase: "I'm sorry to hear that." Gladwell explains that "thin-slicing" is "the ability of our unconscious to find patterns in situations and behavior based on very narrow slices of experience" (1). In saying "narrow slices," Gladwell is referring to tiny time slots in which the brain picks up inputs, action potentials, while the individual remains unaware, hence the unconscious. These inputs are minuscule details that we do not even realize we pick up, which in turn are influenced by factors that are not present. Basically, it seems that in this split second, everything that has ever influenced your view of life will influence your snap judgment as well.


Throughout Blink, Gladwell gives an exhaustive list of examples of thin-slicing. He writes about psychological experiments carried out in John Gottman's lab that are used to predict the likelihood that a couple will stay together. The experts in this situation are Gottman and his trained assistants; everyone else is oblivious. "It's weird," said one of the assistants. "You don't get the sense that they are an unhappy couple when they come in" (1). In just that statement, Gladwell shows that thin-slicing can be trained, and that when it is trained it can be controlled for expert use. The reader can also see that thin-slicing is not entirely unconscious: as we train ourselves to understand when and in what respects our instincts can be trusted, thin-slicing becomes more of a conscious practice, and the individual becomes more aware of what they are perceiving as well as of how they are thin-slicing. Another example Gladwell uses is Samuel Gosling's experiment, in which individuals observe a stranger's bedroom or dorm room and then judge the stranger in terms of the Big Five, a measure of personality across five dimensions. It turns out that absolute strangers do a better job of accurately "ranking" these individuals than the individuals' own friends do. This again shows that thin-slicing can be more efficient, in judging an individual's personality, than actually getting to know someone.


In my opinion, Gladwell succeeds at what he states is the most important goal of his book, showing his reader that thin-slicing and snap judgments can be controlled and perfected, but he does so in a way that sometimes becomes tedious. In reality, I don't see why Blink is any different from, or any more revolutionary than, the complex thoughts that pass through our brains daily. Gladwell emphasizes how difficult it is for individuals to tap into their own unconscious, yet I feel this is something we all know; there is hardly a person who believes that they can find truth in all of their thoughts.


Additionally, Gladwell says that "human beings have a storytelling problem," that "we're a bit too quick to come up with explanations for things we don't really have an explanation for" (1). He refers to Ted Williams's ability to hit a baseball despite his inability to explain what exactly he does to hit the ball. I feel this too is something we have all experienced; Gladwell simply states it. Even if we convince ourselves that it is exactly the way our wrists rotate that allows us to hit a ball, I think we know that it is impossible to actually verify what we are physically doing, down to the rotation of a bone or the contraction of a muscle. "The Locked Door" is the term Gladwell uses to describe our unconscious, but in reality his explanations and "proof" of the efficiency of thin-slicing (which I do not doubt) are more like a secret locked door, something others know nothing about.


In the end, I agree with Gladwell's analysis of snap judgments, as it is easier to make decisions under pressure this way. But Gladwell is also contradictory: he says that it is easier for a stranger to judge your personality by seeing your room, yet he also uses the example of Evelyn Harrison, an expert, and her ability to make a snap judgment about a fake statue. Yes, snap judgments can be controlled by knowing when and when not to trust yourself, but Gladwell uses so many examples with so much information that many of his ideas begin to run together and everything seems repetitious.


In our analysis of the human nervous system, we looked at every aspect of our lives. If we believe that scientific thinking does not lead to truth, and that there is no single truth or correct answer, then Gladwell seems to suggest the opposite: that there is a right and correct answer, and that snap judgments bring us closer to it.


1) Gladwell, Malcolm. Blink: The Power of Thinking Without Thinking. New York: Little, Brown, 2005.



Full Name:  Rachel Mabe
Username:  rmabe@brynmawr.edu
Title:  A Review of "The Man Who Mistook His Wife for a Hat" by Oliver Sacks
Date:  2006-05-12 20:45:33
Message Id:  19387
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip



Oliver Sacks' The Man Who Mistook His Wife for a Hat and Other Clinical Tales is a collection of case histories of people with right-hemisphere disorders. As Sacks states in his introduction, his interest lies in "neurological disorders affecting the self" rather than in the more traditional "deficits" of the left hemisphere that are usually studied (6). The book comprises two dozen stories, separated into four sections: Losses, Excesses, Transports, and The World of the Simple.


Because my review is limited to a short paper, I will focus on the title essay. The essay relates the case of Dr P., a music teacher of some distinction. Dr P. began confusing things; for example, he sometimes patted fire hydrants, thinking that they were the heads of children. Because the occasional mistakes were merely humorous to him, he only went to see an ophthalmologist when he was diagnosed with diabetes. Dr P.'s eyes were fine, but the ophthalmologist said there were problems with the visual parts of his brain, and in turn referred Dr P. to Sacks.


After the initial examination in his office, where Dr P. mistakes his foot for his shoe, and his wife for his hat, Sacks visits him in his home—his own space. At Dr P.'s house, Sacks carries out a number of tests, such as asking him to identify family from photographs. He concludes from these that Dr P. has visual agnosia. Sacks views the case of Dr P. as particularly important because usually brain damage is assumed to erode the abstract and categorical, leaving the emotional and familiar intact, but Dr P. was exactly the opposite; the emotional and concrete were lost, in regard to vision, and what remained was the abstract and categorical (7).


What I found most interesting was Dr P.'s different perception of reality. It reminded me of what we learned this semester about our sense of reality being only one perception of reality. Sacks' interpretation of Dr P. was that music had replaced the visual: instead of having a body-image, he had body-music. Dr P. couldn't recognize his students if they were sitting still, but if they moved, he could recognize them by their body-music (18). To recognize people by their "body-music" sounds like a beautiful way to go through life. Sacks, on the other hand, gives the anecdote of Luria's patient Zazetsky, whom he describes as a mirror image of Dr P. The only difference was that Zazetsky realized his loss of faculties while Dr P. did not. Sacks wonders which is worse: realizing or not? I, for one, think that it would be better not to realize the loss, and it seems that Sacks also believes this, since he never tells Dr P. what is "wrong" with him, thereby allowing him to continue living his musical life unencumbered.


Furthermore, it is interesting to contemplate what life would be like if Dr P. were "the norm" and our sense of perception were abnormal. What a different world this would be: to see, but not actually see; to rely most heavily on music, not sight; for the visual sense to lose its emotional and personal aspects and become, as Sacks says of Dr P., more like a computer.


While reading this book, I kept returning to the question our course was centered around, does brain, in fact, equal behavior and does Sacks believe this? Sacks writes, "It is possible that there [is]...a gulf...between the psychical and the physical; but studies and stories pertaining...to both...may nonetheless serve to bring them nearer" (viii). In other words, Sacks does not equate brain to behavior. There is a space between them that is closing because they are forever overlapping and complicating each other, so that they are in some ways indistinguishable. Yet, it is interesting to examine the terminology he uses to discuss the different "disorders." For example, towards the end of "The Man Who Mistook His Wife for a Hat," Sacks is discussing Dr P.'s artwork with Mrs P. His artwork is hung in chronological order, starting with realistic paintings, then shifting to abstract and cubist, and finally becoming "nonsense," according to Sacks. After mentioning his observations to Mrs P., she scolds him for not being able to see "artistic development." Sacks withholds his opinions from Mrs P. but shares them with the reader. He views the "wall of paintings [as] a tragic pathological exhibit, which belonged to neurology, not art" (17). He continues to ponder the wall and wonders if she is not partly correct—artistic development may indeed be paired with pathological development.


Sacks describes Dr P.'s painting abilities as "an almost Picasso-like power to see...abstract organizations...normally lost in...the concrete." But this only goes so far for Sacks; he maintains that the final pictures are nothing but "chaos and agnosia" (18). It seems that Sacks views brain and behavior as overlapping, but he often wavers, believing that the brain is the cause of his cases' behavior. It seems unfair that he allows the progression of the paintings to be, at least partly, artistic development, until the end, when he decides that it is no longer art. It makes me wonder about geniuses, artists, and literary masters, and the possibility that neurological oddities are part of the reason for their uniqueness. It is true that for some, like Picasso, who could go from painting a completely abstract canvas to drawing a perfectly clear sketch, this particular kind of "disorder" might not be the case. Nevertheless, it is interesting to explore the possibilities. When does perceiving something differently become a disorder?


Sacks writes for a general, non-science reader, which makes his essays incredibly interesting and entertaining. In fact, authors who write about science in an easily accessible, non-technical way, such as Stephen Jay Gould and Carl Sagan, are my favorite kind. I like that Sacks paid heed to Luria's suggestion of writing these accounts as stories, focusing largely on the person. My only criticism would be that sometimes I did not feel quite satiated; whether that was because Sacks didn't give enough information about the subject or about the science behind it, I am unsure. Overall, however, I felt that he did an excellent job of making bizarre case histories interesting to the general public.


Works Cited
Sacks, Oliver. The Man Who Mistook His Wife for a Hat. New York: Touchstone, 1985.



Full Name:  Andrew Garza
Username:  Agarza@haverford.edu
Title:  Book Report: Tipping our Understanding of Human Behavior
Date:  2006-05-13 17:59:46
Message Id:  19389
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Malcolm Gladwell's The Tipping Point argues that a relatively small group of people with special abilities create minor changes that have a disproportionately large and fast impact on the rest of society. Gladwell asserts that these changes, which he calls Tipping Points, occur in many realms of human experience, including the spreading of ideas, products, and diseases. Three conditions are necessary for Tipping Points to occur: the right messenger(s), an attractive idea (1) that has the inherent power to captivate people, and the proper situational context. In this review, I will focus on the characteristics of the messengers and of the situational contexts, as these are the issues most relevant to the topics we discussed in Neurobiology 202. Gladwell argues that his theory of how ideas spread strongly challenges intuitive beliefs about how change happens: we have a natural tendency to assume that the level of input in a given situation is directly proportional to the outcome. But in fact, Gladwell demonstrates that if the three elements of a Tipping Point are present, seemingly small inputs can cause exponentially larger changes. Gladwell's central arguments are generally consistent with the material we studied in Neurobiology and Behavior, although at times he relies too heavily on situational factors to explain behavior. The Tipping Point offers several fascinating points that could contribute to the course by adding a fresh angle to the way we perceive behavior.
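
Gladwell's epidemic metaphor can be made concrete with a toy simulation (my own sketch, not Gladwell's; the population size, contact number, and probabilities are all invented for illustration). Each adopter of an idea exposes a fixed number of contacts, and each exposure converts with probability p. The interesting behavior lies at the critical point where each adopter produces on average one new adopter: just below it the idea fizzles, while a tiny increase in p tips it into a society-wide cascade.

import random

def cascade_size(p, contacts=4, population=10000, seeds=5):
    # Each adopter exposes a fixed number of contacts; each exposure
    # converts with probability p. Runs until the idea dies out or saturates.
    random.seed(0)                      # fixed seed for a repeatable illustration
    adopted, frontier = seeds, seeds
    while frontier and adopted < population:
        new = sum(1 for _ in range(frontier * contacts) if random.random() < p)
        new = min(new, population - adopted)
        adopted += new
        frontier = new
    return adopted

# The critical point is contacts * p = 1 (here p = 0.25): below it the idea
# fizzles, while a small nudge above it tips into a cascade.
for p in (0.20, 0.25, 0.30):
    print(p, cascade_size(p))

The design point is simply that the final cascade size is not proportional to p: a small change in the input, placed near the threshold, produces a wildly disproportionate change in the outcome, which is the intuition-defying behavior Gladwell describes.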

Gladwell devotes a considerable portion of the book to exploring the characteristics of people who foment change. He asserts that they can be divided into three personality types: mavens, connectors, and salesmen. People who fall into these categories are outgoing, risk-taking cultural "translators" (Gladwell, p. 200) who see something interesting that people on the fringes of society are doing and tweak those actions to make them more attractive to the mainstream; they play the crucial role of bridging the gap between radical visionaries and the rest of the population. Mavens are people who have a lot of specialized knowledge about particular subjects, and people close to them look to them for advice on those subjects. Connectors have circles of acquaintances several times larger than those of the average person; they are "social glue" (70), with the power to spread ideas to vast numbers of people. Salesmen are exceptionally persuasive individuals who convince others to try new things. Interestingly, on a neural level, salesmen's emotions are so powerful that they trigger mirror neurons of the same emotion – whichever emotion it happens to be at the particular moment – in others. The three personality types are not mutually exclusive; Gladwell writes that Paul Revere exemplified the traits of all three.

Gladwell's argument that certain people with distinct personalities play a disproportionately large role in spreading ideas is consistent with the material we covered in Neurobiology 202. He assumes that there are fundamental differences between people's personalities, although some people do share traits. In class, we agreed that there is infinite possibility for variation in behavior, since the brain is molded by each person's unique pattern of genes, experiences, and other conditions. We defined personality as the aspects of our behavior that are resistant to change, and we discussed common traits that some people share. Many aspects of the three personality types Gladwell describes overlap to a significant extent with the trait that Big Five personality theorists call extroversion. Extroversion (2), in its extreme form, refers to people who are highly engaged in the outside world, like to be around people, are dominating forces in social groups, enjoy taking risks, and communicate prolifically; the three personalities Gladwell describes also communicate prolifically, take daring risks, and hold strong influence over other people. It appears that his three personality types are essentially extroverts with slightly different traits. Another interesting point on which Gladwell's argument coincides with the Neurobiology 202 course material pertains to teenage smoking. He offers evidence to suggest that people with the three change-catalyzing personalities are also often the ones initially interested in trying cigarettes and, interestingly, that they are the most prone to long-term addiction. An underlying assumption of this point is that a large aspect of personality consists of tendencies towards certain actions that come more from the unconscious than from the conscious parts of our brains, although the conscious part sometimes has the power to overrule those tendencies if it chooses to do so. This assumption is nearly identical to the one we made in class: that both the neo-cortex and the rest of the nervous system make important – although different – contributions to our personality and behavior.

Another of Gladwell's main arguments is that human behavior is influenced by situational factors in profound and sometimes shocking ways. We tend to think that people have a set of essential characteristics that they will reliably display in almost any situation, such as "Jack is friendly and competent" or "Jill is lazy but creative." But Gladwell argues that
Character...isn't what we think it is or, rather, what we want it to be. It isn't a stable, easily identifiable set of closely related traits, and it only seems that way because of a glitch in the way our brains are organized. Character is more like a bundle of habits and tendencies and interests, loosely bound together and dependent, at certain times, on circumstance and context. The reason that most of us seem to have a consistent character is that most of us are really good at controlling our environment. (163)
He supports his argument by citing a number of experiments showing that people's behavior "is a function of social context" (150). He notes that in Zimbardo's Stanford Prison Experiment, men with no history of behavioral problems quickly adopted the personas of guards and prisoners, and they played their roles with such increasing brutality that Zimbardo had to end the experiment after six days to protect the health of the participants. Although this is clearly an example of an extreme situation altering people's behavior, Gladwell gives evidence that even the seemingly minor situational factors we encounter every day can significantly change our behavior.

For instance, Darley and Batson ran an experiment at Princeton Theological Seminary in which some aspiring priests were told to prepare a speech about the parable of the Good Samaritan, while others prepared a speech on a different biblical topic. The researchers then told each student individually either that he was early for his speech but should start walking over now, or that he was late and should hurry. On the way to the building where the speech was to take place, each student passed a man lying down and moaning in his direct path. The researchers noted which students stopped to help the man and later correlated the results with the topic of the student's speech, the student's motivation for joining the seminary (prestige or a desire to serve), and how much time the student believed he had to reach the other building. The only variable related to the students' behavior toward the man was whether they were in a rush. The researchers conclude that "the conviction of your heart and the actual contents of your thoughts are less important...in guiding your actions than the immediate context of your behavior" (165).
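
The logic of the Darley-Batson analysis is essentially a test of association between a situational variable and a behavior. A minimal sketch of that kind of test appears below; it is my own illustration in Python, and the counts in the table are invented placeholders, not the study's actual data.

# Hypothetical contingency table: rows are hurry conditions, columns are
# (helped, did not help). These counts are made up for illustration.
from scipy.stats import chi2_contingency

hurried = [4, 36]
not_hurried = [25, 15]

chi2, p, dof, expected = chi2_contingency([hurried, not_hurried])
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
# A small p-value indicates that helping depends on the hurry condition,
# which is the shape of the study's central finding.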

Gladwell's work is structured in a curious way: the first part emphasizes that there are three personality types with exceptional power to influence the rest of us, and then the section on situational factors de-emphasizes the role of personality, stating that "it is possible to be a better person ... in a clean subway than in one littered with trash and graffiti" (168). His argument about the strength of the three personalities conflicts with his argument about the power of situational factors, and I believe this conflict results from the fact that Gladwell wrote the book for a lay audience; he may have chosen to sacrifice some coherence in return for more sensationalized individual arguments. For instance, his focus on the specific traits of mavens, connectors and salesmen cannot easily be reconciled with the statement, "Character, then, isn't...[an] easily identifiable set of closely related traits." Upon closer examination, however, Gladwell does acknowledge that situational factors have a powerful influence on behavior only "at certain times" (163), which leaves open the possibility that, as we agreed in class, there are more stable traits that influence our behavior in most situations (it is not often in our own lives that we face a situation like Zimbardo's prison experiment). Even though Gladwell occasionally overstates the influence of situational context on behavior, he makes a compelling case that understanding situational factors is a crucial part of understanding behavior. It is an intriguing finding that time pressure makes people more likely to ignore those in need, although the experiment should be replicated to see whether stopping to help also correlates with well-established personality traits like neuroticism and extroversion.

The content of Malcolm Gladwell's The Tipping Point largely coincides with the material we studied in Neurobiology 202, although the book offers several points about the importance of situational factors that would deepen future class participants' understanding of how the brain and behavior work. As Gladwell shows, the brain is keenly sensitive to environmental cues like time, and thus, at least in some situations, our behavior is strongly shaped by our immediate circumstances. At the same time, it is important to place situational factors within a comprehensive view of behavior that also gives proper weight to other influences, including genes, experience and culture. In our class, we agreed that there were many influences on behavior and that situational factors were one of them, although far more emphasis was placed on genes and experience. During the study of inputs to the nervous system, it would be valuable for future neurobiology students to learn more about the ways situational factors affect the nervous system and behavior, with the understanding that the structure of the nervous system prevents there from ever being an absolutely clear, predictable link between inputs and outputs.

Footnotes:

(1) Although Gladwell's concept of Tipping Points applies to many areas of life, when I make broad generalizations about his concepts I refer to them as concerning the spread of ideas, actions or products, rather than constantly repeating that they apply to all of these fields and more. This simplification is intended to make the prose clearer.
(2) I learned about extroversion through research I did for my second web paper.



Full Name:  Andrew Garza
Username:  Agarza@haverford.edu
Title:  Neurobiology: Changing the Way Economics Sees Humans
Date:  2006-05-13 22:27:32
Message Id:  19390
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

One of the most important purposes of innovative research in the sciences has always been to challenge incorrect assumptions about the nature of existence. Consider Copernicus' revolutionary assertion that the Earth revolves around the Sun, or Pasteur's less dazzling but still vital discoveries about how germs function and how humans can protect their food from them. In the modern era, economics is one of the academic disciplines most heavily entrenched in the functioning of societies around the world, as policy-makers and institutional leaders depend on its assumptions when making decisions that affect the livelihoods and welfare of their people. I want to dedicate my last web paper to showing how neurobiological research is challenging neoclassical economic theory's portrayal of human motivation (1).

Neurobiological studies have cast doubt on neoclassical economics' assumption that people act out of a primary desire to increase their own material well-being, regardless of the harm that may cause others or the environment (1). A study by Jorge Moll and his colleagues used fMRI scans to observe how people's brains react to pictures representing scenes of what the authors call unpleasant "moral violations": physical assaults, children deserted in the streets, and war scenes (2). As controls, the experimenters showed other groups different images: ones that were also unpleasant but did not involve humans, pictures of human faces, and more neutral images. The study's premise was that if the areas of the brain activated when people saw moral violations differed in activation level from a) when they saw unpleasant images without humans and b) when they saw human faces, this would suggest that people are sensitive to observing human suffering.
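
The premise amounts to a statistical contrast: compare activation in a region of interest across viewing conditions. The sketch below shows the bare-bones version of such a contrast in Python; the per-subject values are invented stand-ins, not data from Moll et al.

# Compare hypothetical region-of-interest activation (arbitrary units)
# between two viewing conditions with an independent-samples t-test.
from scipy.stats import ttest_ind

moral_violation = [0.82, 0.91, 0.77, 0.88, 0.95, 0.84]
unpleasant_no_humans = [0.41, 0.52, 0.38, 0.47, 0.55, 0.44]

t, p = ttest_ind(moral_violation, unpleasant_no_humans)
print(f"t = {t:.2f}, p = {p:.5f}")
# A reliable difference would support the premise that the region responds
# specifically to scenes of human suffering.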

In fact, Moll et al. found several areas of the brain in which the strength of activation differed: the medial orbitofrontal cortex (OFC), the medial frontal gyrus (MedFG) and the superior temporal sulcus (STS). The researchers suggest that these three areas may play an important role in detecting social-emotional events, cognitively processing those stimuli and generating the motivation to act on them (2). This evidence suggests that people are not cold, purely selfish beings oblivious to the suffering of others, but rather beings that are highly sensitive to the condition of other humans. More research is needed to support the hypothesis that these neural reactions translate into outwardly benevolent behavior toward people who are suffering. However, even though the study does not fully demonstrate that people act on their empathy for one another, it fits within the context of other research demonstrating that some animals, such as vervet monkeys, act in self-sacrificing and altruistic ways to protect members of their own species (3). In addition, Moll et al. cite research showing that people act altruistically toward members of their kinship group (1).

Another area in which neurobiologists have challenged neoclassical assumptions about what it means to be human is the study of the rewards and punishments that shape people's decision-making (4). The British philosopher Jeremy Bentham complicated the picture of people as heartless actors striving only to increase their material gains and minimize their losses. He introduced the concept of utility (5), which holds that human decisions are actually based on the broader criterion of whether a given action will bring net pain or pleasure to the economic actor; utility can take into account anything that would influence a person's decision, from monetary gain to altruism to respect from one's peers. However, once Bentham made this startlingly simple and profound argument, economics as a discipline remained limited in its ability to assess the utility of non-financial rewards and punishments. The economics instructor who taught me game theory at Haverford admitted to me that many of the values the models operated on were vaguely and arbitrarily calculated. Economics has only been able to assign values based on what economists observe people doing, not from any deeper knowledge of the motivations behind that behavior. Fortunately, neurobiologists like Peter Shizgal are pioneering research into why people do what they do, and thus into what utility really is.
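
Bentham's idea can be written down directly: utility is a weighted sum of the pleasures and pains an option brings, financial or not. The sketch below is my own illustration in Python; its weights and values are exactly the kind of arbitrary numbers my instructor admitted to, which is the measurement problem this paragraph describes.

# Bentham-style utility as a weighted sum of financial and non-financial
# terms. Every weight and value here is an arbitrary assumption.
def utility(money, altruism, peer_respect, effort_cost,
            w_money=1.0, w_altruism=0.8, w_respect=0.5, w_effort=1.2):
    return (w_money * money + w_altruism * altruism
            + w_respect * peer_respect - w_effort * effort_cost)

# An unpaid option can outrank a paid one once non-financial terms count.
print(utility(money=0, altruism=10, peer_respect=6, effort_cost=4))  # 6.2
print(utility(money=5, altruism=0, peer_respect=1, effort_cost=2))   # 3.1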

Through electrophysiological studies of rat behavior, Shizgal finds that the neurobiological reward associated with positive utility is electrical stimulation of certain areas of the brain, in particular those that direct goal-oriented behavior (6). Shizgal studies various aspects of the rewarding effects that arise in rats' neural circuitry when he stimulates their medial forebrain bundle (MFB). He finds that rats' desire for the strongest level of neural stimulation is so powerful that they will cross electrified grids, press levers for hours almost without stopping, and forgo eating despite food deprivation in order to receive it (7). Shizgal's work is likely to be directly relevant to our understanding of human emotion, since researchers have found that stimulation of certain brain areas causes pleasure in animals ranging from goldfish to humans (7). He strives to understand how these neural reward systems work because, if he succeeds, neural stimulation may eventually serve as a common currency for comparing the utility of the different options people choose among in all kinds of situations (5). As far as concrete results go, one can at least extrapolate from his findings that far more than financial reward lies at the core of utility for animals, including humans, although it is likely that neurobiologists will eventually show that money is an important element of modeling human utility.
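
If neural stimulation really does act as a common currency, choices between unlike rewards could be modeled as a comparison of two numbers on one scale. The sketch below uses a logistic choice rule, a standard modeling assumption of my own choosing rather than Shizgal's published model, with invented reward values.

import math

def p_choose_a(reward_a, reward_b, noise=1.0):
    """Probability of choosing option A when both rewards are expressed
    on a single shared scale; larger `noise` flattens the preference."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b) / noise))

# Food worth 2.0 units vs. stimulation worth 5.0 units on the shared scale:
# the model predicts a near-exclusive preference for stimulation, echoing
# the food-deprivation finding described above.
print(f"{p_choose_a(5.0, 2.0):.2f}")  # ~0.95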

It is clear that neurobiology has already played a fundamental role in challenging neoclassical economic assumptions about human behavior, and it is critical that this research continue if we are to gain a deeper understanding of what drives humans to behave the way they do. As the literature on the subject grows and the applications of neurobiological technology become more sophisticated, neurobiologists must continue to fill in the gaps in our understanding of why people choose to lead their lives the way they do and how they can attain maximum long-term utility.

References:

1) Neoclassical Assumptions: paper with interesting information about neoclassical assumptions and the challenges posed to them by bounded rationality.

2) Moll Experiment: paper about human sensitivity to other people's suffering.

3) Vervet Monkeys: information about the altruism of vervet monkeys.

4) Neurobio & Econ: insight into the need for greater neurobiological influence on economics.

5) Utility: explanation of the concept of utility.

6) Shizgal's Work: summary of Shizgal's work on utility and neurobiology.

7) More Shizgal: lessons gleaned from studying rats.



Full Name:  Bethany Keffala
Username:  bkeffala@bmc
Title:  Monkey See, Monkey Do... Human See, Human Say?
Date:  2006-05-18 13:43:08
Message Id:  19403
Paper Text:

<mytitle>

Biology 202

2006 Second Web Paper

On Serendip

Since the discovery of mirror neurons in areas of the human brain that deal with language, scientists have been speculating about and searching for the connection between this type of neuron system and language. Mirror neurons have the potential to shed light on many aspects of human language, such as how it originated and evolved, and how we learn it now. In monkeys, as well as humans, these neurons are known to fire both when an individual performs an action and when that individual observes someone else performing the action.

Mirror neurons are found in area F5 of the monkey brain, which is considered the homologue of Broca's area in humans, a vital language center. (1) Damage to this area of the human brain causes Broca's aphasia. Patients with Broca's aphasia have trouble putting sentences together with the correct structure; their speech is choppy and confused, and words are often left out. Patients with this type of aphasia are also aware that they are not making much sense when they speak. Because mirror neurons are thought to exist in this area of the brain, which is so strongly associated with language, it makes sense to ask how mirror neurons could affect language and to look for ways in which they might.

What do people want mirror neurons to answer in terms of language? The questions that current literature focuses on concern language evolution and learning. Unfortunately, what we have come to expect from mirror neurons in terms of what they can tell us about language may be too much. People seem to be looking for an answer in the mirror neuron system that would explain how speech creates a thought in the hearer's brain similar to the intended message in the speaker's brain; we want mirror neurons to explain just how we are able to understand each other through speech. If neurons in monkeys fire in a certain pattern both during an action and during observation of that action, and we interpret this pattern as a template that facilitates the observer's understanding of the action, then it also seems reasonable to expect mirror neurons to serve as templates facilitating understanding between speakers and hearers in humans. People want to find that a pattern of mirror neuron firing can represent one or both of two things in language: the sentence, as all of its combined linguistic components, as it travels through the minds of the speaker and hearer, and the meaning of the utterance as it travels through their minds.

The difference between speech and action is that whereas action is basic and iconic in nature, speech is composed of arbitrary units that somehow represent things which they are not. There is nothing in the sound of the word 'cat' that conveys the 'cat-ness' of the actual creature. It is therefore probably wise to take smaller steps in figuring out what mirror neurons do for us. We know that mirror neurons in other animals have to do with action and observation of action, not with language processing. This means that the mirror neuron system we inherited most likely started out in the same basic vein, as a mechanism for representing action in both performance and observation. It may therefore be productive to address the action or motion questions of language: how children learn and copy sounds, and string sounds together. If we have a template for the action it takes to form a /p/ sound, for example, it becomes much easier to start making and learning to use this sound, as opposed to learning it from scratch. (2) This can also be extended to hand and arm motions in sign language, and possibly to facial expressions in both signed and spoken languages.
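
One way to picture the template idea is as a similarity comparison between a stored motor pattern and an incoming one. The sketch below is purely illustrative Python of my own; the feature vectors are invented stand-ins for neural firing patterns, not measurements of any kind.

import math

def cosine_similarity(a, b):
    """Similarity between two firing-pattern vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

template_p = [0.9, 0.1, 0.8, 0.2]  # stored motor template for /p/
heard_p = [0.8, 0.2, 0.7, 0.3]     # pattern evoked by hearing /p/
heard_k = [0.1, 0.9, 0.2, 0.8]     # pattern evoked by hearing /k/

print(f"{cosine_similarity(template_p, heard_p):.2f}")  # ~0.99: close match
print(f"{cosine_similarity(template_p, heard_k):.2f}")  # ~0.33: poor match

A learner with such a template would have a head start: hearing /p/ already activates most of the machinery needed to produce it.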

Many theorists believe, however, that language evolved from gestural and mimetic communication. The idea is that communicative activity developed from a reliance on iconic hand motions and facial expressions, which slowly became more abstract while being gradually enhanced by an increasing repertoire of vocalized sounds. From this set-up we could come by verbs (actions) and nouns (real-world objects). In this way we traveled from grasping, to brachio-manual and oro-facial communication (pantomime), to arbitrary signs, to eventual oral and signed languages. (5) This view is supported by the finding that Broca's area is activated during both action and observation of action involving the mouth and hands. (8)

If this is the case, then it may become more reasonable to expect mirror neurons to be able to handle something like phonology or syntax, and other aspects of language that are not as connected with physical action as its phonetic components are. Arbib, a leading mirror neuron researcher, describes the capacity involved as "not just simply observing someone else's movement and responding with a movement which in its entirety is already in one's own repertoire, but rather 'parsing' a complex movement into more or less familiar pieces, and then performing the corresponding composite of (variations on) familiar actions." (1) The use of the word "parsing" here is a direct allusion to language and the way in which we process, understand and respond to it. If language emerged as a consequence of the mirror neuron system, it does seem more plausible that mirror neurons could help explain how we comprehend speech.

We might not be able to explain with mirror neurons how we get from an arbitrary string of sounds to a complex thought, but just because they may not provide easy answers does not mean they cannot advance our understanding of how language works. (4) If language evolved as a byproduct of our mirror neuron system, and if mirror neurons are found in an area of the brain that deals heavily with language, then we should look to this system for possible explanations for our language questions. So what questions should we be asking?

It is very easy for young children to pick up language. Within a few short years they master the basics, but after 12 or 13 years of age it usually becomes very difficult for a person to start learning a new language. Might there be some change in the brain having to do with mirror neurons that makes it harder for older humans to acquire language? Could we look for differences between the mirror neuron systems of adults and those of children who are still acquiring language? If we argue that mirror neurons are partially responsible for the speed of language acquisition, then it is reasonable to expect some tangible change in that group of mirror neurons when the ability they afforded us changes. If we record which neurons fire during an utterance in the brains of both a speaker and a hearer, and we can single out the neurons responsible for motor actions, then any remaining neurons firing in both conditions may be related to the more abstract elements of speech, as the sketch below illustrates. It would also be interesting to run this comparison with sentences containing passive versus active verbs, and with sentences that differ in person (1st, 2nd and 3rd) for agent roles.
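
The subtraction proposed here maps neatly onto set operations. The sketch below, with neuron IDs as invented placeholders of my own, shows the logic in Python: intersect the speaker's and hearer's active populations, then remove units attributable to motor activity.

# Find units active in both speaker and hearer, then subtract those known
# to drive motor actions. All IDs are invented placeholders.
speaker_active = {1, 2, 3, 5, 8, 13, 21}
hearer_active = {2, 3, 5, 7, 13, 34}
motor_related = {2, 5}

shared = speaker_active & hearer_active  # {2, 3, 5, 13}
candidates = shared - motor_related      # {3, 13}
print(sorted(candidates))  # units possibly tied to abstract content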


Works consulted:

1) Arbib, Michael. "The Mirror System, Imitation, and the Evolution of Language." Computer Science Department and USC Brain Project, University of Southern California, Los Angeles, CA.

2) Bichakajian, Bernard H. "Looking for Neural Answers to Linguistic Questions." Mirror Neurons and the Evolution of Brain and Language. Ed. Vittorio Gallese and Maxim I. Stamenov. John Benjamins Publishing Co., December 2002.

3) Fogassi, Leonardo, and Vittorio Gallese. "The Neural Correlates of Action Understanding." Mirror Neurons and the Evolution of Brain and Language. Ed. Vittorio Gallese and Maxim I. Stamenov. John Benjamins Publishing Co., December 2002.

4) Hurford, Jim. "Language Beyond our Grasp: What mirror neurons can, and cannot, do for language evolution." University of Edinburgh, 2004.

5) Jacob, Pierre, and Marc Jeannerod. "The Motor Theory of Social Cognition: A Critique."

6) Skoyles, John R. "Mirror Neurons and the Motor Theory of Speech."

7) Stamenov, Maxim I. "Some Features that Make Mirror Neurons and Human Language Faculty Unique." Mirror Neurons and the Evolution of Brain and Language. Ed. Vittorio Gallese and Maxim I. Stamenov. John Benjamins Publishing Co., December 2002.

8) Gallese, Vittorio. "The 'Shared Manifold' Hypothesis: From Mirror Neurons to Empathy."