

Biology 202 Spring 2004 Web Paper Forum


Comments are posted in the order in which they were received, with earlier postings appearing first below on this page.

Does Testosterone Really Lead to Aggression?
Name: Cham Sante
Date: 2004-02-20 00:19:29
Link to this Comment: 8300



Biology 202
2004 First Web Paper
On Serendip

It is a common myth that testosterone causes aggression, but is there a biological reason to back up this assertion? Some say there is, emphatically rattling off statistics and experimental evidence, while others are armed with ambiguous or even contradictory information with which to contest this argument. The bottom line is that we do not know for sure whether or not testosterone causes aggression (how problematic the idea of cause and effect can be in biology!), and so at this point we must turn away from the enticing idea that there exists a clear and definitive answer to this question. We must instead turn our attention to evaluating the available information, in order to better understand the role of testosterone in guiding behavior.

According to evolutionary biological theory, aggression serves an important function for both individual survival and procreation potential. What it comes down to is this: competition arises when resources are limited, and animals must therefore actively compete in order to increase their own fitness. It does not take a biologist to infer that aggression is advantageous at both the individual and genetic levels (1).

Hormones are inextricably linked to behavior, as seen by the impact that their presence or absence has on an organism. In terms of aggression, there exists intriguing evidence of a definite connection between the hormonal effects of testosterone and the outward expression of aggressive behavior (1). For example, castration experiments on various species show that castration leads to a marked decrease in aggression. Furthermore, when testosterone is replaced through hormone therapy in these castrated animals, aggression increases and is restored to its original pre-castration level (1). Taken together, this seems to present a strong argument for the role of testosterone in aggression. However, the story does not end here: if we are to suppose that testosterone does in fact lead to aggressive behavior, we must then ask how and why it does. In doing so, we might just find that the original supposition falls through.

Testosterone exerts its hormonal and behavioral effects upon interaction with androgen receptors (i.e., when converted into 5-alpha-dihydrotestosterone) or with estrogen receptors (i.e., when converted into estradiol by aromatase) (2). According to some, there exists a "critical time period" (i.e., during development) when testosterone serves to "sensitize" particular neural circuits in the brain. Presumably, this sensitization allows for the effects of testosterone that manifest in adulthood. A recent theory builds upon this story, adding the idea that almost immediately after birth, testosterone leads to the establishment of an "androgen-responsive system" in males. And what about females? It is presumed that a similar androgen system is set up in females, "although a greater exposure to androgens is required to induce male-like fighting" (2).

Although it is not the primary function of most hormones, neural activity can be modulated by their presence. For example, it has been shown that some hormones can modify cell permeability and therefore have a crucial impact on ion concentration, membrane potential, synaptic transmission, and thus neural communication and behavioral outcomes (2). More specifically, when a hormone such as testosterone acts on a target neuron, the amount of neurotransmitter that is released is significantly affected. For example, experimental data suggest that testosterone acts on serotonergic synapses and lowers the amount of 5-HT available for synaptic transmission. This is important when coupled with the fairly well accepted idea that the presence of 5-HT serves to inhibit aggression, as shown convincingly in studies done on male rhesus monkeys: serotonin reuptake inhibitors such as fluoxetine and several other antidepressants lead to a significant decrease in aggression in both monkeys and humans (2).

Although convincing relationships have been found between testosterone and aggression, hormones in general cannot cause a particular behavioral outcome; they can only facilitate or inhibit the likelihood that such an outcome will occur. For example, the mere presence or level of testosterone is not sufficient to invoke aggressive behavior, as seen in the significant population of males who are not aggressive. There must therefore be other factors involved: at the hormonal level, what about the effects of noradrenaline, acetylcholine, or glutamate? It is important to remember here that the endocrine system consists of a complex array of communication pathways, none of which act independently (2).

Furthermore, we know that biological factors do not act in a vacuum, and we must therefore concede significant impact from environmental and social factors as well. For example, some studies have found that testosterone level is not the best predictor of aggression; rather, obesity and lower levels of "good" cholesterol tend to be the best predictors of aggressive behavior in human males (3). Additionally, it has been shown that social status greatly influences the presence and degree of aggressive behavior in both animals and humans. Higher levels of social status correspond to higher levels of testosterone, although the quandary remains: is this elevated status a result of elevated testosterone levels and the evolutionarily advantageous aggressive behavior they might influence, or is the testosterone level a result of the heightened social status (i.e., building upon the well-supported idea that "winning" social competition leads to an increase in testosterone levels) (4)? It is the age-old nature versus nurture debate, or perhaps more appropriately, nature and nurture discussion.

To come full circle and reiterate this discussion's opening declaration: we do not know for sure whether or not testosterone leads to aggression. Therefore, any assertion of a causal relationship between the two is instantly problematic. Instead, we must continue to learn and to discuss the various possibilities with an open mind, in order to come to a better understanding of the role that testosterone and other hormones play in aggressive behavior.

Resources

1) Gender Website, a comprehensive cross-disciplinary approach to gender difference, touching upon areas such as psychology, genetics, neurobiology, and development, to name a few.

2) Simpson, Katherine. "The Role of Testosterone in Aggression." McGill Journal of Medicine, 2001. A thorough biological examination of aggression and the role that hormones play in facilitating/inhibiting aggressive behaviors. Many studies cited, comprehensible graphs presented. Available at: http://www.med.mcgill.ca/mjm/v06n01/v06p032/v06p032.pdf

3) DeNoon, Daniel. "Don't Blame Testosterone for Aggression: Angry, Hostile Men Don't Have Extra Sex Hormone." WebMD Medical News, November 11, 2003. A news article reporting on recent findings that testosterone might not be the most important factor in aggression.

4) Steroids Website, a website dedicated to education regarding anabolic-androgenic steroids. Informative articles available, such as "Psychological and Behavioral Effects of Endogenous Testosterone Levels and Anabolic-Androgenic Steroids Among Males: A Review."



Schizophrenia
Name: Natalie Me
Date: 2004-02-22 12:54:44
Link to this Comment: 8351



Biology 202
2004 First Web Paper
On Serendip

I have always been interested in mental disorders, particularly ones with dramatic, debilitating effects on an individual's behavior and brain. Illnesses such as severe depression, bipolar disorder, and the more intense schizophrenia have intrigued me my whole life. Diseases of the mind seem to be uniquely connected to each other, and connected to humanity in a very intimate yet dissociative way. It seems to me that the brain is the great mystery of the universe, and even greater a mystery is that of the mind. With all our modern scientific advancements, what prevents us still from overcoming these obstacles as a society? Why can we not yet see into the mind enough to heal it from a disease like schizophrenia?

This course has prompted me to look at these issues more intensely, and as I go through other courses and encounter certain situations throughout the semester, I have been constantly reminded of this mind/body connection. A few weeks ago I was reading the January edition of Scientific American, which had an interesting article on schizophrenia. It spurred my interest, and I began to look online for more information about schizophrenia's symptoms, effects, treatment, and research. What do we know about this disease, and how are we beginning to answer the question of mind/body connectedness through the search for the cause and cure of schizophrenia?

Schizophrenia typically makes its appearance during an individual's late teens and early twenties. This may differ between men and women, with women developing symptoms up until their thirties (1). I found this particularly interesting and wonder why this would be the time period for developing such a destructive disease. I wonder if it has anything to do with stress and the challenges of adolescence and early adulthood. Those times of particular pressure may force the brain to self-destruct in a way. Often, homelessness, poverty, and unemployment are associated with schizophrenia; however, these are usually secondary to the illness's devastating effects (3).

Symptoms of schizophrenia vary widely. They range from observable behaviors such as apathy, decreased speech and movement, sleeping problems, poor health or appetite problems, and money management problems to symptoms that are harder to detect, such as delusions and hallucinations, obsessive thoughts and compulsivity, sad or depressed mood, poor concentration, distrust, and anxiety (2). Unfortunately, many of these symptoms can be misdiagnosed, as they mirror other mental illnesses. Schizophrenia may early on be mistaken for depression or bipolar disorder and treated inappropriately. The most disturbing symptoms of schizophrenia are a distorted perception of reality caused by hallucinations and delusions. These can alter one's personality, turning the sufferer into a very different person.


Treatment of schizophrenia also varies, with inconsistent results. Most frequently, medications are prescribed to inhibit the intense symptoms patients suffer. Treatment is more often than not a life-long management problem that depends on compliance and dedication from an individual who is not always competent enough to maintain such a regimen. Psychotherapy is sometimes used, though more frequently some form of counseling is provided for family members rather than the patients, due to the emotional burden that supporting someone with schizophrenia can cause (3). A combination of different types of medications has been used to treat the varying symptoms of schizophrenia, including antipsychotic, antidepressant, and antianxiety medications. These medications, especially when taken in high doses and for long periods of time, may have seriously detrimental physical and mental side effects that further discourage patients from sticking with their harsh regimen of treatment (4).

Treatment also depends on the causes of schizophrenia and is therefore always changing due to new evidence supporting one cause or another. This was my original question and interest in writing this paper: discovering the connection between cause and treatment, and those links between the mind and body. For a long time, medications were serotonin-dopamine antagonists, supposedly treating the deficit in the brain causing schizophrenia. More recently, though, it has been discovered that dopamine is not the primary brain agent causing the illness. Schizophrenia appears instead to arise from a multi-faceted system of breakdowns in the function of the brain.

This brings me to my main concern. According to the January Scientific American, "scientists have long viewed schizophrenia as arising out of a disturbance in a particular brain system – one in which brain cells communicate using a signaling chemical, or neurotransmitter, called dopamine" ((5), p. 50). It has recently been found, however, that consistent with schizophrenia's attack on multiple systems, the disease may instead involve glutamate, a neurotransmitter that plays a role in many different functions of the brain. Scientists discovered that the NMDA glutamate receptor is blocked or inhibited in schizophrenia patients. Glutamate is a more "pervasive neurotransmitter," affecting dopamine receptors as well. This abnormality would also explain why dopamine was originally thought to be the agent responsible for schizophrenia. As I am not a biologist, some of this is a bit confusing to me.

What I have found interesting is the giant step forward in knowledge that this discovery has provided. It answered a tremendous number of questions, including how one neurotransmitter could cause such a wide range of problems. Really, it didn't – it was part of a larger process and malfunction. This indicates to me that perhaps the mind and body are connected at a more scientific level. We, at this time, lack the explanatory skills and evidence to prove exactly how it works, but perhaps someday we may better comprehend the intricate ways in which our nervous system and intelligence operate.

I did have some additional questions after doing my research. From a social perspective, I wonder about the resources available both to the patients suffering from the disease and to those more peripherally affected, such as family members and friends. Why do many suffering from schizophrenia end up on the street, without jobs and cut off from resources? I stumbled onto a web page listing the schizophrenia diagnosis criteria available to physicians online (6). The behaviors and symptoms listed were quite specific but also very familiar to me. Schizophrenia exists within our collective memory. I was reminded of the social theoretical field of symbolic interactionism, which posits that people occupy a role expected of them. The sociology of deviance and medical sociology are particularly applicable here. It is interesting to wonder, though, what effect a shared knowledge of the symptoms and expected behavior of the sick role has upon someone who finds themselves labeled as 'mentally ill' or 'schizophrenic.'

There is a lot of information out there about schizophrenia. I feel as though I've only scratched the surface. The social and psychological effects of the disease are far-reaching, not only for those who suffer but for family members, friends, and the wider society as well. It is good to know that progress is being made in the search for a cause and treatment, though I wonder if we will ever really know exactly how the mind interacts with the brain, why things go wrong, and how to fix that.



References

1) Schizophrenia.com

2) MentalHealth.com

3) PsychCentral.com

4) PsychologyInfo.com

5) Javitt, Daniel C., and Joseph T. Coyle. "Decoding Schizophrenia." Scientific American, January 2004, Volume 290, Number 1.

6) FPnotebook.com


HOW DOES MARIJUANA AFFECT THE BRAIN?
Name: Akudo Ejel
Date: 2004-02-22 21:59:44
Link to this Comment: 8374

AKUDO EJELONU
NEUROBIOLOGY FIRST PAPER
SPRING 2004


Pot, weed, grass, ganja, and skunk are some of the common words used to describe the dried-leaf drug known as marijuana. Marijuana comes from the cannabis plant and is "usually smoked or eaten to entice euphoria" (1). Throughout the years, there has been research on the negative and positive effects of marijuana on the human body and the brain. Marijuana is frequently beneficial in the treatment of AIDS, cancer, glaucoma, multiple sclerosis, and chronic pain. However, researchers such as Jacques-Joseph Moreau have worked to explain how marijuana harms the functions of the central nervous system and hinders the user's memory and movement. The focus of my web paper is how the chemicals in marijuana, specifically cannabinoids and THC, affect the memory and emotions of a person's central nervous system.

Marijuana impinges on the central nervous system by attaching to the brain's neurons and interfering with normal communication between them. These nerves respond by altering their initial behavior. For example, if a nerve is supposed to assist in retrieving short-term memory, cannabinoid receptors make it do the opposite: someone who has to remember what he did five minutes ago, after smoking a high dose of marijuana, has trouble. The marijuana plant contains 400 chemicals, and 60 of them are cannabinoids, psychoactive compounds that are produced inside the body after cannabis is metabolized or are extracted from the cannabis plant. Cannabinoids are the active ingredients of marijuana. The most psychoactive cannabinoid in marijuana, and the one with the biggest impact on the brain, is tetrahydrocannabinol, or THC. THC is the main active ingredient in marijuana because it affects the brain by binding to and activating specific receptors, known as cannabinoid receptors. "These receptors control memory, thought, concentration, time and depth, and coordinated movement. THC also affects the production, release or re-uptake (a regulating mechanism) of various neurotransmitters." (2) Neurotransmitters are chemical messenger molecules that carry signals between neurons. Some of these effects are personality disturbances, depression, and chronic anxiety. Psychiatrists who treat schizophrenic patients advise them not to use this drug, because marijuana can trigger severe mental disturbances and cause a relapse.

When one's memory is affected by a high dose of marijuana, short-term memory is the first to suffer. Marijuana damages short-term memory because THC alters the way in which information is processed by the hippocampus, a brain area responsible for memory formation. "One region of the brain that contains a lot of THC receptors is the hippocampus, which processes memory." (3) The hippocampus is the part of the brain that is important for memory, learning, and the integration of sensory experiences with emotions and motivation; it also converts information into short-term memory. "Because it is a steroid, THC acts on the hippocampus and inhibits memory retrieval." (4) THC also alters the way in which sensory information is interpreted. "When THC attaches to receptors in the hippocampus, it weakens the short-term memory" (5) and damages nerve cells by creating structural changes in the hippocampal region of the brain. When a user takes a high dose of marijuana, new information does not register in the brain; it may be lost from memory, and the user cannot retrieve new information for more than a few minutes. There is also a decrease in the activity of nerve cells.

There are two types of memory behavior affected by marijuana: recognition memory and free recall. Recognition memory is the ability to recognize correct words. Users can usually recognize words that they saw before smoking, but they also claim to recognize words that they did not previously see. This mistake is known as a memory intrusion. Memory intrusions are also a consequence of THC's effect on free recall. "Marijuana disrupts the ability to freely recall words from a list that has been presented to an intoxicated subject." (6) For example, if a list of vocabulary words is presented to the intoxicated subject and a few minutes later they have to recall the words on the list, the only words they remember are the last group of words, not the words at the beginning of the list. This is an indication that their memory storage has been affected. "The absence of an effect at short term delay times indicates that cannabinoids did not impair the ability to perform the basic task, but instead produce a selective learning and/or memory deficit." (7)

I did an informal study with two college students (Student A and Student B) who both smoke marijuana every other week. The study was conducted an hour before, during, and after they were under the influence of the drug. Student A was watching television before she smoked marijuana; when asked which advertisements had played before the show started, she got four out of five answers correct. After this first section, she smoked a small dose of marijuana twice within an hour. Fifteen minutes after she smoked her last blunt, she continued her regular activity of watching sitcoms. When a commercial came on, I would ask her simple questions, like what had happened before the show went to the commercial break. Her responses were macro-answers about what was going on, but when I asked what the main character was wearing, she did not remember. This was striking because the protagonist wore a bright yellow suit that she had commented on when the show began ten minutes earlier. Her short-term memory was weakened: she was only able to remember big-picture information, not small details. Though the results are interesting, I know that I might have had a different response from someone else, because it depends on how often the user smokes and whether they had a good memory prior to smoking weed.

Marijuana also impairs emotions. When smoking marijuana, the user may have uncontrollable laughter one minute and paranoia the next. This instant change in emotions has to do with the way that THC affects the brain's limbic system. The limbic system is another region of the brain that governs one's behavior and emotions. It also "coordinates activities between the visceral base-brain and the rest of the nervous system." (8) I will now use Student B to describe how emotions are affected by marijuana. Student B is an articulate young woman who has a troublesome relationship with her best friend, which makes her upset and tense. After she smoked one high dose of weed, her body relaxed and she was jubilant; however, she had trouble formulating her thoughts clearly and would talk in pieces. Research suggests that a person needs a high dose of marijuana to be in a state of euphoria, and a high dose, measured as 15 mg of THC, "can cause increased heart rate, gross motor disturbances, and can lead to panic attacks." (9) Thankfully, Student B did not experience any of these extreme effects.

College students usually smoke marijuana because they are stressed over schoolwork and feel that marijuana can help them unwind. I have encountered marijuana smokers who are chilled and have no worries in the world, but after the effect of the drug wears off, they are sometimes back to tackling their problems, or in the original state they were in before the drug. The happiness that marijuana usually causes the user is not a lasting effect, because even though a user smokes weed to get away from the troubles of his or her own life, those problems must still be faced after the effects of the drug wear off. An organization called Parents: The Anti-Drug surveyed college students and found that "compared to the light users, heavy marijuana users made more errors and had more difficulty sustaining attention." (10) This was evident through my second experiment with Student B, though not everyone who smokes high doses of marijuana experiences the same effect.

The chemicals in marijuana bring cognitive impairment and trouble with learning for the user. "Smoking [marijuana] causes some changes in the brain that are like those caused by cocaine, heroin, and alcohol. Some researchers believe that these changes may put a person more at risk of becoming addicted to other drugs such as cocaine and heroin." (11) To prevent such harm, one must be cautious of one's actions; those who do not do drugs do not risk harm. So please, the next time you light up, remember that your central nervous system and brain will be at risk.

1) Online Dictionary

2) Marijuana: The Brain's Response to Drugs, A Good Web Source

3) Mind Over Matter: Marijuana Series, A Good Web Source

4) Alcohol Addiction & The Limbic System, A Good Web Source

5) Marijuana: The Brain's Response to Drugs, A Good Web Source

6) Cellular and Molecular Mechanisms Underlying Learning and Memory Impairments Produced by Cannabinoids, A Good Web Source

7) Cellular and Molecular Mechanisms Underlying Learning and Memory Impairments Produced by Cannabinoids, A Good Web Source

8) Marijuana and the Brain, by John Gettman. High Times, March 1995, A Good Web Source

9) Alcohol Addiction & The Limbic System, A Good Web Source

10) Parents: The Anti-Drug -- Drug Information, A Good Web Source

11) Marijuana: Marijuana Brain Effects, A Good Web Source


Chocolate on the Brain
Name: Kristen Co
Date: 2004-02-23 10:19:05
Link to this Comment: 8389



Biology 202
2004 First Web Paper
On Serendip

While thinking of things to put in a gift basket for a friend who was in the hospital, my roommate turned to me with some of her German chocolates and inquired if indeed it was true that chocolate makes a person happy. "It has something to do with endorphins in the brain, right?" she asked me. I decided to do some research. Does chocolate make you happy by affecting the brain? Intrigued, I turned to the Internet and searched for "chocolate on the brain." Lo and behold, I discovered that the over 300 chemicals that compose chocolate have numerous and varied effects on our bodies through the nervous system (1).

Chocolate can affect the brain by causing the release of certain neurotransmitters. Neurotransmitters are the molecules that transmit signals between neurons. The amounts of particular neurotransmitters we have at any given time can have a great impact on our mood. "Happy" neurotransmitters such as endorphins and other opiates can help to reduce stress and lead to feelings of euphoria. At connections between neurons, neurotransmitters are released from the pre-synaptic membrane and travel across the synaptic cleft to react with receptors in the post-synaptic membrane. Receptors are specified to react with particular molecules, which can trigger different responses in the connected neurons. The proper neurotransmitter can trigger certain emotions.

It turns out that my roommate was correct in her assertion that chocolate affects the levels of endorphins in the brain. Eating chocolate increases the levels of endorphins released into the brain, giving credence to the claim that chocolate is a comfort food. The endorphins work to lessen pain and decrease stress (2). Another common neurotransmitter affected by chocolate is serotonin. Serotonin is known as an anti-depressant. One of the chemicals which causes the release of serotonin is tryptophan found in, among other things, chocolate (1).

One of the more unusual neurotransmitters released by chocolate is phenylethylamine. This so-called "chocolate amphetamine" causes changes in blood pressure and blood-sugar levels, leading to feelings of excitement and alertness (1). It works like amphetamines to increase mood and decrease depression, but it does not result in the same tolerance or addiction (3). Phenylethylamine is also called the "love drug" because it causes your pulse rate to quicken, resulting in a feeling similar to being in love (4).

Another interesting compound found in chocolate is the lipid anandamide. Anandamide is unique due to its resemblance to THC (tetrahydrocannabinol), a chemical found in marijuana. Both activate the same receptor which causes the production of dopamine, a neurotransmitter which leads to feelings of well being that people associate with a high. Anandamide, found naturally in the brain, breaks down very rapidly. Besides adding to the levels of anandamide, chocolate also contains two other chemicals which work to slow the breakdown of the anandamide, thus extending the feelings of well-being (4). Even though the anandamide in chocolate helps to create feelings of elation, the effect is not the same as the THC in marijuana. THC reacts with receptors more widely dispersed in the brain and is present in much larger amounts. It would take twenty-five pounds of chocolate to achieve a similar high to that of marijuana (1).

Theobromine is another chemical found in chocolate that can affect the nervous system. Besides having properties that can lead to mental and physical relaxation, it also acts as a stimulant similar to caffeine. It can increase alertness as well as cause headaches. There is much debate as to whether or not caffeine even exists in chocolate. Some scientists believe that it is the less potent theobromine which is solely responsible for the caffeine-like effects (5).

When examining the effects of chocolate on the nervous system, it is also important to point out that chocolate does not treat all nervous systems the same. Many animals, for example, can be killed by the chemicals in chocolate. Theobromine in particular does not metabolize as quickly in other animals such as dogs and horses (1).

Chocolate has a long history of association with feelings of well-being. It has been favored by people ranging from the ancient Aztecs to high-society Victorians to Popes. Chocolate also has a history as a known aphrodisiac (6). This makes sense when you combine phenylethylamine's ability to quicken the heart, the feelings of euphoria from anandamide, theobromine's power to cause relaxation, and the other neurotransmitters sending pleasurable feelings throughout the brain. Even the names associated with chocolate imply its power. Anandamide is derived from the word ananda, which is Sanskrit for bliss, and theobromine can be traced back to the Greek theobroma, meaning "food of the gods" (6).

It seems to be true that eating chocolate can increase feelings of euphoria as well as decrease stress and pain, but is it possible that chocolate can be addictive? There are many people out there who consider themselves to be addicted to chocolate, partly because of its mood-enhancing qualities. Many questions, however, still remain regarding whether chocolate can, like the drugs with similar chemicals and effects, be an addictive substance. The majority of scientists seem to agree that chocolate is not addictive. Some go as far as to say that chocolate is merely a kind of placebo that only causes these effects because people believe that it will. Chemicals such as phenylethylamine and anandamide can be found in other edibles in much greater amounts, but they don't seem to have the same effect (1). There are plenty of self-professed chocoholics out there who would, however, refute this claim and who continue to proclaim the wonders of chocolate.

It is also important to remember that not all chocolate is created equal. The strength of chocolate depends greatly on how it is manufactured. The cacao bean, from which chocolate is derived, has a naturally bitter taste and is greatly diluted by sugars and other ingredients. In the United States, something needs only to have 10% cacao in it to be considered chocolate (5). When examining my roommate's collection, most of which is from Germany, I found that cacao levels were around 30%, the dark chocolate being slightly higher. It seems that in diluted chocolate, the effects would be minimal.

I think it is quite fascinating that a food such as chocolate can have such an effect on the operations of our brain and thus our perceptions of the world. Since I met my roommate over a year ago, I have significantly increased my chocolate intake. I also think I'm a happier person than I was before we met. Could it be that the chocolate I consume now almost on a daily basis has something to do with my subtle transformation in mood? I would like to think not, but it is an interesting thought. I do, however, instinctively find myself reaching over to the chocolate stash whenever I start feeling a little depressed or overwhelmed and it always seems to make me feel better.


References

1)BBC News
2)"Endorphins: The Body's Stress Fighters"
3)http://www.chocolate.org/refs/index.html
4)"All About Chocolate: Chocolate and your Health"
5)http://www.mrkland.com/fun/xocoatl/index.htm#SEL
6)"Chocolate: Melting the Myths"
7)Neuroscience for Kids - Chocolate and the Nervous System


Alzheimer's Disease: The Loss of One's Self
Name: Sarah Cald
Date: 2004-02-23 14:25:05
Link to this Comment: 8394


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Our class discussions of late have related behavioral characteristics to the anatomy of the brain. We have questioned what it is that defines a person's "self." What is it that processes various sensory inputs in an individual and formulates that individual's personal outputs, feelings and attitudes in response to these inputs? For the time being, we have given the responsibility of input processing to the I-box. There are several mental illnesses that may be accompanied by dementia. A person suffering from one or more of these illnesses can be characterized as having "lost one's self" (1). In this paper, I hope to understand how Alzheimer's disease causes loss of memory and, eventually, the loss of one's "self." What factors of the disease determine how much of one's original "self" is lost from day to day?


Four and one-half million Americans are estimated to have Alzheimer's Disease (AD) to a greater or lesser degree (2). Alzheimer's disease is a complex condition that affects the brain and is considered a major public health problem for the United States. AD has a huge impact on individuals, families, the health care system and society. While scientific research has enabled scientists to develop a better understanding of Alzheimer's, and consequently more effective diagnosis, effective treatments have been elusive. Overall, the disease remains enigmatic.

Alzheimer's disease was first observed and described in 1906 by German physician Dr. Alois Alzheimer during the autopsy of a woman with dementia (2). Alzheimer's is an irreversible, progressive brain disease that slowly destroys memory and thinking skills. As it progresses, the disease eventually prevents those suffering from it from performing even simple tasks (4). Although once viewed as rare, research has shown that AD is the leading cause of dementia. Dementia is an umbrella term for several symptoms, all of which result in a decline in thinking and cognitive capabilities. Such symptoms include gradual memory loss, reasoning problems, judgment problems, learning difficulties, loss of language skills, and a decline in the ability to perform normal, routine tasks. People with dementia also experience personality and behavioral changes such as agitation, anxiety, delusions and hallucinations (4). It is important to note that dementia is not a disease itself, but a group of symptoms that usually accompanies a disease. Accordingly, dementia is not solely a result of Alzheimer's; it is also experienced in many related disorders of the brain.

The progression of AD varies widely and can last anywhere from 3 to 20 years. Alzheimer's first affects the areas of the brain that control memory and thinking skills; as the disease progresses, cells in other regions of the brain die as well (2). Researchers aren't certain of the causes of AD, and theories of its cause have ranged from the intake of excessive aluminum from modern cookware to exposure to pesticides. At present, the causes remain open to scientific debate. What is known is that people with the disease have an abundance of two abnormal structures in the brain: plaques and tangles. Plaques are dense accumulations of a protein called beta-amyloid. Tangles are twisted fibers caused by changes in a protein called tau. The beta-amyloid plaques reside in the spaces between neurons in the brain, and the neurofibrillary tangles clump together inside the neurons. Plaques and tangles block the normal transport of electrical messages between the neurons that enable us to think, talk, remember and move. As AD progresses, nerve cells die, the brain shrinks, and the ability to function deteriorates (5). The destruction and death of nerve cells cause the memory failure, personality changes and other features of AD (5). To be sure, plaques and tangles develop in the brains of many older people; however, the brains of AD patients have them to a much greater extent. While there is strong evidence suggesting these protein accumulations are involved in AD, their exact role in the disease continues to elude scientists.

The two biggest risk factors for AD appear to be genetic predisposition (about 30 percent of people who have AD have a family history of dementia) and age (6). As many as 10 percent of people 65 years of age and older have AD, and nearly 50 percent of people 85 years and older have the disease (6). Sporadic AD refers to cases where no other blood relatives are affected by the disease; this type of AD occurs in about 75 percent of cases (4). In these cases, the risk of developing AD increases as a person gets older. The remaining 25 percent of AD cases are hereditary, meaning they are caused by mutated genes and tend to cluster within families. These cases can be divided into early-onset disease (symptoms begin before 65 years of age) and late-onset disease (symptoms begin after age 65) (4). Scientists have identified several genes that play a role in early-onset AD, the rarer form of the disease, which strikes people as young as their 30s (7). Research has also identified a gene producing a protein that may play a role in late-onset AD, although this is far from certain.
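As a rough sanity check, the prevalence percentages above can be set against the earlier estimate of four and one-half million American cases. This is only a back-of-the-envelope sketch: the population figures below are assumptions (approximate 2000 U.S. Census magnitudes), not numbers taken from the paper or its sources.

```python
# Hypothetical back-of-the-envelope check of the cited AD prevalence rates.
# POP_65_PLUS and POP_85_PLUS are assumed values (rough 2000 U.S. Census
# magnitudes), not data from the paper.
POP_65_PLUS = 35_000_000   # assumed U.S. residents aged 65 and older
POP_85_PLUS = 4_200_000    # assumed U.S. residents aged 85 and older

cases_65_plus = 0.10 * POP_65_PLUS   # "as many as 10 percent" of the 65+ group
cases_85_plus = 0.50 * POP_85_PLUS   # "nearly 50 percent" of the 85+ group

# The 85+ group is a subset of the 65+ group, so the two figures overlap;
# even the 10%-of-65+ estimate alone lands in the same ballpark as the
# 4.5 million total cited above.
print(f"~{cases_65_plus / 1e6:.1f} million cases implied by the 65+ rate")
print(f"~{cases_85_plus / 1e6:.1f} million cases implied by the 85+ rate")
```

With these assumed populations, the 10 percent rate alone implies roughly 3.5 million cases, consistent in order of magnitude with the 4.5 million estimate quoted earlier.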

There is no cure for AD. While there are a number of treatment regimens, none can reverse the effects of the disease, and their overall effectiveness is far from clear. Several drugs have been FDA approved to treat some of the symptoms of AD in an attempt to improve the quality of life of those afflicted with the disease (7). Interestingly, some studies have shown that participating in mentally stimulating activities such as reading books, doing crossword puzzles, or going to museums may be associated with a reduced risk of AD (7). In addition, this "use-it-or-lose-it" theory postulates that repetitive mental activity may improve certain cognitive skills and make them less susceptible to brain damage (7).

While scientific research has furthered the understanding of AD, it has yet to address the possibility that the I-function in Alzheimer's patients is impaired. The protein build-up of plaques and tangles, as well as genetic mutations, play a role in the etiology of AD. However, no investigations have questioned how these factors affect the I-box. Alzheimer's is viewed as a disease that causes patients to lose their "self." If this is the case, and the I-function is the part of the human brain responsible for defining one's "self," then it would seem logical that AD directly affects the I-function. The possible connection between AD and the I-function is one worth investigating further. Perhaps insight into the I-box is the missing link in completely understanding the mechanism of Alzheimer's.

References

1)Alzheimer's Disease: A Family Affair and a Growing Social Problem

2)What is Alzheimer's

3)Alzheimer's Association: About Alzheimer's

4)The National Women's Health Information Center: Alzheimer's Disease

5)Alzheimer's Disease: Unraveling the Mystery

6)National Institute of Neurological Disorders and Strokes: Alzheimer's Disease Information Page

7)FDA: Alzheimer's Searching for a Cure


Health: Mind and Society I
Name: Aiham Korb
Date: 2004-02-23 20:25:30
Link to this Comment: 8403


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In the ethnographic study of disability, the subject shifts from THEM to US, from what is wrong with them to what is wrong with the culture that history has made for all of us, from what is wrong with them to what is wrong with the history that has made a THEM separate from an US, from what is wrong with them to what is right with them that they can tell us so well about the world we have all inherited. (1)



This study, the first of three papers, is intended to shed light on the effects of psychosocial factors on the human body and their influence on health. It will explain the physiological basis upon which the environment and society can promote poor health. Disability and pathology are symptoms of deeper problems; disease is the end product of malfunctioning systems. In the interest of better understanding the etiology of disease in human beings, we must recognize the many complex interacting systems that contribute to health and epidemiology. For this, we must take a step back and consider basic questions such as "What is health?". As defined by the World Health Organization, "health is a state of complete physical, mental and social well-being and not merely the absence of disease and infirmity" (2). In this definition, accepted by most countries in the United Nations, physical well-being is clearly only one of several factors that constitute good health. So let this be our point of departure, and let us ask next what problems face global health.

Ironically, poverty is still considered the number one problem linked with poor public health around the world. There seems to be a wide gap between the WHO's definition of health and how health is actually being approached. As society becomes more technologically advanced, the focus is shifting to a Bio-medical Model. With this specificity, the problem of health today may be one of limited perception. Our society seems to have forgotten the principles from which it departed on the quest for "Healthy People". For example, the World Health Report 2000 found that despite the fact that the U.S. health system spends a higher percentage of its GDP than any other nation, it ranked 37 out of 191 countries in overall performance (3). In 2003, the Census Bureau recorded more than 43.6 million Americans without health insurance (4). These absurd paradoxes of our society are grave symptoms of malfunctioning political and economic systems. Yet these figures are often forgotten because of excessive specialization on the physiological aspects of health. Therefore, we need to reconsider the larger point of view, and the many variables that affect the health of individuals and populations. Just as the WHO's definition suggests, psychological and socio-economic well-being are essential to the overall formula of health.

It is the awareness and integration of these "global" factors that we will attempt to introduce with PsychoNeuroImmunology. This field studies the interactions between the mind, the Nervous and the Immune Systems. PsychoNeuroImmunology will help us establish a bridge between the material (biological and physiological) factors and "non-material" (societal, economic, political) factors that affect health and disease. The Nervous System, the brain in particular, is at the center of those interactions. It is the principal link between the mind (or the mental state) and the body's immune system. There are several existing models that try to map these complex interactions. For example, Kemeny's X-Y-Z model investigates the linkages between psychological processes, physiological mediators and disease progression (5). Kemeny indicates that "the brain is the most proximal physiological substrate through which psychological factors act on peripheral neural systems [...] to affect pathophysiological mechanisms and clinical disease" (5). Another model where the Nervous System is at the heart of the interacting factors is Costanzo's Biopsychosocial Model (6). This is a more complete model, integrating the psychosocial, biological and behavioral catalysts of health. According to the Biopsychosocial Model, these factors affect, via stress, the Neuroendocrine and Immune Systems, which in turn determine disease vulnerability and progression. Thus, by mapping those interactions, it provides us with the mechanisms of mind-body relations in disease. Costanzo asserts: "Interactions between psychosocial and immunologic factors are relevant to a variety of diseases including inflammatory diseases, cardiovascular disease, infectious diseases, cancer, diabetes, osteoporosis, muscle wasting, and multiple sclerosis, and processes such as wound healing, surgical recovery, and efficacy of vaccination" (6).
It is true that various longitudinal studies have shown stress to be strongly related to heart disease, especially among people of low socio-economic status. Also, loneliness and social isolation have been linked to increased morbidity and mortality. In order to approach these issues, we will start by examining the mechanisms of interactions between the NeuroEndocrine and Immune Systems.

The "non-material" factors in health (psychological, social, economic and political) can affect the human body by inducing change in its physiological systems. This change is brought about by the workings of the NeuroEndocrine and Immune Systems, and their effects on the rest of the body. Environmental events that are challenging, uncontrollable or unpredictable activate the body's stress or "fight-or-flight" response. This response triggers physiological and behavioral changes in taxing or threatening situations (7). The Sympathetic Nervous System promotes the release of hormones that affect both the Nervous and Immune Systems. "A key hormone shared by the central nervous and immune systems is corticotropin-releasing hormone (CRH); produced in the hypothalamus and several other brain regions, it unites the stress and immune responses" (7). CRH causes the pituitary gland to release adrenocorticotropin hormone (ACTH), which triggers the adrenal glands to make Cortisol. The HPA axis is composed of the Hypothalamus and the Pituitary gland, located in the brain, and the Adrenal glands, which lie above the kidneys. The HPA axis and its key hormone, Cortisol, are major components of the NeuroEndocrine stress response. "Cortisol is a steroid hormone that increases the rate and strength of heart contractions" (7). Cortisol is also an immunosuppressor, a potent immunoregulator and anti-inflammatory agent. This is a key point, because this arousal is thought to be a mechanism by which the stress response affects health, causing increasing "wear and tear on bodily systems, and damage to arteries, neural systems, and organ systems, and reducing resistance to pathogenesis" (8). This emphasizes the inter-dependence of the nervous and immune systems, and indicates that the malfunction of their regulating mechanisms can have serious consequences for health. "The adaptive responses may themselves turn into stressors capable of producing disease" (7).
Therefore, stress can have negative outcomes on health by dampening the functioning of the immune system and increasing the body's susceptibility to infections and diseases. "The regulation of the immune system by the neurohormonal stress system provides a biological basis for understanding how stress might affect these diseases" (7). It is upon this basis that we will develop the understanding of how psychosocial stress promotes pathology. For example, the feeling of loneliness in humans is associated with an adrenaline-like pattern of activation of the stress response and high blood pressure (7). Our attention will turn to such psychosocial catalysts of disease.

The disparity between the Bio-medical Model and public health is evidence that the integration of all the variables affecting health is lacking and needed. The Biopsychosocial Model is more comprehensive, and will thus help us in our approach to the problem of mind, society and wellness. PsychoNeuroImmunology gives us a physiological basis upon which we can build the mechanisms of how social interactions, or the lack thereof, can affect health, for instance. In the next paper, we will explore stress and its correlation with socio-economic status. As we move from the biological basis to the "non-material" influences on health, we will begin to attain a wider picture of what well-being means. It will eventually make it possible and meaningful to raise such questions as "Do certain economic systems promote disease? Does a healthy economy necessarily mean a healthy population?"


Sources:

1)Culture as Disability, By Ray McDermott and Hervé Varenne. Serendip website.

2)World Health Organization

3)World Health Report 2000,WHO archives.

4)AMA decries rise in number of uninsured Americans, American Medical Association. Sept. 30, 2003.

5)An interdisciplinary research model to investigate psychosocial cofactors in disease: Application to HIV-1 pathogenesis, By Margaret Kemeny. Brain, Behavior, and Immunity 17. 2003. p. S62-S72.

6)Psychoneuroimmunology and health psychology: An integrative model, By Erin Costanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

7) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.

8) HEALTH PSYCHOLOGY: Mapping Biobehavioral Contributions to Health and Illness. By Andrew Baum and Donna Posluszny. Annual Reviews. Psychology. 1999. 50:137-163.


Stockholm Syndrome: Unequal Power Relationships
Name: Katina Kra
Date: 2004-02-23 20:40:40
Link to this Comment: 8404

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

On August 23rd, 1973, Jan Olsson began a bank robbery that would add a new interpretation to the world's view of hostage situations and the psychological effects behind unequal power relations. It started with the storming of a branch of Kreditbanken in downtown Stockholm, Sweden, and the shooting of the police officers who had gone in after Olsson. With this action, a six-day ordeal and hostage situation known as the Norrmalmstorg Robbery began. Four hostages were taken into the bank's vault. Dynamite was strapped to them, and they were rigged to snare traps so that in case of a gas attack by the police, the hostages would be killed regardless of any rescue attempts. Three women and one man were confined to this small room, fighting to survive. (7) Yet when these captives were released, they had more sympathy for their captors than for the police who had rescued them, and went so far as to publicly decry their own rescue. Two of the hostages became friends with the captors, set up a fund to help pay the defense fees accrued through the captors' trial, and continue to support their captors against the police even today. (8) Psychologist Nils Bejerot named the captives' attachment to their abusers "Stockholm Syndrome," and from this case, the study of a new behavioral attachment disorder began. (7)

In the hands of some psychologists, Stockholm Syndrome has proved an extensible term. It has been invoked to describe the effects of slavery upon the African-American psyche, abusive relationships between men and women, and any situation where the division of power within a relationship of any kind is severely unequal. (7) Though the situations are intuitively connected, there are important differences in the way the term is applied. Consequently, it is necessary to examine the different interpretations of how Stockholm Syndrome occurs within these power situations, and the reactions and strategies of the subjects who are confined to them.

The "misplaced" attachment of subjects to their abusers is not uncommon, and has been documented in many different contexts. It happens with abused children and women, cults, controlling relationships, prisoner of war camps, and other people or institutions that enforce unreasonable control on those who have no recourse. Stockholm Syndrome itself is most commonly associated with hostage situations, the logic being that developing this relationship with an abuser or captor serves the interest of self-protection. (9) This development occurs when there are perceived threats of violence, disempowerment of the subject, high levels of stress or trauma upon the subject, and ultimate dependence upon the person in control for basic survival. (2)

In an act of self-delusion, the victim of Stockholm Syndrome develops conditions in order to reassure themselves they will be protected or cared for. By creating a false emotional attachment and seeking the praise and approval of their captor, they attempt to make a false reality for themselves, in which no harm can come to them. And by defending and/or protecting their captors from police or anyone who "comes to the rescue," they allow themselves to appear as if they have some control in a relationship in which they really have no power. The value of their lives, which the captor grants, is seen as a sign of affection or love, and the captive wishes to reciprocate in order to maintain their own position at that time. By accepting a level of objectification that one should reject as a matter of basic human dignity, hostages or captives weaken their ability to control their emotions. This allows them to become malleable and easily susceptible to the whims of their captors, and creates an unbalanced relationship of attachment between captor and captive. (2, 5, 6, 8)

Many associate the image of hostage and captor with Patricia Hearst and Elizabeth Smart. Both cases involve the kidnapping of a woman for the further pursuit of ideals by their captors. However, these cases can be distinguished by the varying ways that Stockholm Syndrome manipulates the emotions, behavior, and actions of its subject. Patricia Hearst was kidnapped from her home, and locked in a closet where she underwent severe psychological, physical, and sexual abuse before she became a member of the Symbionese Liberation Army. At the point where the members of the SLA began to give her more freedom and liberty to speak, she was given the opportunity to leave the SLA or join and help in their fight. (4, 10) However, Hearst, under the influence of the Stockholm Syndrome, chose to remain with the group as a survival tactic.

"I knew that the real choice was the one which Cin had mentioned earlier: to join them or to be executed. They would never release me. They could not. I knew too much about them. He was testing me and I must pass the test or die." (4)
- Patricia Hearst

The effects of the trauma and abuse are clear here in what one might identify as Hearst's compromised survival instincts. She would rather have stayed with those who had tortured her for nearly two months than risk affronting the SLA. After initiation, Hearst, dubbed "Tania" by the group, helped in a robbery, but when the SLA lost its power in a firefight with the Los Angeles Police Department, she was returned to her family. (4) Unlike the hostages of the Norrmalmstorg Robbery, she distanced herself from the group and her captors when she returned to her regular life, and insisted that her reasons for joining were purely self-protective. Perhaps Patricia Hearst, despite the abuse endured during her kidnapping, was not protecting the others by joining the SLA so much as attempting to save herself by the actions she believed would help. (10)

In the case of Elizabeth Smart, a very different dynamic between captor and captive emerged. At the young age of fourteen, Smart's instincts of survival and self-protection were not as developed as Hearst's, and this lack of maturity resulted in the development of a strong bond between her and Brian Mitchell, an intense form of Stockholm Syndrome. (3) This is exemplified by her failure to seek help. Only three days after her kidnapping, Smart heard her uncle searching and calling for her not far from her hidden location, but did not call out or draw attention to herself. (11) This lack of motivation to be rescued persisted throughout the nine months during which she was held hostage. Many people questioned her and her captors about who she was during this period, but she admitted to nothing beyond what Mitchell had told her to say.

The evidence so far shows no physical abuse of Smart, but there was constant subjection to threats, the trauma of the kidnapping itself, and propaganda forced upon her, all of which resulted in the breakdown of Smart's personal will and allowed a relationship of affection to develop toward her captor. (2) Even during her rescue, Smart was still reluctant, perhaps still believing the myths Mitchell had told her, or convinced that those helping her were hurting her by taking her away from the man to whom she had become so attached. (11) Unlike Hearst, Smart did not speak out against her captor once she had returned to regular life, despite an angry and vocal family. She remained silent about what occurred during the nine months she was under his control, and did not defend her choice to avoid seeking rescue. A predominant sign of Stockholm Syndrome is this sympathy and compassion for one's captor, and even though Smart did not outwardly explain this relationship as the hostages in Sweden had, she showed remnants of it even after she was returned home. (2, 5, 8)

Both of these cases exhibit Stockholm Syndrome through the hostage scenario, but there are many other situations in which its dynamics can be identified. In the mid-19th century, many African-Americans felt betrayed by Lincoln when his government emancipated them; some adamantly refused to leave their masters even when granted freedom. Though slaves were confined to the area over which their master presided and lived with a lingering fear of violence, they could still claim certain areas of their lives as their own, and were not generally as directly threatened as hostages are. Even so, the legacy of domination and abuse manifested itself in "one-sided relationships" in which African-American slaves remained devoted to their masters despite the cruelty they had endured. (1) "Indeed, the regulation of behavior and the resultant adjustment that was made had a direct influence on the consequent formation of the slave's personality." (Huddleston-Mattai, 347) Consequently, this pattern of domination by those with money and power, typically European, over African-Americans is still prevalent today, as parts of society still hold that they are inferior, as in a master-and-slave complex.

Not all potential subjects placed in these situations react in a way that engenders Stockholm Syndrome. Many in similarly unequal power relationships seek revenge or escape as soon as it is offered. Bank hostages have held their captors up to the window to be shot (8), and slaves have killed their masters in rage, so one cannot assume a hard and fast rule that captives will come to inappropriately identify with their captors when placed in such survival scenarios. Strong morals and beliefs are personality traits that may attenuate Stockholm Syndrome in some people. (2) Elizabeth Smart's rapid change, by contrast, may be attributable to her age, her lack of clearly formed values due to inexperience, and her desire for acceptance and obedience.

As a basic concept, Stockholm Syndrome grows out of the duality of a power relationship. A captured person becomes deeply involved with the captor because of the typical confines of the circumstances, and because, even through the abuse and threats, the captor must be accepted as the only source of contact and nurturing. The need, under duress, for approval and reassurance, combined with a fear of severe punishment, creates the precondition for the type of aberrant attachment described as Stockholm Syndrome. Nevertheless, its specific consequences, for Elizabeth Smart, Patricia Hearst, or a generalized category of victims such as African-Americans, are highly variable, and so more careful clinical examination would be merited in order to define the ways in which Stockholm Syndrome affects those who experience it.

References

1. Huddleston-Mattai, Barbara. "The Sambo Mentality and the Stockholm Syndrome Revisited: Another Dimension to an Examination of the Plight of the African American." Journal of Black Studies. Vol. 23, No. 3, pg. 344-357
2. A site about Elizabeth Smart and Stockholm Syndrome.
3. An article written about the expert opinion involving Elizabeth Smart.
4. A site about the Patricia Hearst kidnapping.
5. A dictionary of peace studies and its ideas about Stockholm Syndrome.
6. A site describing the symptoms of Stockholm Syndrome.
7. An encyclopedia that discusses the Norrmalmstorg robbery.
8. An article written about the mental health issues of Stockholm Syndrome.
9. A site describing Stockholm Syndrome as applied to abusive relationships.
10. An interview with Patricia Hearst and the effects of her kidnapping.
11. A site describing Elizabeth Smart after her rescue.


Fear and Anxiety: Post-Traumatic Stress Disorder
Name: Amy Gao
Date: 2004-02-23 22:30:39
Link to this Comment: 8408

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Almost all of us, at some point in our lives, will experience emotionally perturbing events such as the loss of a loved one, violence, sudden disaster, and other events that seem to spin our lives out of control. Even though time may eventually dim the memories of such tragic events, and many people will come to terms with and accept these losses, many individuals remain emotionally scarred by their experiences.

Post-traumatic stress disorder, or PTSD, is an anxiety disorder associated with the reactions an individual has in response to a dramatic emotional event. The incident can be one that has directly affected the individual or one that the individual has witnessed. In adults, symptoms of the disorder include flashbacks and dreams associated with the event, feelings of detachment or estrangement from others, and markedly diminished interest in activities the individual once avidly participated in. (1)

It is estimated that PTSD may affect 3 percent to 6 percent of adults in the United States (1), which accounts for around 5.2 million Americans. (4) Women are twice as likely to be afflicted with this disorder as men, and reports indicate that substance abuse and other anxiety-related disorders may occur concurrently with PTSD. (4) Studies have also indicated that individuals with histories of emotional disorder, substance abuse, anxiety, or membership in a dysfunctional family may be more predisposed to PTSD than those without such histories. (1)
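The two prevalence figures in this paragraph can be checked against each other. In this sketch, the adult-population figure is an assumption (roughly the U.S. population aged 18 to 54 around 2000), not a number given in the paper or its sources.

```python
# Hypothetical consistency check: does "around 5.2 million Americans" fall
# inside the quoted 3-6 percent prevalence band? ADULTS is an assumed figure,
# not data from the paper.
ADULTS = 144_000_000   # assumed U.S. adults aged 18-54, circa 2000
CASES = 5_200_000      # "around 5.2 million Americans" (4)

implied_rate = CASES / ADULTS
assert 0.03 <= implied_rate <= 0.06  # inside the 3-6 percent band from (1)
print(f"implied prevalence: {implied_rate:.1%}")  # -> implied prevalence: 3.6%
```

Under that assumed denominator, the 5.2 million figure implies a prevalence near the low end of the quoted range, so the two numbers are at least mutually consistent.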

One example of a trigger for PTSD would be the tragic events of September 11, 2001. Indisputably, all of us experienced some state of shock and disbelief at the horrendous acts committed, some of us more than others. Take, for instance, the study that found high levels of PTSD in New York residents who lived in the vicinity of the World Trade Center, and also found that the farther away from the disaster's epicenter residents lived, the lower the incidence of PTSD. (3) This appears to suggest that the closer an individual is to the disaster scene, or the more closely related to it, the higher the chance of the individual being afflicted with PTSD.

In addition to the aforementioned symptoms, some studies have found an association between poor physical health and PTSD: individuals afflicted with PTSD are more likely to have physical health problems than those who do not have the disorder. (2) The research so far seems to suggest that those who are not in the prime of physical health are either more vulnerable to PTSD or more likely to be diagnosed with it. Further exploration of the cause-and-effect link between the two is necessary, since the data supporting this theory have so far come only from veteran populations.

Research attempting to connect PTSD to the brain has focused on the areas believed to be involved in anxiety and fear, the emotional response triggered when an individual faces danger. Studies have found that the amygdala, a complex structure inside the brain, is responsible for the fear response that activates many of the body's protective mechanisms. If this holds true, it stands to reason that a malfunctioning amygdala could lead to anxiety disorders, one of which is PTSD.(4)

PTSD victims have also been found to secrete uncharacteristic levels of hormones when responding to stress. Opiates are pain-relieving substances produced when people are in danger, and PTSD patients have been found to maintain high opiate levels even after the danger has passed, which may be associated with the dissociative symptoms also observed in individuals afflicted with PTSD.(4) Moreover, cortisol, a steroid hormone released from the adrenal cortex during stress that prepares the individual to deal with stressors and ensures that the brain receives adequate energy, appears to be lower than normal in these patients.(5) Epinephrine, secreted by the adrenal medulla and known as the "fight-or-flight" hormone responsible for increased metabolism, and norepinephrine, a neurotransmitter released during stress that activates the hippocampus, the section of the brain responsible for long-term memory, are both found at higher-than-normal levels in PTSD patients.(4) Since norepinephrine has been found to remain elevated even after the triggering event has passed, it may have a prolonged impact on the hippocampus, which may explain why individuals with PTSD often experience recurring flashbacks.

There are many ways to rehabilitate a PTSD patient. Treatments include anti-depressant medication, which may help relieve some of the symptoms of PTSD; behavioral therapy, which focuses on changing PTSD-onset behavior; and family therapy, which works with the families of patients who may have been affected by the patient's behavior.

PTSD is one way that an individual may respond to the extreme stress of a traumatic event. If diagnosed in time and treated properly, it is an illness that can be treated successfully. Though further research is needed to determine whether other parts of the brain play a role in the abnormal hormone levels secreted by patients with PTSD, and more concrete evidence is needed to establish which individuals may be more predisposed to PTSD than others, this disorder is no longer as shrouded in mystery as many other mental disorders.

References

(1) The Mayo Clinic, The Mayo Clinic on PTSD

(2) National Center for Post-Traumatic Stress Disorder

(3) National Institute on Drug Abuse, Depression, PTSD, Substance Abuse Increase in Wake of September 11, 2001 Attacks

(4) The National Institute of Mental Health, The National Institute of Mental Health on PTSD

(5) Medline Plus Medical Encyclopedia, Medline Plus Medical Encyclopedia on definition of cortisol


Studying Functional Differences in the Adolescent
Name: Elizabeth
Date: 2004-02-23 22:53:14
Link to this Comment: 8413


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Adolescents between the ages of 13 and 19 tend to act impulsively and irrationally. Testing limits, experimenting, and acting without considering future consequences are all part of adolescent behavior according to Dr. Laurence Steinberg of Temple University (1). He states that teenage self-regulation of impulsive behavior does not appear to mature until later in adolescence (1). The perceived rebellious actions of teenagers that were once dismissed as changes in hormones corresponding with the beginning of puberty may actually be due to functional differences in teenage brains. The behavioral differences between adults and teenagers become recognizable due to the increased freedom and decision making that adolescents acquire. Studying the variations in the brains of adolescents and adults provides evidence for the argument that the actions of the nervous system are responsible for observed behaviors.

Two studies have identified differences between adolescent and adult brains. One, conducted by Dr. Arthur Toga of the Laboratory of Neuro Imaging at UCLA, demonstrates that children and adolescents from ages 12 to 16 have less myelination in the frontal lobes of the brain (2). The frontal lobes, located at the front of the cranium, have been identified as the area of the brain that dictates rational behavior and the reasoned weighing of consequences (4). Myelin is produced by glial cells that wrap insulating lipid layers around nerve processes; myelinated processes conduct electrical signals from one neuron to another more effectively. The presence of more myelin in adult frontal lobes implies that more neural processes are connecting neurons together. Decreased myelination may mean that neurons in the frontal lobes of children and teenagers are less interconnected and less capable of communicating via passed signals than the neurons of adult frontal lobes, resulting in a decreased ability to make reasoned decisions. Dr. Jay Giedd of the National Institute of Mental Health also studied the adolescent brain using magnetic resonance imaging. Dr. Giedd identified a growth period of the neuron cell bodies, or gray matter, in the prefrontal cortex, a specific section of the frontal lobes, at age 11 in girls and 12 in boys (3). Though adolescents have more gray matter than adults, neurons are connected throughout the teenage years, so development and use of the frontal lobes occurs gradually. Throughout adolescence, the brain prunes synapses and increases the myelination of certain processes in order to strengthen them (3). Giedd concludes that the adolescent brain has not yet made adequate neural connections and can be shaped by activities throughout the maturation process (3).

If the connections in the frontal lobes of children and teens are not as developed as the brains of adults, another portion of the adolescent brain may be used in tasks where adults normally process inputs with their frontal lobes. In a study conducted by Dr. Deborah Yurelun-Todd of Harvard University, brain activity was scanned using functional magnetic resonance imaging (5). Both adults and adolescents from ages 11 to 17, who had no diagnosed psychological disorders or brain injuries, were asked to identify the emotion on pictures of faces on a computer screen (5). The expression of the picture shown to the participants was one of fear. The teens typically activated the amygdala while the adults activated the frontal lobes to perform the same task of identifying the expression (5). Because teens and adults are activating different portions of their brains to perform the same task, studying the function of the amygdala may provide an explanation for observed behavioral differences in adolescents and adults.

The amygdala is part of the limbic system and is responsible for emotional reactions. Dr. Jean-Marc Fellous states that the amygdala is responsible for emotional processing and reactionary decision making because lesions of this region interfere with emotional reactions (6). By using the area of the brain that associates situations with emotions, adolescents react in an impulsive manner more than a reasoned one. The increased activity of the amygdala in teens may occur because the frontal lobes have not yet developed a regulatory role in the nervous system. Dr. Richard Davidson of the University of Wisconsin-Madison found that 500 individuals with decreased activity in their frontal lobes also had a decreased ability to regulate emotion (7). Davidson concludes that there may be some interaction between the amygdala and the frontal lobes (7). Like the individuals Davidson studied, adolescents may not be able to sufficiently regulate emotional processes because their frontal lobes have not matured. The impulsive behavior of adolescents may thus be due to increased reliance on the instinctual part of the brain while the area for rational thought, the frontal lobes, develops.

Further evidence that the nervous system produces all behavior would be the observation of different behaviors corresponding to varying neural connections within the frontal lobes. If interactions of the nervous system are responsible for producing behavior, signals routed to different neurons would be expected to produce different types of behavior. Since the frontal lobes are still forming myelinated connections between neurons during adolescence, environmental factors can influence which connections develop. Dr. Giedd calls the period between ages 13 and 18, when these connections are made, the "use it or lose it" principle (3). He says that the activities teens participate in will influence the connections made in the brain (3). If connections between neural processes are not properly made through sufficient stimulus, reduced function of the frontal lobes can result; different environmental inputs can thus influence the development of teenage frontal lobes (3). If Dr. Giedd is correct that connections can be influenced by different stimuli, then monitoring a child's behavior, setting rules, and seeing that they are obeyed should promote the development of regulatory connections in the frontal lobes. An individual who grows up in an environment where regulation of emotions is encouraged would be expected to have different myelinated processes than one raised where such regulation is not promoted. Environmental inputs may play an important role in forming the connections between neurons that lead to increased reasoning ability and self-regulation of emotional behavior. More studies are needed to support Dr. Giedd's theory; one potential study would map white matter, the myelinated processes, in children who grew up in various household environments.

The structure of the adolescent brain provides an explanation for the perceived teenage behavior of irrationality and impulsiveness. This behavior can be attributed to activation of the amygdala, the region of the brain responsible for emotional behavior. Mature frontal lobes may regulate the actions of the amygdala and allow individuals to reason through situations instead of acting on instinct. Poor connections between neurons formed during adolescence may lead to less emotional regulation in adulthood. Since differences between the adult and adolescent brain can be correlated with different types of behavior, these variations provide evidence that behaviors are produced by the activity of the nervous system. Future studies to further correlate adolescent behavior with functional brain differences could include a functional magnetic resonance imaging study of the use of the amygdala versus the frontal lobes, and further evidence that the frontal lobes do regulate the activity of the amygdala.


References

1)The Study of Abnormal Psychopathology in Adolescence, This is the web version of Dr. Steinberg's paper that outlines some normal and abnormal adolescent behaviors.

2)Teenage Brain: A Work in Progress, This site from the National Institute of Mental Health presents several studies on the development of the teenage brain, mainly through MRI imaging.

3)Adolescent Brains are Works in Progress, This site from Frontline presents data obtained from Dr. Jay Giedd's studies of the development of the adolescent brain. Dr. Giedd focuses on prefrontal cortex development study, but also addresses Corpus Callosum and Cerebellum development.


4)Frontal Lobes, This site gives some background on frontal lobe structure and function. Some research on possible frontal lobe abnormalities and consequences are also presented.


5)Deciphering the Adolescent Brain, This is a web version of an article published in the Harvard University Gazette that presents the research performed by Dr. Deborah Yurelun-Todd. She studies the use of the amygdala as opposed to the frontal lobes in children and adolescents.


6)Emotional Circuits and Computational Neuroscience, This site is the online version of a paper by Dr. Jean-Marc Fellous and colleagues Jorge L. Armony and Joseph E. LeDoux. They determine that many emotional responses originate from the amygdala.


7)Brain's Inability to Regulate Emotion Linked to Impulsive Violence, Research conducted by Dr. Davidson on the regulatory role that the frontal lobes play is presented in this article.


The Many Aspects of the Ancient Egyptian "Self"
Name: Ariel Sing
Date: 2004-02-23 22:54:54
Link to this Comment: 8414


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

For the Ancient Egyptians the "afterlife" was a very important concept. Once a person died, a number of steps needed to be taken to ensure their continued existence. Mythologically, the deceased came before the god Osiris and denied having committed any offenses in their lifetime. The most famous and telling trial was the weighing of the heart. In this ceremony the feather of Ma'at, the goddess of truth, was weighed against the heart of the deceased. If the heart was not heavier than the feather, the person was able to continue on into the afterlife; if, however, the heart was heavier than the feather because of sins, the soul of the person was devoured by a chimeric amalgam of hippopotamus, crocodile and lion. (1)

For each step of the journey to the after-world the Ancient Egyptians believed that their soul or "self" had a different aspect. There were five parts: the ka, the ba, the akh, the name and the shadow or shade. (2) Each of the aspects of the Ancient Egyptian "self" was unique, yet interrelated with the other four elements.

The ka was depicted in two ways: in many instances it was simply a smaller version of the individual. It is thought that this represented, not a distinct facet of the individual, but a way of representing the ka as being within the person. (3) Alternatively, the ka was depicted as two upraised arms. Sometimes, such as in text, this symbol would be seen alone; often, however, it was attached to the top of the head of the individual. There is no accurate way to translate the meaning of ka, but for the sake of discussion it is often referred to as "sustenance". (4) It was the primary factor differentiating a living person from a dead one. When born, every ancient Egyptian received their ka, and it would stay with them until their death. (5) It was believed that when the god of creation, Khnum, formed the person on his potter's wheel, he also formed their ka. (6) After the person had died the ka still required food, which was supplied in the tomb, either in the form of actual food or symbolically as tomb paintings. The ka did not so much eat the food as absorb the life-energy of the offerings. Upon the moment of death the ka became dormant, and stayed thus until the end of the mummification process, when it was rejuvenated and the ba came to join it in the afterlife. (7)

The ba is the closest manifestation of the modern idea of a "soul". It was always illustrated as a bird with a human head, and sometimes with human arms. (8) Because of this avian depiction the ba is often connected to migratory birds, which were thought to be people's bas traveling from the tomb to the afterlife and back. (9) Humans were not the only creatures with bas; gods possessed them as well: the Benu bird, for example, was considered the ba of Re, as the Apis bull was that of Osiris. (10) The ba comprised all of the nonphysical aspects of a person that defined them; it is sometimes considered the modern equivalent of personality. It was the role of the ba to travel to the ka in the afterlife in place of the body, which was unable to make this journey. Once it had reached the ka, the two joined aspects of the "self" were transformed into the akh. It should also be noted that without the ba, the body of the deceased would not survive, and thus their entire being would die. Two things were required for the ba to endure. First, it had to return to the body every night. Second, it required the same sustenance as a person; to supply this, food and drink were left in the tomb and their depiction was painted on the walls. (11)

The akh was the combination of the ba and the ka. This was the form of the "self" that lived in the after-world. (12) The akh was believed to have direct influence on the world of the living, for good or ill. In fact, when people believed themselves to be suffering from malice, they would write letters to the akhs of dead people to ask for their forgiveness and beg pardon. (13) After the heart of the deceased had been weighed and accepted into the after-world, the ka was allowed to join with the ba, creating the akh. This new form was often portrayed as a mummified figure; the hieroglyph that describes it, however, is the crested ibis. The akh was believed to be the link between the human and the divine; in fact, dead ancestors, even those who were not royal, were often given a place of exaltation in the house. (14) The akh was one of the aspects of the "self" that was allowed to wander the land freely, and thus able to interact with the living. The akh was believed to be forever the same; it never changed or perished. (15)

Names were given at the moment of birth to all children, for without a name a child never really existed and was thus unable to live. Often the name given was adapted from the name of a local deity or a god that was particularly powerful at the time. (16) The only way for a person's name to be preserved was to have it inscribed, either in texts within their tomb or directly onto the tomb wall itself. In fact, if one wished to eliminate a person's akh, indeed their entire being, one would remove all mention of the dead person, scratching out their name where it was carved in stone and destroying any textual reference to them. (17) One of the most famous examples of this is Hatshepsut, whose probable son (or stepson) ordered all examples of her name to be annihilated. Because of the power that the ancient Egyptians believed true names to hold, gods' true names were often never known. A god might have hundreds of names, but none would be the power-relating true name. Conversely, if one knew the name of an evil spirit, it could be vanquished; the ritual words used were "I know you and I know your names." (18) One of the most telling examples of the power of the true name was the belief that Ptah, one of the creator gods, brought everything into being simply by speaking the name of each. (19)

The shadow (or shade, as it was also known) was a form of the "self" often represented by a darkened painting of the individual. Apparently it was imperative to protect the shadow from any harm, (20) although the shadow itself was considered a form of defense for the individual. This protection was well known; even in the Valley of the Kings the tombs were built taking the shadow of the sun into account. (21) This idolization of the shade is understandable given the intensity of the sun in Egypt: anything could rapidly become burned, so something that protected from that heat would be considered powerful. (22) In a similar vein, pharaohs were often depicted under the shade of a fan made of feathers or palm leaves. The final defining characteristic of the shadow was that it moved with tremendous speed and contained great power. (23)

As is clear from the above information, each aspect of the "self" was viewed as unique; each had its own purpose and use. There are, however, also many ways that these aspects are interwoven.

The ka and the ba are the most closely related. They both represent different portions of a person's personality. They are, in fact, so closely related that after death they become joined into one, the akh. Thus it is clear how these three aspects relate to each other, and how without one, the rest would be powerless. The name and shadow are less obviously integrated. Both of these aspects were more closely related to the world of the living than the ka, the ba or the akh. The name was an actual continuous link to the living, just as the ka was; both could be affected by the actions of the living. The name was also similar to the ka because when a child was born, the two things that it received were its name and its ka. The shadow was more closely associated with the ba. Both the shadow and the ba were thought to stay with the body after death, and because of their presence the body was sustained and protected.

It can now be seen that the ancient Egyptian "self" was a complicated and intricate idea. By losing just a single aspect, the dead person was doomed: they would never go into the after-world; instead they would simply cease to exist. It was this symbiosis that created such a strong sense of self and unity among the ancient Egyptian people.

A beautiful example of this unity and power is the name of the pharaoh Akhenaten. The name, when translated, conveys the idea that the pharaoh is the akh of the god Aten, the lord of light who creates shadow. Thus he combined into his name all five elements of the soul: the name; the ka and the ba, joined as the sacred akh; and the shadow formed by the passing of the sun, Aten.

References

1) The Spirits of Nature: Religion of the Egyptians, a summary of the basic tenets of Ancient Egyptian religion

2)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

3)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

4)Ka, a summary of the ka

5)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

6)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

7)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

8)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

9)Ba, a summary of the ba

10)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

11)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

12)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

13)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

14)Akh, a summary of the akh

15)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

16)Name and Shadow, a summary of the two aspects of the soul, the name and the shadow

17)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

18)Names, a summary of the concept of names in ancient Egypt

19)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

20)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

21)Shewet, a summary of the shewet or shadow

22)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

23)Name and Shadow, a summary of the two aspects of the soul, the name and the shadow


The Effect of Video Games on the Brain
Name: Eleni Kard
Date: 2004-02-23 23:03:13
Link to this Comment: 8415


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The effect of video games on the brain is a research area gaining popularity as the percentage of children and adults who play video games is on the rise. Some people believe violence in video games and in other media promotes violent behavior among viewers. While there is not sufficient data to validate this claim, there are a number of studies showing that video games can increase aggressive behavior and emotional outbursts, and decrease inhibitions. From a few of these studies, and from my own observations of children playing video games, it is quite obvious that the video games do have at least some effect on the behavior of the player. The extent and long range consequences of these behavior changes after one has turned off the video game are not so easily deduced. One source states that "While research on video games and aggressive behavior must be considered preliminary, it may be reasonably inferred from the more than 1,000 reports and studies on television violence that video game violence may also contribute to aggressive behavior and desensitization to violence" (1). Another study reports that "Hostility was increased both in subjects playing a highly aggressive video game and those playing a mildly aggressive video game. Subjects who had played the high-aggression game were significantly more anxious than other subjects" (2).

I had a chance to observe the effects of video games first hand on two boys, ages eight and ten, when I babysat them earlier in the semester. They were playing the video game "Mario Kart," which is really not a very violent game; the object is to win a car race by coming in first while maneuvering through different courses. When the younger brother won, the older brother got up and started kicking him and yelling insults! Later that day, the younger brother was playing another video game by himself, and when he could not beat the level, he threw down the controller, screamed at the TV screen, "Why are you doing this to me...?!" and burst into tears. I was shocked by this reaction and not quite sure how to handle the situation. This game had brought an eight-year-old boy to tears, right in front of me. "Certainly, video games can make some people go nuts. You just have to look at some enthusiasts playing video games on their cellular phones, mumbling to themselves heatedly even though others are around them. At game centers (penny arcades), frustrated people punch or kick game machines without regard to making a spectacle of themselves" (3). From these descriptions, it seems that players get somewhat "sucked" into the video game, becoming oblivious to their surroundings and much less inhibited about expressing their emotions. What changes are occurring in the brain to activate the behavior one exhibits when "sucked" into a video game?

Akio Mori, a professor at Tokyo's Nihon University, conducted a recent study observing the effects of video games on brain activity. He divided 260 people into three groups: those who rarely played video games, those who played between 1 and 3 hours three to four times a week, and those who played 2 to 7 hours each day. He then monitored "the beta waves that indicate liveliness and degree of tension in the prefrontal region of the brain, and alpha waves, which often appear when the brain is resting" (4). The results showed a greater decrease in beta waves the more one played video games. "Beta wave activity in people in the [highest amount of video game playing] was constantly near zero, even when they weren't playing, showing that they hardly used the prefrontal regions of their brains. Many of the people in this group told researchers that they got angry easily, couldn't concentrate, and had trouble associating with friends" (4). This suggests two important points: first, that the decrease in beta wave activity and in use of the prefrontal region of the brain may correlate with aggressive behavior, and second, that the decrease in beta waves continued after the video game was turned off, implying a lasting effect. Another study found similar results, reporting: "Youths who are heavy gamers can end up with 'video-game brain,' in which key parts of the frontal region of their brain become chronically underused, altering moods" (5). This study also asserts that underuse of the frontal brain, to which video games contribute, can change moods and could account for aggressive and reclusive behavior. An important question arises: if the brain is so affected by video games as to produce behavioral changes, does that mean the brain perceives the games as real?

Perhaps looking at what effects video games have on autonomic nerves can begin to answer that question. "'Many video games stir up tension and a feeling of fear, and there is a very real concern that this could have a long-term effect on the autonomic nerves,' Mori commented" (6). Autonomic nerves are those connected with involuntary internal organ processes, such as breathing and heart rate. "Heart rate can be altered by electrical signals from emotional centers in the brain or by signals from the chemical messengers called epinephrine (adrenaline) and norepinephrine. These hormones are released from the adrenal glands in response to danger..." (7). Multiple studies have reported that playing video games can significantly increase heart rate, blood pressure, and oxygen consumption. If studies show that heart rate is increased when playing video games, then it seems that the brain is responding to the video game as if the body is in real danger. Does repeated exposure to this "false" sense of danger have an effect on what the brain then perceives as real danger?

From the above studies and observations, video games do affect players in some ways, since it appears that players get so wrapped up in the game that they forget their surroundings and begin to see the game as a real quest. Studies have shown that playing video games can increase heart rate and blood pressure, as well as decrease prefrontal lobe activity, while the person is playing the game. This could account for changes in the player's mood and cause him or her to become more aggressive or emotional. However, findings about the extent of these effects on the body once video game playing has ceased are preliminary and need to be confirmed.

References

1)Mediascope website, highlights data from various scientific studies concerning video games.

2)Mediascope website, violent video games causing aggression.

3)Japan Today News website, an interesting news site and discussion board.

4)Mega Games website, a hardcore gaming site, including cheats, demos, and facts.

5)Beliefnet website, centers around spiritual, religious, and moral issues.

6)Sunday Herald online, a news resource.

7) Freeman, Scott. Biological Systems. New Jersey: Prentice Hall Inc., 2002.


Body Dysmorphic Disorder- A Brain Disease?
Name: Nicole Woo
Date: 2004-02-23 23:09:33
Link to this Comment: 8416


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Though Body Dysmorphic Disorder, commonly known as BDD, was first documented in the nineteenth century, it is not a well known disorder. Despite its obscurity, however, BDD is not rare, affecting two percent of the population (3). As scientists attempt to discover more about this illness and learn how to treat it, psychological and sociocultural factors have been considered as possible causes of BDD.


When discussing the origins of BDD, scientists and patients have been inclined to attribute it to psychological factors. Many have felt that BDD arises from childhood trauma, resulting in channeled feelings of conflict, shame, or guilt (2). However, though psychological factors do not appear to be causal, it would be foolish to deny their influence on someone with a genetic predisposition toward BDD.


In addition to the psychological factors, sociocultural factors seem to influence BDD as well, mainly by exacerbating it. Many would be inclined to attribute the presence of BDD to the images our modern society is constantly bombarded with, namely images of ideal beauty. On every magazine cover and every television channel, the message of the ideal, for both men and women, is displayed constantly. How could these impossibly perfect images not affect how individuals perceive themselves in comparison? There is no doubt that these images can increase the anxiety felt by anyone, particularly those with BDD, when compared to their own bodies. While images of ideal men and women could make anyone feel dissatisfied with their bodies, where does one draw the line between the desire to look more attractive and the obsession of those who suffer from BDD? While it is generally accepted that people like models and ballet dancers obsess over their bodies, there is another profession, though less well known for this preoccupation, that also has high rates of BDD: those who are involved in the arts (5). While no doubt these environments can increase the attention placed on the body, I would suggest that many dancers, models, and art historians are drawn to their respective professions because the focus is on appearance. Though it may be unconscious, perhaps the people involved in these professions are already preoccupied with their bodies, and thus an occupation which demands constant surveillance of appearances appeals to them. As an art history major, ballet dancer, and former model, I ask myself the question, "Is my involvement in these industries a suggestion that I am predisposed toward this disorder?"
Though there may be an inclination on behalf of the present-day observer to claim that the media has caused an unnatural obsession with appearances, the fact that cases of BDD were documented as early as 1886 is evidence that BDD predates the era of the supermodel (2). In addition, clues to the origins of BDD can be seen in how patients respond to treatment. Currently, there is evidence that BDD responds to medications known as serotonin-reuptake inhibitors, suggesting that BDD results from a dysregulation of serotonin.

References

1)BDD Central, a helpful website discussing various aspects of BDD, including a forum where one can read the writings of those who suffer from BDD


2) Phillips, Katharine A. The Broken Mirror. New York:Oxford University Press, 1996.


3) Body Dysmorphic Disorder, a good resource for basic information on BDD


4) Facts Sheets: Realising Human Potential , a good source for statistics about BDD


5) Body of Work: art career linked to image , an article discussing occurrence of BDD among certain professions


Dreaming Through the i-box and the id-box
Name: Amar Patel
Date: 2004-02-23 23:47:21
Link to this Comment: 8418

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dreaming has always been an enigma plaguing the studies of psychology and biology. Through each of these fields we get a different interpretation of the reason for dreams and their effects on our own consciousness. From the start, one needs to define consciousness in terms that can be identified through both of the fields in which we will analyze dreams. When one looks at the nervous system and its general function as an input/output mechanism, one can interpret it through a "box" theory developed by Paul Grobstein, Ph.D. (1) The theory explicates the nervous system and its relation to consciousness. In this theory, the entire nervous system is a box in which a stimulus (input) will travel through a complex pathway and appear as some output. There are many other intricacies, such as inputs which produce no output, or outputs which arise without input, that are explained through self-initiating boxes within the nervous box. Additionally, there is an I-box which functions as the section of the nervous system that correlates to consciousness. This consciousness is where an individual holds his/her sense of "self." (1) Beginning with the psychological (Freudian) viewpoint and then continuing into the biological (physiological/developmental) interpretations of the dream state, we will come to understand their individual effects on the I-box theory.

When examining the thoughts about dreaming from a psychological standpoint, one must look at the works of Sigmund Freud, a pioneer in the interpretation of dreams. During his time, little was known about the science behind the study of dreaming. This meant that there was more clinical speculation and less proven lab work behind the theories he developed. Freud was only able to examine dreaming through patients who tried to recollect dreams after waking, which proved to be inconsistent and a rare occurrence. When he did obtain accounts of dreams, Freud was able to develop the notion that dreaming was the "royal road" to the unconscious. (2) Freud saw the rare occurrence of dreams as forms of recalling the earliest events in one's life, with the undertone of one's desires and passions being fulfilled. This theory was called his "wish fulfillment theory". (3)

When this theory is applied to the notion of the I-box we see some complications. Where is the unconscious in relation to the conscious box? One may say that the unconscious exists as a separate entity, another box which has its own inputs and outputs. Although this may be a temporary solution to Freud's interpretations, one understands that in dreams all the inputs (senses) and emotions are intact. Additionally, in the case of lucid dreams, the conscious extends far enough to gain control of the unconscious and fulfill its desire or will. These notions force a strong link between an I-box and another "unconscious" box. The most sufficient way to explain this theory, through the psychological standpoint of the ego (consciousness) versus the id (unconsciousness), would be to place the I-box inside this "Id" box. Since psychological accounts of consciousness state that the Id is the predecessor to the ego, the ego being merely an evolution of control imposed by society on the id, the I-box can be seen as this ego. Since the Id is the precursor to the ego, one must also note that it holds greater importance in the sense of "self". This hierarchy places the Id in a more prominent space, around the evolved ego or I-box. Although this is quite a controversial step, it accounts for any of the psychotic-like episodes people experience in their dreams. Phenomena such as out-of-body experiences mean that people are thinking within the Id-box, but not within the context of the I-box. The idea of the Id-box containing the I-box supports Freud's and other psychologists' claim of the Id being present from birth, and the ego being a product of the environment, meant to tame the Id.

Looking at the I-box function and its relation to dreams in the scope of physiology will better explicate the notion of an I-box within the Id-box. First of all, one must examine the recent scientific knowledge of dreaming. The state of sleep most associated with dreaming is REM (rapid eye movement). Scientists discovered that this state of sleep can be measured through the use of an EEG, which measures the theta waves that the neocortex produces. In the REM stage of sleep, the theta waves are comparable to those of a waking person (2). Research has been conducted to help differentiate the state of consciousness in REM versus that of a person in the waking state. The results show that "brain activation during waking is associated with noradrenalin, 5-hydroxytryptamine (5-HT) and acetylcholine-mediated neuromodulation, brain activation during REM is exclusively cholinergic..." (4) Essentially, the types of chemicals that are active during the dreaming state are distinguishable from those present in the waking state. Another important result from the same study shows that the role of the prefrontal cortex in dreaming and waking can explain some distinctions between the states of consciousness. The reduction of activity within the components of the frontal lobe is what contributes to a change from the waking to the dreaming state. Additionally, when an individual enters the REM stage of dreaming, select portions of the posterior and medial prefrontal cortex are activated. (4) Aside from the prefrontal cortex, the brainstem and occipital lobe (vision center) have increased activity, which also leads to the REM stage of dreaming. (5) What these studies outline is the notion that there are, in fact, different areas of the brain that correspond to dreaming.
Additionally, there have been findings that show that the prefrontal cortex does in fact have different activity dependent on the amplitude and frequency of the theta wave, showing different stages of dreaming. These intricacies to the dream state maintain the notion that dreams must encompass some specialized functions in our brain, such as an I-box.

Many scientists have provided evidence that dreaming states are nothing more than the brain attempting to unlearn any useless memories it had acquired during the day. (2) Although these studies are very well documented, one notion that has not been addressed is how, in fact, this would correspond to the dream states in which there was little difference between the dream and reality. Jonathan Winson, Ph.D., therefore looked back at the evolutionary biology behind the anatomical components of REM and learned that as evolution progressed for mammals, so did the process of REM. Dr. Winson established the theory that REM sleep was, in effect, a useful tool in animals because it was a process of relearning the traits that were not coded in genes, but which were still important to an animal's functioning in its own environment. (2) These ideas help to establish the notion that perhaps the unconscious is something that incorporates the I-box, something essential to the behavioral patterns of all animals.

After examining the notion of dreaming through the Freudian definitions of the conscious and unconscious states, one can make the clear argument that the I-box is indeed supported within this state of unconsciousness, or "id-box". The idea that dreams are always relative to the sense of an individual self contributes to the notion of an enveloped I-box. When taking this idea further through the biological aspects and noting the differences between the Id and I-boxes, one can see how dependent the Id becomes on the I-box. This leads to a fundamental conclusion that the Id-box does envelop the I-box, but only because the Id-box is the entirety of the nervous system. In stepping back from the struggle of the conscious versus the unconscious, one must note that there cannot be any action or input/output that does not lie within these two states. Dreaming is therefore the act of experiencing the Id-box with little to no support from the I-box.

References

1) "Getting It Less Wrong, the Brain's Way: Science, Pragmatism, and Multiplism." Paul Grobstein, Ph.D.

2) The Meaning of Dreams, Winson, Jonathan. Scientific American Online, 2002.

3) Interpretation of Freud's work Domhoff, G. W. (2000). Moving Dream Theory Beyond Freud and Jung. Paper presented to the symposium "Beyond Freud and Jung?", Graduate Theological Union, Berkeley, CA, 9/23/2000.

4) The prefrontal cortex in sleep. Hobson, J. Allan, Muzur, Amir, et al. TRENDS in Cognitive Sciences, Vol. 6, No. 11, pp. 475-481.

5) General physiological interpretation of dreaming R. Joseph, Ph.D


The Gaps between Science and Behavior in Understan
Name: Debbie Han
Date: 2004-02-23 23:59:26
Link to this Comment: 8419


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In March of 2003, my sister, Christine was in a horrible accident. She tripped off the
platform of the subway station near our New York City apartment and fell into the gap
between the platform and the train; as a result she lost all of the tissue and skin on and
below her knee and is today a below-knee amputee. During the 5 operations needed to
remove the tattered limb and close the wound, the vascular orthopedic surgeons were able
to successfully save her upper leg and create a "residual limb" or "stump." The limb
remained swollen and discharged blood for a couple of months, but gradually, raised
blood vessels and a neuroma, a ball of nerve fibers, formed at the end of her stump (1).

Following the accident, I spent between 5 to 10 hours with Christine everyday. I
monitored her convalescence as well as her initiation into a new life as a below-knee
amputee. Immediately after the amputation, she experienced phantom sensations in her
residual limb, which is common among amputees. It is believed that 50% to 80% of
amputees experience phantom pain (2). In an attempt to better
understand the behaviors of both my sister and her phantom limb, I researched the
scientific explanations for her behavior. To what extent is science helpful in
understanding my sister's case?

Phantom sensations vary in type and in degree. The types of different sensations felt by
amputees include warmth, itching, pressure, shocking, wetness, and the feeling that the
limb is in a certain position, among others. When they become cramping, stabbing, and
intense shocking, they are classified as phantom pains (2).

One of the original explanations of phantom sensations is rooted in the somatosensory
cortex, the part of the brain presumed to produce sensation. According to this hypothesis,
neuromas continue to create impulses, and the impulses travel through the spinal
cord and thalamus to the somatosensory cortex. After a limb is amputated, the nerve
paths still exist; therefore, stimulation anywhere along the nerve path to the homunculus
(a part of the somatosensory cortex similar to a miniature map of the human body (3)) can elicit the same sensation as when the limb did exist (2). This would imply that the brain is hard-wired and
does not realize that the amputated limb no longer exists.

A subsequent hypothesis initiated by Ronald Melzack proposes that the origin of
phantom limbs is in the brain and more focused in the cerebrum than the somatosensory
cortex. According to Melzack, the brain has a neuromatrix (network of neurons) that
creates impulses indicating one's own body, which he calls the "neurosignature" (2). The matrix consists of 3 subunits: the classical sensory pathway,
the limbic system which manages emotion, and the cortical systems which recognize self
and assess sensory signals.

Melzack believes that sensory signals received from the periphery are evaluated by all
three systems and generated into a single output which then receives its specific
neurosignature. The neurosignature is determined by the neurons in the matrix and their
connectivity. The connectivity is determined for the most part by genes and less so by
experience (4). According to Melzack, neuromas can generate an
input which will subsequently travel through the same neuromatrix as a traditional
external input. As a result, a similar output would be generated and the limb would be
perceived to exist.

A new train of thought among scientists is that once an appendage is severed, the
receptive fields go silent and then become active again through other parts of the body.
Vilayanur Ramachandran at the University of California in San Diego has most
extensively studied this theory of cortical reorganization. Through experimentation with
this theory, Ramachandran found that while brushing the body surface of an amputee
with a Q-tip, he was able to evoke sensations in the phantom limb. There were localized
reference areas which yielded responses in the lost appendage. More specifically,
Ramachandran found an area of the chest which corresponded to a lost leg and areas of
the face and chin which corresponded to a lost arm. The localized field was not specific
to a patient; rather, Ramachandran found the field on the chin area on a majority of the
patients with arm amputations with whom he worked. Pressure and water on the
reference area would elicit responses in the phantom, as well (5).

Ramachandran also developed the mirror box technique. The mirror box technique
consists of a box which is halved by a mirror. The patient can only see one half of the
box. Once the patient puts his "good" leg into the box, the mirror produces a
"stereoisomeric image (5)" of the other leg. For example, if the
participant has an amputated right lower leg, he would put both legs on opposite sides of
the mirror and then the right half would be covered. The mirror mimics the left leg's
actions and the participant perceives this manipulated reflection as his right leg. When
participants kept their eyes open, 4 out of 5 patients claimed that they felt relief from
being able to move their once-phantom limb in or out of positions. This would imply that
the phantom limb is a creation of the brain and that relief can come from satisfying the
brain by maneuvering the image and making oneself believe that the phantom limb
actually exists.

In order to test each of these hypotheses, I compared the theories to my sister Christine's
actual behavior. Regarding phantom sensations, Christine most commonly feels
shocking and itching. The shocking is throughout her entire right leg, and the itching
emanates from what she perceives as her right foot. A few times, Christine has actually
felt as though her leg was wet like she had stepped in a puddle. According to the original
hypothesis on phantom sensations, neuromas can generate random signals and cause the
same sensations that had occurred prior to amputation. What would explain the feeling
of moisture covering her right foot when the neuroma at the end of Christine's stump was
not wet? It is plausible that random firings could cause feelings of shock and pressure,
but random signals have not yet been shown to cause the feeling of wetness in amputees.
This is still a mystery which science has not been able to answer.

If the brain is hard-wired but occasionally malleable to experiences, what type of
experience would interrupt the hard wiring? On numerous occasions, Christine has
tripped and tried to maintain her balance by landing on her right leg and has
unfortunately crashed down on her residual limb. Her brain tells her that she still has a
right lower leg, but when she looks down at her leg it is not there. In split-second
decisions, such as trying to break a fall, Christine instinctively tries to land on her right
leg. If the brain cannot be re-wired to recognize that her lower leg no longer exists, what
type of life experience merits resculpting of the neuromatrix? Is this a matter of habit
rather than a faulty neuromatrix?

According to Ramachandran's theory, sensations are referred to different locations
following amputation of a limb. The Q-tip method was intriguing and to test its findings,
I blindfolded Christine and brushed a Q-tip along her chest. If the chest was a reference
area for the lost leg, Ramachandran's theory could possibly explain phantom wetness.
The Q-tip method did not arouse any sensations in Christine's phantom limb. In addition,
the wet Q-tip test did not yield any results.

Proponents of the principles behind Ramachandran's mirror box technique believe that
phantom sensations are attributed to the brain. If the principles are valid, my sister
should receive a sense of satisfaction in believing that her limb receives the attention it
needs. For example, when her leg is itchy, if she can convince her brain that her leg is
being scratched, even though it is not, she should feel a sense of relief. In trying to
mimic what the mirror box provides for participants, I recommended the following
technique to Christine: I asked her to imagine that her leg was still intact and to scratch
where the foot would be located when the foot was itchy (when Christine has phantom
sensations and pains, she can envision where the feeling is radiating from). This method
provided no relief for Christine. Instead, she tapped and rubbed the bottom of her stump.
Since that method was unsuccessful, I asked her to monitor her own actions when she had
the itchy sensation in her phantom limb. Typically, when she is wearing her prosthesis,
she will unconsciously scratch the part of the prosthetic leg which corresponds to the part
of her leg or foot which feels itchy. She noticed that she would reach down, scratch, and
then realize afterwards that her leg was prosthetic when there was no relief from her
scratching. In Christine's case, Ramachandran's hypothesis was incorrect. Even when
she was "tricked" into believing her prosthetic leg was her own right leg, scratching it
offered no solace.

Although I have studied only Christine's case extensively, I asked other amputees to
contribute their own experiences while I was conducting my research. Three additional
amputees reported that the Q-tip test did not work and that nurturing the prosthesis did
not provide any relief for phantom pain. This leads me to believe that there is a gap
between the scientific explanations for phantom sensations and what I have witnessed in
Christine's behavior towards her phantom limb.

One of the hypotheses that seems reasonable for Christine's case is in the same vein as
Melzack's theory and is credited to Timothy Pons of the National Institute of Mental
Health. His studies indicate that other locations which were previously dormant along
the nerve path of an amputated limb are unmasked and that a "neural reorganization (6)" occurs following amputation. In addition, since Ramachandran
did have many successful case studies in both his aforementioned projects, the Q-tip test
and the mirrored box technique, it is plausible that Ramachandran's science helps to
understand the behavior of a certain population of amputees.

At this point in time, each scientific explanation for phantom limbs, sensations, and pains
seems to have credit-worthy aspects, as well as flaws in trying to understand phantom
sensations for amputees as a whole. Aforementioned research leads to further questions
as to what extent the brain is pre-wired. Christine's behavior seems to show that there
is a lack of communication between physical reality and conscious and subconscious
understandings, though experiences of amputees differ. It may therefore be worthwhile
for scientists to study phantom limbs on an individualized basis. The origin of phantom
sensations could be dependent upon the type of injury or the specific cause for
amputation. Comparing each explanation to Christine's actual behavior leads me to
believe that different amputees experience phantom sensations for diverse reasons and
that varying sensations are potentially caused by different mechanisms, as well.


References

1) Amputee-Related Terms, A glossary of amputee-related terms

2) Electromyography webpage, An overview of specific phantom pains

3) Neurological Theories , An interesting discussion on
phantom limbs in a less technical voice

4) "Phantom Limbs," Scientific American, April 1992, 120-126
~ A good foundation for understanding phantom limbs

5) Phantom Limb Disorder , A thorough overview of phantom limbs and
phantom sensations and research regarding the topics

6) Touching the Phantom , A fascinating description of the research
of Melzack, Pons, and Ramachandran

Further Reading


BBCi Website
, BBC interview with Vilayanur Ramachandran

Mirror Box Technique, A description of the mirror box technique


Can Science Replace Religion? Analyzing the Neurob
Name: Bradley Co
Date: 2004-02-24 00:17:38
Link to this Comment: 8421


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"As for Heaven and Hell, they exist right here on earth. It is up to you in which you choose to reside."

-Tom Robbins

Religion is a societal entity that has subsisted since the earliest record of man's existence. There are a multitude of religions as well as varying degrees of faith. Many religious convictions are based on spiritual knowledge or simple belief. However, science often searches for physical and mechanical understanding of knowledge. There are many issues in which science and religion clash. These issues range from the beginning of life and evolution versus creationism to the idea of existence after death. As the advancement of science continues, physical explanations for life's occurrences are presented. Do these explanations disprove religious accounts? Will science eventually disprove religion and render it useless? This question is analyzed here through the occurrence of Near Death Experiences (NDE's).

An NDE is defined as "a lucid experience associated with perceived consciousness apart from the body occurring at the time of actual or threatened imminent death (1)." Death is the final, irreversible end (2). It is the permanent termination of all vital functions. The occurrence of an NDE is not a rarity. Throughout time and from across the globe NDE's have been described by many, and these accounts share several similarities. The commonalities of an NDE include a feeling of peace and connection with the universe, a sense of release from the body (often called an Out of Body Experience or OBE), a movement down a dark tunnel, the vision of a bright light, and the vision of deities or other people from their lives (2). Not every NDE contains each of these events; these are merely the most common events described. An NDE can range in magnitude from having all of these events occur to having none of them occur (2). There are two theories explaining the similarities among NDE's. The scientific explanation describes a situation in which a mixture of effects due to expectation, administered drugs, endorphins, anoxia, hypercarbia, and temporal lobe stimulation creates a unified core experience (3). The religious explanation claims that they are a glimpse of existence after death: the unified core experience is due to there being a destination after the body dies, with a similar path for all. These two theories debate whether an NDE is simply neural activity preparing the body for death or a preview of the beyond. To further understand these occurrences, neurobiological researchers believe they have mapped the neural activity of an NDE.

The most common similarity of NDE's is the feeling of peace, tranquility, spirituality, and oneness with all (3). This occurrence has been discovered to be associated with the release of endorphins as well as reactions between the right and left superior parietal lobe (4) (5). The right portion of this area of the brain is known to be responsible for the sense of physical space and body awareness. It is responsible for orienting the body. The left portion of the parietal lobe is responsible for the awareness of the self. During an NDE neural activity in these areas shuts down. The result of this is an inability for the mind to distinguish between the self and non-self. All of space, time, and self becomes one (4) (5). Essentially, one feels that one is the infinite, rather than part of the infinite, because there is no realization of self. However, other aspects of the brain are still functioning and thoughts are occurring. These other thoughts are believed to be associated with the visions perceived (4). If a person's thoughts are focused on a deity or personal relation, without the ability to comprehend self, time, and space, the person may in fact see an image of that focused thought because visual neurons are still intact. It is the relation of neural inactivity in the parietal lobe combined with other activities within the human brain that is responsible for most aspects of an NDE (2) (3) (4).

The understanding of neural relationships during NDE's has culminated in the ability to reproduce each phenomenon in a controlled setting. It has been found that the intravenous administration of 50-100 mg of ketamine can safely reproduce all features of an NDE (2) and electrical stimulation of the right angular gyrus portion of the brain can safely reproduce an out of body experience (6). Scientific research has even explained why religion is emphasized during an NDE. Activation in the temporal lobe region, known as the "God Spot (7)", during an NDE is reported to stimulate religious themed thoughts (8). This research has major implications in the battle of science versus religion. It provides evidence that specific brain activity can create the perception of religion and divinity. If this is true, then this brain activity can be turned off and in effect remove religion from our lives. Many wars would be stopped, borders would open up, life as we know it would change completely. However, there are many faults to this theory. The major error in the idea that understanding the mechanical brain activity of NDE's and religion makes them useless is the assumption that the experience only exists within the brain. Begley (5) uses an example of apple pie to illustrate this point. Upon the sight of a pie, the neural activity linking sight, smell, memory, and emotion can all be mapped quite clearly. However, this mapping of activity does not disprove the existence of the pie. This is the precise reason the existence of God or any other religious deity or belief cannot be disproved. It is just as simple to believe that viewing the mechanics of the brain during an NDE or religious experience is like getting a glimpse of the tool or hardware used to experience religion (9). However, this does not prove the existence of a God, or any other belief, either.
It is the principle that understanding the neurobiological mechanics of religion cannot disprove or prove the existence of God, religion, or spirituality that makes it improbable that science will eliminate religion.

Believing that science will eventually do away with religion wrongly assumes that knowledge of the mechanics of the brain and universe is capable of eradicating the importance of religion to humankind. Religion is present in society for a plethora of reasons branching far beyond the mere belief in the existence of a God. The multitude of religions, deities, and even atheism is evidence of this. Among many, the reasons for religion include fear, comfort, stability, and tradition. The NDE provides an excellent example of one of the important functions of religion: belief in existence after death. Existence after death refutes the idea that we are simply organic material organized in a certain fashion with a certain time span of functionality. The religious belief that an NDE is a glimpse of our existence beyond life is valuable for people's behavior in life, not just as evidence of a theory. In very few NDE's do negative feelings occur. People often describe a "heavenly" light rather than a hell (1) (10). This may be because of the power of suggestion (3), in that it is a common societal belief that when a person dies they are supposed to see a tunnel, a light, an angel, and heaven. So when an NDE occurs, this is what the person sees because it follows their thought process. Not many people believe that when they die they are going to go to hell. The idea of the existence of a better place after death comforts and eases the pain of many who suffer in life. It can provide them with hope through troubling times whether they believe in Jesus, Buddha, Elijah, or no God at all. Religion is a tool of mankind to sustain a belief. The reasons for that belief vary among people and religions but the importance is in believing. Having a belief can instill a sense of pride, confidence, comfort, strength, and much more in a person. A single belief can provide a purpose for life. The actual beliefs of each religion are only important to the individual.
However, the idea of belief itself is important to the foundations of religion. The importance of religion to mankind makes it improbable that society will ever allow scientific understanding to overrule religion. Science may disprove religious stories such as Moses' parting of the Red Sea, but the importance of religion goes beyond the stories. Religion is indispensable because it is a belief. For this reason science is incapable of eliminating religion.


References

1)Near-Death Experience, Religion, and Spirituality, a religion and spirituality article related to NDE's
2)Ketamine Model of the NDE, Drug induced replication of the NDE
3) Blackmore, Susan. "Near-Death Experiences," Journal of the Royal Society of Medicine. Vol. 89, February 1996, pp. 73-76.
4)Why God Won't Go Away: Brain Science and the Biology of Belief, Excerpts from the author
5) Begley, Sharon. Religion and the Brain. Newsweek, May 7, 2001, p. 50.
6) Blanke, O., Ortigue, S., Landis, T., Seeck, M. "Stimulating Illusory Own-Body Perceptions," Nature. Vol. 419. September 19, 2002. pp. 269-270.
7)God on the Brain, An article on the cross between neurobiology and faith
8)Meridian Institute, Transformational experiences
9)Tracing the Synapses of our Spirituality, Examination of brain and religion
10)Susan Blackmore Home Page, Experiences of Anoxia


Dreams
Name: Allison Ga
Date: 2004-02-24 00:45:22
Link to this Comment: 8422


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dreams are a product of the brain in ways that science cannot fully explain. I have always been fascinated by the ability of dreams to be extremely vivid and realistic. Almost anyone can identify with having a dream from which they awoke feeling as if they had just been physically active, even though they were at home in bed.

The most vivid and active dreams occur during the REM (Rapid Eye Movement) stage of sleep, which recurs roughly every ninety minutes (1). It has been found that during this cycle, brain activity is comparable to that of being awake. The implication is that during sleep, the brain still processes and reacts to information without external influence. This enables the existence of intense imagery and the formulation of situations, dialogue, and more. The body is immobile although mental activity is extremely high (2). This restriction of the body prevents the dreamer from acting out any physical activity that may occur in the dream.

The subject of dreams and their role as part of the brain's functions cannot be discussed without Sigmund Freud's take on the dream world. In his Interpretation of Dreams, he frames dreams as wish-fulfillment: the brain formulates images of something that is lacking in one's waking life (3). While it cannot be decided whether or not this is fundamentally the purpose or meaning of dreams, it is interesting to think that our brains may be trying to communicate a way to fulfill an existing void. Freud also equates dream bizarreness to the mind's effort to cover up the true meaning of the dream and subconscious desires that the conscious mind cannot deal with. Although most dreams may be "strange" or "weird", why would the subconscious go to such lengths to disguise true desires? I believe that Freud's psychoanalytic take on dreams is valuable in trying to understand our subconscious, but do his ideas necessarily apply to every dream? He believes that our hidden desires are trying to break through to our consciousness, but does that include dreams that simply depict a situation in life that is normal to the dreamer? For example, if someone anticipates some major event, such as giving a presentation or throwing a party, and they dream about this event either being a disaster or a success—does this necessarily communicate desires that are unacceptable to the conscious self? I do think that these "normal" dreams communicate hidden, or even obvious, anxieties and hopes but that Freud's focus on the dark side of the psyche might not always be applicable.

The extensive study of the symbolism of dreams has always fascinated me, since it is natural to wonder how the brain utilizes symbolic imagery to communicate with the conscious self. One example of symbolism from a dream book, which is intended to help decipher and understand dreams, states that keys in a dream represent power and access (4). While it may be somewhat obvious to think of possessing keys as having access, or wanting access to something, it is fascinating that the brain will substitute access with the possession of a key. If we do not have this conscious association with keys in our everyday life, how does the subconscious identify it in this way? I think that everyday objects have subconscious associations that we may not be consciously aware of.

The ability to dream communicates that the brain functions actively without the need to receive input from the external world. In our dreams, we create an alternate universe into which situations, places and people in our everyday lives take on symbolic value. The knowledge of REM sleep and Freud's interpretation of dreams contribute to further our understanding of the dream world and how dreaming involves our brain and the subconscious.

References

1)American Psychoanalytic Association, A helpful article on the current scientific stance on REM sleep and dreams.

2)The MIT Encyclopedia of Cognitive Sciences, A searchable reference outlet that contains more information on sleep, dreams and Freud.

3)Interpretation of Dreams, Freud's interpretation of the meaning of dreams.

4)Dreams, A book to help decipher dreams.


Parkinson's Disease
Name: Shirley Ra
Date: 2004-02-24 01:17:41
Link to this Comment: 8424


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The disease first described as the "shaking palsy" by James Parkinson in 1817, and now known as Parkinson's disease in his honor, is a central nervous system disorder that affects approximately 1.5 million people in the United States alone. The disease results in a progressive and chronic loss of motor coordination, tremors, bradykinesia, and other severe impairments (1). Research has shown that men are slightly more prone to the disorder than women, though the exact reason for this has yet to be discovered. In addition, the onset of the disorder is usually seen in people around 60 years of age. The disorder has also been known to affect younger people; however, the rates of Parkinson's disease are extremely low in people under 40 years of age (1).

James Parkinson left us with the challenge of resolving the connection between Parkinson's disease and the nervous system. As a result of his work, we now ponder why it is that a person suffering from Parkinson's disease cannot control their movement despite trying to do so. It would be interesting to explore how this disease affects the I-function.

Soon after James Parkinson described this "shaking palsy," researching the cause of the disease became a goal. Through postmortem examination of the brains of Parkinson's patients, it was hypothesized that the substantia nigra was involved in this loss of motor control and coordination (2). Researchers arrived at this conclusion by observing considerable amounts of apoptosis in the midbrain, specifically in the substantia nigra. With time came an increase in knowledge about neurotransmitters and their role in neurotransmission in the nervous system. This knowledge revealed that dopamine in the striatum of postmortem Parkinson's brains was 80% lower than in healthy individuals (2). The fact that Parkinson's patients show low levels of dopamine and apoptosis in the substantia nigra led many scientists to hypothesize that the substantia nigra generates dopamine, further implying that the low levels of dopamine paired with apoptosis led to the symptoms of Parkinson's disease.

In short, Parkinson's disease is caused by the degeneration of neurons in the substantia nigra, which results in a decrease of dopamine. In addition, monoamine oxidase-B (MAO-B) breaks down dopamine in the synapse, further diminishing what little dopamine is left (3). Dopamine is vital for normal movement because it allows messages to be transmitted from the substantia nigra to the striatum, which then initiates and controls the ease of movement and balance (3). Furthermore, the loss of dopamine causes the neurons in the basal ganglia to fire randomly, accounting for involuntary movements.

Acetylcholine is another neurotransmitter that is needed to produce smooth movements. In normal individuals there is a balance between acetylcholine and dopamine. In Parkinson's patients there is not sufficient dopamine to maintain the balance with acetylcholine (3). This irregular disproportion results in a lack of movement coordination leading to the more overt symptoms of Parkinson's.

It seems as if the brain is controlling the movement of our bodies without the individual having control over the disease. It would be great if there were an explanation for the reduction of dopamine in the substantia nigra, but unfortunately there is not a concrete answer. There are many theories which seek to explain the cause of Parkinson's. For example, some state that the disease is genetic (the "Parkin" gene) and others believe it is due to environmental toxins such as MPTP (4). MPTP causes Parkinson's-like symptoms in drug abusers, as seen through PET scans. Other studies conducted in rural areas have shown a higher frequency of Parkinson's in locations where herbicides and pesticides are prominent (5). Additional suggestions as to why dopamine-producing neurons degenerate are mitochondrial dysfunction and excitotoxicity (4). Extensive research is being conducted all over the world in an attempt to discover the definitive cause of Parkinson's disease. This is significant because once we identify what causes Parkinson's disease we can hope to prevent future occurrences of this disease as well as ultimately find a cure.

As mentioned earlier, there is no cure for Parkinson's disease. Therefore, the immediate goal of scientists is to find a drug that mimics dopamine, since dopamine itself cannot cross the blood-brain barrier. Researchers have thus far been successful in mapping the biosynthetic pathway of dopamine in the effort to replace the degenerating dopamine in the substantia nigra. This pathway shows that dopamine is derived from the amino acid tyrosine, which is converted into L-Dopa with the aid of the enzyme tyrosine hydroxylase. L-Dopa is then converted to dopamine by the enzyme L-aromatic amino acid decarboxylase (L-AADC).

This biosynthetic pathway allowed scientists to discover that L-Dopa is able to cross the blood-brain barrier, giving them hope that L-Dopa might be converted to dopamine once it arrived in the brain. L-Dopa was found to be effective in reducing the harsh symptoms of Parkinson's, meaning that L-Dopa is indeed converted to dopamine in the brain. L-Dopa is effective in the brain because the nervous system becomes up-regulated and highly sensitive to the drug. Unfortunately, L-Dopa also had severe side effects, such as nausea and vomiting. Later it was found that these side effects were caused by the exposure of L-Dopa to L-AADC in the gastrointestinal tract. This was corrected by creating an L-AADC inhibitor that is unable to pass through the blood-brain barrier. The L-AADC inhibitor allowed dopamine to successfully increase in the brain. There are many drugs for Parkinson's disease, but L-Dopa seems to be the most effective.

The issue of administering drugs in order to decrease the symptoms of Parkinson's disease is relatively controversial, since such administration can create tolerance to the drugs. As a patient's tolerance increases, the drug becomes less effective and higher doses are required to suppress the symptoms of Parkinson's. This leads to a dilemma: when should a doctor prescribe L-Dopa, given that, due to the patient's progressively increasing tolerance, it cannot work forever? Does a doctor administer the drug during Parkinson's early stages, when symptoms are becoming apparent, or wait until Parkinson's is at its peak? It would be a tremendous success if there were a drug that could delay Parkinson's disease until the symptoms became severe, at which point L-Dopa could be administered to restore dopamine in the substantia nigra.

Surgeries and implantations of embryonic cells have also been suggested to control the symptoms of Parkinson's disease, but none have been proven effective thus far (6). Even so, these efforts give us hope that the disease can be made as controllable as possible.

In essence, Parkinson's disease is a horrible disorder that kills many people all over the world. Unfortunately there is no cure for this disease, but many efforts are being made to control its prevalence. On a positive note, thanks to research on Parkinson's disease we have learned a lot about the human body and its intricacies. It is interesting to understand that malfunctions at a neuronal level can affect a person's life completely, in this case impairing people's control over their own movement. This research topic has allowed me to appreciate how complex we are as humans and how fortunate I am to be healthy. Furthermore, while researching Parkinson's disease I started thinking about the brain and behavior dichotomy. In this case it seems as though brain malfunctions are controlling behavior. So does brain actually equal behavior?

References:
1)National Institutes of Health, General Information on Parkinson's

2)Home Page, General Information on Parkinson's

3)Home Page, Brain and Parkinson's

4)Home Page, Causes of Parkinson's

5)Home Page, General Information

6)Home Page, Treatment of Parkinson's

7)Home Page, General Information

8)Home Page, Parkinson's and Pesticides


Intelligence Quoi?
Name: Amanda Gle
Date: 2004-02-24 01:21:27
Link to this Comment: 8425


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function."

-F. Scott Fitzgerald 1936 (Bartlett's Familiar Quotations 694:17)


IQ. Intelligence Quotient is defined as "the ratio of tested mental age to biological age." (1) Intelligence is defined as "a. The capacity to acquire and apply knowledge. b. The faculty of thought and reason. c. Superior powers of mind." (1) IQ allows people to quantify intelligence and place it on a scale that everyone can understand. While people rarely stop to consider what intelligence actually is, it plays an important role in society. What is IQ, why is it important in our society, and what affects the outcome of an IQ test?

IQ is a measure that does not factor in talent, outward accomplishment, or acquired knowledge. All the results are based on a closed test during which one's intelligence is assessed. The tests are made up of several different sections depending on the version. The WAIS test is made up of four composites: the reading composite (word reading, reading comprehension, and pseudo-word decoding), the mathematics composite (numerical operations and math reasoning), the written language composite (spelling and written comprehension), and the oral composite (listening comprehension and oral expression). (2) The scores are then compiled into four indices: verbal comprehension, perceptual organization, working memory, and processing speed. These are all reported not just as scores but as percentiles. For example, my perceptual organization index (POI) was 97, meaning that my score was higher than those of 97 out of 100 adults my own age. (2) These results are combined into one's actual IQ. It is important that IQ is only compared among people of the same age, because brains are thought to continue developing until about age twenty-nine. (3)

The scores of these tests are looked at on a curve. People of the same age are compared and the scores are calculated "in a proper sense with the mental age in the numerator and the chronological age in the denominator." (3) The test is what determines the mental age. The number that comes from the equation creates the classification of the results. The classification varies from test to test but one form is that a person is under average if the IQ is under 85, average if it is between 85 and 115, and above average if it is above 115. An IQ between 75 and 85 is classified as debility, between 35 and 70 as imbecility, and below 35 as having oligophrenia or feeble-mindedness. (3)
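The quotient described above, with mental age in the numerator and chronological age in the denominator, and the banding that follows it are simple enough to sketch in code. The sketch below is illustrative only; the function names are invented, and real tests use age-normed score tables rather than this raw ratio.

```python
# A toy sketch of the classical "ratio IQ" and one version of the banding
# quoted above. The exact cutoffs vary from test to test.
def ratio_iq(mental_age, chronological_age):
    """Tested mental age over chronological age, times 100."""
    return 100.0 * mental_age / chronological_age

def classify(iq):
    """One classification scheme of the kind the paper cites."""
    if iq < 35:
        return "oligophrenia (historical term)"
    if iq < 70:
        return "imbecility (historical term)"
    if iq < 85:
        return "debility (historical term)"
    if iq <= 115:
        return "average"
    return "above average"

# A ten-year-old who tests at a mental age of twelve:
print(ratio_iq(12, 10))            # 120.0
print(classify(ratio_iq(12, 10)))  # above average
```

The ratio makes the age comparison explicit: the same raw performance yields a higher quotient the younger the test-taker is.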

What is a genius? To be a member of Mensa, one must be in the top two percent of IQ scores. Depending on the test, this means scoring above a certain cutoff: above 148 on the Cattell, 132 on the Stanford-Binet, 130 on the WAIS, and 132 on the Otis-Lennon, among many others. (4) Geniuses are those who leave marks in history through their intellectual gifts to the world. Today, they are the ones pulled aside early in school who go on to win Nobel Prizes. Gifted people are encouraged by special schooling and supportive families. Good resources can help raise the results of an IQ test; a brain can be trained to be more intelligent.

The opposite end of the scale is mental retardation. The condition is defined by these criteria: "intellectual functioning level (IQ) is below 70-75; significant limitations exist in two or more adaptive skill areas; and the condition is present from childhood (defined as age 18 or less)" (6). Mental retardation (at times known as oligophrenia) can arise for a number of reasons, including genetic conditions, problems during pregnancy, problems at birth, problems after birth, and poverty and cultural deprivation. (6) Some of the same factors can influence the opposite end of the IQ scale as well.

There are many studies that try to determine what affects the intelligence quotient. One suggestion is that birth order does. In 1973, the first such study was done in Holland by Lillian Belmont and Francis Marolla on family size, birth order, and IQ. They found that children from larger families did more poorly on tests, that the firstborns of any family size always scored better than later-borns, with scores declining by birth position, and that as family size increased, performance decreased. (5)

As was mentioned before, another suggestion is that one's environment affects the way one turns out. This appears truer for those with lower IQs: it is difficult to raise IQ substantially through diet alone, but a poor diet can lead to mental retardation.

Whatever affects IQ, it is important in the way it grades one's intelligence. We must remember though that while a person may be extremely smart by the books, on the street or socially it can be a different story. Intelligence, not the quantitative IQ, truly is what is important.

References


1) The American Heritage College Dictionary, Fourth Edition. Boston: Houghton Mifflin, 2002.
2) Dr. Thomas Brown; WAIS IQ test taken January 2003.
3)Intelligence=IQ
4)Mensa International
5)Human Intelligence
6)Introduction to Mental Retardation


In the Blink of an Eye: A Look at Locked-in Syndro
Name: Shadia Be
Date: 2004-02-24 01:38:34
Link to this Comment: 8426


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the Blink of an Eye:
A Look at Locked-in Syndrome

Shadia Bel Hamdounia

"Twelfth Night", "Freaky Friday"--we are all familiar with the many scenarios that depict a common fear—being trapped in another's body. But there exists a bigger nightmare. Imagine the horror of being trapped in one's own body. For those with locked-in syndrome (LIS), that fear is a reality.

LIS describes one of the most debilitating conditions in which a person retains consciousness. The result of head injury, brain-stem stroke, or neurological diseases like ALS, locked-in syndrome is caused by a lesion in the nerve centers that control muscle contraction or by a blood clot that blocks circulation of oxygen to the brain stem (6). First introduced in 1966 by Plum and Posner, the term has since been redefined as "quadriplegia and anarthria, with preservation of consciousness" (1). (Anarthria refers to the neurologic inability to speak, as opposed to an unwillingness to speak.) Unable to move or speak, yet fully cognizant of the world around them, these individuals are virtually locked in. An accurate diagnosis of LIS depends on the recognition that the patient can open his eyes voluntarily, rather than spontaneously as in the vegetative state (4). Although horizontal eye movements are usually lost, the ability to open the eyes and blink is retained (4). Therein lies the key to communication with the outside world.

I first learned about this extremely rare condition while helping a friend with a French paper. The subject, Jean-Dominique Bauby's "The Diving Bell and the Butterfly," piqued my interest. On Dec. 8th, 1995, Bauby, a 42-year-old father of two, was test-driving a new car when he suffered a massive stroke. He awoke from a coma two months later to find himself paralyzed and speechless, but able to move one muscle: his left eyelid. (3) Due to his privileged position as an author and editor of a popular French magazine, he was afforded the opportunity to do the unimaginable: share his experience with the outside world. With the aid of a secretary and a special alphabet, in which the letters were recited to him in order of the frequency with which they occur in the French language, he was able to blink out his book. (3)
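Bauby's frequency-ordered alphabet minimizes the average number of letters his helper must recite before reaching the one he wants. A tiny sketch makes the arithmetic concrete; the five-letter alphabet and the frequencies below are invented for illustration and are not Bauby's actual French ordering.

```python
# Toy model of partner-assisted scanning: the helper recites letters in a
# fixed order, and the listener blinks when the desired letter is reached.
def expected_recitations(order, freq):
    """Average number of letters recited per selection, weighted by how
    often each letter is actually needed."""
    total = sum(freq[ch] for ch in order)
    return sum((i + 1) * freq[ch] for i, ch in enumerate(order)) / total

# Hypothetical frequencies for a five-letter toy alphabet.
freq = {"E": 0.40, "A": 0.25, "S": 0.15, "R": 0.12, "X": 0.08}

# Frequency ordering never does worse than alphabetical ordering:
print(expected_recitations("EASRX", freq) < expected_recitations("AERSX", freq))  # True
```

Reciting the most common letters first cuts the average wait per letter, which is why a frequency-sorted alphabet is faster for the blinker than A to Z.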

It was Alexandre Dumas who first described LIS when he created Monsieur Noirtier de Villefort in his 1844 novel, The Count of Monte Cristo. He described his character as a "corpse with living eyes" (1), but Bauby's tale contradicts this commonly held notion. He recounts his struggle with the realization that he is trapped within a paralyzed body—the diving bell—in which his mind flies like a butterfly:

"I am fading away, slowly, but surely. Like the sailor who watches his home shore gradually disappear, I watch my past recede. My own life still burns within me, but more of it is reduced to the ashes of memory. Since taking up life in my cocoon, I have made two brief trips to the world of Paris medicine to hear the verdict pronounced on me from medical heights. I shed a few tears as we passed the corner café where I used to drop in for a bite. I can weep discreetly, yet the professionals think my eye is watering." (3).

In his memoir, Bauby continually addresses the sense of alienation and exclusion from society that is shared by all who are severely handicapped. How worthy are these individuals to our society? Those with profound neurological disabilities such as LIS or tetraplegia, or who are in a persistent vegetative state, have been the subject of substantial medical and ethical debate. Many feel that the allocation of resources to maintain their lives is too high-stakes a gamble. After reading Bauby's book, I sought to better inform myself about this rare condition, but the paucity of available information and research was disheartening. An interview with Roger Rees, Director of the Institute for the Study of Learning Difficulties at Flinders University in Adelaide, explains that "from an economic rationalist's view of rehabilitation or of a simplistic absolute view that a person is either cured or not cured people in the locked-in state are considered of no account." (3) Although there are no statistics available on the number of patients with LIS, the locked-in population is growing due to advances in artificial respiration. (2) How, then, to convince those responsible that the benefits of sustaining these individuals far outweigh the monetary sacrifices?

Niels Birbaumer, a German neuroscientist and leading expert in the field, works on brain-computer interface (BCI) research in an attempt to give those who are locked in a voice, so that they might be involved in the decisions that affect their lives. One of his "patients," Elias Musiris, a wealthy Peruvian owner of a casino, suffers from Lou Gehrig's disease, which has induced a locked-in state. Using BCI, electrodes were attached to his scalp, producing a moving white dot across a screen: Musiris was looking at his EEG, whose up-and-down motion represents his brain activity. His task was to willingly change the electrical activity of his brain by changing his thoughts, and in doing so to control the white dot by keeping it in one half of the screen. Birbaumer had previously developed a similar technique to train epileptics to fend off impending seizures. (2) He hoped that teaching Musiris to influence his EEG would then enable him to "respond" to simple yes-no questions by moving the dot to a certain half of the screen. The results? After a week of intensive practice, Musiris was able to produce answers that, through repetition, reached a statistical safety level of more than ninety percent. (2) Through this new method his family learned that he wished to buy new pool tables and keep the old slot machines in his casino, which they were about to sell. For the first time in five years Musiris began to have a deliberate impact on his world and his business—without having to move a single muscle.
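The "statistical safety level" reached through repetition can be mimicked in a few lines of code. The sketch below is a toy, not Birbaumer's actual procedure: it assumes each repetition of a question yields one binary trial (the dot held in the "yes" or "no" half of the screen), and it accepts an answer only once the run of trials would be unlikely to arise by chance.

```python
import math

def confidence(k, n):
    """1 minus the chance of getting at least k hits out of n fair coin
    flips: a crude measure of how unlikely the run is to be mere noise."""
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return 1 - tail

def decide(trials, threshold=0.90):
    """trials: 1 = dot held in the 'yes' half, 0 = the 'no' half.
    Answer only when repetition pushes confidence past the threshold."""
    k = sum(trials)
    n = len(trials)
    if confidence(k, n) >= threshold:
        return "yes"
    if confidence(n - k, n) >= threshold:
        return "no"
    return "undecided"

print(decide([1, 1, 1, 1, 1, 1, 1, 0]))  # seven of eight trials up: "yes"
print(decide([1, 0, 1, 0]))              # too noisy: "undecided"
```

Requiring many consistent repetitions before accepting an answer is what allows a noisy, imperfectly controlled signal to yield responses above the ninety percent level the article describes.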

The stories of Bauby and Musiris not only put a human face on locked-in syndrome, they offer insights into the question of mind/body dualism discussed in class. The body is inextricably linked to one's sense of self; however, physical suffering need not steal that sense of self. There remains a wealth of thoughts, feelings, memories and dreams to be generated and recalled. Bauby's tale is a poignant testimony of human resilience in the face of adversity; it demonstrates that the loss of all but one's faintest muscle movement does not eliminate the will to be heard.


References


Works Cited:

1. http://web5.infotrac.galegroup.com/itw/infomark/301/818/45683206w5/; very comprehensive research on "Impairment, activity, participation, life satisfaction , and survival in persons with LIS"(Jennifer Doble)

2. http://web5.infotrac.galegroup.com/itw/infomark/301/818/; article on brain-computer interface research; (Ian Parker).

3. http://www.abc.net.au/rn/science/ockham/stories/s10275.htm ; interview with Prof. Roger Rees on LIS

4. http://www.jnnp.com/cgi/content/full/71/suppl_1/i18?RESULTFORMAT=1&eaf; contrasts LIS with coma

5. http://jnnp.bmjjournals.com/cgi/content/full/63/6/759 ; a look at ERP's in patients with LIS

6. http://www.questdiagnostics.com/kbase/nord/nord472.htm; gives the basics of LIS


Theories on Left-handedness and Laterality
Name: Hannah Mes
Date: 2004-02-24 01:54:56
Link to this Comment: 8427


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Every time I walk into a classroom I am faced with the same challenge. It's not that I haven't done my homework or that the professor is boring, but that I can't find the right chair to sit in. Some may argue that the problem is inherently caused by a neurobiological difference. Others might say that it is the work of the devil. My crime be told, I am left-handed.

For 20 years I have suffered the writing discomfort associated with right-handed desks and notebooks. I have also had difficulty playing the guitar and have been forced to try to play sports in a right-handed fashion. Scissors are always a battle unless they are specifically designed for left-hand use, and power tools are not recommended, as they have an unusually high accident rate among the left-handed. I have managed to overcome most of these minor obstacles without much difficulty, but perhaps this is due to my socio-cultural background, which is relatively understanding of left-handed differences.

Throughout history, people with left-handedness have been persecuted for a variety of reasons. Deemed social deviants or mentally defective, left-handers have been seen as carrying an undesirable attribute that can be "corrected" through persistent repetition of right-handed behaviors. Adroit, according to the Merriam-Webster dictionary, derives from the French droit (translated as 'right') and in contemporary use means skillful or clever. 1 Comparatively, the French word gauche, literally 'left,' is used to describe someone who is "lacking social grace or is not tactful". 2 Being left-handed has historically been grounds for discrimination, although in recent years these biases have grown more subtle. As I considered the issue of being left-handed more deeply, I became convinced that it had not only physical and socio-cultural implications but was also related to neurology, specifically the lateralization of certain functions in the cerebral hemispheres.

According to Dr. M.K. Holder, the Director of the Handedness Research Institute at Indiana University, left-handedness can be understood in terms of brain lateralization and the functional specialization of areas for speech. Although there is no single definitive explanation as to why an individual has a specific handedness, there seems to be evidence for several contributing factors, such as genetic background, socio-cultural influences, and neurobiology.

For example, the Kerr clan of Scotland is famous for its predisposition towards left-handedness. 3 This feature, known by the Kerrs as being "Corry-Fisted" or "Kerr-Handed," can be understood as a culturally reinforced trait that was supposed to aid these Scottish warriors in methods of warfare. Due to their common gene pool, however, this trait can also be understood as a genetic characteristic of the Kerr clan.

According to Oldfield (1971), the statistics for left-handedness are also higher in males than in females. Geschwind and Galaburda (1987) developed the "G-G theory," which builds upon the notion of sex differences, arguing that higher levels of testosterone can affect cerebral lateralization by causing the normal dominance pattern to change. 4 For the majority of right-handed individuals, language is associated with dominance in the left hemisphere and visuo-spatial skills with the right hemisphere. Gorski et al. expand upon this idea, noting the important role that levels of testosterone can have on lateralization.
The hormone can affect the growth of many tissues, and has an inhibitory effect on the growth of immune structures, such as the thymus gland and the bursa of Fabricius. Testosterone is also capable of changing the structure of specific nuclei in the hypothalamus and limbic system. (Gorski, 1986)

From what I understood, the G-G theory argues that testosterone levels can increase for many reasons, and one of the effects can be a delay in the growth of the left hemisphere. This delay can in turn produce what neurologists have termed "anomalous dominance," which is characterized by "left-handedness, right hemispheric language dominance, left-hemispheric visuo-spatial dominance..." 4 In short, when there are higher levels of testosterone, the patterns of dominance associated with language in the majority of the population are switched. This explanation can be categorized as a chemical model, based on the changing variable of testosterone, for the creation of specific functional lateralization with regard to handedness. Although the G-G theory is widely supported, I would argue that it is not the definitive explanation for left-handedness but rather one of many important factors in determining this disposition.

The French neurologist Paul Broca (5) and the German neurologist Carl Wernicke both made important discoveries in the 19th century: Broca identified an area in the left frontal lobe primarily used for speech production, while Wernicke identified an area in the left temporal lobe involved in language comprehension. Compared to other primates, these language areas of the human brain are greatly enlarged (6). Although handedness used to serve as a basis for inferring which lateralization individuals had for language, it became clear with the sodium amytal (Wada) tests of the 1960's that language in some left-handed individuals can also be lateralized to the left hemisphere (7). In this test, sodium amytal is injected into one carotid artery to briefly anesthetize a single hemisphere, allowing clinicians to determine which hemisphere supports language and memory (8). The explanation for this variability still remains unknown.

This point raised various questions for me on an individual level. As a left-handed person, it may be significant that I have a strong disposition for fine arts and foreign language, functions that according to my research seem to be associated with the right hemisphere. In line with the G-G theory, this can be understood as an overcompensation in the right hemisphere, a "compensatory growth mechanism" triggered because growth of the left hemisphere has been delayed.

Whether the correlations the G-G theory draws between testosterone and specific functional lateralization prove causation is debatable. From the research that I have done, I would argue that left-handedness cannot be understood in terms of neurology, genetics, or socio-cultural factors alone, but only as a combination of all of these. The G-G theory also fails to explain why the left hemisphere is more sensitive to levels of testosterone, or whether there are more testosterone receptors in this area of the brain.

References

1)Merriam-Webster Dictionary

2)Merriam-Webster Dictionary

3)Kerr Clan Lineage

4)Theories About Handedness Causation

5)Biography of Paul Broca

6)Lateralization and Language II

7)Medical College of Georgia, MCG Wada Protocol: Clinical Core

8)The Biological Basis for Language


Cochlear Implants: A Bionic Sensory Experience?
Name: Lindsey Do
Date: 2004-02-24 02:01:08
Link to this Comment: 8428


<mytitle>

Biology 202
2004 First Web Paper
On Serendip



"Hearing is the soul of knowledge and information of a high order. To be cut off from hearing is to be isolated indeed" (1).

What does it mean to hear? Imagine what it may be like if your perception and recognition of sound has changed three times during your lifespan. Phases one, two and three encompass a full spectrum of hearing, with various technological aids (in phases two and three) triggering a range of psychological and physiological repercussions. An in-depth look at the relationship between the hearing organ and the auditory processing center of the brain might illuminate hearing as an integration of audition and cognition. As someone who has experienced full hearing, deafness and rehabilitated hearing via an electronic prosthesis, how do my experiences contribute to the notion of a personalized auditory experience, an awareness which draws the distinction between the sensation and interpretation of sounds?

The ear contains complex organs which allow sound to be converted into an electrical signal that is transmitted to the brain for interpretation. The mechanical input of sound waves is transduced in the cochlea into an electrical response. The basilar membrane in the cochlea vibrates from the movement of the surrounding perilymph, which bends the hair cells, inducing depolarization and triggering action potentials. The spiral ganglion cells that innervate the hair cells within the organ of Corti relay these signals, and they are responsive to particular frequencies according to topographical (tonotopic) organization. The auditory nerve connects to the brain stem (a bilateral pathway), where it synapses in the cochlear nucleus. Here, the information is separated into the ventral cochlear nucleus (time-sensitive localization) and the dorsal cochlear nucleus (quality) (2). The auditory pathway projects into the cerebral cortex, specifically the primary auditory cortex located on the superior surface of the temporal lobe (3). Furthermore, these auditory nuclei project into other parts of the brain that constitute a neural net, a schema that allows for the functional organization of language, music, memory and knowledge (4).
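The tonotopic organization described above, in which position along the basilar membrane corresponds to sound frequency, is often approximated with the Greenwood function. As a rough illustrative sketch (the constants are the commonly cited fit for the human cochlea, not values from this paper's sources), the place-to-frequency mapping can be written as:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane (x = 0 at the apex, x = 1 at the base), using
    Greenwood's empirical fit for the human cochlea."""
    return A * (10 ** (a * x) - k)

# The apex responds to the lowest audible frequencies (roughly 20 Hz)
# and the base to the highest (roughly 20 kHz), matching the tonotopic
# layout that the auditory nerve preserves up to the cochlear nucleus.
low = greenwood_frequency(0.0)    # about 20 Hz
high = greenwood_frequency(1.0)   # about 20.7 kHz
```

The exponential form of the fit is why equal distances along the membrane correspond to roughly equal musical intervals rather than equal numbers of hertz.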

Hearing loss may be caused by the destruction/degeneration of hair cells in the cochlea (sensorineural), or by damage to/malformation of the apparatus that transmits sound energy (conductive) (5). Hearing aids are one corrective device used to amplify sound; however, the cochlear implant is a fairly new innovation that targets sensorineural hearing loss by bypassing the damaged cochlea. This instrument entails an external microphone feeding a speech processor which acts as a spectrum analyzer, deconstructing complex sounds into component frequencies. These electrical signals are then carried to a transmitter held against the head, which conveys the coded information through the skin to a receiver implanted in the bone (5). The stimulator relays the signal down an electrode array that is wound through the cochlea, activating specific frequency locations that coincide with the tonotopic organization of the auditory nerve. The implant "mimics" a sound by stimulating the corresponding neurons, producing the "sensation" of hearing. Consequently, the cochlear implant is a controversial device that raises ethical questions: what does it mean to replace our "natural" senses with an artificial sensory experience through electronics?
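The speech processor's role as a "spectrum analyzer" can be sketched in miniature. What follows is an illustrative toy, not any manufacturer's actual coding strategy: it estimates the energy a sampled waveform carries at a few channel center frequencies (here via the Goertzel algorithm), the kind of per-channel measurement an implant maps onto electrode stimulation levels:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Estimate signal power near one frequency with the Goertzel
    algorithm (a single-bin DFT, cheap enough for embedded hardware)."""
    n = len(samples)
    k = int(0.5 + n * freq / sample_rate)   # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def analyze_channels(samples, sample_rate, center_freqs):
    """One energy estimate per hypothetical electrode channel."""
    return [goertzel_power(samples, sample_rate, f) for f in center_freqs]

# A pure 1 kHz tone should light up the 1 kHz channel far more than its
# neighbours, just as a pure tone excites one region of the cochlea.
rate = 8000
tone = [math.sin(2 * math.pi * 1000 * t / rate) for t in range(800)]
energies = analyze_channels(tone, rate, [250, 500, 1000, 2000])
loudest = max(range(len(energies)), key=lambda i: energies[i])
```

Real processors use many more channels and far more sophisticated filtering, but the principle, decompose by frequency and stimulate the matching tonotopic site, is the same.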

As a unilateral cochlear implant recipient, I have come to think of my CI as an extension of myself; without it I feel helpless and vulnerable. My personal experience as a two-year user of this fairly new medical device may be relevant to exploring the cognitive aspect of the auditory process. I was born with full, normal hearing; however, between the ages of three and four, a congenital birth defect (Large Vestibular Aqueduct Syndrome) resulted in a bilateral sensorineural hearing loss that left me profoundly deaf. Hearing aids boosted what little hearing I had until the age of 18. As time passed, I began to notice that I was struggling more than I used to with my hearing aids. My observations were confirmed: my discrimination (the ability to make sense of what I heard) had been declining as a result of reduced stimulation to my auditory nerve cells. This phenomenon is common among those who are born hearing and later suffer hearing loss. The neural pathways of my auditory memory did not disappear, but they failed to sustain my previous recognition of sounds and words. Embarking on the third phase, I hoped to make use of my vestigial sensory ability by getting a cochlear implant, which would directly trigger and invigorate the ganglion cells connected to the auditory nerve. Ironically, the implantation meant destroying any residual hearing function left in my hair cells. Having to undergo a third adjustment to my hearing, relearning sounds through artificial stimulation, was incredibly different and frustrating.

The auditory neural net in my brain continues to be reorganized and reshaped to this day, in order to adjust to an entirely different sensation of sound. An electrical perception translates into an altered recognition and interpretation of sound, in the sense that my familiarity "stemming from a contact between an external event and an internal reception of a previous experience of that event" is rendered inadequate. My success with the implant is likely because I am able to draw not on the audition (the activities of the hearing organ proper, the actual stimulus) enabled at birth, but on the experience of auditory processing that encompasses cognition. By cognition, I refer to Reinier Plomp's definition: "the top-down processing stressing the significance of concepts, expectations, and memory as a context for stimulus perception" (6). However, this definition raises certain tensions, as I often felt that I was starting from scratch, having to practice auditory memory, language production, processing and interpretation. I continue to think of hearing as an active experience, or an acquired knowledge in which I file away every new sound I hear, attaching labels such as "train whistle," "bird song" or "shhhh" rather than remembering what I heard naturally 17 years ago. Indeed, this suggests that hearing does not rely on the simple stimulation of specific neurons contained in the auditory cortex; rather, hearing is a holistic "exercise" that involves conscious and unconscious extrapolations that construct sound as a subjective perception. Our interpretation of sounds must mediate automatic and voluntary processing (4). I often confuse one sound for another; for example, my brain may automatically mistake a train for music, but given visual cues I will voluntarily recognize otherwise.

My cochlear implant experience, as well as those of thousands of others, validates the plasticity of the brain. I no longer rely on my eyesight for 90 percent of my information; rather, I have come to recognize specific voices, music and sounds. I still lack the ability to localize sounds and to pick them out of a noisy environment, a result of unilateral hearing. Do I need more time to develop this ability, or is it a function inherent to my natural audition?

With the exponential progression of scientific advancement, fully implantable devices are on the horizon, in contention with hair cell regeneration research. Edmond Alexander raises the fascinating issue of the "coming merging of mind and machine," in which the biological authenticity of the human brain may be undermined as the brain is artificially reverse engineered, enhanced and expanded (7). If my brain relies on a bionic device substituting for my hearing, does that mean that my behavior is still "human"? To what degree can we make the distinction between human and machine? Will the brain still equal behavior if electronic devices are responsible for our sensory experiences?

1)Helen Keller Quotations

2)Auditory Transduction

3) Kandel, Schwartz and Jessell. Principles of Neural Science. McGraw-Hill, 2000.

4) McAdams, Stephen and Bigand, Emmanuel. Thinking in Sound. Oxford: Clarendon Press, 1993.

5)Sound From Silence, The Development of Cochlear Implants ; overview of cochlear implants

6) Plomp, Reinier. The Intelligent Ear: On the Nature of Sound Perception. New Jersey: Lawrence Erlbaum Associates, Publishers, 2002.

7) Edmond Alexander. "The Coming Merging of Mind and Machine." Scientific American, 1999.

Other Helpful Sources:

8)Turned On ; another personal account from a Cochlear Implant recipient

9)Introduction to Cochlear Implants


I have become Comfortably Numb: Depression and Per
Name: Chelsea Ph
Date: 2004-02-24 02:08:33
Link to this Comment: 8429


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Without emotion, man would be nothing
but a biological computer. Love, joy,
sorrow, fear, apprehension, anger,
satisfaction, and discontent provide
the meaning of human existence."
Arnold M. Ludwig---1980 (1)
Questions and Introduction

Depression is one of several serious mental health conditions that together affect over 450 million people worldwide. Is there a universal experience of depression? If so, can that universal experience lead to a deeper understanding of concepts of the self across cultural boundaries?

Facts, Statistics and Symptoms
Symptoms of depression include:
* Depressed mood - most of the day, every day
* Mood swings - one minute high, next minute low
* Lack of energy and loss of interest in life
* Irritability and restlessness
* Disturbed sleep patterns - sleeping too much or too little
* Significant weight loss or gain
* Feelings of worthlessness and guilt
* Difficulty concentrating and thinking clearly
* Loss of sex drive
* Thoughts about death and the option of suicide (2)
"Mental problems are common to all countries, cause immense human suffering, social exclusion, disability and poor quality of life. They also increase mortality and cause staggering economic and social costs" (3). Depression does not distinguish between ethnicity, gender or age, though it is twice as likely to occur in women, and it often goes undiagnosed or is misdiagnosed in second- and third-world countries without the resources to fund mental health programs (3). In addition, cultural associations with depression frequently prevent sufferers from seeking help.
In China, stoicism is a highly valued character trait: seeking help for depression would indicate a weakness in one's character (4). The same perception is observed in African-American culture, particularly pertaining to women (4). Information gathered on depression in Hispanic culture indicates that depression is expressed somatically, in chronic aches and pains, in addition to the "common" symptoms. Linguistic evidence shows that the somatic theme is also present in China. The literal meaning of the Chinese word for "depressed" is a closed and drowning heart, and "depression" is a worrying (heart-troubling) disease (5).

Expressions of Depression
While the above symptoms are naturally important in diagnosis and in determining treatment, the personal testimony of those with depression is important when attempting to understand perceptions of the self. Personal testimonies on depression range from completely detached to hysterical and everything in between, including an affinity with the experience, a desire to stay depressed. These testimonies almost always indicate a loss of self, though in some cases this loss may even be welcomed. It is essential to understand that by "self" is meant a person's perception of his or her normal cognitive state.
"I have become comfortably numb." -Pink Floyd (6)

"No pain remains, no feeling..." -VNV Nation (6)

"...my mind lay limp in an empty world." -Despair, V. Nabokov (7)

"Wake me up inside...
before I come undone,
save me from the nothing I've become." -Evanescence (8)
"...and you're watching moving shadows live instead of you ...
suicidal tendencies, but no will to interfere.
feel it coming over you ... indifference ... indifference ..." -Wolfsheim (6)

"all the weights that keep you down seem heavier than before.
they hit me in my face, though you feel nothing..." -Apoptygma Berzerk (6)

"This is when I feel dead: when I lie in the dark (or sit or stand anytime, anywhere) and can feel how insignificant taking the next breath is...It doesn't hurt not to, there's no panic, only a mild, detached observation that this might be what it feels like to die."
-Anonymous

"Depression is merely anger without enthusiasm"
- Unknown

"...And then I heard them lift a box,
And creak across my soul
With those same boots of lead, again,
Then space began to toll..." -Emily Dickinson, "I felt a Funeral, in my Brain" (9)

"It is hopelessness even more than pain that crushes the soul." -William Styron, "Darkness Visible" (10)

These quotes and testimonies are astounding in their repetition: loss of feeling, numbness, death. "Save me from the nothing I've become." Is "nothing" Ludwig's biological computer? To lose emotion is to lose an essential part of the self as identity. Is it, therefore, whatever makes our emotions that makes our "selves"?
Chemical Theory
Although the exact cause of depression is unknown, theories on chemical imbalances in the brain have led to the development of medications capable of eliminating or reducing symptoms. Some of these medications are known as SSRIs, or Selective Serotonin Reuptake Inhibitors. Serotonin is a neurotransmitter produced in the brain which affects many things, including appetite, emotion and sleep patterns, and promotes feelings of calm, contented well-being. When too much serotonin is reabsorbed by the presynaptic neurons in the brain, the resulting depletion disrupts the normal cycles regulated by serotonin; SSRIs work by blocking this reuptake, leaving more serotonin available in the synapse. (11)
Conclusion
Coupling this knowledge with the personal experiences naturally leads to the question: are chemicals the "self"? A chemical imbalance leads to the feelings of "numbness" and of "not being oneself" because perceptions of the "normal" self have their roots in the typical chemical make-up of an individual's brain. "I'm not happy like I usually am" becomes "My serotonin levels are usually higher than this" or "My dopamine levels aren't usually this erratic." Does the self really exist in combinations of chemicals, somewhere beyond the "I"-box yet containing it: a fluid, perpetually moving self?

References

1)Dr. Ivan's Depression Central

2)Befriender's International

3)The World Health Organization

4)Depression Screening.org

5)Online Chinese/English Dictionary

6)Song lyrics

7)Nabokov, Vladimir. Despair. New York: Vintage Books, 1989.

8)Song Lyrics Search Engine

9)Serendip Website, a Web Paper on depression and serotonin

10)The Quote Cache

11)More from Serendip


Autism: In a world of dreams and shadows
Name: Geetanjali
Date: 2004-02-24 03:46:15
Link to this Comment: 8433


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Autism is a neurological disorder that is interesting, in part, because of its potential to shed light on how we perceive and understand the world around us and how we relate to other human beings: it demonstrates what happens when one is unable to relate to others and has trouble with perception and understanding. The abilities and complexity of the human brain can be seen most clearly when the brain is damaged and vital abilities have been lost. It is often only when one sees the debilitation caused by the loss of an ability that one can see the importance of that ability, and fully understand it.

Autism is characterized by problems in three specific areas: communication, imagination, and socialization.(5) Autistics generally have very poor verbal skills, and can be so unresponsive to speech and noise in general that they are sometimes mistakenly thought to be deaf.(7)(2) Autistics also have trouble understanding the meanings of intonations in a sentence, and have difficulty speaking with the proper intonations themselves. Autistic children often don't appear to engage in imaginative play. They tend to be very socially withdrawn, and unresponsive to human contact as children.(1)

There are other, less general characteristic symptoms. Autistics have trouble making eye contact,(7)(2)(1) and show an aversion to physical affection (such as hugging).(1)(2) There are often motor problems that accompany autism, such as a lack of coordination.(4) Autistics often show an obsession with order and a resistance to change,(7) and a tendency to focus on parts of objects instead of the entire object.(5)

Quite a bit is known, then, about the outwardly observable characteristics of autism. To describe all of its observed characteristics in detail would take several pages at least. Autism is, though, a disorder that can unfortunately only be defined by outwardly observable behaviour. Its wide range of symptoms is classed together as one disorder simply because they are seen together often, too often for there not to be some link between them. It is believed that there is a common neurological problem (or even a group of related neurological problems) at the core of autism. However, one reason why autism is still something of a mystery and remains surrounded by controversy is that, after decades of research, its biological basis is still not known for sure. One possibility is that damage to the amygdala is linked to autism. However, only about 50% of autistics show damage to the amygdala in MRI scans, so other structures are evidently involved as well.(4)

It is known, though, that autism does have an entirely biological basis.(4) Although it was thought for many years that autism was a psychological disorder, it is now known that autism is caused by a combination of genetic and environmental factors.(2) And although the specifics of its biological basis are not known, the neurological damage that lies behind autism creates specific cognitive defects that in turn cause the outward symptoms of autism. Two theories about such cognitive defects are the theory of mind hypothesis, and the theory of central coherence.

A theory of mind allows one to make deductions about what another person might be thinking based on his or her outward behaviour. It allows people to attribute separate beliefs and mental states to another person, and to link these mental states with outward behaviour. A hypothesis was put forth by Baron-Cohen et al in 1985 claiming that autistics lack a theory of mind.(3)

This claim was based on an experiment first done in 1985, where autistic children were given what is called a "false-belief" test. The test went roughly as follows. A girl named Sally hides a marble in a basket, and then leaves the room, so that she can no longer see the basket. Her friend Anne then takes the marble out of the first basket and puts it in a second basket. The question is: when Sally comes back to the room and looks for her marble, where will she look?

In order to correctly reply that Sally will look in the first basket, the subject has to be able to grasp the concept that Sally does not know everything that the subject knows, and therefore believes that her marble is still where she left it. In order to attribute such a false belief to another person, the subject would need to have a theory of mind. Only 20% of the autistic children tested were able to answer the question correctly. The other 80% replied that Sally would look for her marble in the second basket. Also, when the 20% that were able to answer the question correctly were given more advanced tests to further test their theory of mind, the majority of them failed.
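The logic of the false-belief test can be made concrete with a small model (purely illustrative; the structure and names are my own, not taken from the study): the true state of the world and Sally's belief are tracked separately, and Sally's belief updates only on events she witnesses.

```python
def sally_anne():
    """Minimal model of the Sally-Anne false-belief test: the world state
    and Sally's belief are stored separately, and her belief only changes
    with events she actually witnesses."""
    world = "basket1"          # Sally hides the marble
    sally_belief = "basket1"   # she witnessed her own action

    # Sally leaves the room; Anne moves the marble. The world changes,
    # but Sally, being absent, witnesses nothing, so her belief stays put.
    world = "basket2"

    # A subject with a theory of mind predicts Sally will search where
    # she *believes* the marble is, not where it actually is.
    return {"sally_searches": sally_belief, "marble_is_in": world}
```

The test hinges on exactly this separation: answering "basket2" collapses Sally's belief into the subject's own knowledge of the world, which is the failure pattern the 1985 study reported in most autistic children.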

It has been generally accepted since then that one of the major cognitive deficits in autism is the lack of a theory of mind. However, although the majority of autistics failed the more advanced tests, not all did. This, combined with the fact that 20% were able to pass a simple false-belief test, shows that not all autistics completely lack a theory of mind. This would imply that there are other cognitive defects that contribute to autism, since most autistics who pass false-belief tests are still undoubtedly autistic (although they are generally high-functioning). The fact that much autistic behaviour has no obvious relation to the theory of mind further supports this.(5) Other theories have therefore been put forward, one of which is that autistics have weak central coherence.(5)

Central coherence is, in colloquial terms, the ability to see the big picture instead of getting lost in details. It is the ability to read a story and then be able to remember the gist of the story afterwards, even if individual details are lost. The theory put forward by Happé claims that autistics, although they have this ability to a certain degree, still have trouble with central coherence. Several experiments seem to support this claim, but simple observations of autistics support it as well: one widely observed characteristic of autistics is that they have a tendency to focus on the parts of an object over the whole.(5)

One experiment carried out by Happé tested the ability of autistic children to judge the meaning of a word based on the context of the sentence. For example, they were asked to read aloud the sentences "There was a big tear in her eye" and "In her dress there was a big tear", to see if they would be able to judge which pronunciation of the word "tear" was appropriate for which sentence. In general it was found that they had difficulty with such judgements; they had a tendency to simply use the more common form of the word, regardless of the context. Another experiment showed that autistics have a remarkable ability for spotting "embedded figures" within an image, which also supports this theory.(5)

The theory of mind and central coherence are not aspects of the human brain that one would even necessarily think about, under normal circumstances. Most people don't think twice about their ability to read a book and understand it, somehow understanding both each individual word and the complex ideas that the words combine to form. Most people don't wonder about their ability to look into a person's eyes and know what that person is feeling. Both tasks are astounding. And yet it is human nature not to notice our own everyday abilities, no matter how astounding they are, simply because they are so common. It is often only when we lose an ability, or learn about people who lack it, that we take notice of our abilities, and of the miracle that is the human brain. Autism shows what happens when something as basic and necessary as the ability to relate to other human beings is lost through damage to the brain. The sheer isolation of autism is described well by an autistic woman named Donna Williams, in her autobiography:

Staring into nothingness since time began
There and yet not there she stood.
In a world of dreams, shadows, and fantasy,
Nothing more complex than color and indiscernible sound.
With the look of an angel no doubt,
But also without the ability to love or
Feel anything more complex than the sensation of cat's fur
Against her face.(8)

References

1) Autism Resources maintained by John Wobus. (accessed February 15, 2004)

2) Autism Society of America (accessed February 17, 2004)

3) Baron-Cohen, S., Leslie, A.M. and Frith, U. (1985) "Does the autistic child have a 'theory of mind'?" Cognition, 21, 37-46

4) Frith, Uta and Hill, Elisabeth. (2003) "New techniques yield insights on autism". (accessed February 18, 2004)

5) Happé, Francesca. (1997) "Autism: Understanding the mind, fitting together the pieces" (accessed February 17, 2004)

6) Rimland, Bernard. (1997) "Genetics, Autism, and Priorities". Autism Research Review International, Vol. 11, No. 2, page 3.(accessed February 17, 2004)

7) Sterling, Lisa. (2002) "Autism and Theory of Mind" (accessed February 16, 2004)

8) Williams, Donna. (1992) Nobody Nowhere, New York, New York: Avon Books.


What Makes a "Monster"?
Name: Erica Grah
Date: 2004-02-24 03:57:55
Link to this Comment: 8435


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The recently released film "Monster" is based upon the story of female serial killer Aileen "Lee" Wuornos. The movie raises several interesting questions regarding the nature of homicide, specifically homicide carried out in cold blood, and its causes. What makes people commit murder? Is it a question of underlying aggression? If so, how does this differ among individuals? Combining aspects of the movie with those of Wuornos' real life, this paper will attempt to identify several factors in the lives of both the fictionalized and the actual murderer that may provide some explanation, or at the very least a discussion, of certain neurobiological and psychological factors and their roles in homicide and aggression among the general population and, more specifically, among women.

"Monster" in some ways portrays the life of a victim. Through flashbacks to her childhood and the mention of it later in the film, the audience is made aware of the sexual abuse that the young Aileen endured. Research has shown that there are several aspects of an abusive childhood that remain with the child for the rest of her life. The impact of child abuse alone, whether physical, sexual or emotional, can over time result in disruptions of mood, including depression and anxiety, and in antisocial traits such as aggression, criminal behavior and poor impulse control (1). In his article on the neurobiology of child abuse, Martin Teicher discusses the effects that child abuse has been hypothesized to have on certain areas of the brain, particularly the limbic system. This system is described as the area of the brain that is essential in the development of emotional responses and the recollection of memories. Within the limbic system are the hippocampus and the amygdala, which are thought to be key components of "the formation and retrieval of both verbal and emotional memories" and of "creating the emotional content of memory – for example, feelings relating to fear conditioning and aggressive responses," respectively (1). Research has led to the discovery of a correlation between an early history of abuse and decreases in the size of the adult hippocampus and amygdala. The smaller these brain structures, the greater the likelihood of over-stimulation, to the effect that the individual would be more traumatized by the memories and more closely attached to the emotions recalled. In addition, given a person's history of maltreatment, the responses brought forth by any perceived threat, regardless of its severity, could be representative of those continually initiated in the past or of the desire to react differently. In the latter case, an overly aggressive response blatantly disproportionate to an event, however slight, could occur to counter a past threat in which the individual was unable to be aggressive.

Although the true account of her childhood is sketchy, due to the several different accounts given by Lee, it was confirmed by professionals who testified on her behalf that she had borderline personality disorder (2),(3). Individuals suffering from this disorder have difficulty regulating their emotions, which can swing between extremes from one moment to the next. Acute yet ephemeral anger and aggression is sometimes a by-product, and impulsivity can prove to be a problem as well. Research has shown that a history of abuse, neglect or separation is common among a large percentage of individuals with borderline personality disorder, particularly those who have suffered sexual abuse (4). Thus, it is most likely the case that Wuornos' real life was marred by such maltreatment.

Under this assumption, there are neurobiological reasons that may explain why and how Lee developed borderline personality disorder in the first place. It has been found that the middle part of the corpus callosum – which is essentially the bridge that allows information to travel between hemispheres of the brain – in females who endured sexual abuse tended to be much smaller than in individuals who reported no abuse. This then reduces the amount of communication or integration that can take place between hemispheres at any given time. Lack of integration forces one hemisphere to dominate the emotions of the individual; presumably, the dominating source of emotion can change almost instantaneously and randomly, resulting in rapid fluctuations in perception that are notably characteristic of the borderline personality (1).

In conjunction with this reduced size of the corpus callosum may be a significantly decreased flow of blood in the cerebellar vermis – the middle part of the cerebellum – which plays a role in controlling the presence of norepinephrine and dopamine in the brain. These are neurotransmitters that govern the shift to "a more right hemisphere-biased (emotional) state," and "a more left hemisphere-biased (verbal) state," respectively (1). Research has shown that people with an abusive past exhibit a diminished amount of blood flow in the vermis, which disrupts its ability to regulate the production and discharge of the above neurotransmitters, thus increasing the risk for sporadic hemispheric shifts, leading to the borderline behaviors described earlier. As was previously stated, a history of separation is prevalent among those with borderline personality disorder. Wuornos was abandoned by her mother when she was a toddler, only to be raised by an alcoholic grandfather and her grandmother (3). Perhaps this too led to the neurological development that fosters such a disorder.

Although borderline personality disorder may lead to bursts of extreme anger, this does not completely explain how or why Wuornos came to commit the heinous crimes that she did. She began prostituting in her early teens, and the film portrays a brutal rape scene, taking place in her early thirties, in which she is attacked by one of her johns. This marks the beginning of the end as she, in self-defense, kills her rapist. It is plausible enough to believe that self-defense led her to kill. However, the same does not hold true for the five or six other murders that she committed. Is it possible to believe that one day she just snapped? Viewing the situation from the film's point of view, it is. Women who have been sexually assaulted often develop post-traumatic stress disorder (5), symptoms of which may include flashbacks, emotional numbness and sporadic, spontaneous occurrences of anger (5),(6). "Monster" shows Lee having a flashback to her rape during her second murder. Individuals with PTSD tend to have relatively low levels of cortisol, a stress hormone that regulates the release of norepinephrine, which, as was stated previously, controls emotional responses and tends to be higher in people suffering from PTSD. Norepinephrine is activated by the presence of stress, and it triggers the hippocampus to store the stressful input in long-term memory (1),(6). This is believed to explain why highly emotional events can be recalled so vividly. More dangerous, however, are highly traumatizing events, during which malfunctions may occur to the extent that memories are formed more strongly than normal, leading to flashbacks or other visual recollections (6). As she continues to murder and rob various men, she seems to do so with an air of stoicism. Supposing she was suffering from PTSD, such emotional numbness can be explained through the increased presence of hormones linked to stress, such as natural opiates. The levels of these opiates produced in individuals with PTSD tend to be abnormal and have the effect of disguising pain, but for longer periods of time than would normally occur (6).

However, the portrayal of what happened on film and the perceptions of the police who interviewed her, the reporter who researched her life and the jury who condemned her to death (7),(3) lead to a different question regarding Lee's killing spree. What if she knew exactly what she was doing and planned each murder? Given her spotty record, this should come as no surprise: women convicted of homicide in which the victim is someone other than a family member usually have a pre-existing criminal record, and are considered to follow the male blueprint for criminal behavior (8). Given this information, we are forced to look at other biological factors that may have played a role in such a tragic expression of aggression. A possible explanation is the existence of abnormalities in the orbitofrontal cortex or in the amygdala, both of which have been cited to function abnormally in the brains of murderers and/or psychopaths (9),(10). The orbitofrontal cortex is part of the prefrontal cortex, which works to inhibit impulsivity (11) and plays a role in the decision-making process. Individuals with noted anomalies in this area tend to have problems controlling aggression (9) and are unable to correctly associate certain behaviors with being either good or bad. The amygdala, as mentioned above, regulates fear responses. Thus, a malfunctioning amygdala would most likely not produce the fearful and empathic responses (9) that would prevent a person from committing a crime such as murder, and repeatedly so, thus allowing aggression to be acted out without inhibition.

Genetics may also have contributed to Wuornos' predisposition toward remorseless actions. Her biological father was reported to be a child molester with an extreme case of antisocial personality disorder (7),(3). Individuals with personality disorders have been found to have lower cerebrospinal fluid (CSF) levels of 5-hydroxyindoleacetic acid (5-HIAA), which correspond to high levels of aggression. It has been found that low levels of CSF 5-HIAA may be genetically inherited and thus may confer a susceptibility to aggression (12).

There is nothing that can unquestionably determine the actual cause of aggression and its more severe consequences. There are certain factors that can be observed, but the biological, psychological and social aspects of a person's life are intertwined so intricately that it would be impossible to fully understand or to answer the simple question of why. Science is a collection of observations, and the life and death of Aileen Wuornos is indicative of this.


References

1)The Neurobiology of Child Abuse, from Scientific American

2)Washington Post, article on Aileen Wuornos by her biographer

3)Crime Library , another story about Aileen Wuornos: The Myth and the Reality

4)Borderline Personality Disorder, from the National Institute of Mental Health

5)The Consequences of Violence Against Women, from Scientific American

6)Posttraumatic Stress Disorder, from the National Institute of Mental Health

7)Serial Homicide, Mind of a Killer: an Investigation of Serial Homicide-Aileen Wuornos

8)Wiley InterScience Journals, Journal of Clinical Psychology article on Homicidal Women

9)Into the Mind of a Killer, from Nature journal

10)Predicting Behavior, from Nature Journal

11)Society for Neuroscience, characterizing Violent Brains

12)The American College of Neuropsychopharmacology, the Neurobiology of Aggression


Memory or Imagination: Where Does the Brain Draw a
Name: Mridula Sh
Date: 2004-02-24 06:10:39
Link to this Comment: 8436


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The creation of false memories has recently been the focus of many experimental investigations and has sparked much debate and controversy. This phenomenon has been studied extensively in view of its impact on related conditions such as memory repression and its recovery through psychotherapy. False memories are created when events that were originally imagined or intensely thought about are experienced as real on subsequent recollection.(7). Such falsely implanted memories call the accuracy of memory into question. More importantly, they raise serious ethical questions regarding the legitimacy of psychotherapy and other intrusive therapeutic procedures. Suspected perpetrators of sexual abuse and murder have been convicted in courts of law based on "evidence" provided by such memories that were nonexistent until the victim underwent therapy.(1). This paper will discuss the phenomenon of False Memory Syndrome (FMS) and attempt to find the neurological pathways that lead to its creation.

Nadean Cool, a nurse's aide, went into therapy to help her cope with the effects of a traumatic event experienced by her daughter. Repeated sessions with the psychotherapist involving hypnosis and other suggestive techniques resulted in the resurfacing of memories of abuse that she herself had experienced. She came to believe that she had more than 120 personalities and had been subjected to severe sexual and physical abuse as a child. Once Nadean realized she was a victim of FMS, she sued the psychiatrist for malpractice. Her case was settled out of court for $2.4 million.(3). Nadean is just one of many women who have developed False Memory Syndrome as a result of questionable therapy. Studies have shown that under the right conditions, guided misinformation can very easily blur the boundaries between reality and imagination.

The classic profile of an FMS victim is a white, middle class woman undergoing long term psychotherapy for relief from emotional problems.(6). The psychotherapist she consults, often in an effort to correlate these emotions with past abuse, promotes the development of FMS. The rationale behind such an association lies in the theory that victims of childhood sexual abuse suppress memories soon after the occurrence of such events. These repressed memories induce emotional and physical ailments in adulthood, resulting in the development of what some term Incest Survivor Syndrome. While there is no scientific evidence supporting this theory, therapists often induce the patient to take part in Recovered Memory Therapy (RMT).(6). Techniques of RMT include age regression, hypnosis, art and trance therapy and guided visualization.(1). Other techniques include group therapy sessions and the reading of other accounts of women who have recovered traumatic memories of such abuse. Such "therapeutic sessions" pressure the subject to find memories of abuse even when none originally exist. While such manipulative and confusing procedures "recover" disturbing mental and bodily memories of sexual abuse, their purpose is questionable. Misinformation interferes with accurate recollection of the actual event. Such memories, misunderstood by the patient and miscomprehended by the therapist, result in the creation of false memories leading to FMS.(6). In essence, RMT is a technique used by therapists to generate a diagnosis often based on evidence that is conjured by the mind of the patient in response to misinformation fed to it.

The development of FMS impacts the psychological as well as the social spheres of the patient's life. The patient is encouraged to distance herself from the perpetrator (often her father), members of the family and skeptical friends. Instead, she derives support from other victims of abuse.(6). She gradually loses her sense of the real world and encloses herself within an environment that supports the FMS state. The subject can develop multiple personality disorder, discovering hidden personalities ("alters") whose characteristics are significantly different from each other. In some extreme cases, the patient believes she is a victim of Satanic Ritual Abuse involving the participation of relatives motivated by clandestine satanic beliefs.(6).

FMS raises a number of questions regarding the authenticity of memories of childhood abuse remembered later in life. Where and under what conditions are such memories generated? Are there ways of differentiating a true memory from a false one? Can one erase false memories created as a result of misinformation? These questions have been the focal point for experimental research in areas related to the repression and restoration of traumatic memories and the creation of false memories. The study of false memories has generated evidence that indicates the complex connection between memory and emotion. While strong emotions can either weaken or strengthen real memories, false memories can provoke strong emotion, thereby simulating the creation of real memories.(5). Studies also show that false memories created as a result of the "misinformation effect" show variability depending on both the person and the memory. The only apparent connection is that persons experiencing lapses of attention are more vulnerable to memory distortion.(5).

Researchers working with split brain patients have made some fascinating observations regarding the nature of memory processing in the two hemispheres of the brain. When people are given information, their recollection of it is based largely on their experience. Often it is found that some parts of the recollection are not truly part of the experience. When split brain patients are presented with this information, it is found that the left hemisphere is responsible for the creation of false reports whereas the right hemisphere gives a more factual description.(5). While this is evidence that the two hemispheres respond to data differently, it also opens up avenues for the determination of how and where false memories are created.

One theory supports the view that false memories are the result of erroneous processing of past experience. People create an outline of proceedings and then fill in false events that conform to the outline to develop a recollection of the original experience. Several observations support this view. The left hemisphere specializes in generating such schemata and has the ability to put a memory into context. In an attempt to interpret pieces of information within the larger context, the left hemisphere is constantly seeking meaning and reason behind events. However, when presented with information that is inconsistent with the schemata, the left hemisphere, unable to differentiate between true and false data, constructs an artificial past in place of the original one.(4). These findings are supported by the demonstration that left prefrontal regions of the brains of normal subjects are activated when false memories are recalled. In another experiment to determine the neurological pathway involved in the creation of memory, experimenters PET scanned the brains of volunteers. It was found that while true and false memories activate the hippocampus, only true memories activate the superior temporal lobe.(2). However, PET scans cannot be relied on for accuracy. False memories may be equally likely to ignite the sensory apparatus of the brain as true memories do, as a result of repeated misinformation.(2).

Once false memories are implanted, it is often hard to remove them from memory. Yet studies have shown that propranolol, a beta blocker used in the treatment of patients with PTSD, might prove effective in erasing false memories. Propranolol "interferes with the neurochemical pathway thought to be responsible for making emotionally arousing events more memorable- the beta adrenergic system."(5). Hence, if the creation of false memories relies on activation of this system, then propranolol administration could be effective in the treatment of FMS. However, false memories that are created as a result of fantasies or outright fabrications would be immune to the drug.(5).

This paper has attempted to discuss the phenomenon of False Memory Syndrome and define the neurological processes behind its creation. While this is an area that has seen an explosion of research in recent years, the specific neurological mechanisms that underlie the construction of such memories are yet to be determined. On a cautionary note, it is important not to dismiss the legitimacy of buried memories entirely. While it is true that memories can be implanted, it does not necessarily follow that all hidden childhood memories recovered after therapy are fabricated. Thus the big question is: will research eventually allow one to correctly distinguish between an accurate memory and a false one?

References

1)Dr. Elizabeth F. Loftus, Remembering Dangerously. Skeptical Inquirer (March 1995): An interesting article that traces case studies of questionable techniques in psychotherapy.

2)Sharon Begley, You Must Remember This (false memories). Newsweek (July 15, 1996): Article that investigates parts of the brain activated by memory.

3)Dr. Elizabeth Loftus, Creating False Memories. Scientific American (September 1997): Research article that shows how suggestion and imagination can create false memories.

4)Michael Gazzaniga, The Split Brain Revisited. Scientific American (September 1998): Scientific article on research into brain organization and consciousness.

5)We Can Implant Entirely False Memories. The Guardian (December 4, 2003): Article on research conducted to determine the nature of false memories.

6)John Hochman, M.D., Recovered Memory Therapy and False Memory Syndrome. Skeptic vol. 2, no. 3, 1994, pp. 58-61: An article that investigates techniques of RMT and the creation of FMS.

7)Christine McBrien; Dale Dagenbach, The Contributions of Source Misattributions, Acquiescence, and Response Bias to Children's False Memories. American Journal of Psychology (Winter 1998)


Can You Make Yourself Laugh
Name: Elissa Set
Date: 2004-02-24 07:48:37
Link to this Comment: 8437


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

People often say that laughter is the best medicine. However, how could someone administer laughter to oneself? Most people define laughter as a response to something funny or humorous. What most people do not realize, though, is the complexity that lies behind people's ability to laugh. Laughter has two aspects to it: the neurological part, and the physical part that produces sounds and gestures (4). There are various stimulants that make people laugh, but all of the stimulants cause the same effect in the parts of the brain that control laughter. However, in most cases, laughter can only be stimulated from an external source. Often people cannot simply make themselves laugh, similar to how people cannot tickle themselves. Primarily examining the neurological aspect can explain why that is.

Laughing is a complicated matter. Fifteen facial muscles are involved in laughing. The larynx and epiglottis of the respiratory system also play a vital role in making the gasping noises associated with laughter. If someone laughs hard enough, they may also form tears (4). What is particularly interesting is the cause of these actions. Laughter is stimulated through many parts of the brain. One of the main parts is the frontal lobe, one of the brain's largest regions, which controls one's emotional reactions. Activity is also observed in the cerebral cortex, which analyzes the structure of the humor and then helps the brain understand it; the occipital lobe, which processes visual signals; and the motor sections of the brain, which stimulate the actual physical response of laughter (4). This complex process sets laughter apart from other emotional responses, which are usually concentrated in a specific area of the brain (4).

A lot of recent research has been conducted to study the stimuli of laughter. In 1998, Nature published a paper that studied how electric stimulation caused laughter in a 16-year-old girl. Researchers were trying to map her brain because she was having epileptic seizures (5). They were able to map an area of about 2 cm x 2 cm in her left superior frontal gyrus that always caused laughter when stimulated with an electric current (2). During the test, they would have her do various activities, such as reading a story, naming objects, and hand movements. Whenever her superior frontal gyrus was stimulated, she would laugh and attribute the laughter to the activity she was doing (2). Regardless of what the activity was, she thought it was funny because of that stimulus. Therefore, any kind of stimulus in that region of her brain made her laugh, because they all followed the same pathway.

A similar conclusion was reached when a group of neuroscientists did a study on laughter using episodes of Seinfeld, a comedy sitcom, and The Simpsons, a cartoon show (1). There are two main differences between the shows. One is that Seinfeld uses live characters, while The Simpsons uses animation. Another is that Seinfeld uses a laugh track, a recording of people laughing that is played during funny parts of the show, while The Simpsons does not. Using a magnetic resonance imaging machine, researchers found that both shows set off the same nerve pathway in viewers' brains (1). The study also found that different parts of the brain respond to different parts of a joke. When a participant saw something funny, the posterior temporal cortex and the inferior frontal cortex showed signs of activity, and a few seconds later, when the person responded to the joke, the insula and the amygdala showed activity (1).

Laughter can also be contagious, which is likely one of the reasons why shows using live characters also use a laugh track. One is more likely to laugh when other people are laughing (6). In a study done by Robert Provine, people who were by themselves were 30 times less likely to laugh than if they were in a social situation (6). In his laughter studies, Provine had a group of undergraduate psychology students listen to a toy called a laugh box that played the sound of laughter for 18 seconds. Provine played the laugh box ten times, and had students report how they reacted to the laughter. The first time the laugh box was played, half the students laughed, and 90 percent of them at least smiled (6). However, by the tenth trial, only 3 of the 128 students laughed at the laugh box (6). Hearing and seeing the other students laugh made the laugh box seem funny. It was the combined stimulus of the laugh box and the laughter of other students that evoked continued laughter among the group. However, students could only take so much of the same stimulus. By the tenth trial, most of the students had found the laugh box obnoxious (6).

All of these studies have helped formulate the following deductions. One is that there must be a stimulus in order for laughter to occur. Another is that laughter requires activity in multiple lobes of the brain. A third can be deduced from the conclusions of those studies: laughter must involve some element of surprise. In Provine's study using the laugh box, after the element of surprise was removed, the students found the box annoying. In the case of a joke, the audience does not expect the outcome, and that is part of what makes it funny. In a given situation, when the outcome is unpredictable, the audience is stimulated by the surprise, causing laughter. This is one of the possible reasons why people cannot tickle themselves. Some scientists believe that laughing is a built-in reflex to stimulation of one's skin, yet people cannot tickle themselves (5). Although the signal sent from one's skin to the spinal cord and brain should be the same as when someone else tickles that person, the effect has changed, because there is no tension or surprise (5). One's brain is aware that the stimulation is going to happen, so the action is expected.

Laughter is a topic that should continue to be researched. As the recent studies have shown, laughter is an effect of an external stimulus that is networked through various parts of the brain. Future studies can be done in order to understand how people who have suffered strokes can have episodes of uncontrollable laughter or can lose their ability to laugh completely (3). Understanding the brain's response to humor can also help researchers understand mental illness and depression (3). Science has already discovered that certain parts of the brain control specific functions of the body. Because laughter activates many parts, it can be an encompassing topic of study that helps clarify the relationships among the various lobes and regions.


References

1) Brain's Funny Bone , a study about laughter using television

2) Electric Current Stimulates Laughter , a scientific paper from Nature magazine

3) Finding the Brain's Funny Bone , a study about laughter using MRI scans

4) How laughter works , an explanation for the mechanism of laughter

5) Neuroscience for Kids – Laughter and the Brain , an overview of laughter

6) Provine Laughter , a groundbreaking in-depth study on laughter


Dynamic Mimicry of the Indo-Malaysian Octopus
Name: Michelle S
Date: 2004-02-24 08:31:43
Link to this Comment: 8438


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Recently, researchers have discovered the existence of a truly unusual type of octopus. The species, known as the Indo-Malayan octopus, has the ability to alter its shape, form, and color pattern to mimic or imitate other sea creatures in order to avoid predation (2). The discovery of the mimic octopus is noteworthy because no other type of cephalopod is known to have impersonation abilities. The octopus is also not limited to one imitation: researchers have observed up to eight different formations. The alterations occur depending upon the appetite, surrounding environment, and proximity of predators the octopus encounters (1). In analyzing the formations, behaviors, and predators of the mimic octopus, it is important to isolate the origins of this exclusive, and highly intelligent, defense mechanism. Is this means of protection an evolutionary development, one that allows the cephalopod a better chance of survival? Or is it the result of observed behaviors, where the mimic octopus becomes aware of the relations occurring in its environment and successfully imitates a species based upon that species' ability to subsist when dealing with dangerous predators?

The existence of mimic octopi is restricted to the islands of Indonesia, specifically off the coasts of Sulawesi and Bali (3). Surprisingly, the octopi have been viewed during daylight hours, generally residing near sand tunnels and holes (1). The octopi favor these mounds because they provide a significant source of food, including small worms, fish, and crustaceans. The octopus utilizes its arms to feel for prey, and then captures the food with its expanded webs. However, when the animal is attempting to hide itself from possible enemies, the Indo-Malayan octopus can transform itself into a variety of organisms, including fish, sea snakes, and anemones. If the octopus observes a cluster of damselfishes, it will change into a lionfish by swimming above the ocean floor with arms extended beyond the body (2). The lionfish is known to possess poisonous spikes, which successfully deter the damselfish from preying upon the mimic octopus. Another possible transformation is the sole fish: the octopus propels itself in a similar manner by forming a leaf-shaped profile that moves across the ocean floor effortlessly. The octopus's arms are also useful in impersonating the sea snake. Two arms are waved around to appear like a pair of snakes, while the other six are hidden from view. The octopus also changes its color, creating yellow and dark bands across the exposed arms. Other variations employed by the mimic octopus include the sea anemone and the jellyfish (3).

The phenomenal behavior of the Indo-Malayan octopus has left researchers wondering how this trait developed or was acquired. The ability has not been observed in any other species of cephalopod, despite their lack of a strong internal or external skeleton, a body type ideal for imitation. Studies and observations of these animals within their habitat point to a variety of possible explanations, both evolution-oriented and behavior-oriented, for the development of this talent. The Indo-Malayan octopus has been able to copy those animals known to produce poisons, such as fish with toxic glands, and anemones and jellyfish known for their stinging powers. This selectivity appears to support a behavioral influence on the octopus's capacity for imitation, since the animal has singled out species known to contain toxins (4). A related perspective holds that the characteristics are not primarily for defense but serve to attract sexual mates (2). The idea is that females are more likely to mate with those who can transform into a larger number of sea creatures. The problem with this theory is that both female and male octopi showed mimic mannerisms even when isolated from each other; if mimicry were a mating display, it would be expected to occur only in the presence of potential mates. Therefore, the trait is much more likely to be something examined and observed by the species over a long period of time.

However, there is a considerable amount of evidence supporting the idea of evolutionary development. Cephalopods are known to have the ability to duplicate their surrounding environment by creating colors and patterns similar to the background. For example, the reef squid has the ability to camouflage itself among a group of parrot-fish. Yet none of these organisms can accurately mimic so many different types of sea creatures. Since the species began with the aptitude to emulate an environment, evolutionary theory would explain a new advancement in the area of predatory defense. The progression of mimicry is based upon an organism revealing innovative formations that have not occurred within the species before (7). The octopus has developed the ability not only to mimic its surroundings, but to mimic a number of other creatures. This dynamic mimicry gives the Indo-Malayan octopus choices, allowing it to tailor its behavior toward specific adversaries. This also explains why the trait is such a rare occurrence: as more creatures obtain the ability to imitate, the less effective the trait becomes in avoiding enemies. Predators would eventually become aware of imitation and develop the ability to spot charlatans (2).

Evolutionary assumptions also help in explaining the relative toxicity of the mimic octopus. It is currently unknown whether the octopus is poisonous, and whether the level of poison changes with the alteration in appearance. Theorists assume that the mimic has the same potential for poison whether it is perceived as a lionfish or a sea snake. This is because the entire act of imitation reveals that the animal is engaging in predatory deterrence. It is most likely that the octopus imitates in order to avoid encounters; it does not have the available toxins to truly be a danger.

The evolutionary theory seems to explain more of the octopus's behavior and development. If the assumption is made that the mimic octopus obtained the behavior through instinctual means, then possible lines of inquiry include what advancements in mimicry might occur in the next thousand years, and what behavioral traits predators might develop in order to defeat camouflage defenses. Is water a more encouraging environment for camouflage behaviors? Are the qualities found in the octopus's imitation similar to the imitations that occur in cellular diseases, such as cancer? The support of evolution as a basis for the growth of mimicry merely provides a foundation for the direction of future work in the area, since most of what is known of these octopi is conjecture.


References

1)ABC's News Article Homepage, General Article

2)The Royal Society Articles Server, Indo-Malaysian Octopus Article

3)National Geographic Website,General and Related Articles

4)News Scientist Website, General Article

5)Science News Website, General Article

6)For Romeo Website, Small Article with Good Picture

7)UniScience Website, General Article


"On Becoming A Person: A Therapist's View of Psych
Name: Jennifer S
Date: 2004-02-24 09:18:36
Link to this Comment: 8440


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

On Becoming A Person: A Therapist's View of Psychotherapy
Carl Rogers
"Life, at best, is a flowing changing process in which nothing is fixed." ((1),222)

Today most bookstores have entire sections designated for self-help books, consumer's guides to psychological illnesses, and how-to guides for recovery from mental illness. In the 1960's, however, mental health remained a veiled science. People spoke about psychology in an unfamiliar language; it was a topic that made many uncomfortable. Freud felt that therapy had to be frustrating and increase the anxiety of the patient to allow him to improve, and it was generally assumed that therapy was to be difficult and unpleasant. Carl Rogers's book, "On Becoming A Person", revolutionized the psychological literature. While his intent in publishing this book was simply to make his material more widely available to other practitioners, he found that everyone from housewives to lawyers sought out the book, and over a million copies were sold. Rogers had not expected this, and his reputation was adversely affected by the success of the book among those outside the behavioral science fields.
The book, "On Becoming A Person", was originally released in 1962. At that time, the Freudian school of thought prevailed. Studies were demonstrating that psychoanalysis actually made many patients less reflective, less comfortable, and less able to function at a high level socially. In spite of these facts, the silent psychoanalyst behind the couch continued to thrive throughout the fifties and into the early sixties. Studies were beginning to explore behavioral science, but the science was in its infancy. While little was known about neurochemistry, scientists were exploring stimulus-response conditioning, with experiments that attempted to show that patients simply reacted to the rewards they received. Robots were constructed to reward patients in mental institutions for good behavior, and psychiatrists believed this was the wave of the future. Rogers examined these studies, and found them dehumanizing and flawed. He felt out of place in psychology, as he questioned the ideals of the time, often to the dismay of his supervisors. Rogers proceeded to explore behavioral science in a mode similar to the pedagogy of serendip, the idea of "getting things progressively less wrong."((2)) While Rogers recognized that the current methods of psychotherapy in the 1950's were inherently flawed, he worked to apply the concepts he learned in his formal study to his independent research to find a less wrong way of treating patients. Rogers participated in many lectures and freely admitted to his ignorance in behavioral science, but his desire to learn and enjoy the process of learning inspired others and allowed him to make a lasting impact.

In a debate with Dr. B. F. Skinner, who was eager to objectively map out the brain and to diagnose and determine physiological reasons for everything, Rogers argued that all studies man ever completes will be subjective. Rogers felt that, because the hypotheses and ideas a man brings to his research inevitably affect its direction and outcome, research can never be completely objective. Rogers was questioning the core of science: the idea that man could objectively acquire and amass information. His argument resonates in our time as clearly as it did in the fifties. Man always has a choice, a choice to pursue what interests him, and in pursuing these interests, he brings along a set of core values he has learned and chosen to accept as his own. This inherent subjectivity empowers us to pursue ideas that appeal to us.

Research that sought to label every action of man as somehow out of his personal control, as a biological impulse or reaction, seemed to strip man of free will and dignity. Determining the basis for behavior is a debate that spans nearly all disciplines. Philosophers, theologians, and laypeople alike ponder this point: what causes behavior? In discussions, people are often vehemently split on this point. Hoping to preserve the enigma of humanity while researching in behavioral science, Rogers felt that research in the behavioral sciences should be based on the following values:
"Man as a process of becoming; as a process of achieving worth and dignity through the development of his potentialities;
The individual human being as a self-actualizing process, moving on to more challenging and enriching experiences." ((1), 396)
His fears, in a time before Prozac, before the frantic rush to label active children with ADD and medicate them, are quite similar to the fears raised today about over medication. He feared that the study of behavioral science, if done with values that did not promote the individuality of man and preserve the idea of free will, could lead to the elimination of creativity and a conforming society. How similar is this to a parent's fear that his child's individuality will be eliminated or altered by the administration of Ritalin? While Rogers did not live to see the psychopharmacology craze of the 1990's, he certainly recognized the power of behavioral science to create a happy, docile, homogenous society. Rogers concludes his lecture hopefully, stating:

"Unless as individuals and groups we choose to relinquish our capacity of subjective choice, we will always remain free persons, not simply pawns of a self-created behavioral science." ((1), 401).

Rogers's beliefs and study seem to center around the inherent value of the part of the mind that is not understood. He didn't label the unknown territory of the brain, what we've come to call the I-function, but he respected it. His contemporary, Dr. Skinner, felt that what we consider to be free will is just the part of the brain that we cannot explain yet. Similarly, in a class forum the I-function was called a default system, a place to put the ideas we couldn't explain. ((3)) The I-function is a known function whose process is not understood. It is the core of our humanity: our belief in our free will and our ability to evaluate our decisions independently is what we see as setting us apart from other animals. This raises the question that still puzzles scientists and philosophers: what constitutes behavior? If biology does not equal behavior, what unknown elements allow us to behave the way we do? Rogers seems to have felt that research in the behavioral sciences was only useful to the extent that it bettered humanity. Mapping the entire brain would be an incredible feat, if ever accomplished, but what would this do to society? Stripped of free will, how would we come to terms with life?

Rogers's research sought to explore man's life as a process, a continuum, that would never be completely understood, but he hoped the discoveries made could better the lives of many without stripping them of their individuality. Rogers independently researched the outcomes of psychotherapy as quantitatively as possible, and was one of the first psychotherapists to record sessions for further analysis. In his research, which was based entirely on psychotherapy, not medication, he sought to preserve the idea that man was a unique and independent entity. Displeased with what he had learned from psychoanalysis and other prevailing theories, Rogers set out to do what he called "negative learning." When the ideas provided to him through formal education failed him, he pursued other options. Rogers coined the term "client centered therapy", a term still in use today. His overarching belief was that, through a constructive relationship with a patient in which he was "real", he would be able to help the patient learn things about himself and influence how he acted in his other relationships, so as to become more successful and happy in life. He set out several models to explore what aspects had to be present in order for the relationship to be therapeutic. If a patient felt he was working cooperatively with the psychotherapist to solve a problem; if the psychotherapist was trustworthy and communicated this to the patient; if the patient could be allowed to express his thoughts free from external evaluation; and if the therapist could view the goal as a process of becoming, then, he stated, therapy would succeed.

The ideas Rogers raised are helpful not only in therapy but also in education. While recognizing the flaws of the education system, Rogers applied the concepts he saw as effective stimuli for learning in the classroom. A class without teachers, lectures, or examinations was his ideal, but this ideal wasn't readily approved by any university. Instead, Rogers attempted to create an environment where students and faculty seek a solution to a problem or problems, pushing collectively away from flawed ideas and using resources collaboratively to advance to a less wrong idea. Rogers suggests we see examinations not as markers of the material we've learned, but as necessary tickets for entrance into points in life, such as graduate school. If we as learners could come to value the process of learning more than the examinations and final grade, what could we achieve together? Rogers states that the ideal is not a stasis but a constantly flowing process that we can allow ourselves to become engaged in, and that in becoming part of this process, we can achieve what he considers to be the good life. ((1), 184-196). Learning, too, is a process of constant change, of growth in our own knowledge and in the generally accepted ideas of society. By accepting knowledge as a fluid concept, we can further our enjoyment of life and our academic pursuits.

References

1) Rogers, Carl R. "On Becoming A Person: A Therapist's View of Psychotherapy." 1961. Houghton Mifflin Company, New York.

2) "Science as getting it less wrong." Paul Grobstein, Bryn Mawr College serendip website.

3) Class discussion on the I-function, forum for Biology 202, on the serendip website.


You couldn't catch it if I threw it at you: A neu
Name: Erin Okaza
Date: 2004-02-24 09:51:31
Link to this Comment: 8447


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

I was 15 and volunteering at my brother's school for children with learning and developmental disabilities when I met Blake Matheson. Unlike most of the other kids, Blake had cerebral palsy (CP) and was confined to a wheelchair. At the time, I didn't know anything about CP and remember nothing but sweaty palms and racing thoughts as I approached him. After an awkward moment of introductory silence, he started asking me questions. Not only did we become friends, we introduced each other to aspects of our worlds we would never have otherwise known. Though my family's move the following summer ended my time at the school, my humbling experience with Blake remains a constant echo of people's tendency to make assumptions (however innocent) about others based on shallow observations of outward physical appearance and behavioral differences, and, in the end, to walk away with nothing.

The goal of the following discourse is to provide a useful way of thinking about cerebral palsy in the context of the nervous system. I hope that such an examination will enhance our understanding of behavior associated with CP and ultimately dispel common misconceptions. First, I will explore why box models of the nervous system are useful in explaining CP. Next, I plan to investigate how recognized differences in the nervous system provide useful ways of thinking about specific CP mechanisms and treatments. Finally, I will close with an evaluation of the nervous system's limitations in defining behavior. In effect, this discussion will demonstrate that cerebral palsy is yet another condition consistent with observations characterizing the notion of brain = behavior.

CP occurs as a result of irreversible damage, before, during or after birth, to the networks of brain cells (neurons) and connecting "cables" (white matter) that control movement. In effect, it is not a disease that can be "caught," but a medical condition dealing with muscle control that affects posture and movement (1). CP is a generic term that covers four distinct cerebral palsies - spastic, athetoid, ataxic, and mixed. In addition, further classification of CP is characterized by body location: quadriplegia (all four limbs), hemiplegia (one side of the body) or diplegia (either in both legs or both arms) (3).

Thinking about behavioral outcomes in terms of boxes is especially helpful in the case of CP. Such analysis offers an explanation of the occurrence of the different CPs all within the realm of motor disability. Behaviors associated with the various types of CP change in relation to the severity and location of brain damage. Athetoid cerebral palsy, caused by damage to the basal ganglia, is characterized by the lack of coordinated smooth movements; ataxic cerebral palsy, evidenced when there is damage to the cerebellum, hinders depth perception and balance; spastic cerebral palsy, mainly caused by damage to the motor cortex, results in stiff, difficult movement; children with mixed cerebral palsy may display a combination of two or more of the above types (2). This suggests that behavior is highly dependent on brain organization. Damage to the white matter of the brain does not result in a random expression of behaviors; instead, such damage severs specific internal interconnections linking "boxes," producing a very specific behavioral expression consistent with damage to only that compartmentalized region.

Further developing the function of "boxes" in explaining CP is the notion that physical and behavioral differences in the nervous system do not necessarily imply mental retardation or a learning disability. In the case of Blake, I noticed that with the help of communication devices, he was able to relay and communicate ideas which were often far more developed than those of his "normal" looking peers who, unlike Blake, struggled with learning disabilities. The above discussion about specific "boxes" generating specific outcomes implies that intellectual outcomes are independent of motor outcomes. Thus, causality cannot be used to assume that an individual with behavioral differences automatically has cognitive disabilities. Only one-fourth to one-half of children with CP experience some type of learning problem such as a learning disability (1). It is important to note that individuals with learning disabilities are usually within the normal range of intelligence, as opposed to those with severe learning problems such as mental retardation, where intelligence is below the normal range (4). It is also important to note that many "tracts" run between different "boxes." This suggests the existence of many pathways to achieve the same outcome. In effect, it may be possible for other interconnections (i.e., axon bundles) to take over in the event that damage occurs in one "tract" and still produce the same result. Causality might be determined only if the severity and location of the same motor-hindering brain injury also affect the internal interconnections between "boxes" of the brain specific to the facilitation of intellectual outcomes, and other non-affected white matter interconnections cannot compensate to recreate the output (5).

The second area of examination investigates the extent to which we can use observations about the nervous system to explain why people with cerebral palsy behave differently. To conduct a thorough analysis, the focus will be placed on spastic CP, as it accounts for 80% of all CP cases (2). Spastic CP occurs as a result of abnormal motoneuron excitability (8). Under normal circumstances, muscles usually have enough tone to facilitate movement and maintain posture while adjusting for speed, gravity, and varying flexibility. Movement occurs as sensory nerve fibers report how much tone the muscle has, relaying the information "to tense" to the spinal cord, which then carries the message to the brain (7). The command to reduce muscle tone travels the opposite path, from nerves in the brain via the spinal cord. These two processes work in tandem to coordinate smooth muscle movement and strength. An individual with spastic cerebral palsy, on the other hand, cannot control the muscle's amount of flexibility. In effect, the relay from the muscle floods the spinal cord and creates a muscle that is too tense (spastic) (6). The inability of the nervous system to coordinate the stretch receptors, sensory neurons, and interneurons in the spinal cord creates stiff muscles, limits stretching, and hinders muscle range. Over time, spasticity becomes the major cause of physical deformities in limbs (1).
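The push-pull relationship described above, an excitatory "tense" relay balanced against a descending "relax" command, can be caricatured in a toy simulation. This is purely an illustration for intuition, not a physiological model: the gains, update rule, and numbers are invented, and real stretch reflexes involve many more neuron types.

```python
# Toy sketch of muscle-tone regulation (hypothetical numbers throughout).
# Tone is pushed up by the excitatory sensory relay and pulled down by the
# inhibitory command descending from the brain. Weakening the inhibitory
# pathway, as the essay describes for spastic CP, leaves tone unchecked.

def simulate_tone(inhibition_gain, steps=50, stretch=1.0):
    tone = 0.0
    for _ in range(steps):
        excitation = stretch + 0.5 * tone    # sensory relay: "to tense"
        inhibition = inhibition_gain * tone  # descending command: "relax"
        tone += 0.1 * (excitation - inhibition)
        tone = max(tone, 0.0)                # tone cannot be negative
    return tone

# Intact inhibitory pathway: tone settles at a moderate equilibrium.
normal = simulate_tone(inhibition_gain=1.5)

# Weakened inhibitory pathway: excitation dominates and tone climbs far
# higher, the runaway "flooding" the essay attributes to spasticity.
spastic = simulate_tone(inhibition_gain=0.2)
```

With the intact loop, the inhibitory term grows faster than the excitatory one, so the system self-limits; with the weakened loop, each step adds more excitation than it removes, which is the qualitative point of the tandem-pathway description.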

Knowledge of abnormal motoneuron excitability in the nervous system is used to create CP management techniques specific to various types of spasticity. The first technique, selective dorsal rhizotomy (SDR), is currently the only permanent procedure that reduces spasticity and is favored in young children with velocity-dependent spasticity (10). SDR involves cutting the hyperactive sensory nerve fibers that originate in the muscle and enter the spinal cord rootlets, so as to reduce message flow to the muscle (7). In effect, nerve cells in the spinal cord receive less information from the muscle's sensory neurons, resulting in a more even distribution of nerve cell traffic in the spinal cord. Another relatively new method is the intrathecal baclofen pump, used for patients with diffuse spasticity. It addresses the nervous system's failure to release gamma amino butyric acid (GABA), a chemical neurotransmitter that signals the relaxation of the lower back and leg muscles, producing an inhibitory effect on the thalamus (4). When baclofen is injected into the spinal cord, it mimics the functions of GABA, blocking abnormal nerve signals and allowing for greater muscle control (7). In the end, both treatments address the muscle neuron's inability to send controlled messages along an interneuronal mechanism, resulting in improvements in standing, sitting, walking, and balance control. Though the two methods clearly use different mechanisms, both have gained positive responses.

Despite the fact that neurobiological advancements have enhanced our current understanding of cerebral palsy, there are limits to the extent to which behavior can be explained by the nervous system. Currently, all treatment for cerebral palsy focuses on symptom management. Little is known about the exact nervous system interactions that cause the death of white matter tissue, or about why CP primarily affects motor function (7). Prevention of cerebral palsy can only be addressed once researchers understand the process of normal brain development and what mechanisms go awry during development, causing the nervous system anomalies that are observed as behavioral differences (9). Once this is understood, comparisons might be made between brain and nervous system function in CP and non-CP development to investigate the exact mechanisms leading to brain damage, and the possibility of preventive measures. The key to understanding brain development lies within fetal development. We can apply our observations about the box model's usefulness in characterizing cerebral palsy behavior to ask questions about what happens during this time of rapid cell division. At what point do brain cells specialize into different types? How do they know where to assemble in their respective parts of the brain? We can deepen these developmental questions by asking about the process by which white matter develops and the nature of the connective branches that form crucial connections with other brain and nervous system cells.

CP presents us with yet another example of how the "brain" generates sets of behaviors unique to its construction and organization. People with CP lack the ability to control their motor faculties due to neurodevelopmental impairments caused by damage to specific areas in the white matter of the brain. This behavior is consistent with the severity and location of the damage, as generalized by the four major types of palsies. The CP brain accounts for the differences caused by brain damage and produces a slightly different set of behaviors depending on the extent of the damage. Though the nervous system is useful in explaining CP behavior, it does not account for all aspects of behavior. This leaves us with suggestions about what we should look for in the nervous system, particularly in the area of developmental neurobiology and the implications such research might have for CP prevention. Cerebral palsy offers a unique look at the neurological triumphs of medicine while simultaneously presenting a humbling reminder that we are all at the mercy of our own misunderstandings. Though scientific shortcomings are reconciled through trial and error, education is the only way by which clarification and personal understanding are achieved. Maybe this discussion about CP has, in a small way, continued where Blake and I left off 7 years ago.


References

1) Miller-Dwan's Regional Rehabilitation Medical Center, specifically devoted to providing information about spastic CP
2) University of Virginia Medical School, Children's Center, tutorial for cerebral palsy
3) About Cerebral Palsy, information focused on specific types of cerebral palsy
4) American Association on Mental Retardation, provides distinguishing characteristics between mental retardation and learning disabilities
5) Cerebral Palsy Resource Center, information and links about treatment, diagnosis, care, etc.
6) University of Alabama, defines the mechanisms of spastic CP
7) St. Louis Children's Hospital, surgical treatment options for spastic CP
8) Kennedy Krieger Institute, general overview of CP and current research initiatives
9) National Institutes of Health, general overview of CP
10) Ontario Federation for Cerebral Palsy, information about spastic CP


The Mind's Eye? A Look at Optical Illusions
Name: Ghazal Zek
Date: 2004-02-24 10:14:56
Link to this Comment: 8449


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Epicharmus, a Greek poet and originator of Sicilian Comedy (1), is credited with saying that "the mind sees and the mind hears. The rest is blind and deaf." (2) Although Epicharmus' idea was conceived around 450 BC, it is interesting to apply it to our modern understanding of optical illusions, if we understand an optical illusion to mean a "false visual perception" (3). One type of optical illusion that specifically interests me elicits the illusion of motion. These "motion perception" (4) illusions provide exceptionally striking visual effects, usually of a stationary figure appearing to rotate. Using motion perception illusions as a model, can we use Epicharmus' notion that the "mind sees" and the "rest is blind" (in this case, the eyes) to explain the phenomenon of optical illusions?

It will first prove helpful to understand how the eye works. When we see an image, our eyes are actually receiving light, which enters through the cornea. The cornea bends the rays of light before they reach the pupil. The rays of light then pass through the lens and bend toward the retina. (2) The retina, however, captures an inverted image. There is a layer of photoreceptors (among other types of neurons) on the retina which are used to measure light intensity in a way that then allows the rest of the nervous system to understand the signals. In humans, as well as most animals, nerve cells found in the eye are organized into a "lateral inhibition network." Before the signals are sent to the brain through the optic nerve, the lateral inhibition network, along with other organizations of neurons on the retina, process them. (5)

The lateral inhibition network actually "throws away" a significant amount of information. So, is lateral inhibition helping or hurting our ability to see? Lateral inhibition consists of excitatory input from some photoreceptors and inhibitory input from other photoreceptors. Equal levels of illumination of the excitatory and inhibitory photoreceptors generate the same output signal. However, when there is a contrasting dark/light border, different output signals are generated. (5) In general, lateral inhibition is able to "fill in" much of the information that it "throws away." In this case, lateral inhibition does not hurt our ability to see. On the other hand, sometimes the wrong information is filled in, and we see the illusion of another image.
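The excitation-minus-inhibition arithmetic described above can be sketched in a few lines. This is a toy one-dimensional model with made-up weights, not the retina's real circuitry: each "cell" outputs its own input minus a fraction of its two neighbors' inputs.

```python
# Toy 1-D lateral inhibition (illustrative only; the inhibition weight
# of 0.25 and the neighbor-only wiring are invented for this sketch).

def lateral_inhibition(intensities, inhibition=0.25):
    n = len(intensities)
    out = []
    for i in range(n):
        # At the edges of the array, let a cell act as its own neighbor.
        left = intensities[i - 1] if i > 0 else intensities[i]
        right = intensities[i + 1] if i < n - 1 else intensities[i]
        out.append(intensities[i] - inhibition * (left + right))
    return out

# Uniform illumination: excitation and inhibition balance the same way
# everywhere, so every cell reports the same signal.
flat = lateral_inhibition([10, 10, 10, 10, 10, 10])
# → [5.0, 5.0, 5.0, 5.0, 5.0, 5.0]

# A dark/light border: the output dips just before the edge and spikes
# just after it, exaggerating the contrast at the boundary.
edge = lateral_inhibition([10, 10, 10, 20, 20, 20])
# → [5.0, 5.0, 2.5, 12.5, 10.0, 10.0]
```

The dip-and-spike at the border is the point of the essay's "same illumination, same output; contrasting border, different output" observation: the network trades away absolute intensity information in exchange for sharpened edges.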

The motion perception model (also called the "peripheral drift illusion") that I would like to discuss is called "rotating snakes" (6). Here, the viewer sees the image of rotating coils of snakes, whereas the actual image is quite stationary. It is important to note that there exist many different regions of color contrast in this illusion, and that it relies heavily on peripheral drift. In general, illusory motion follows a pattern of moving from a black region to an adjacent dark gray region, or from a white region to an adjacent light gray region. Factors such as curved edges and shorter edges enhance the peripheral drift. (7)

Although Epicharmus could not explain the phenomena of peripheral drift or the lateral inhibition network, his idea that the "mind sees" and the "rest is blind" still raises some interesting points. With regard to peripheral drift illusions, the image the viewer sees is in large part a product of the "brain." The eyes, therefore, do not behave as a camera does; they cannot simply capture an image independent of a lateral inhibition network, independent of the brain's involvement. However, simply because the brain may be involved in our sight does not mean that seeing is necessarily a "conscious" effort. For example, the lateral inhibition networks work as a part of the unconscious brain (5). In effect, no matter how hard one tries to avoid being fooled into motion perception, one cannot do it (unless one is an appreciable distance from the image, thereby lessening the strength of its light/dark regions).

So, while it is the mind, or part of the mind that is deciphering the rays of light picked up by the eyes into meaningful images, it may be working semi-independently from other parts of the brain which are used for logical thinking or problem solving. In other words, when it comes to peripheral drift illusions, we cannot think our way out of seeing something that is not there. On the other hand, we can know that what we are seeing is in fact, an illusion (although not necessarily instantaneously).

Clearly, no two brains are alike, so we can infer that no two people see the same thing in exactly the same way. By and large, however, the patterns of vision are similar, especially with regard to motion perception illusions, because of the way the eyes (and brain) work. Knowing that what we see is not exactly a snapshot of the world can be a disheartening notion. However, when we realize that the way in which we view the world is unique and subject to a system as complex and evolved as the human brain, our view of the world does not seem so disheartening after all.


References

1)Encyclopedia Britannica: Epicharmus
2)Are you seeing what I'm seeing? By Keith Gaudet, A simplified explanation of how the eye works and perceives illusions.
3)Encarta.msn.com dictionary definition of "optical illusion"
4)Sap Design Guild's Optical Illusions, A nice web resource for the different types of optical illusions
5)Serendip: Lateral Inhibition, A rich resource from Bryn Mawr College about how the eye works
6)Optical Illusions: Rotational motion, A website containing rotation motion optical illusions, namely the "Rotating Snake Illusion"
7)Phenomenal Characteristics of the Peripheral Drift Illusions: Vision: Vol. 15 No. 4 261-263, 2003, An article from the Journal "Vision" explaining the phenomenon of peripheral drift.


Scrutinizing Timmy and Lassie: A Behavioral Explor
Name: Ginger Kel
Date: 2004-02-24 10:45:13
Link to this Comment: 8450


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"A dog teaches a boy fidelity, perseverance, and to
turn around three times before lying down (1)."
-Robert Benchley

In United States homes, people do not dominate—pets do. Today, Americans own 377.8 million domesticated animals, 65 million of which are canines/dogs (2). When surveyed about why these families own pets, words like "companionship, love, company, and affection" frequently popped up (2). In recent times, as the above quotation demonstrates, pets, but more specifically dogs, have been bestowed with near humanity. They are parts of our families, our guardians, and our best friends. What causes humans to venerate dogs so? What is it in our nature that makes us compatible with such a different species? The answers to these queries lie in the behavioral common ground between man and dog.

Canine roommates are not a recent phenomenon. It has been estimated that dogs were domesticated as far back as 15,000 years ago in East Asia (3). Dogs were a form of livestock. "People must have gained some advantage by having this domestic animal at that early time...Dogs may have been used as sentinels, for transport, and for herding in hunts (3)." The task of taming wolves required energy, energy that could have been used in some other venue of daily life, which makes domestication a costly process. However, the function performed by dogs made it a worthwhile cost. Why is this? "Humanity's first obligation is to ensure humanity's survival (4)." The use of canines afforded human beings a hereditary advantage.

Why did man form a partnership with canines despite all the other animals available? The origin of the dog/man attraction is rooted in similar lifestyles. The wolf/dog is a predatory creature, and therefore naturally exists in packs. Its home life is a result of its profession. The pack ensures safety for individuals as well as allowing for more profitable hunts (5). In other words, the convention of a pack makes survival easier for the wolf. The pack social system is a hierarchy consisting of an alpha male, an alpha female, and a pecking order of subordinates (6). The alphas (certain dogs are instinctually inclined toward this role) are extremely aggressive and have to be in order to defend their position. Their reward for their paranoia over being usurped is to eat first at kills. Does this all sound somewhat familiar? There's a reason for it; the wolf pack has a great deal in common with the human family. Families, too, are hierarchies consisting of alpha figures and subordinate offspring. They provide protection and resources for the members of the family. As offspring mature, they in turn become alphas that reproduce and support their own families. For Americans, this is the quintessential American Dream. For canines, this cycle is life. Humanity shares its basic social system with canines (6).

How can these patterns be drawn out? Widespread behavioral patterns emerge across species due to the presence of instincts. An instinct "is a behavior that animals exhibit independent of the wide range of learning and experiences of different individuals (7)." Instincts are available to organisms from birth. For example, puppies know to knead on their mother's breast in order to release milk. They are blind and deaf at birth, so there is no way to learn that behavior; it must simply be part of their initial programming. Instinctual behaviors have evolved over evolutionary time to ensure the survival and reproduction of the species (7). In addition to giving the organism basic survival skills, instinct serves as a control device. Nature does not favor those who are unhealthy. Instincts, being such primitive signals, can almost be directly translated from one organism to another. The mounting of a submissive dog by a dominant dog (7) can be equated to a bully checking a smaller child into a wall. By urinating on a tree, a dog is doing little more than a human marking his property with a fence.

Instinct is the beginning and end of the behavioral correlations between men and canines. Our anatomy, especially our neural structure, is vastly different from our dogs'. The greatest contrasts are sheer size and structural differences in the brain. The human brain is roughly 18 times the size of a dog brain (10). The human brain has an exaggerated forebrain with numerous folds to increase surface area, a model better suited for memory. Although similar, the dog brain is more hindbrain-focused; it is better adapted to certain sensory work (i.e., smelling). Another difference lies in the brains' workings: the "genes controlling brain-cell activity are very different between the species (4)."

Behavioral differences between humans and dogs are made clear through learned behaviors. "...Learned behaviors are shaped by experience (7)". For humans, learning how to walk or crawl would be a learned behavior. For dogs, learning how to hunt would be considered a learned behavior. However, the desire to hunt is an instinct. That's why, despite centuries of repression, even the smallest poodle loves to fetch toys. This ability to adapt behavior is necessary for survival (7). The domestication of wolves into dogs was reliant on adapting learned behavior. Certain behaviors, like barking still exist, due to adaptation on the dog's part. All dogs have a very strong territorial instinct to protect their den. When they became companion animals, the dynamic of their pack was shifted. Humans become the alphas, while the dogs were subordinates. As subordinates, their duty remained one of protection. "This explains why dogs often bark at intruders at home... This behavior is often reinforced since the intruder tends to go away, thus convincing the dogs that its protective, territorial behavior works (8)." Learned behaviors are very reflective of the environment and circumstances of the organism.

It has been demonstrated how man and dog are dissimilar, and common traits have been pointed out to explain why human beings would want canines in their lives. However, a connection has yet to be established that shows why dogs are given a "soul" by humans. Somewhere in their 15,000 years together, dogs began to "converge on some of our thought processes (3)." The proximity of living space allowed humans to notice the airs and quirks of dogs. Canines had been forced to accept their owners as members of their pack. When dogs came to be seen as part of families, humans bestowed personalities upon them.

During World War II, British Sgt. Cyril Jones was helplessly caught by his parachute in a tree in the jungles of Sumatra, Indonesia. A wild monkey, perhaps recognizing Sergeant Jones' hunger and vulnerability, gathered bananas and bamboo shoots and fed them to the soldier for 12 days straight. Even after Jones finally managed to cut himself loose, the monkey stayed with him. The animal continued to provide fruit as Jones searched for his regiment (9).

Morality within dogs and other animals is a scientific wormhole at the moment. There is no way to communicate with animals (9), and therefore no way to prove or disprove the divinity of an animal. Do animals have true thoughts and true emotions? Did the monkey take pity upon Sergeant Jones? "Animals, like humans, are capable of experiencing really strong feelings. They can choose to express their emotions through behavior that is virtuous and moral (9)." Another circulating school of thought holds that animals ultimately look out for themselves: if they act in an unselfish manner, it could be because they are acting instinctually, expect a favor in return, or are making sure their pack survives (9).

Dogs and humans made a deal 15,000 years ago: in exchange for their freedom, dogs have had the survival and dispersion of their species ensured by humans. Humans spend over $31 billion a year on their pets (2). They claim dogs bring numerous health benefits, including lower blood pressure, prevention of heart disease, reduction of stress, and even lower health care costs (2). There is something unique about the bond between man and dog. It forces us to face our primitive aspects, and that in itself is healthy. There is something raw, but true, in our differences, in our similarities, and in communicating with something outside our species. Even if dogs are just cute, fuzzy parasites, humanity will be arm in paw with them until the end.

References


1)Mridula Shankar, mshankar@brynmawr.edu "Quotations about Dogs," 19 February 2004, forwarded email (19 February 2004).

2)APPMA Industry Statistics & Trends, from American Pet Products Manufacturers Association

3)Stone Age Man Kept A Dog, written by Kendall Powell for Nature News Service

4)Animal-Based Research: Our Human Obligation, written by Dr. Adrian Morrison in BioOne database

5)Herd/Pack Behavior, written by Tom Rittenhouse

6)ThatDarnDog.com - Understanding Pack Behavior, from ThatDarnDog.com

7)Basic Animal Behavior in Domesticated Animals, by Kimberly J. Workinger for Yale-New Haven Teachers Institute

8)Instinct & Behaviour, from the ACT Companion Dog Club

9)Unbeastly Behavior, by Sara Steindorf for the Christian Science Monitor

10)Comparative Brain Anatomy



Ion Channels and Cystic Fibrosis
Name: Kimberley
Date: 2004-02-24 12:20:02
Link to this Comment: 8452


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Ion channels are a crucial part of all cells. They are responsible for letting ions in and out of the cell, which permits such things as muscle contraction to occur. But do these gated structures ever malfunction? If so, what causes these problems in the channels, and how are they manifested? It was through the disease cystic fibrosis that I attempted to answer these questions.

Cystic fibrosis is a genetically inherited disease in which defective chloride transport is the root cause of its symptoms. The most easily detectable symptom of cystic fibrosis (CF), and the least detrimental, is excessively salty sweat, chloride being one component of salt (NaCl). (1) Other more harmful manifestations of the disease are abnormal heart rhythms and thick mucus, which amasses in the lungs and intestines. The mucus cannot drain normally because of its high viscosity and therefore becomes a breeding site for bacteria. People with CF generally acquire respiratory infections as well as other breathing difficulties; complications involving lung function are the primary cause of death among CF patients. Additional symptoms include enlarged and rounded digits, abdominal discomfort and poor weight gain. (2)

Treatment of CF generally includes ingestion of digestive enzymes to reduce the abdominal problems, antibiotics to prevent lung infections, and thinning of the mucus in the respiratory system for more efficient drainage. These treatments have transformed the prognosis of patients from certain death during childhood to an average life span of 30 years. (2) However, they only reduce the symptoms and do not eliminate the cause; the reason they cannot eradicate the disease lies in the nature of what causes CF.

The disease is caused by an alteration in a single gene on chromosome 7. (3) This gene is responsible for producing a protein that regulates transmembrane conductance; upon discovering the gene and its protein product, researchers named the protein the cystic fibrosis transmembrane conductance regulator (CFTR). In the most common form of CF, this gene lacks 3 nucleotides that code for the amino acid phenylalanine, so the defective CFTR protein is missing a phenylalanine residue. Every time this CFTR is made, the defect is detected in the endoplasmic reticulum (which is responsible for protein synthesis and the insertion of proteins into the cellular membrane), and the protein is marked for degradation, never making it to the cell membrane. Other forms of the disease manifest themselves in slightly different ways. (3)
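
The key point above, that a three-nucleotide deletion removes exactly one amino acid without shifting the reading frame, can be sketched in a few lines of Python. The sequence and the four-codon table below are hypothetical toys of my own, not the real CFTR gene:

```python
# Toy illustration: deleting exactly 3 nucleotides removes one codon
# without shifting the reading frame, so the translated protein simply
# lacks one amino acid (here, phenylalanine).
CODON_TABLE = {"ATG": "Met", "AAA": "Lys", "TTT": "Phe", "GGT": "Gly"}

def translate(dna):
    """Translate a DNA string codon-by-codon using the toy table."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

normal = "ATGAAATTTGGT"                # Met-Lys-Phe-Gly
mutant = normal.replace("TTT", "", 1)  # in-frame deletion of the Phe codon

print(translate(normal))  # ['Met', 'Lys', 'Phe', 'Gly']
print(translate(mutant))  # ['Met', 'Lys', 'Gly']
```

Every downstream codon still reads correctly; only the phenylalanine is gone, which is why the rest of the protein folds almost, but not quite, normally.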

Some CF patients are able to produce CFTR that is inserted into the cell membrane. However, the protein still malfunctions due to disruptions in the function of its nucleotide binding sites. One such mutation in CFTR alters the amount of time the channel stays open, so that it closes at a faster rate than normal CFTR does. (4)

In normal CFTR, nucleotide triphosphates are required for proper function. Two nucleotide binding folds are present in the ion channel, each of which has unique functional traits. Nucleotide binding fold (NBF) 1 helps control both when the ion channel opens and when it closes. NBF 2 is also involved in when the channel opens, but not in when it closes. Adenosine triphosphate (ATP) must bind to CFTR in order for ion gating to occur, but CFTR has many more binding sites for ATP than are necessary for the protein to function correctly. This observation would imply that ATP is important for the extra negative charge that prepares the protein for ion gating. (5)

Much is unknown about the nature of CFTR dysfunction and its relation to lung infections in CF patients. The molecular mechanisms are still being studied, and many hypotheses have come forth over the years, but none of them fully explains the means of viscous mucus production and bacterial propagation. Clearly, the thick nature of the mucus is due to dehydration; if the secretions had more water in them, they would be of a more normal consistency. However, studies have not shown differences in the chloride concentration of airway epithelial mucus between people with CF and those without the disease, so salt concentration may not be the cause of the fluid's dehydration. The dysfunction in CFTR may instead be an inability to clear fluid from the surface of the lungs. (6) Again, the mechanism is still unknown, making any hypothesis a speculation.

Even though the exact molecular cause of CFTR disruption and the effects of poor chloride ion regulation in the epithelial cells are not known, research is being done on ways to cure CF. One approach looks at the regulatory domain within the CFTR protein and its interactions with NBF 1; researchers hope to find a way to keep the ion channel open longer in order to allow more time for ion exchange. (6) This research would only benefit those with the mutant type of CFTR that actually makes it to the cell membrane. The vast majority of CF patients would not gain from it, because their CFTR is degraded before it can reach the cell membrane. Research for that form involves altering viruses to carry the normal gene that produces functioning CFTR protein and infecting CF patients with this virus. This research is slow, however, because patients can build up immunity to the virus, and since the patient must be infected many times, this is a great hindrance to progress. (2)

In answer to the questions posed at the beginning: yes, ion channels can malfunction, and malfunctions can even have a genetic cause. Ion channels can malfunction through improper formation, making it impossible for them to reach the membrane surface. (2) They can also have gating problems, so that the channel does not stay open for the normally prescribed amount of time. (6) In the case of CF, these problems cause a buildup of thick fluid in the lungs and intestines, resulting in chronic infections leading to death. (1)

During the course of writing this paper a connected but different question arose. Since CF is a genetic disease, what are the ethics of two people reproducing who both knowingly carry the recessive trait? (7) There is a one in four chance that their child could have CF, facing a life of physical pain and an average life expectancy of only about 30 years. Is it wrong for two people to become parents when they know that there is a strong possibility that their child could suffer most of his or her life?
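
The "one in four" figure follows directly from the Mendelian arithmetic for a recessive trait. A short sketch, enumerating the Punnett square for two carrier (Aa) parents, where "a" stands for the CF allele:

```python
# Enumerate all allele pairings for two carrier (Aa) parents.
# Only the a/a combination produces an affected child.
from itertools import product

parent1 = ["A", "a"]
parent2 = ["A", "a"]

offspring = [set(pair) for pair in product(parent1, parent2)]
affected = sum(1 for child in offspring if child == {"a"})

print(affected, "out of", len(offspring))  # 1 out of 4
```

The same enumeration also shows that half of the children would be unaffected carriers (Aa), which is how the allele persists in the population.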


References

1)Symptoms of cystic fibrosis, for general questions about CF

2)Welsh, M. (1995, December). Cystic Fibrosis. Scientific American, 52-59.

3)Cystic fibrosis gene

4)New Insights Into Cystic Fibrosis Ion Channel

5)Molecular Structure and Physiological Function of Chloride Channels

6)Pier, G. (2002). CFTR mutations and host susceptibility to Pseudomonas aeruginosa lung infection. Current Opinion in Microbiology, Vol. 5, Issue 1, 81-86.

7)Andre, J. (2000). On being genetically "irresponsible." Kennedy Institute of Ethics Journal, Vol. 10 No. 2, 129-146.


Alliance Strategies in Bottlenose Dolphins
Name: Emma Berda
Date: 2004-02-24 12:45:07
Link to this Comment: 8454


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dolphins have long been considered some of the smartest animals next to humans. They exhibit complex behaviors such as social hierarchy, the formation of alliances, what appears to be suicide (1), and cooperative behavior. (2) This paper will deal with alliance formation in particular. Why do dolphins form these alliances? Is it simply helpful for survival, or is it more complex? How do these alliances compare with human behavior?

Researchers have been studying the bottlenose dolphins (Tursiops sp.) of Shark Bay, Western Australia for quite a long time because the animals there are habituated to humans. They have observed male-male alliances that seem very stable. Male alliances are usually groups of two or three males and can last many years; the association coefficient for some pairs of males is in the same range as that found for mothers and their nursing calves. (3)

So why do males form these alliances? The answer seems to greatly reflect human behavior: to get women. Male alliances typically "herd" females for anywhere from a few minutes to months. (4) These herding events are not usually enjoyed by the females: herding is often forcible, with escape attempts and violence involved. (3) In a herding event, males will surround the female or chase her. Aggression toward the female is common and can include hitting with the tail, head-jerks, charging, biting, or body slamming. (3) Should the female try to escape, which often happens, the males will more often than not chase her. Of course, the ultimate goal of a herding event is sex, and the males in the alliance will take turns to make sure everyone has an equal share. If the alliance has three members, only two will herd the female while the third stays behind. However, the individual who is left behind changes with every herding event, so again all members have an equal chance at mating. (3)
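
The rotation described above, where a three-male alliance sends out a different pair each time, can be sketched in a few lines. This is my own construction to illustrate the equal-access arithmetic, not a model from the cited studies:

```python
# Rotate through every possible pair from a three-male alliance so that
# a different male sits out each herding event.
from itertools import combinations

males = ["M1", "M2", "M3"]
events = list(combinations(males, 2)) * 4  # 12 herding events, rotating pairs

# Count how many events each male took part in.
participation = {m: sum(m in pair for pair in events) for m in males}
print(participation)  # {'M1': 8, 'M2': 8, 'M3': 8}
```

With three males there are only three possible pairs, so cycling through them guarantees every male participates in exactly two of every three events: equal shares, as the field observations report.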

What has just been described is a primary alliance. However, bottlenose dolphins also form secondary alliances, again between males. (3) Say we have a primary alliance A consisting of two males. It may have a secondary alliance with another primary alliance, B, which has three. Now suppose a third primary alliance, C, with three males, is not affiliated with A or B. If C has just herded a female that A or B wants, then A and B will join together and forcibly take that female. (3) If A or B took on C alone, it is unlikely they would succeed, because they would be evenly matched; but working together it is five against three, and the secondary alliance will succeed. The two primary alliances do not both mate with the stolen female: perhaps alliance B will claim her this time, but that means that next time alliance A will get the female. (3) Again we see equal sharing of the "spoils". In the reverse situation, if alliance C comes to reclaim its female from alliance B, B will call upon A, and A will help defend the female from C. (3)

I will briefly touch on the third type of alliance, the super-alliance. (5) A super-alliance is made up of stable alliances and labile alliances. (5) Stable alliances are like the primary alliances described above; labile alliances are ones in which males change partners frequently. The observed super-alliance consists of 14 males, each of whom has 5 to 11 alliance partners from within the super-alliance. (5) Although in theory the males should have no preference for one male over another in alliance formation, in reality there are preferences and avoidances. (5) The super-alliance is another example of the social complexity found in these dolphins.

We know the dolphins form these alliances to get women, but are they looking for sex or to reproduce? Alliances are likely to herd non-pregnant females that are likely to be in estrus. (3) So we can assume that, although fun may be had, the overall goal is reproduction. Since the female is shared equally, inclusive fitness theory predicts that males in an alliance should be related: if the males are related, then a member of the alliance would still increase his own fitness when one of the other alliance members took his turn with the female. Research shows that males in primary and secondary alliances are indeed likely to be somewhat related. (4) However, males in a super-alliance are usually not related at all. (5) Why then would a male choose to be in the super-alliance? One answer could be that since the super-alliance is so big it can basically take on all of the primary and secondary alliances and steal many females, giving its members more access to females. Perhaps that makes up for the fitness lost by not allying with related dolphins.
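
The kin-selection logic here can be made concrete with Hamilton's rule: helping a relative is favored when r * b > c, where r is relatedness, b the benefit to the relative, and c the cost to the helper. The numbers below are purely illustrative, not measured values from the dolphin studies:

```python
# Hamilton's rule: a helping behavior is favored by selection when the
# relatedness-weighted benefit exceeds the cost to the helper.
def helping_favored(r, b, c):
    """Return True if r * b > c, i.e. the behavior can spread by kin selection."""
    return r * b > c

# Ceding a mating turn to a half-sibling (r = 0.25) vs. an unrelated male (r = 0):
print(helping_favored(r=0.25, b=1.0, c=0.2))  # True: a kin alliance can pay off
print(helping_favored(r=0.0,  b=1.0, c=0.2))  # False: needs some other payoff
```

This is exactly the puzzle the super-alliance poses: with r near zero, the rule predicts no benefit from sharing, so the extra mating access must be what covers the cost.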

Do these revelations mean that dolphins may be close to humans on an intelligence level? We can definitely say that dolphins have complex social structures. In fact, nested alliances are quite rare and are really only found in dolphins and humans. A lot of dolphin social behavior and structure is also similar to that of primates, which again suggests that dolphins approach the human intelligence level. (5) But let us look at how these alliances relate to human society. In human society both males and females form alliances with each other (friendships), and these alliances can last for long or short periods of time; in dolphin society it is only the males that form these alliances. In human society, one of the many things that these alliances do is approach members of the opposite sex. The same is true in dolphin society, except that dolphins often approach the females aggressively, while the same behavior in humans (gang rape) is much less common. In dolphins, alliances that go after females are likely to be related; in humans this is less common.

Finally, the last issue I will address is the idea of sex for fun. In my opinion, an animal that has sex for purposes other than reproduction is probably more likely to be related to humans intellectually. Earlier I stated that alliances are more likely to herd non-pregnant females, so reproduction is one of the goals. But there is evidence that dolphins also enjoy sex: dolphins have been recorded having homosexual sex, where there is no chance of reproduction. (6) So perhaps the dolphins in the alliances are also having sex for fun, and since they do not have the worries of fatherly duties, they may as well have sex with non-pregnant females.

In conclusion, dolphins are remarkably social, intelligent, and complex animals. Their social complexity indicates that they may be near the intelligence plane of human beings. I think that the more we study these animals the more we will realize that they are closer than we think.


References

1)Dolphin fact page

2)Seaworld Bottlenose Dolphin Fact Sheet

3)Connor, R.C., Smolker, R.A, & Richards, A.F. 1992a Two levels of alliance formation among male bottlenose dolphins (Tursiops sp.). Proc. Natl. Acad. Sci. USA 89:987-990

4)Krutzen et al. 2002 Contrasting relatedness patterns in bottlenose dolphins(Tursiops sp.) with different alliance strategies. Proceedings of the Royal Society of London series B. 270:497-502

5) Connor, R.C., Heithaus, M.R., & Barre, L.M. 2001 Complex social structure, alliance stability, and mating access in a bottlenose dolphin 'super-alliance'. Proceedings of the Royal Society of London series B 268:263-267.

6)Gay Marine Animals


Heart Attacks: Cause And Effects
Name: Laura Silv
Date: 2004-02-24 14:29:22
Link to this Comment: 8457


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Part of maintaining a healthy lifestyle is to know, at each stage of one's life, what diseases or dangers one faces. Infants, for example, are extremely susceptible to colds because their immune systems are not fully developed. Children under the age of 10 have a very high chance of getting the chicken pox. Men and women between the ages of fifteen and twenty-five are at high risk of becoming infected with HIV/AIDS. Adults over the age of forty become increasingly at risk for having a heart attack. Since February is American Heart Month, I thought this would be an excellent time to research some of the causes and effects of heart attacks.

A heart attack is caused by the build-up of fatty substances, cholesterol, calcium and other substances that make up plaque. Plaque can begin to build up within the inner linings of the larger arteries of the body in childhood, but it takes much longer, usually thirty years or more, for the build-up to escalate to dangerous levels. This process of plaque build-up is called atherosclerosis, a process that is accelerated by high blood pressure, high cholesterol, diabetes, and especially smoking.

Over time the build-up of plaque severely limits the flow of blood to the heart, specifically to the myocardium, the middle layer of the wall of the heart (the outer layer is called the epicardium, and the inner layer is the endocardium). The myocardium is the main muscular layer whose contractions pump blood in and out of the heart. In fact, according to the American Heart Association, "the medical term for a heart attack is myocardial infarction." (1)

Because less blood is getting through to the heart, oxygen, which is carried within the blood cells, also becomes limited. If one or more arteries become completely blocked, a heart attack follows. If immediate treatment, usually surgery to clear the arteries, is not administered, the muscle of the heart becomes permanently injured, and the patient may die or become disabled.

A heart attack can also, though less frequently than by complete blockage of the arteries, be caused by a severe spasm or tightening of a coronary artery, which temporarily cuts off blood flow to the heart. While the causes of artery spasms are not widely agreed upon, it is believed that they may be triggered by smoking cigarettes, heightened stress, or taking certain illegal drugs like cocaine. (2)

Warning signs of a heart attack are varied and usually do not precede an attack by more than five minutes, so it is necessary to act quickly. Such warning signs include prolonged or recurring (over a period of a few minutes) discomfort or irritation in the chest or arms; shortness of breath, which is usually preceded by the afore-mentioned discomfort; and a feeling of being lightheaded.

Treatments for heart attacks vary depending on the severity of the condition and how far in advance it was discovered. Most common is an angioplasty procedure, in which a small tube is placed inside an artery in order to reinstate and facilitate blood flow to the heart. Medications likewise vary from case to case, but most commonly beta blockers are given to patients to, according to the National Heart, Lung and Blood Institute, "decrease the workload on your heart ... [and] to prevent additional heart attacks." (3)

A new approach to preventing heart attacks years before they start is now emerging. In November 2003, Dr. Eric Topol of the Cleveland Clinic and his team of scientists were able to locate the first gene known to directly cause heart attacks. The discovery was made with the help of an Iowa family, the Steffensens, which had suffered from heart attacks for generations. (4) Out of ten siblings, nine had their first heart attack between the ages of 59 and 62, and many have had more than one; the one sibling spared the heart attacks was found not to carry the gene. This particular gene "creates weak artery walls", which makes a heart attack a practical guarantee. Now that the gene has been identified, people who carry it can be identified in advance, before an attack occurs.


References

1 )American Heart Association Online

2 )National Heart, Lung and Blood Institute

3 )National Heart, Lung and Blood Institute

4 )CBS News


Brain Modularity: Links between Evolution, Intelli
Name: Prachi Dav
Date: 2004-02-24 15:57:00
Link to this Comment: 8460


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Questions that arise during an examination of brain structure seem always to proliferate from a point of origin, that is, from the first question asked. One of the simplest questions is: is the brain modular? And the simplest answer is almost certainly positive. However, it is the nature and origin of such modularity that should concern us, although the characterisation of each module is presently beyond our grasp. More recently, with the marriage of psychological and evolutionary concepts, brain modularity has been depicted as structure arising from evolutionary forces, such that the brain constitutes a compilation of adaptations, evolved as solutions to the various adaptive problems in the environments faced by our ancestors. The amalgamation of evolutionary principles and ideas regarding brain structure is primarily the work of Leda Cosmides and John Tooby, who indicate that human reasoning is not a generalised but a specialised ability, and that the reasoning mechanisms (reflected in modules) are devoted to the management of social problems (1). However, such an approach begs certain questions. For example, such an interaction with the social world and its problems requires mechanisms that can remember and track changes in the environment. As Henry Plotkin asserts, this mechanism is intelligence (2). The relationships between intelligence and the modular structure of the brain on the one hand, and between both of these and human culture as we see it today, in its vast and wonderful complexity, on the other, are linkages that evolutionary psychologists are currently grappling with and which will be the subject of the following paragraphs.

The field of "Evolutionary Psychology" has come to be associated primarily with certain fixed principles and with the work of Tooby, Cosmides and Pinker, among others (1). Given that evolutionary psychology strives to discover and explain psychological adaptations and their functions, such psychological adaptations in the brain must be characterised. As they developed in response to various problems in the environment, dilemmas so different from one another that they could not be solved by some abstract, generalised mechanism, they are posited to be represented in the brain as a collection of "modules", which are specialised problem-solving domains. The brain is conceptualised as a container for enormous numbers of these modules, to the extent that some extreme proponents of the theory defend the concept of "massive modularity", which maintains that the brain is riddled through and through by such modules (3).

The modules proposed by Tooby and Cosmides are purported to have evolved in order to contend with a variety of adaptive problems encountered by our Pleistocene ancestors, such as alliance formation, kin relations, sexual attraction and so on (since evolution as a cumulative process requires vast swathes of time, human psychological adaptations are not in accordance with modern life). As these module functions are listed, an obvious problem arises: the difficulty of obtaining evidence for their existence. In this case, then, the desire to discover the universal structure that links together all members of our species is severely obstructed.

Given the obvious existence of cultural and attitudinal variance, evolutionary psychologists hold no expectation of finding common human behaviours and beliefs through the discovery of modular commonality, but instead hope to uncover similarities in "cognitive Darwinian Algorithms" (1), which are then expressed through different behaviours among humans and are context dependent. The assumption that a wide-ranging sample of behaviours may be explained through the indiscriminate application of such modules and evolutionary principles to modern life often leads individuals astray; this endeavour is known as adaptationism. Exaptations, by contrast, are behavioural expressions of functionally empty traits or traits that evolved for different uses, while spandrels are traits which developed as byproducts of others, had no original function, and yet became applied toward a different adaptive function. One claim regarding modern human behaviour originates from Gould (1), who argues that most "mental properties" are not adaptations but spandrels. This concept is given credence by the difficulty of explaining reading, writing and the consciousness of one's mortality as shaped by natural selection. Evolutionary psychologists do not refute these claims; they hold that the complex design clearly evinced by the brain is typical of adapted structure, while the spandrels are products of evolved mechanisms and may be what we see today in the remarkable complexity of human culture.

Human culture, however, concerns another capacity whose evolution engenders yet more questions of increasing complexity. The capacity referred to here is intelligence, described by Plotkin (4) as a "special kind of adaptation that generates adaptive behaviour by altering brain states." This assertion is made in the context of intelligence as a mechanism through which individuals track changes in their environment as they occur and generate behaviours which, in turn, result in learning. Learning, which is known to result in changes in the brain, then results in the storage of information within the organism such that the experience may be applied to future dilemmas. Acquired knowledge, however, is not passed from generation to generation biologically, while the evolved structure of the brain is. Such a structure, as posited by Plotkin (4), constrains the kind of learning organisms engage in through the creation of specialised modules. Nevertheless, the contribution of human intelligence to the phenomenon of human culture, which Plotkin likens in importance to the evolution of self-replicating molecules, is great.

Human intelligence allows for extensive learning in various fields. Culture, however, depends on the sharing and communication of what is learnt by means of "intelligent" mechanisms. The knowledge that is shared is not isolated and fixed but is forever modified, metamorphosing into different knowledge and practices; this is perhaps the consequence of communication through the complexity of human language and the mediation of intelligence. Furthermore, the existence of intelligence perhaps allowed for the development of a theory of mind among humans, the ability to attribute intentions to others' mental states, which then allowed for the construction of social entities. It therefore seems possible that intelligence allowed for the development of the cultural intricacies that are observed especially in human societies.

In essence, evolutionary psychology is a burgeoning field of study whose proponents believe that almost any experience or instance of human behaviour arises from evolved structure in the form of modules in the brain. This conception persists even though it risks falling into adaptationist modes of thought. Regardless, most evolutionary psychologists believe that a great deal of human behaviour is not a direct product of evolved structure, and yet this places them in another conundrum: the delineation of the function of each module. Still, the theory that the brain is compartmentalised in such a way, at least to some extent, allows for theorising about the construction and propagation (with modification and addition) of human culture, whose study is fascinating and of interest to most who ponder the origins of such complexity.

References


1) http://host.uniroma3.it/progretti/kant/field/ep.htm; David J. Buller, Evolutionary Psychology, Northern Illinois University.

2) http://www.iisg.nl/research/plotkin.html; Henry Plotkin, The Evolution of Culture

3) http://www.dan.sperber.com/modularity.htm; Dan Sperber, In defense of massive modularity

4) Plotkin, Henry. The Imagined World Made Real: Towards a Natural Science of Culture. New Jersey: Rutgers University Press, 2003.


The Tenuous Past: Memory and the Ways it Fails
Name: Dana Bakal
Date: 2004-02-24 21:51:12
Link to this Comment: 8469


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"I remember it like it was yesterday!" you say. But how well do you really remember it? How well do you remember yesterday? Here's a quick quiz: What time did you have lunch yesterday? What exactly did you eat? What did you say? What did the people around you say? If you read the paper yesterday, name all the stories you read and summarize them briefly.

Don't remember yesterday as well as you thought? Don't worry, nobody does. Our memories are often thought of as recording devices, mechanically noting what has happened during the day and replaying these events like a tape. In truth, memory is a function of the brain, which is organic, constantly in flux, and does not behave like a machine. Your memory can be affected in many ways by many things, causing you to forget, to rearrange memories, to repress memories, and even to invent completely new ones!

This is of no small importance, because our only evidence that the past occurred comes from our memories. In what ways, then, can memory fail us?

Dr. Daniel Schacter of Harvard University lists "Seven Sins of Memory," ways in which our memories fail us: transience, absentmindedness, blocking, suggestibility, bias, persistence, and misattribution (5). Most of these sins are things we experience in everyday life. When something you read last week isn't as clear now as it seemed then, that's transience. When you forget where you put your book or forget that you have to be somewhere, that's absentmindedness. Blocking is the "temporary inaccessibility of stored information," such as a person's name or a word. Suggestibility and misattribution go together, since memories can incorporate misinformation and also BE misinformation. Suggestibility is the "incorporation of misinformation into memory due to leading questions, deception and other causes," and misattribution consists of 'remembering' something that did not occur. Persistence, the inability to get a thought out of your head, is slightly more abnormal and is common in post-traumatic stress disorder.

To this list, some would add "repression," the conscious or unconscious suppression of traumatic memories. Repression was first conceived of by Freud, who felt that people could push memories out of their awareness (1). This theory enjoyed new fame in the 1990's, when hundreds of people, mostly women, 'recovered' repressed memories of abuse, fueling a Satanic Ritual Abuse scare during which many people were convicted of heinous crimes they may not have committed (8).

Michael C. Anderson and colleagues conducted a study to see whether repression has any physical signature, that is, whether the brain changes when people try to repress a memory. In their experiment, subjects studied pairs of words, memorizing each association. Then, after performing an intervening task, subjects were shown one half of a word pair and asked either to think of its complementary word or to suppress all thought of it. MRI scans taken throughout the process showed that "Controlling unwanted memories was associated with increased dorsolateral prefrontal activation, reduced hippocampal activation, and impaired retention of those memories," and that "Both prefrontal cortical and right hippocampal activations predicted the magnitude of forgetting" (1). This means there is a physical mechanism for repressing memories, which is important because it means that memories can be buried and lost, impairing the ability to remember entire portions of life.
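The think/no-think procedure in this study can be sketched as a toy model. Everything concrete below (the word pairs, the penalty per suppression attempt) is a hypothetical illustration of the experimental design, not the study's actual materials or numbers; it only encodes the reported finding that suppression attempts impair later retention.

```python
# Minimal sketch of the think/no-think paradigm described above.
# Word pairs and numeric values are illustrative assumptions.

PAIRS = {"ordeal": "roach", "steam": "train", "jaw": "gum"}  # hypothetical study pairs

def final_recall_probability(history, base_rate=0.9, suppression_penalty=0.3):
    """Final test: chance of recalling a pair's second word.
    Each 'no-think' (suppression) trial lowers recall, mirroring the
    reported 'impaired retention' of suppressed memories."""
    penalty = suppression_penalty * history.count("no-think")
    return max(0.0, base_rate - penalty)

# Phase 2: the cue 'ordeal' is shown twice with suppression instructions.
p_suppressed = final_recall_probability(["no-think", "no-think"])
p_baseline = final_recall_probability(["think"])  # recall was practiced instead
```

The point of the sketch is only the ordering: the more often retrieval is suppressed, the lower the final recall relative to baseline.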

On the flip side of repression is memory fabrication. This is affected by the 'sins' of suggestibility and bias, but is really a case of misattribution. Sometimes we remember things that someone else told us about, things that we dreamed, or things we just made up. University of Washington memory researchers Jacquie Pickrell and Elizabeth Loftus conducted an experiment in which they showed people a fake advertisement describing a visit to Disneyland and a meeting with Bugs Bunny. Later, one third of participants reported that they knew they had shaken Bugs' hand, or remembered doing so. This, of course, cannot be true, since the Bugs Bunny character is a trademark of Warner Brothers, not Disney (2). This is quite significant in everyday thought and in advertising. If imagination or suggestion can give rise to memories as real as those of actual events, how can we tell what has actually occurred and what has not?
Loftus points out that this is a memory process that advertisers use when creating "nostalgic ads." A company such as Disneyland or McDonald's can prompt consumers to create false memories of having had positive experiences with its products and services in the past, increasing their likelihood of returning (2).

Besides the more everyday ways memory fails, there are many diseases that can affect it. Alzheimer's is probably the most well known of these. Alzheimer's impairs judgment and changes personality as well as affecting memory (6). It occurs most often in older people, who account for about 50% of the population with the disease, and is very rare in individuals under 40 (7). The memory loss in this disease, as in other brain-altering diseases, comes from changes in the physical structure of the brain rather than from normal brain mechanisms.

Overall, then, our memories, which we depend on to report the past and to form our personalities, are in fact extremely mutable. They can be affected and changed by things we think, things we see, and diseases we get, and they can be fabricated out of suggestion or imagination. Since these flawed memories are all we have, we must form a world view on the premise that they are more or less accurate interpretations of the past; this premise is usually useful and necessary, but it can sometimes cause problems. How much should we trust eyewitness reports of crimes, for example? Or reports of a repressed memory of abuse?
How can advertisers manipulate us using these memory flaws? And who are we, really, if our memories of ourselves and our interactions with others are so changeable?

I leave you with those thoughts; but remember, you don't remember yesterday as well as you might have thought!


References


1) Anderson, Michael C., et al. "Neural Systems Underlying the Suppression of Unwanted Memories." Science Magazine.

2) "'I Tawt I Taw' a Bunny Wabbit at Disneyland." University of Washington.

3) "People Think They Remember." American Psychological Association.

4) Loftus, Elizabeth. "The Formation of False Memories." Psychiatric Annals.

5) Murray, Bridget. "The Seven Sins of Memory." APA Online.

6) "Alzheimer's Disease: An Overview." WebMD Health.

7) "Who Is Affected by Alzheimer's Disease?" WebMD Health.

8) Loftus, Elizabeth. The Myth of Repressed Memory. New York: St. Martin's Press, 1994.


Can Hope Heal?
Name: Millicent
Date: 2004-02-24 23:32:25
Link to this Comment: 8480


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

It is often believed that a positive outlook during a time of suffering, particularly during an illness, can help one heal faster: that a person can fight disease with the mind. Yet the thought that we can combat sickness with our attitude has had little scientific proof behind it, and until recently little scientific research was available on the effects of hope in the healing process. Recent studies, however, provide significant evidence suggesting that hope may have an effect on the body during illness.

On the surface, there is practical evidence that a hopeful outlook can help a person heal. Someone who believes that he or she might eventually get better when afflicted with a life-threatening disease is more likely to take care of his or her body. This approach of caring for one's body in the hope that it might heal can keep a person alive until better methods of treating his or her specific disease come along. This type of hope is believed to have helped some HIV patients who were first diagnosed with the virus: as these individuals took various steps to stay as healthy as possible, scientific advances made living with the virus for a longer period of time viable (2). While this observation is interesting, this type of hope does not actually help a person heal.

As Abraham Verghese writes in his article "The Way We Live Now: Hope and Clarity," there is a belief in our society that with hope and a positive outlook one can fight off a disease such as cancer. He writes, "If you accept the war metaphor... then a diagnosis of cancer becomes a call to arms, an induction into an army and it goes without saying that in such a war optimism is essential. Memoirs of Cancer Centers state this as a creed: a 'positive attitude' influences survival" (2). He goes on to argue that this belief is not backed by substantial scientific research and therefore puts pressure on patients to always appear positive when the realities of their situations warrant some realistic grief. Verghese cites a study from Australia suggesting that a positive attitude or hope did not have a substantial effect on the survival rate or health of the lung cancer patients who participated (2). He uses this study to show that hope cannot make a sick person magically better.

Despite Verghese's points, many scientists, patients, and medical doctors believe that a hopeful outlook can help a sick person overcome a serious illness. These proponents of the power of hope argue that a person who believes he will get better produces endorphins and enkephalins, released by the pituitary gland, which can prevent feelings of pain in the body from reaching the brain (Groopman, p. 170). The research of Jerome Groopman, M.D., is some of the most conclusive about the effects hope has on sick people. His research attempts to show how the brain aids the sick body's ability to heal and cope. Manipulations of the nervous system, sparked by the emotions associated with hope, begin a chain of events that may help sick people recover.

Groopman uses the placebo effect to help explain how the nervous system, with the help of hope, combats pain (Groopman, p. 175-190). The placebo effect is widely accepted by medical doctors and scientists. It shows that in some cases a placebo, or fake cure, can satisfy patients and make them believe they are cured when in reality no medicine, surgery, or treatment has been given. For example, a doctor who prescribes a sugar pill, or pill containing no medicine, to a group of patients suffering from an illness, without telling them that the drug is a placebo, will have some patients who report that their symptoms have faded. It may seem that these particular patients were not really that sick to begin with, but the placebo effect is actually thought to be a result of "belief and expectancy" (Groopman, p. 176). The patient believes and expects the medication to cure her ailments. Belief is encouraged as the patient trusts that a doctor will be able to identify an illness and then find the appropriate medication to treat the problem. The patient then expects the treatment to work. This combination of belief and expectation can sometimes be enough to help a person recover from their symptoms.

Groopman argues that the type of belief present in the placebo effect is similar to that created by a hopeful outlook. Within the body, the theory that hope can heal is rooted in the nervous system. When a person is hopeful, the body produces endorphins and enkephalins, chemicals that alter the messages sent to the brain through the nervous system. The endorphins believed to be produced in a hopeful person's pituitary gland include beta-endorphin, which is thought to improve one's mood by blocking pain (6). According to Groopman's study, in a hopeful patient's body the endorphins prevent the brain from recognizing the message of pain sent through the nervous system. Without the message of pain, the body is able to exert the energy necessary to recuperate from an illness. The endorphins and enkephalins are also thought to help improve the immune system. If the body is not preoccupied with the pain of an illness, it might be able to fight off a life-threatening disease.

The production of endorphins and enkephalins alone cannot explain the positive effects of hope on ill people. A hopeful person benefits from a positive outlook because his body is less likely to produce the chemicals associated with a negative outlook, which can prolong an illness. To explain how hopelessness can prolong an illness, Groopman looks to the effects of Substance P and cholecystokinin, also known as CCK (Groopman, p. 176). These chemicals, when released in the central nervous system, have the opposite effect of endorphins and enkephalins. CCK helps send the messages of pain to the brain, thus increasing one's hopelessness and suffering. Groopman argues that these two chemicals are produced when a person is constantly reminded of an illness and the grave circumstances of their infirmity. This is common in patients who have serious illnesses with low survival rates. The pain creates a cycle which is hard to escape (Groopman). Groopman argues that this cycle can be broken with hope.

If we accept the theory that hope triggers endorphins and enkephalins that act like painkillers, blocking pain from reaching the brain, we are left with the fact that some very hopeful patients never heal and some very negative thinkers survive the worst of illnesses. The answer to this problem is that while hope may help a person survive, or at least feel better, it is not a cure for disease. It is simply another tool that can help on the way to recovery. Hopefully more research will come along to refine and improve on Groopman's observations, but for the time being Verghese's belief that hope is not a cure remains. Positive thinking and the mind do not have the power to completely overcome pain. However, thanks to Groopman, we now know that our minds and bodies together have the ability to protect us from certain pains, which could eventually help seriously ill people heal.


Sources


1) Groopman, Jerome. The Anatomy of Hope. New York: Random House, 2004.
2) Verghese, Abraham. "The Way We Live Now: Hope and Clarity." The New York Times Magazine, February 22, 2004. Available on the New York Times web site.
3) Web site dealing with the issues faced by those with serious illness, a rich resource from Bryn Mawr College.
4) Acumen Journal web page, a life science journal.
5) Acumen Journal web page, a life science journal.
6) Beta-endorphin resource.


Individual versus Group Behavior
Name: Sonam Tama
Date: 2004-02-25 20:44:31
Link to this Comment: 8504

<Individual versus Group Behavior> Biology 202
2004 First Web Paper
On Serendip

"The passions released are of such an impetuosity that they can be restrained by nothing...Everything is just as though he really were transported into a special world, entirely different from the old one where he ordinarily lives, and into an environment filled with exceptionally intense forces that take hold of him and metamorphose him"

Emile Durkheim on Group Consciousness (1965)

The discussions we have been having in class about the brain and the self, as well as the idea that we are constantly changing, led me to think about group versus individual behavior. Although some people may feel otherwise, I think we are all influenced, to different degrees, by others. All of us, at one time or another, have known what it is like to be part of a group. Yet there often seems to be a negative feeling towards the group, with more focus placed on negative group behavior. This paper is an exploration of group and individual behavior and thought. Are certain people more group-oriented? Are others more individually minded? Which one is the "real" self?

According to an article I read, the biological explanation for why we behave differently in a group than on our own is that when a person joins a crowd, the limbic system of the brain, which is involved with emotional activity, comes to dominate the person's actions and thinking, suppressing the neocortex, the logical-thinking part of the brain. The person therefore acts irrationally because he or she is under "emotional pressure" (1). The author of the article uses the stock market as an analogy, stating that the reason markets crash after a sudden boom, in rich and poor societies alike, is that people tend to follow crowds (1). This analogy leads me to ask whether non-Western societies are therefore irrational, since they are regarded as collectivist. And if stockbrokers in various societies act in the same irrational manner, does that not prove that individuals in different societies behave the same? These two ideas seem contradictory to me, and they raise many questions about the idea that joining a group means a loss of rationality.

When I studied psychology, it was always made clear that western and non-western psychology were different because of the western emphasis on individuality and the collectivist nature of non-western societies. Statements like these make clear and definite comparisons between western and non-western societies: "Western societies often define adjustment by one's level of individuality, independence, and achievement promoting emotional detachment from social groups...Contrary to Western cultures, many Eastern cultures endorse a communal view of society and do not conceptualize a person apart from his or her relationships." (2). Furthermore, it was stated that social hierarchy, social support, and interdependence are highly valued in these (non-western) cultures. Thus, these different views lead psychoanalysts to believe that Western groups would endorse antisocial coping strategies (strategies targeting independence and self-advancement) and non-Western groups would be more likely to endorse prosocial coping strategies (strategies targeting joining with others for support and considering the needs of others).

But is this really fair? I was raised in a non-Western society, but I do not feel that I have no individual self, and my American friends are no less loyal in their friendships or less considerate of the needs of others. In an unrelated study of groups of boys and girls that concluded that there are gender differences in the way we learn, Dr. Grobstein, professor of neurobiology at Bryn Mawr College, responded by stating that "Population differences, while real are of no use whatsoever in characterizing a given male or female...For any particular measure, a given male may be more 'male-like' or more 'female-like' than a given female." Thus, comparing groups may show major differences between boys and girls, but comparing individuals may give different results. In the same way, a given non-Western person may be more individualistic than a given Western person.

Upon reaching adolescence, children are warned by parents and teachers about "peer pressure" and the dangers of choosing the "wrong" kind of group. Images of teenagers smoking and drinking excessively or joining gangs are presented as consequences of "peer pressure." Even images of football "hooligans" causing all sorts of trouble after games, repeatedly shown on television, add to the idea that even a non-violent person may somehow become violent in a group. A relatively recent news article about teenagers who performed violent attacks on strangers and videotaped these "pranks" raised concern about group pressures. In the article, Jay Reeve, a psychologist at Bradley Hospital at Brown University in Providence, states: "Group pressure can override common sense fairly easily for these folks. ... Teens tend not to have developed a clear sense of right and wrong, apart from their peers." The immediate result, he concludes, is that teens are more prone to impulsive, violent behavior. Additionally, Dr. Alice Sterling Honig, professor emerita of child development at Syracuse University in New York, agrees that violence is often linked to peer acceptance and states that "murderous feelings and triumph of physical power are glorified and held up as splendors by society" (3). To be an individual is praised; to be in a group is to be dependent and to cause trouble. The message, too often, is that to be easily influenced by others is to be weak and in a dangerous position.

I still had not found a clear explanation of how or why we behave differently in groups. Then I read the French sociologist and philosopher Emile Durkheim's view on group behavior, or more specifically, "group consciousness." Durkheim feels that attempts to explain "irrational" behavior are "post facto attempts to explain socially generated compulsions which cannot be understood nor controlled." I agree with Durkheim's statement, because linking group behavior with irrationality seems too clear-cut. Also, is positive group behavior considered irrational, then? Durkheim also states that "social psychology has its own laws that are not those of individual psychology" and that there is a "conflictual ebb and flow between singularity and community, self and group...on the one hand is our individuality – and, more particularly, our body in which it is based; on the other it is everything in us that expresses something other than ourselves...(These) mutually contradict and deny each other" (6). Durkheim and the German sociologist Max Weber agreed not only that individual and collective states of mind are different but also that being in a group, as opposed to being alone, produces "transcendence," an extraordinary altered state of consciousness among individuals in a group, which Durkheim called "collective effervescence." Also, unlike the idea that non-Western people are collectivists and Western people are individualists, Durkheim proposes that the individual and the collective states of mind exist within all people and that there is a constant struggle between the two. Furthermore, Durkheim and Weber see the individual as egoistic and immoral but subdued within the transformative grip of the social (1).

An interesting question was raised by a former Biology 202 student, who wanted to know whether terrorists are as crazy as we think they are and whether their brains function very differently from ours. Of course, most of us would assume that terrorists are crazy. But what she found out was that terrorists are, in fact, like us. Clark McCauley, Professor of Psychology at Bryn Mawr, notes that terrorists are not crazy and that "psychopathology and personality disorder [are] no more likely among terrorists than among non-terrorists from the same background" (5). This caught my attention because terrorists are a perfect (negative) example of individuals who crave membership in a group or organization where the members are like family to each other, "each with their role, and each providing support for their fellow terrorists" (5). The thought of many individuals giving up their lives for a cause or a group of people seems downright crazy to many. This "blind loyalty" (5) may signal irrationality, but I also think these terrorists may have individual interests in mind. After all, didn't the terrorists of September 11 kill in order to enter paradise? Is this not individual interest? Terrorists are also said to "crave power" (5), and perhaps even fame and notoriety. And what about the fact that stockbrokers follow crowds in the interest of the individual?

I had hoped to find more information on this topic, but the lack of information online about positive group behavior suggests that group behavior is generally thought of as irrational. It would have been great to learn more about positive group behavior; for instance, my favorite bands would not create wonderful music if there were no such thing as "group consciousness." I have reached the conclusion that all the answers I have found are too clear-cut. I agree most with Durkheim in believing that both the individual and group modes exist within us, struggling with each other, and maybe even working with each other. Maybe how we behave in groups reflects how we are as individuals, or how we would like to be as individuals.

I would like to conclude this paper by mentioning that studies are being done on collective decision-making by ants (of the genus Leptothorax), which look at "how individual cognitive abilities are designed to optimize group behavior" (4). I think studies like these are a great starting point for understanding why we humans behave the way we do.


WWW Sources

1) Psychology is the Key

2) How antisocial and prosocial coping influence the support process among men and women in the U.S. Postal Service

3) ABC News Online, "Punch-Drunk Teens: Experts Say Peer Pressure, Media Fuel Youth Violence"

4) Department of Ecology and Evolutionary Biology, Princeton University, "Collective nest site choice by ant colonies"

5) The Serendip website, "Terrorists: How Different Are They?" by Stephanie Habelow

6) Department of Anthropology, Boston University, "Charisma, Crowd Psychology and Altered States of Consciousness" by Charles Lindholm


The Beta-Amyloid Peptide, the Gamma-Secretase Comp
Name: Jean Yanol
Date: 2004-02-25 21:29:15
Link to this Comment: 8506


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In many neurological diseases, problems in cellular signaling pathways cause the onset of the major physiological symptoms associated with the disease. Alzheimer's disease (AD) is a neurodegenerative disorder that affects millions of people by inducing dementia. There are two forms of the disease, sporadic and familial. Familial Alzheimer's disease usually affects people earlier in life than its sporadic counterpart. Even though the major hallmarks of both sporadic and familial AD are extracellular senile plaques, intracellular neurofibrillary tangles, and subsequent neuronal and synaptic loss (1), the proposed cellular mechanisms by which these two forms of AD arise are different. Because familial AD is genetically linked, there have been significant findings elucidating its pathogenic cellular mechanisms.

The extracellular senile plaques and intracellular neurofibrillary tangles associated with AD have been the major focus of research. The neurofibrillary tangles are mostly composed of hyperphosphorylated tau protein (2), and the senile plaques are composed of deposited 42-amino-acid-long β-amyloid peptide (3). While the complete mechanisms of synthesis of both structures are unknown, the production of the extracellular amyloid plaques is one major point distinguishing familial from sporadic AD. Moreover, mutations in the components that generate the β-amyloid peptide cause most cases of familial Alzheimer's disease.

The β-amyloid peptide exists in two predominant forms: a 40-amino-acid-long peptide and a 42-amino-acid-long peptide. The difference in length arises from differential cleavage of the amyloid precursor protein (APP), from which the various forms of the β-amyloid peptide come (5). The 42-amino-acid-long β-amyloid peptide, which forms the senile plaques, comes from APP cleaved by both β- and γ-secretases (Figure 1, modified from Sinha and Lieberburg 1999). The principal β-secretase in neurons is the aspartic protease BACE1 (β-site APP Cleavage Enzyme 1), which performs the first APP cleavage, releasing the NH2 terminus of the β-amyloid peptide from its precursor. Subsequent cleavage by the γ-secretase releases the COOH terminus of the β-amyloid peptide (6). The γ-secretase is a high-molecular-weight complex composed of Presenilin 1 (PS1), mature Nicastrin, APH-1, and Pen-2 (7). Elucidating the formation of this complex is key to finding pharmaceutical treatments for Alzheimer's disease, because mutations in the gene that codes for presenilin 1 cause half of all familial AD cases (8) (other cases are caused by mutations in the APP substrate). The γ-secretase is also thought to be involved in the cleavage of ErbB4 (9), the intracellular domain of Notch (10), and other similar proteins, showing that this secretase is important in other pathways as well.
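The two-step cleavage just described can be sketched as a toy string model. The placeholder sequence and cut coordinates below are illustrative assumptions (real APP is roughly 770 residues, and site positions depend on the isoform); only the order of cuts and the 40- versus 42-residue outcomes follow the text.

```python
# Toy model of sequential APP processing, as a minimal sketch.
# The sequence and the beta-site position are invented for illustration.

def bace1_cleave(app: str, beta_site: int) -> str:
    """BACE1 (beta-secretase) cut: frees the NH2 terminus of the
    future beta-amyloid peptide from its precursor."""
    return app[beta_site:]

def gamma_cleave(ctf: str, length: int) -> str:
    """Gamma-secretase cut: releases the COOH terminus, yielding a
    peptide 40 or 42 residues long depending on the cleavage site."""
    return ctf[:length]

app = "X" * 100 + "A" * 42 + "Y" * 50   # fake precursor protein
ctf = bace1_cleave(app, beta_site=100)  # C-terminal fragment after the first cut
abeta40 = gamma_cleave(ctf, 40)         # the more common species
abeta42 = gamma_cleave(ctf, 42)         # the plaque-forming species
```

The sketch makes the point of the paragraph concrete: both peptides share the same NH2 terminus set by BACE1, and the γ-secretase cut alone decides whether the product is 40 or 42 residues long.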

PS1 mutations have been shown to increase the amount of secreted 42-amino-acid-long β-amyloid peptide (11)(12). PS1 is an aspartyl protease (meaning that the active site consists of two conserved aspartate residues, D257 and D385, located on the 6th and 8th hydrophobic regions of PS1) and has between 6 and 8 transmembrane domains (most researchers believe there are eight; Figure 2 from Kim and Schekman 2004), which are important to its function and interactions in the γ-secretase complex (13). The protein is localized primarily in the ER (endoplasmic reticulum) and Golgi complexes. In the ER, PS1 exists as an uncleaved, full-length holoprotein, which is thought to be inactive, but in the Golgi region PS1 exists as a heterodimer, with the NTF (N-terminal fragment) and CTF (C-terminal fragment) separated but closely associated in a 1:1 stoichiometry (14)(15). The mechanism by which PS1 is cleaved into its respective NTF and CTF is not known, but it is speculated that the other members of the γ-secretase complex, Nicastrin, APH-1, and Pen-2, are needed for formation of the stable γ-secretase complex and for PS1 maturation (16). Nicastrin is a type 1 transmembrane protein that spans the membrane once and interacts in the γ-secretase complex after it is N-glycosylated in the ER (N-glycosylation is what makes Nicastrin "mature") (17). In a low-molecular-weight subcomplex, nicastrin interacts primarily with APH-1, which is predicted to traverse the membrane seven times (18). This nicastrin/APH-1 subcomplex is then predicted to interact with the PS1 CTF. Pen-2, which spans the membrane twice, is believed to interact with the PS1 NTF and to facilitate its maturation. In this model there are two subcomplexes, one composed of nicastrin, APH-1, and the PS1 CTF, and the other composed of Pen-2 and the PS1 NTF (Figure 3 from Fraering et al.) (19).
These subcomplexes interact through the heterodimeric state of the PS1 NTF and CTF. In yeast, mammalian, and Drosophila cells, the presence of PS1, nicastrin, APH-1, and Pen-2 was enough to reconstitute γ-secretase activity (7)(20)(21). Once the stable γ-secretase complex is formed, it can cleave APP into the 42-amino-acid-long β-amyloid peptide. γ-secretase activity is believed to occur in the ER, late Golgi/TGN, endosomes, and plasma membrane, and where in the cell APP is cleaved is thought to determine whether the peptide is secreted or not. However, it is debated which factors lead the 42-amino-acid-long β-amyloid peptide to form plaques. The role of non-secreted β-amyloid in AD is also debated, and some researchers think that intracellular β-amyloid is generated by a distinct, presenilin-independent γ-secretase (22).
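The two-subcomplex assembly model and the reconstitution result above can be summarized with simple set logic. This is only an organizational sketch of the model in the text, not a biochemical simulation: the component names come from the paragraph, and the subset test encodes the finding that all four proteins are required together for activity.

```python
# Sketch of the gamma-secretase assembly model described above.
# Subcomplex 1: mature nicastrin + APH-1 + PS1 CTF
# Subcomplex 2: Pen-2 + PS1 NTF
# The two join via the PS1 NTF/CTF heterodimer to form the active complex.

SUBCOMPLEX_1 = {"mature nicastrin", "APH-1", "PS1 CTF"}
SUBCOMPLEX_2 = {"Pen-2", "PS1 NTF"}
REQUIRED = SUBCOMPLEX_1 | SUBCOMPLEX_2

def gamma_secretase_active(components: set) -> bool:
    """Reconstitution studies cited in the text found that PS1,
    nicastrin, APH-1, and Pen-2 together suffice for activity."""
    return REQUIRED <= components

cell = {"mature nicastrin", "APH-1", "PS1 CTF", "Pen-2", "PS1 NTF"}
full = gamma_secretase_active(cell)                      # complete complex
without_pen2 = gamma_secretase_active(cell - {"Pen-2"})  # one member missing
```

The design choice here is deliberate: treating assembly as a subset check mirrors the all-or-nothing result of the reconstitution experiments, where removing any one of the four components abolishes activity.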

One new avenue of research has opened up very recently: the role of a PS-related protein called IMPAS 1 in cleavage of the presenilin 1 holoprotein. In cells transiently transfected with IMPAS 1 and the PS1 holoprotein, there was little to no indication of such cleavage, possibly owing to the limitations of Western blot analysis (23); it nonetheless remains possible that IMPAS 1, or another member of its recently discovered protein family, is responsible for PS1 holoprotein proteolysis. Further analysis must be performed before any cleavage interaction between IMPAS 1 and PS1 can be concluded. Since IMPAS 1 is thought to be able to cleave type 1 transmembrane proteins (23), it may also take part in other, similar pathways. Other mechanisms have recently been proposed to function in AD, such as inositol trisphosphate (IP3)-gated calcium ion channels, because PS1 is known to modulate IP3-mediated calcium ion liberation (24). It has been shown that in cells with familial AD-linked mutations in the gene that codes for presenilin 1, there is an increase in calcium ion transients, which serve many signaling functions. Recent studies have shown an elevation in ER excitability, due to calcium transient elevation caused by a specific PS1 mutation, but a subsequent inhibition at the plasma membrane, which disrupts cell-to-cell signaling (24). This implies that PS1 affects AD not only through its role in the cleavage of the amyloid precursor protein but also by elevating specific ion transients that disrupt responsiveness to certain synaptic signaling.

While many factors are thought to contribute to familial Alzheimer's disease, the γ-secretase complex is one of the least understood and most heavily researched components, due to its implications in other pathways and its novel interactions, which have a substantial impact on the formation of the disease. Further analysis of the interactions and stoichiometry of its components is needed in order to fully understand the complex and its function in familial Alzheimer's disease. By researching the mechanisms of the disease's formation, we can hope one day to apply this information to pharmaceutical treatments for familial Alzheimer's disease patients and, possibly, to elucidating the formation of similar neurodegenerative disorders.





References




1. Selkoe, D.J. The molecular pathology of Alzheimer's disease. (1991) Neuron 6, 487-498

2. Kang, J., Lemaire, H.-G., Unterbeck, A., Salbaum, J. M., Masters, C. L., Grzeschik, K.-H., Multhaup, G., Beyreuther, K., and Muller-Hill, B. The precursor of Alzheimer's disease amyloid A4 protein resembles a cell-surface receptor (1987) Nature 325, 733-736

3. Roher, A. E., Lowenson, J. D., Clarke, S., Woods, A. S., Cotter, R. J., Gowing, E. & Ball, M. J. beta-Amyloid-(1-42) is a major component of cerebrovascular amyloid deposits: implications for the pathology of Alzheimer disease.(1993) Proc. Natl. Acad. Sci. USA 90, 10836-10840

4. Grundke-Iqbal, I., Iqbal, K., Tung, Y.C., Quinlan, M., Wisniewski, H.M., Binder, L.I., Abnormal phosphorylation of the microtubule-associated protein tau in Alzheimer cytoskeletal pathology (1986) Proc. Natl. Acad. Sci. USA 83, 4913-4917

5. Price, D.L., Sisodia, S.S., Mutant genes in familial Alzheimer's disease and transgenic models. (1998) Annu. Rev. Neurosci. 21, 479-505

6. Sinha, S., Lieberburg, I., Cellular mechanisms of beta-amyloid production and secretion. (1999) Proc. Natl. Acad. Sci. USA 96, 11049-11053

7. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Wenjuan, Y., Wolfe, M.S, Selkoe, D.J. gamma-Secretase is a membrane protein complex comprised of presenilin, nicastrin, aph-1, and pen-2 (2003) Proc. Natl. Acad. Sci. USA 100, 6382-6387

8. Cruts, M., van Duijin, C.M., Backhovens, H., van Den, B.M., Wehnert, A., Serneels, S., Sherrington, R., Hutton, M., Hardy, J., George-Hyslop, P.H., Hofman, A., van Broeckhoven, C., Estimation of the genetic contribution of presenilin-1 and -2 mutations in a population-based study of presenile Alzheimer's disease. (1998) Hum. Mol. Genet. 7, 43-51

9. Lee, H.J., Jung, K.M., Huang, Y.Z., Bennett, L.B., Lee, J.S., Mei, L., Kim, T.W., Presenilin-dependent gamma-Secretase-like Intramembrane Cleavage of ErbB4. (2002) J. Biol. Chem. 277, 6318-6323

10. Kimberly, W.T., Esler, W.P., Ye, W., Ostaszewski, B.L., Gao, J., Diehl, T., Selkoe, D.J., Wolfe, M.S., Notch and the amyloid precursor protein are cleaved by similar gamma-secretase(s). (2003) Biochemistry 42, 137-44.

11. Borchelt, D.R., Thinakaran, G., Eckman, C.B., Lee, M.K., Davenport, F., Ratovitsky, T., Prada, C.M., Kim, G., Seekins, S., Yager, D., Slunt, H.H., Wang, R., Seeger, M., Levey, A.I., Gandy, S.E., Copeland, N.G., Jenkins, N.A., Price, D.L., Younkin, S.G., Sisodia, S.S., Familial Alzheimer's disease-linked presenilin 1 variants elevate Abeta1-42/1-40 ratio in vitro and in vivo. (1996) Neuron 17, 1005-13.

12. Mehta, N.D., Refolo, L.M., Eckman, C., Sanders, S., Yager, D., Perez-Tur, J., Younkin, S., Duff, K., Hardy, J., Hutton, M., Increased Abeta42(43) from cell lines expressing presenilin 1 mutations. (1998) Ann Neurol. 43, 256-8

13. Kim, J., Schekman, R., The ins and outs of presenilin 1 membrane topology. (2004) Proc. Natl. Acad. Sci. USA 101, 905-906.

14. Capell A, Grunberg J, Pesold B, Diehlmann A, Citron M, Nixon R, Beyreuther K, Selkoe DJ, Haass C. The proteolytic fragments of the Alzheimer's disease-associated presenilin-1 form heterodimers and occur as a 100-150-kDa molecular mass complex.(1998) J Biol Chem. 273, 3205-11.

15. Thinakaran G, Regard JB, Bouton CM, Harris CL, Price DL, Borchelt DR, Sisodia SS., Stable association of presenilin derivatives and absence of presenilin interactions with APP. (1998) Neurobiol Dis. 4, 438-53.

16. Hu Y, Fortini ME. Different cofactor activities in gamma-secretase assembly: evidence for a nicastrin-Aph-1 subcomplex. (2003) J Cell Biol.161, 685-90.

17. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Ye, W., Wolfe, M.S., Selkoe, D.J., Complex N-linked Glycosylated Nicastrin Associates with Active gamma-Secretase and Undergoes Tight Cellular Regulation (2002) J. Biol. Chem. 277, 35113-35117

18. Fortna RR, Crystal AS, Morais VA, Pijak DS, Lee VM, Doms RW., Membrane topology and nicastrin-enhanced endoproteolysis of APH-1, a component of the gamma-secretase complex. (2004) J Biol Chem. 279, 3685-93.

19. Fraering PC, LaVoie MJ, Ye W, Ostaszewski BL, Kimberly WT, Selkoe DJ, Wolfe MS., Detergent-dependent dissociation of active gamma-secretase reveals an interaction between Pen-2 and PS1-NTF and offers a model for subunit organization within the complex. (2004) Biochemistry. 43, 323-33.

20. Takasugi N, Tomita T, Hayashi I, Tsuruoka M, Niimura M, Takahashi Y, Thinakaran G, Iwatsubo T., The role of presenilin cofactors in the gamma-secretase complex. (2003) Nature 422, 438-41.

21. Edbauer D, Winkler E, Regula JT, Pesold B, Steiner H, Haass C., Reconstitution of gamma-secretase activity. (2003) Nat Cell Biol. 5, 486-8.

22. Wilson CA, Doms RW, Lee VM. Distinct presenilin-dependent and presenilin-independent gamma-secretases are responsible for total cellular Abeta production. (2003) J Neurosci Res. 74, 361-9.

23. Moliaka YK, Grigorenko A, Madera D, Rogaev EI., Impas 1 possesses endoproteolytic activity against multipass membrane protein substrate cleaving the presenilin 1 holoprotein. (2004) FEBS Lett. 557, 185-92.

24. Stutzmann GE, Caccamo A, LaFerla FM, Parker I., Dysregulated IP3 Signaling in Cortical Neurons of Knock-In Mice Expressing an Alzheimer's-Linked Mutation in Presenilin1 Results in Exaggerated Ca2+ Signals and Altered Membrane Excitability (2004) J Neurosci. 24, 508-13.






Quantifying Intelligence
Name: Maria
Date: 2004-02-26 01:55:00
Link to this Comment: 8515


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Quantifying intelligence is something that makes people anxious. Most people, when asked, cannot pinpoint what exactly it is about assigning a number or value to a person's intelligence that makes them so uncomfortable. What most people do know is that there is something about intelligence that causes it to take precedence over other culturally valued traits such as athletic prowess or physical attractiveness: that out of all the characteristics humans value, intelligence is the one that matters most. It is not an incorrect assessment: while athleticism might give one an edge in a sports game or appearance might help one in social situations, intelligence helps one navigate life and the broader, more complex challenges of everyday living. Much of the anxiety over trying to quantify intelligence stems from the conflict between the deeply held cultural belief that all people are born with equal opportunity and the reality, demonstrated by the IQ test, that some people are born with greater intellectual potential than others. While our society might aspire to be egalitarian, the fact remains that intellectual ability varies from individual to individual (1). It is an important difference, too: research conducted over a period of years confirms that intellectual ability of the type measured by the IQ test has a profound and widespread impact on the way in which a given individual lives his or her life (1).


In an individual's professional life, there appears to be a strong (and not terribly surprising) correlation between IQ and the type of employment he or she is able to sustain. Those in the top five percent of the adult IQ distribution (above 125) are able to enter whatever profession they choose (1). Individuals with average IQ are not competitive for most high-level jobs, but are able to perform the majority of jobs in America (1). Individuals in the bottom five percent of the IQ distribution (below 75) are not competitive within the workforce (1). The government recognizes the correlation between ability and IQ: during World War II, Congress banned the enlistment of those with an IQ below 80 because they were too difficult to train (1).
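The cutoffs quoted above can be sanity-checked arithmetically. Assuming the conventional scaling of IQ scores to a normal distribution with mean 100 and standard deviation 15 (an assumption of this sketch, not stated in the text), scores of 125 and 75 land almost exactly at the 95th and 5th percentiles:

```python
import math

def iq_percentile(score, mean=100.0, sd=15.0):
    """Fraction of the population scoring below `score`, under the
    conventional N(100, 15) scaling of IQ (normal CDF via math.erf)."""
    return 0.5 * (1.0 + math.erf((score - mean) / (sd * math.sqrt(2.0))))

print(round(iq_percentile(125) * 100, 1))  # about 95.2: the "top five percent"
print(round(iq_percentile(75) * 100, 1))   # about 4.8: the "bottom five percent"
```

This is why "above 125" and "below 75" can both be described as five-percent tails: the two cutoffs sit symmetrically 1.67 standard deviations from the mean.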


The effect of IQ is not limited to the professional arena. There is an undeniable correlation between low IQ scores and negative social experiences, probably due at least in part to the strong correlation between an individual's IQ and his or her socio-economic status. Individuals with IQs somewhat below average are seven times more likely to be jailed than those with somewhat above-average IQs (1). They are eighty-eight times more likely to drop out of high school and 50 percent more likely to be divorced (1). Obviously one cannot make assumptions about any individual based on these numbers; after all, there are many people with high IQs who are divorced. It would also be erroneous to assume that from this information one could attribute the poverty, single motherhood, or divorce of any given individual to a lack of intelligence. Rather, these statistics suggest that the lives of those who are not as well equipped to deal with intellectual complexity tend to be more difficult in today's society in economic, social, and personal matters.


In order to understand why an individual's ability to do well on the seemingly odd tasks that make up an IQ test is so closely linked to success in life, one has to understand what trait causes the correlation in the first place. At the turn of the last century, the British psychologist Charles Spearman noticed a pattern of correlation when analyzing the results of IQ tests. The IQ test is made up of subtests on a variety of unrelated topics, yet an individual who did well on one subtest was likely to do well on all of them, no matter how disparate their contents. This observation led Spearman to conclude that there was another force at work, a "general intelligence" or g, that accounted for this consistency in performance. It is important to note that g is not simply the cumulative result of someone being good at literature and math and spatial exercises. Rather, it is its own separate function, which has recently been shown to take place in the lateral frontal cortex of one or both hemispheres (2). This is the area that high-g tasks call on, not a wide variety of cognitive functions (2).
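Spearman's pattern can be reproduced with a toy simulation (a sketch under made-up factor loadings and noise levels, not real test data). If every subtest score mixes one shared latent factor with independent noise, then all pairwise correlations come out positive (the "positive manifold") and a single principal axis, extracted below by power iteration on the correlation matrix, captures most of the variance; that axis is the statistical analogue of g:

```python
import math
import random

random.seed(0)
n = 2000
loadings = [0.8, 0.7, 0.6, 0.5]   # assumed strength of each subtest's link to g

# Each simulated person: subtest score = loading * latent_g + independent noise.
people = []
for _ in range(n):
    latent_g = random.gauss(0, 1)
    people.append([l * latent_g + random.gauss(0, 0.6) for l in loadings])

def corr(xs, ys):
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

cols = list(zip(*people))
R = [[corr(cols[i], cols[j]) for j in range(4)] for i in range(4)]

# Power iteration: leading eigenvector of R, i.e. the common "g" axis.
v = [1.0, 1.0, 1.0, 1.0]
for _ in range(200):
    w = [sum(R[i][j] * v[j] for j in range(4)) for i in range(4)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
top_eig = sum(v[i] * R[i][j] * v[j] for i in range(4) for j in range(4))
share = top_eig / 4.0  # trace of a 4x4 correlation matrix is 4

print([round(R[0][j], 2) for j in range(4)])  # first row: all positive
print(round(share, 2))                        # first component's variance share
```

With these assumed loadings the single shared factor accounts for well over half the total variance, which is the kind of dominance Spearman observed in real subtest batteries.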


The existence of g is not simply an abstract concept created by scientists. The notion that some people are more able at certain things than others is familiar to all of us and used by all of us from the time we are small children. People intuitively sense the existence of g; they just have different names for it. Someone might be considered 'bright' or 'smart', which could just be another way of saying that they are able to handle cognitive complexity. That someone is 'quick' is a particularly interesting word choice in light of recent testing showing that it does indeed take less time and less energy for the brains of those with high IQs to solve problems (2), which also makes it rather appropriate that people with lower IQs were often called 'slow'. Precisely how to define g is difficult. Simply stated, g can be defined as "the ability to deal with cognitive complexity" (1). Being able to interpret information, recognize similarities and differences, and understand ideas and concepts are all hallmarks of the intelligent person, and are also the abilities that constitute g (1).


The discomfort that most people feel at the idea of having a value assigned to their intelligence is a natural reaction, given that in many ways the results of an IQ test seem not so much predictive as prophetic. Given that intellectual ability affects so much, it is easy to understand how an individual could feel that there is little room left for autonomy or self-determination. It would indeed be disconcerting to feel that the results of a single test determined one's fate. I think that viewing g, and the continued study of what affects and is affected by g, in such a way would not only be incorrect but also a serious mistake. The concept of g is a useful one, but only if it is seen for what it is: one way of quantifying an individual's ability to function with relative ease in the world. The test itself exists in a cultural context, a culture that highly values the results of tests. As with any test that claims to make broad judgements about an individual's future, its results could be self-fulfilling (5). As is the case with most differences between people, it is not the differences themselves that pose a potential problem; rather, it is the value judgements that others make based on those differences that are problematic.


The notion of g is not an egalitarian one; few things about human makeup are. While we may not think of it in those terms, we all accept and make our peace with this inequality every day (4). That I am not skilled at tennis like Venus Williams, musically gifted like Billie Holiday, or beautiful like Julie Christie is not news to me or to the other 99.9% of the world for whom the same can be said. And while I admire in others the ability to do what I cannot, I don't feel that it detracts in any way from the capabilities I do have. The same principle applies to intelligence, even though the effect of intelligence on one's life is more far-reaching than that of musical skill or athleticism. Every day we all tacitly acknowledge the existence of g in whose advice we seek, whom we do or do not consider competent to perform a given task, and the assumptions we make about people based on their profession, socio-economic status, or lifestyle. As is true of any valued trait, the ability to quantify it carries the concern that we will begin to allow the value society places on the trait to determine the value society places on the individual who happens to possess (or not possess) it. There are a lot of ways that people can be extraordinary, and there are a lot of ways that people can lead productive, useful lives regardless of how they score on a test. If correctly approached, the study of g can help us understand why people have the experiences they do in life, and can ultimately help us as a society to accept the mixed bag of skills and weaknesses that is each person.


References

1) Gottfredson, Linda. The General Intelligence Factor. A very thorough article detailing the implications and importance of g in daily life.

2) Duncan, John. A Neural Basis for General Intelligence. Science Magazine. Vol.289; 21 July 2000.

3) Article by Ari Berkowitz on Serendip. Quite a good discussion of the role genetics plays in IQ, among other things.

4) An Article from Science on the International Society For Intelligence Research, General thoughts on uses of intelligence and such.

5) Letter to the Editor of the NY Times Book Review by Professor Grobstein. Pretty much what the title says: a letter to the editor of the NYT Book Review by PG.


Brain Dependence: The Debate Over the Addictive Personality
Name: MaryBeth C
Date: 2004-02-26 02:54:02
Link to this Comment: 8519


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Though alcoholism and other damaging addictions can often be traced to depression and other emotional distress, the relatively new notion of the "addictive personality" has a significant community of supporters. According to its supporters, the addictive personality is a distinct psychological trait that predisposes particular individuals to addiction. While the nature and the very existence of this trait are still actively debated in the medical, neurobiological, and psychological communities, there are definite changes in the brain that contribute to addiction. Also important to this debate are issues of gender in relation to addiction, and how these are and are not compatible with the addictive personality theory.

Addiction, as typically defined, is a reliance on a substance or behavior that the individual has little power to resist. This definition, however, fails to address the neurological aspects of the phenomenon. Dr. Alan Leshner, PhD, of the National Institute on Drug Abuse instead describes addiction as "a brain disease" and "a chronic relapsing disease," in that there are visible alterations in the brains of addicted individuals and these effects are long-lasting within their neurological patterns (1). Also important in describing addiction is addressing the types of addiction and substance abuse that are often attributed to the addictive personality. There are two primary forms of addiction: substance-based and behavior-based.

The substance-based addictions, such as alcoholism and nicotine, prescription-drug, and narcotic addictions, are more easily explained and identified neurologically. Particular drugs, such as crack and heroin, cause massive surges of dopamine in the brain, with sensations ranging from invincibility and strength to euphoric and enlightened states. Use of these substances almost immediately changes particular aspects of the brain's behavior, making most individuals immediately susceptible to future abuse or addiction.

Also common are the behavioral addictions, including gambling, shopping, eating, and sexual activity. These addictions are not as easily explained neurologically, but are generally included in the addiction susceptibility characterized by the personality trait. Common as well are combined addictions, that is, addictions with both substance and behavioral aspects, most notably the addiction to nicotine through smoking or chewing tobacco. This particular addiction combines a physical dependence on nicotine with a mental facet, the repeated routine of the behavior, such as a cigarette after meals.

Another issue interestingly related to addiction is the relationship between these abuses and addictions and gender. A collection of recent studies has shown that male adolescents are more active in early drug and alcohol experimentation, and that men in general are four times more likely to become dependent on alcohol, twice as likely to routinely use marijuana, and one and a half times more likely to become addicted to cigarettes. Conversely, female adolescents are far more likely to engage in the activities associated with behavioral addictions, and women far outnumber men in addictions to eating, binging, and purging, thus developing eating disorders at a greater rate (2).

This stratification may either evidence a key difference in the nature of addictive personalities and a link to gender, or it may discredit the theory as a whole, depending on perspective. It has been shown with other diseases, cancers, and genetic traits that particular disorders favor one gender over another, so these statistics may reveal an interesting aspect of the genetic or neurobiological nature of the inherited trait. On the other hand, the variances in the addictions of men and women are often traced to societal values and the images presented to young men and women. In one interesting element of this debate, the popular image of alcohol consumption among Americans, as seen in mass advertising, is largely geared toward men. Some of the consequences of alcohol consumption and drunkenness, such as uncontrolled behavior, lessened inhibitions, and weight gain, are less acceptable for women than for men. Popular images associated with cigarettes have a similarly masculine undertone; the primary face of the tobacco industry, the "Marlboro Man," embodies popular American manhood like few other icons.

While no one has succeeded in proving the existence of a true addictive personality, many experts now believe that the predisposition to addiction is more accurately a combination of biological, psychological and environmental factors. Certainly, as with all issues of psychology and behavior, the distinct combinations of genetics and inheritance must be countered with an acknowledgment of environmental factors, and the biology of addiction is no exception.

References

1)Sommerset Medical Service Website: The Science of Addiction

2)Hendrick Health System Website: Addiction


The Relationship Between Epilepsy and the Brain
Name: Chevon Dep
Date: 2004-02-27 00:30:08
Link to this Comment: 8537

Uncontrollable shaking, tongue biting, and rolling eyes have frequently been associated with demonic spirits. In the film "The Exorcist," the little girl displayed these actions, and they were labeled as demonic. Unfortunately, such labeling was not just a notion of films but also appeared in the medical field. For example, epileptic patients were once characterized as being possessed, because they exhibited such behavior and it was unexplainable. As more information was gathered about the relationship between the brain and these episodes, this notion began to disappear. Since my mother has been an epileptic patient for quite some time, it is important to me to understand the brain's role in her recurring seizures.

Neurons communicate with one another by firing tiny electrical signals that pass from cell to cell. "The firing pattern of these electrical signals reflects how busy the brain is at any moment, and the location of the signals indicates what the brain is doing, such as thinking, seeing, feeling, hearing, and controlling the movement of muscles." (1) Epilepsy is a brain disorder that occurs when the electrical signals in the brain are disrupted. (1) Disturbance occurs when the firing pattern of the brain's electrical signals becomes abnormal and intense, either in an isolated area of the brain or throughout the brain. (2) More specifically, epilepsy is a condition that involves having repetitive seizures; (2) two or more seizures must occur before a person can be diagnosed as having epilepsy. (3)

One of the most serious types of seizure is the grand mal (generalized) seizure. This type occurs when changes in the electrical signals spread through the entire brain at once. (1) Once the entire brain is affected, there can be a loss of consciousness and shaking of all limbs. (1) According to Scott, an epileptic attack can be divided into three parts: the warning, the actual fit, and the recovery. (4) When my mother experiences an attack, there is usually no warning; she seems to go immediately into the attack. Air is forced through the larynx, and the cry that is produced indicates that the attack has begun, not that the person is in pain. (4) The gagging sound does make it seem that the person is in pain, because it is difficult to breathe. Because breathing has ceased, no oxygen is entering the lungs (4), and therefore none is reaching the brain. During an epileptic fit, oxygen consumption of the brain may be increased by up to fifty percent. (4) In order to function normally, the nervous system requires vitamins and oxygen, which are carried to the brain. (4) Therefore, if a person has numerous seizures, a serious problem can arise because little oxygen is getting to the brain.

The tonic phase, in which the muscles stiffen, leads into the clonic stage, in which the limbs jerk. Many things occur in this stage, such as loss of bowel or bladder control, due to the violent contractions of the body. Epileptic patients are unable to control their movement because of the abnormal electrical signaling in the brain. It is not unusual for one to become unconscious or fall into a deep sleep lasting from a few minutes to several hours. (4) After a seizure, the person rarely has memory of it.

In most cases, the cause of epilepsy is unknown. The term used is idiopathic, because there is no definite abnormality of the brain. My mother's grand mal seizures are characterized as idiopathic, because she did not experience any short-term or lasting scarring or damage to the brain from head injury or serious brain infection. The electroencephalogram (EEG) is a test that records the electrical activity of the brain. (4) This test is helpful in diagnosing epileptic patients because it reveals unusual brain activity, and it is sometimes used to determine the nature of the abnormality causing the seizures. (3) Those with epilepsy have brain cells with disordered electrical function, which leads to seizures; an epileptic patient's brain cells are less able to suppress electrical discharges. Although the cause of many epileptic episodes is unknown, there are things that trigger seizures, such as stress, lack of sleep, starvation, and flashing lights. (1)

In order to control seizures, many patients are prescribed some type of medication; the prescription one receives depends on the type of seizure. For major attacks such as grand mal seizures, phenobarbitone and Dilantin are widely used. There are, of course, side effects to these medications, drowsiness and skin rashes being the most common. (4) The purpose of the medication is to control the number of seizures. In my mother's case, however, she constantly has to keep switching medications, because she frequently has grand mal seizures. First she was on phenobarbitone, but that did not seem to work, so now she takes Dilantin. The frequency of her seizures can be the result of the triggers and not necessarily the medication.

Epilepsy is the second most common neurological disease in the United States, affecting approximately two million people. (2) Each year, 125,000 to 150,000 more people are diagnosed with epilepsy. (2) Serious cases of epilepsy prohibit certain activities such as driving. Employment is sometimes difficult for epileptic patients to find, because employers feel that the patients are liable to accidents and will be more likely to take time off of work. (3) Epilepsy is a serious disorder that not only affects the brain but also limits the activities one can perform. When people watch "The Exorcist," it stirs up a lot of eerie feelings. Imagine living with a disorder that prevents you from controlling your actions. Which is scarier: watching it or living with it?

WWW Sources
1) Trileptal Home Page, A Good Web Source

2) MayoClinic Home Page, A Good Web Source

3) Aetna Intelihealth Home Page, A Good Web Source

4)Scott, Donald. About Epilepsy. New York: International University Press, Inc., 1973


Anxiety: Simply Stage-Fright or a Daily Demon
Name: Maja Hadzi
Date: 2004-02-27 14:56:05
Link to this Comment: 8543


Biology 202
2004 First Web Paper
On Serendip

Anyone who has ever been on a rollercoaster ride knows the sound of metal hitting metal as the safety bar bangs closed in front of you. A heavy sensation develops at the pit of your stomach as you are pulled up against gravity to the top of the ride. Fear and the weightless feeling of dropping at inhuman speeds soon follow. Your heart races, you feel the palms of your hands sweating, and you know you have no control over your fate at this point. Now imagine going through that every time you attempt a mundane daily task.

Anxiety is a part of the normal human palette of feelings and emotions, and everyone has experienced it at one point or another, whether it be the butterflies in one's stomach before a big performance, getting weak-kneed and tense before a date, or the fear of an approaching snake. Feelings of anxiety appear to be a normal part of biology and the human experience, a type of defense mechanism of the sympathetic nervous system, which operates on a "fight or flight" basis. For some people, however, these moments of anxiety are not the brief, mild, rare, and isolated incidents they are for most. Instead, anxiety is a constant and dominating force that goes far beyond occasional nervousness and severely disrupts their quality of life. Anxiety disorders are chronic, relentless, and can grow progressively worse if not treated. (2)

Anxiety disorders are the most common mental illness in the United States with 19.1 million adult Americans affected. The disorder manifests itself in a number of distinct but related forms that all share extreme debilitating anxiety at their core. The different types of anxiety disorders are as follows: Generalized Anxiety Disorder (GAD), Obsessive-Compulsive Disorder (OCD), Panic Disorder, Post-Traumatic Stress Disorder (PTSD), Social Anxiety Disorder (Social Phobia), and Specific Phobia (5).

Generalized Anxiety Disorder is characterized by excessive, unrealistic worry that lasts for at least six months. It is chronic and fills one's day with exaggerated worry and tension, even though there is little or nothing to provoke it. GAD symptoms also include trembling, muscular aches, abdominal upsets, insomnia, irritability, and dizziness. GAD rarely occurs alone; it is usually accompanied by another anxiety disorder, depression, or substance abuse. (1) "I always thought I was just a worrier. I'd feel keyed up and unable to relax. At times it would come and go, and at times it would be constant. It could go on for days. I'd worry about what I was going to fix for a dinner party, or what would be a great present for somebody. I just couldn't let something go." (2)

Obsessive-Compulsive Disorder involves anxious thoughts or rituals the individual feels they can't control. They are plagued by persistent and unwelcome thoughts or images and the urgent need to engage in certain rituals. Most people recognize that what they're doing is senseless, but they can't stop it. There is no pleasure in carrying out the rituals that they are drawn to, only temporary relief from the anxiety that grows when they don't perform them (1). "I couldn't do anything without rituals. They invaded every aspect of my life. Counting really bogged me down. I would wash my hair three times as opposed to once because three was a good luck number and one wasn't. It took me longer to read because I'd count the lines in a paragraph. When I set my alarm at night, I had to set it to a number that wouldn't add up to a "bad" number" (2).

People with Panic Disorder have feelings of terror that strike suddenly and repeatedly with no warning. Symptoms include heart palpitations, chest pain or discomfort, sweating, trembling, tingling sensations, a feeling of choking, fear of dying, fear of losing control, and feelings of unreality. When people's lives become so restricted that they avoid normal everyday activities such as grocery shopping or driving, the condition is called agoraphobia. (1) "For me, a panic attack is almost a violent experience. I feel disconnected from reality. I feel like I'm losing control in a very extreme way. My heart pounds really hard, I feel like I can't get my breath, and there's an overwhelming feeling that things are crashing in on me." (2)

Post-Traumatic Stress Disorder is a debilitating condition that can develop following a traumatic event. The main symptoms associated with PTSD are reliving the traumatic event in the form of flashbacks and nightmares, avoidance behaviors, emotional numbing and detachment from others, and physiological arousal such as difficulty sleeping, irritability, or poor concentration. (1) "Then I started having flashbacks. They kind of came over me like a splash of water. I would be terrified. Suddenly I was reliving the rape. Every instant was startling. I wasn't aware of anything around me, I was in a bubble, just kind of floating. And it was scary. Having a flashback can wring you out." (2)

Social Anxiety Disorder, also called Social Phobia, involves overwhelming anxiety and excessive self-consciousness in everyday social situations. People who suffer from it have a persistent, chronic, and intense fear of being watched and judged by others and of being embarrassed and humiliated by their own actions. While many people recognize that their fear may be excessive or unreasonable, they are unable to overcome it. (1) "In any social situation, I felt fear. I would be anxious before I even left the house, and it would escalate as I got closer to a college class, a party, or whatever. I would feel sick at my stomach-it almost felt like I had the flu. My heart would pound, my palms would get sweaty, and I would get this feeling of being removed from myself and from everybody else." (2)

A Specific Phobia is an intense fear of something that poses little or no actual danger. The level of fear is usually inappropriate for the situation and is recognized by the sufferer as being irrational. This inordinate fear can lead to the avoidance of common, every day situations (1). "I'm scared to death of flying, and I never do it anymore. I used to start dreading a plane trip a month before I was due to leave. It was an awful feeling when that airplane door closed and I felt trapped. My heart would pound and I would sweat bullets. When the airplane would start to ascend, it just reinforced the feeling that I couldn't get out. When I think about flying, I picture myself losing control, freaking out, climbing the walls, but of course I never did that. I'm not afraid of crashing or hitting turbulence. It's just that feeling of being trapped" (2).

It is interesting to examine how something that evolved to help us survive can, when out of balance, impede our daily life. Studies have shown that people with panic disorders might have a serotonin deficiency, or that serotonin isn't being used correctly by the body. Experts believe that anxiety disorders are caused by a combination of biological and environmental factors. (4) In general, there are two types of treatment available for anxiety disorders: medication and psychotherapy, which includes behavior therapy, cognitive therapy, and relaxation techniques. The goal of behavior therapy is to modify and gain control over unwanted behavior. Cognitive therapy is aimed at changing unproductive or harmful thought patterns. Relaxation techniques help individuals develop the ability to deal with the stress that triggers anxiety, as well as some of the physical symptoms associated with it. (5)

However, because anxiety is something that everyone will experience at some point, many people and certain cultures do not consider it an illness or a problem. They may see it as a personality flaw or a lack of self-control and willpower. As Irina Moissiu argued in her paper, "If it was not enough to subject adults to these ridiculous, socially constructed illnesses, we have decided to put our children through the same traumas." (3) As with any other disease, it is entirely possible for anxiety problems to be misdiagnosed. However, I think it would be more traumatic for a child to have the problem ignored and become overwhelmed as the anxiety escalates, rather than to have it recognized and to be taught, through psychotherapy, how to deal with it at an early age. Those who argue that anxiety disorders are not a problem do not realize that the frequency, intensity, and type of anxiety that a person with an anxiety disorder experiences is much different from the usual nervousness most stressed-out individuals feel from time to time. Thus it is unfair to compare the two on the same level.


References

1) Anxiety Disorders Association of America

2) National Institute of Mental Health

3) Anxiety Disorders

4) The Physiology of Panic Disorders, Part II

5) Treatments for Depression, Anxiety Treatments, and Stress Relief

6) Anxiety Disorder Association of Ontario

7) The Anxiety Panic Internet Resource


The Self; Social Yet Biological
Name: La Toiya L
Date: 2004-02-28 04:05:19
Link to this Comment: 8550


Biology 202
2004 First Web Paper
On Serendip

Socialization affects self-image in many ways. Socialization is how we learn and process the norms of our culture and take in the values and beliefs we are supposed to follow in order to develop a sense of who we are. Scientists and sociologists debate whether nurture or nature does more to shape our sense of self, and whether entwined sociological and biological theories better explain how and what is affecting us. What about these effects contributes to how you feel about your self?

Nature versus nurture: which one helps form our self-image? In the late nineteenth and twentieth centuries, scientists felt the stronger argument was that it was nature. Many scientists support Charles Darwin's theory of the survival of the fittest. "Fittest" is often misinterpreted as meaning the strongest, but it means much more than that. What Darwin actually meant was the best possible fit between organism and environment: the organism that fits best is the one that is most capable of adapting and using its strengths to meet the challenges presented to it. (1)

In today's constantly evolving society, humans, the social beings that we are, must rethink and reevaluate how we socialize and how we equip ourselves to do so. In our rapidly changing societies, the fittest persons will be those who survive through adaptation to social norms, knowledge, and conceptualizations. In this light, knowledge and how we use it to socialize and adapt are key. Thomas Spencer, a behavioral psychologist, has said, "The average worker of today will probably have to relearn his job five different times in his career." And he could be underestimating it significantly. Marshall McLuhan put it another way: "The future of work now consists of learning a living rather than earning a living." (1)

A study was done on twins who were separated at an early age. This experiment was supposed to reveal how heredity and social environment help form behavior; it concluded that nature and nurture shape us equally. Combining ideas and theories helps us understand and clarify things. Another way of approaching this question is through the lens of sociobiology, where people, including Darwin himself, have been speculating on how our social behaviors (and feelings, attitudes, and so on) might also be affected by evolution. (3) Sociobiology integrates theories and research from biology and sociology in an effort to better understand human behavior. Its main idea is that biology, genetics, and physiology help develop our characteristics. An example that demonstrates this is the process of early childhood socialization. At conception and during prenatal development, our DNA already determines what our sex, race, skin color, hair color, and eye color will be. Researchers believe that for the first two to three years a baby's brain is like a vacuum, ready to receive any knowledge available.

How does socialization affect self-image? Self-image is based on personality: a person's attitudes, feelings, and behavior. In Freudian theory there are three different parts to personality: the id, the ego, and the superego. The superego is the division of the unconscious that is formed through the internalization of the moral standards of parents and society, and that censors and restrains the ego (2); in other words, it is the conscience. The id consists of the basic drives essential to life, and the ego is what balances the demands of the id against the restraints of the superego. According to Freud, socialization was due to internal factors, not the environment.

Socialization is different for every person, and it changes over time. For instance, the socialization of women has changed: women were once viewed as little more than objects, and are now viewed as role models. The norms of the culture once held that women should not think for themselves and should just stick to their daily chores, and their lives would be fulfilled. But women today are socialized to do things for themselves in society and to have aspirations in life.

In my culture, the norms and beliefs that we were taught are opposite to the type of person I am now. When you're a child you are taught that everything your parents teach you is right and that they are never wrong, even though they know eventually you'll realize no one is always right. As a child I was a bit troublesome, with all my questions and my testing of what I was told. For example, when I was told I couldn't wear pants to church, I was the brat that would rebel and sit with my legs open in a skirt until my mom let me wear pants to church. That personality, in a much more mature way, is still a part of who I am. My ability to think freely is what liberates me, and it also gives me the strength and foundation to challenge myself in different situations. How I carry myself and, more importantly, how I challenge myself reflect not only my socialization but also my behavior. Socialization affects us in many ways far beyond the visible. Our individual socialization patterns shape our mentalities. The things we individually experience in society directly affect our minds, which explains why our minds register and react differently to the incidents and situations we encounter.


References

1)Waking Up in the Age of Information,

2)Dictionary.com,


3)Sociobiology, Excellent site!!! Very well written :)

The writings of Charles Darwin on the web


Sociobiology Another great site if you like - or want to like sociobioloy.


LSD- Origins and Neurobiological Implications
Name: Michael Fi
Date: 2004-02-28 18:14:45
Link to this Comment: 8555


Biology 202
2004 First Web Paper
On Serendip

D-lysergic acid diethylamide, commonly known as LSD, was discovered serendipitously in 1938 by Dr. Albert Hofmann during an attempt to synthesize coramine, a circulatory and breathing stimulant. (1) The compound was considered chemically uninteresting and was ignored until 1943 when Hofmann, while reopening his research on lysergic compounds, accidentally ingested some of it and "suddenly became strangely inebriated. The external world became changed as in a dream. Objects appeared to gain in relief; they assumed unusual dimensions; and colors became more glowing. Even self-perception and the sense of time were changed." (2) How was it that Hofmann, who subsequently became the father of psychopharmacology, hallucinated after ingesting d-lysergic acid diethylamide? How was his perception of reality changed? Most importantly, how did LSD affect his central nervous system, physically and otherwise, in order to bring about these effects, and what do these effects imply about the central nervous system and the neurobiology of behavior as they relate to an alteration or a divergence of consciousness?
Physical Response
LSD is a molecule comprised of four fused rings and three notable functional groups: two ethyl groups and a methyl group. The structure of LSD bears a striking similarity to that of serotonin, a molecule centrally involved in the determination of mood. (3) This structural similarity offers a useful explanation for the brain's receptivity to LSD. Carbon-14 labeling of ingested LSD shows that about 10% of the LSD molecules ingested by a subject pass through the blood-brain barrier and bind to serotonin receptors in the hypothalamus. (4) The hypothalamus is part of the limbic system, which has a diverse array of functions associated with homeostasis, movement, and, more importantly here, emotion and the organization of responses. (5) Once an LSD molecule binds to a serotonin site, it alters the responsiveness of the subject's neurons. A hallucinogen produces the sensory distortion known as hallucination by lowering the threshold at which nerves produce a response signal: neurons that normally require a large chemical stimulus to fire instead produce signals at the slightest chemical prompting. (6) This increased volume of neuron activity and signaling means more sensory information is being sent to the brain than it can handle.
The consequence of this mechanism is that LSD molecules, when introduced into the system, can become inhibitors of serotonin. This may cause depression, depending on other factors. However, non-hallucinogenic LSD derivatives, such as 2-brominated LSD, can be used as serotonin inhibitors to control chemically based psychological disorders. (7)
Consciousness and Mind Expansion
If the hypothalamus, a center of organizational control and emotion, is adversely affected by the binding of LSD to its serotonin receptor sites and functions irregularly, the outward effects of LSD seem sensible. However, this explanation of neurochemical phenomena barely begins to address the idea of altered and different forms of consciousness. Once one is able to see sounds, hear smells, and experience a trip outside of one's normal neurological configuration, one could truly say one has experienced a different form of consciousness. (8) Could thoughts generated during an acid trip have been generated under "normal" conditions? If consciousness is merely a function of the pattern or manner of impulse generation and reception, can consciousness be electrically manipulated?
The most profound manifestation of this difference in consciousness is the flashback, in which an individual returns unexpectedly to the mental state of an acid trip. It is unclear whether residual LSD molecules are involved in a flashback, but a flashback, with its deviation from an individual's perceived reality, provides an excellent juxtaposition between the individual's normative consciousness and the consciousness generated by LSD. The flashback concept also introduces the idea of an LSD placebo of sorts: a brain can generate an LSD-like state of consciousness without the aid of the drug itself, showing an ability to redirect the processing of neuron impulses in ways usually thought to be automatic.
Ultimately the barrier to LSD research is the inherently philosophical nature of the drug itself (not to mention its illegality). The realms of consciousness reserved for psychology are yet to be blended with the realms of neurophysiology and biochemistry. LSD is peculiar amongst drugs in that it produces emotions and sensations which bend the realm of ordinary human conceptions of consciousness and defy chemical and scientific description at our current level of scientific advancement.


References

1) "Stanislav Grof interviews Dr. Albert Hofmann, Esalen Institute, Big Sur, California, 1984," MAPS, Volume XI, Number 2, Fall 2001.

2)Hofmann, Albert. LSD- My Problem Child. McGraw Hill: New York, 1980.

3)C. D. Nichols, J. Ronesi, W. Pratt and E. Sanders-Bush, "Hallucinogens and Drosophila: Linking Serotonin Receptor Activation to Behavior," Neuroscience, Volume 115, Issue 3, 9 December 2002, Pages 979-984

4) "Stanislav Grof interviews Dr. Albert Hofmann."

5) David B. Givens, "The Hypothalamus, "Center for Nonverbal Studies, 2001.

6) Anna Bacon, Heather Cagle, Paul Mikowski, Michael Rosol, "The Effect of LSD on the Human Brain," Michigan State University, 1996.

7)See "Stanislav Grof interviews Dr. Albert Hofmann" as well as the journal article by Watts, Val J.; Lawler, Cindy P.; Fox, David D.; Neve, Kim A.; Nichols, David E.; Mailman, Richard B. "LSD and structural analogs: pharmacological evaluation at D1 dopamine receptors." Psychopharmacology (Berlin) 
(1995),  118(4),  401-9.  CODEN: PSCHDL  ISSN: 0033-3158.  Journal  written in English.    CAN 123:74809    AN 1995:603436    CAPLUS.

8) National Institutes of Health, "NIDA factsheet," 2003.


In Search of the Neural Substrate of Humanity
Name: Emily Haye
Date: 2004-03-01 18:17:11
Link to this Comment: 8597


Biology 202
2004 First Web Paper
On Serendip

Introduction: Ontogeny Recapitulating Phylogeny

The idea that ontogeny recapitulates phylogeny is both a catchphrase and backbone of evolutionary study. I assume, in my search for the neural root of what we call "humanity," that this assertion is true, that the development of an individual organism in many ways mirrors the evolution of its species. If this is true, then there must be some point in human neurological ontogeny that defines us, a point at which our own development stops mirroring that of our closest mammalian cousins, the chimpanzee and other higher apes. At this point, our development begins to mirror the next leg of our phylogeny: the evolution of extinct hominids. Therefore, again, there must be a point in our ontogeny at which we separate from even the most recent of these ancestors. It is this point, the point in our ontogeny that mirrors the point at which we became fully human, or distinct in derived characteristics from our hominid ancestors, that must be derived from a specific neurodevelopmental event, likely the full maturation of a specific and unique brain structure. It is this point that can lead us to the neural substrate of humanity. What is this derived characteristic? Are there many? What are their neural correlates? This is what I hope to discover.

Human Development: Ontogeny (1, 2)

I will use the evolution of a child's play to represent development in general cognitive function. I do this because as a child's play evolves with age it becomes increasingly symbolic. We will see that this symbolism is integral in finding the neural substrate of humanity.

From the age of eighteen months to approximately three years, a child's play develops within certain parameters. Toddlers are very physically oriented; they do not so much play as do. They imitate: people, dogs, birds, cars, anything exhibiting interesting actions or sounds. This imitation is not symbolic, however. The child is not pretending to be a dog. Rather, she is making the sound a dog makes. She is getting to know what "dog" is in her environment. This may seem to be a minor distinction, but it is an important one.

During the toddler phase, a child is also beginning to develop the capacity for language. (3) This, too, follows a pattern of imitation and doing. The language of a toddler progresses from babbling and repetition to the expression of simple mental activities, like hunger, in simple verbal phrases usually lacking syntax (i.e., "want milk"). This lack of syntax correlates with a lack of the full symbolic nature of language. The child is very oriented, both in speech and play, around the present. She imitates things she has recently seen or heard and names things within her immediate environment. She may recall these things from memory, but her relationship with the world is not yet fully matured, and therefore her capacity for symbolic cognition is not fully developed.

Around age three, a child begins to play pretend. She is still tied to the present in that she needs props for her games; she does not mime a phone but rather needs a play phone in order to pretend to have a phone conversation. In this way, she is not yet fully removed from the constraints of time and place, as are cognitively mature humans. At this stage, the child will also begin to assume roles in her play, but they are ones of concrete and immediate importance: mommy, daddy, and baby. This development is significant because symbolic representation, or the association of meaning with arbitrary symbols, be they auditory or pictorial, is a capacity only of the evolved intellect (18). There is, however, some confusion between reality and fantasy. She is beginning to be able to leave the present in her games of pretend, but they are not sustained, they require props, and she does not cognitively distinguish them fully from reality.

It is important to the understanding of this stage to understand what is happening in the child's development of language. During this time, the child's language is becoming more complex, her syntax more complete. She now expresses her desire for milk by saying, "give me milk," or "I want milk," sentences rather than phrases, with subjects (one of them implied), verbs, and direct and indirect objects. This is far more advanced than the "want milk" of a toddler. While this example does not correlate directly to the examples of play development above, the more complex language it represents does. One of the major results of human language is that it allows us to be free of present time and space in that we can discuss things absent, past, future, and intangible (19). This allows us to think (which we do in language) about what is going on, to project into the future the consequences of a present action, and to evaluate risk and benefit. In other words, we are freed, to some degree, from the reflexive and instinctive reactions of other animals. We are able to willfully control our responses to certain stimuli. If you extend this idea of freedom from the immediate, you will see the basic outline of how humans migrated out of the tropics, tamed the environment through agriculture, and developed art, religion, philosophy, etc. So again, while the "give me milk" of a three-year-old may seem a menial developmental step, it is an important step on her path to full cognitive maturity.

The ages of four and five bring the culmination of the child's cognitive development. By the end of these phases, she has the basic capacity for mature human cognition; in other words, her capacity for symbolic representation is complete.

Around age four, a child's distinction between reality and pretend, which was cloudy at age three, solidifies. She begins to exhibit "sophisticated role-taking"; the family in a game will expand to include the dog and cat (1). She becomes less physically constrained in her play. She no longer needs a toy phone to hold an imaginary conversation, but may instead pretend that a banana or a block is a phone. Also, with the full language of her age, the child can engage in "cooperative play" (1), in which the idea for a game is communicated among and shared by all the players. Up until this point, play was "parallel": while several toddlers may be playing in the same vicinity, they are not sharing their games. The full language capacity that comes around age four is integral in the development of cooperative play; the children can now express their own imaginings and plans to the others in order to engage them.

Age five sees more complex games of pretend and cooperative play, but also the important arrival of our final developmental step: the ability to solve problems verbally. At this age, a child is able to put complex mental activity, like the desire for a toy (rather than a survival basic like food), into words and to use her words to obtain the toy (to solve a problem, namely not having the toy). To an adult, this seems an obvious use for language, but in the development of a child it is a huge step. Once she has developed the ability to use the full symbolic character of language in communicating about imaginary games and to use language to solve problems, we will call the child, for our purposes, cognitively mature. There are many other steps in cognitive development leading to a fully mature adult mind, but for our purposes these can be ignored. We will see why shortly.

What We Know About Apes and Hominids: Phylogeny

The cognitive faculties of a human two-year-old have been compared to those of a chimpanzee, in that both operate using a "general intelligence," or "simple, general-use computer program," about the world (4). Like a two-year-old, a chimpanzee may know what a phone is and what a banana is, but neither would use a banana to represent a phone. Here we see a divergence in the ontogeny and phylogeny of humans: there is evidence that our closest living mammalian relative is the chimpanzee, but very early in our own development (at age four) we diverge cognitively from this close relative. This means that the human capacity to "use" a banana as a phone (in other words, our ability to pretend or imagine) is a derived characteristic; it is not shared with our close relative. But is this characteristic derived only from the chimpanzee, or from our hominid ancestors as well?

This is not an easy question to answer. We cannot put an extinct hominid in a lab, expose him to bananas and telephones, and then see if he talks into the banana as he had seen people do into a phone. However, there is evidence we can use to deduce whether a hominid would have been cognitively capable of doing this.

Language is the tool I will use to deduce whether a hominid would have been capable of the banana/phone trick. I will do this in a roundabout fashion, without analyzing the endocasts of various extinct hominid species. Rather, I will use two major pieces of archaeological evidence to glean a general idea of hominids' capacity for language.

KNM-WT 15000, or Nariokotome Boy (5), is a hominid specimen that has clarified, for some, the issue of whether his species, Homo ergaster, was linguate (the word KNM-WT 15000 expert Walker (6) uses to describe "having the capacity for full language"). It was originally thought that Broca's area was sufficient evidence for linguacy in hominids. It was known that Broca's area played some role in human language, and therefore it was assumed that a bump in the region of Broca's area on a hominid endocast was evidence of language in that species.

Walker's study of Nariokotome Boy changed this hypothesis. With the advent of PET scans, it has been discovered that while Broca's area is involved in human language, it is not the only center of high metabolic activity during language tasks; in other words, Broca's area is not solely responsible for language. This meant that Nariokotome Boy, though he had a Broca's area, was not necessarily linguate. Close osteological study revealed that the foramina in 15000's thoracic vertebrae were significantly smaller than those in modern humans. This implied that Nariokotome Boy did not have the capacity for the complex muscle control, specifically of the diaphragm, needed to produce the full range of sounds involved in human speech. If he did not have the capacity for human speech, then he certainly did not have the capacity for human language. (6)

The next piece of evidence is the center of heated anthropological debate: the date of the appearance of anatomically modern humans. (I refer to the early members of our species as "anatomically moderns" in order to avoid entanglement in the Homo sapiens v. Homo sapiens sapiens argument (7).) Anatomically moderns have been dated by some as early as 100,000 years ago in Africa and Asia, where the evidence for this speciation is the appearance of tool industries not associated with preceding hominid forms (7). The emergence of anatomically moderns in Europe is generally dated about 50,000-60,000 years later, when they replaced or evolved from Neandertals (7), (8). For my purposes, I assume that anatomically moderns appeared at the later date, 40,000 years ago, subscribing to the school of thought that describes human evolution in terms of two "out of Africa" waves and implying that Homo neanderthalensis is a distinct and extinct side branch of human evolution, rather than a direct evolutionary predecessor to H. sapiens. Within this context, while the appearance of bone and more advanced stone tool industries is evidence of higher cognitive function in their makers, it is not significant enough to place anatomically moderns at the dates coinciding with these tools (7). Rather, the event marking the appearance of anatomically moderns occurred in Europe around 40,000 years ago: what is known as a "cultural explosion" (9). In simplest terms, art appeared.

This is why art, rather than advanced toolmaking, is the defining behavior of anatomically moderns: art is symbolic representation. Symbolic representation is a cognitive behavior possible only with fully developed language (itself evidence of symbolic representation, as discussed in the section on ontogeny). Nariokotome Boy, who, remember, was not capable of human language, was a member of Homo ergaster (often grouped within H. erectus). In the timeline of hominid evolution (10), H. erectus is the species directly preceding H. sapiens. It has been concluded that the evolution of the human vocal tract, necessary for full speech and therefore for language, would have been slow (11). Also, as the appearance of the human vocal tract would be a derived characteristic, worthy of attributing any specimen possessing one to a species separate from H. erectus, I conclude that this species is H. sapiens. Anatomically moderns, and they alone, are capable of full human speech and therefore human language. It follows, then, that only anatomically moderns are capable of the symbolic representation that removes them from chronological, spatial, and biological immediacy (see section on Ontogeny), and that they alone are the creators of the art appearing 40,000 years ago. In other words, we have found our derived characteristic: symbolic representation and its resulting independence from chronological, spatial, and biological immediacy. This would have allowed for agriculture, which emerged approximately 10,000 years ago (12), as well as science, philosophy, religion, etc., all of which require language and the capacity for symbolic representation.

Conclusions: Recapitulation and Neural Correlates

I have demonstrated that symbolic representation, as manifested in art and language, is the derived behavioral characteristic of humanity, as it is the basis for all other things which we consider to be "human": science, math, philosophy, religion, theology, civilization (which is based on agriculture), etc. The appearance of the full capacity for symbolic representation late in human ontogeny (not until ages 4-5) implies that this capacity arose late in human phylogeny. I have demonstrated that this is indeed true, the capacity for symbolic representation being a faculty of anatomically modern humans alone. I have discussed some behavioral manifestations of symbolic representation: language, art, religion, etc. This raises the question: if brain equals behavior, then what is the neural correlate of these behaviors? What is the neural substrate of these solely human behaviors?

Recent research has demonstrated that the human brain has no more cerebral cortex than would be expected of a primate of our brain size (13). However, the human encephalization quotient (EQ) is 7.44, which means that, for body size, humans are more than seven times as encephalized as would be expected (14). But simply having a lot of brain can't account for symbolic representation; there must be something unique about this large quantity of brain that is the correlate.
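The arithmetic behind an EQ figure like 7.44 can be made concrete. The sketch below uses Jerison's classic allometric baseline for mammals (expected brain mass ≈ 0.12 × body mass^(2/3), masses in grams); the human brain and body masses plugged in are my own illustrative assumptions, not figures from the cited source, so the result lands near, rather than exactly at, 7.44.

```python
def expected_brain_mass_g(body_mass_g):
    # Jerison's allometric baseline for mammals: E = 0.12 * P^(2/3), masses in grams
    return 0.12 * body_mass_g ** (2.0 / 3.0)

def encephalization_quotient(brain_mass_g, body_mass_g):
    # EQ = observed brain mass / brain mass expected for an animal of this body size
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Illustrative inputs (assumed): roughly a 1.4 kg brain in a 63 kg body
human_eq = encephalization_quotient(1400.0, 63000.0)
```

With these inputs the quotient comes out a bit above seven; the exact value depends on the reference masses and allometric constants chosen, which is why published human EQ figures vary around this range.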

Significant research has been done on the prefrontal cortex in humans and great apes in an effort to discern differences. In 2001 Semendeferi et al. published their findings regarding Area 10 of the prefrontal cortex, one of the regions to which "higher cognitive functions such as the undertaking of initiatives and the planning of future actions" have been attributed (15). Semendeferi et al. discovered that the GLI (gray-level index) of humans is unique among hominoids (humans and great apes). This means that humans have more room for connections among neurons than do the great apes. To me, this implies the following: that higher cognitive functions, facilitated by symbolic representation, arise from the huge number of connections in the human brain. Somehow, this must relate to, if not solely be, the neural substrate of humanity.

While research on the uniqueness of the human brain seems to be concentrated in the prefrontal and visual cortices, I would assert that the temporal lobe may yield interesting findings as well. Dr. V.S. Ramachandran is conducting fascinating research at the University of California, San Diego, on the role of the temporal lobe in human spirituality (16), (17). I have argued that the practice of religion is one of the uniquely human behaviors made possible by symbolic representation. As spirituality is the foundation of religious practice, it is likely that some important findings regarding symbolic representation could result from further study of the temporal lobe.

I am in no way educated enough in the methods and knowledge of modern neuroscience to be able to draw a highly credible conclusion about what I am calling the neural substrate of humanity. In accordance with the research that I have done, however, it seems to me that all things which we consider to be human, all things we do in excess of survival, are facilitated by or directly associated with symbolic representation and language. It makes sense to me, then, that the neural correlates for these behaviors, being the result of a complex and advanced cognitive function, would lie in the areas associated with higher cognitive functioning, namely the frontal, and as I have suggested, temporal lobes. It seems, also, that the huge EQ of humans and the large degree of connection shown by Semendeferi et al. would have something to do with the generation of these higher functions.


References


1) Dehouske/Schomburg, educational chart on human cognitive development, Carlow College, 3/15/80.

2) Personal interview with Nancy Hayes, Masters Equivalent in Early Childhood Education, 2/26/04

3) The Development of Children, 2nd Ed., Michael and Sheila Cole. Scientific American Books, New York: 1993.

4) Patricia Greenfield, in The Prehistory of the Mind, Stephen Mithen. Thames and Hudson, New York: 1996.

5) Nariokotome Boy, a description of specimen KNM-WT 15000.

6) The Wisdom of the Bones: In Search of Human Origins, Alan Walker and Pat Shipman. Knopf, New York: 1996.

7) Human Evolution: Summary of the Debate.

8) Indiana University, Archaeology Page.

9) The Prehistory of the Mind, Stephen Mithen. Thames and Hudson, New York: 1996.

10)

11) "On the Nature and Evolution of the Neural Bases of Human Language," Philip Lieberman. Published in Yearbook of Physical Anthropology 45:36-62, 2002.

12) a paper on the Neolithic Agricultural Revolution

13) Development of the Cerebral Cortex.

14) Comparative neuroanatomy site on Serendip.

15) "Prefrontal Cortex in Humans and Apes: A Comparative Study of Area 10," Katerina Semendeferi et al. 2001. Available at:

16) "A 'God-module' in the human brain?" Published in: Perspectives: A Journal of Reformed Thought v.14 n.2 (1999) p. 17, 23. Available at:

17) Phantoms in the Brain, V.S. Ramachandran, M.D., Ph.D, and Sandra Blakeslee. William Morrow and Company, Inc, New York: 1998.

18) Davis, Rick, November 22, 2002. Class notes from Anthropology 101 at Bryn Mawr College, Bryn Mawr, PA.

19) The Ape That Spoke: Language and the Evolution of The Human Mind, John McCrone. William Morrow and Company, Inc, New York: 1991.


SSRIs: Successes and Questions
Name: Mariya Sim
Date: 2004-03-05 18:59:51
Link to this Comment: 8713


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


If you go to any pharmacy on the Main Line and look behind the counter, where they keep the frequently-asked-for drugs, you will see that, apart from one or two birth control brands, all the dispenser labels read "Amitril," "Prozac," "Zoloft," "Paxil," "Lexapro," etc. There is nothing special about the Main Line; it does not have an uncanny ability to attract emotionally imbalanced people. Its pharmacies simply reflect a worldwide and perpetually worsening epidemic called depression. Called by some "the cancer of emotions," depression affects approximately twelve percent of American women and eight percent of American men in their lifetime (1). Although the causes of depression are unknown, a range of effective antidepressants is available and is widely used by psychiatrists to treat various subtypes of depression. Moreover, it is frequently said that pharmacological treatment of depression has a great advantage over a merely psychotherapeutic approach, and that any lasting effect and remission can only be achieved if the patient combines an antidepressant with therapy (2).

There are several classes of antidepressants, but I would like to focus on selective serotonin reuptake inhibitors (SSRIs), which were developed in the late 1980s (3) and which are, perhaps, the most popular antidepressants currently used. The reason for this popularity is not so much their effectiveness as compared with other drugs (for example, with monoamine oxidase inhibitors or tricyclic antidepressants) – studies show that their efficacies are similar (4) – but, rather, SSRIs' significantly smaller range of side effects (5). Patients taking SSRIs are more likely to complete the full course of treatment and, therefore, are more likely to reach remission.

This relative safety and tolerability of SSRIs are due to their selective action. Most antidepressants work by reestablishing communication between neurons through increasing the available level of neurotransmitters in the synaptic cleft. While other antidepressants affect several factors of the communication process (and some of their actions are unclear to researchers), SSRIs' action is focused strictly on the reuptake of serotonin by the presynaptic neuron, providing less leeway for possible side effects. By inhibiting the work of the serotonin reuptake transporter of the presynaptic cell, SSRIs increase the level of serotonin in the synaptic cleft, thus increasing both the time during which serotonin can bind to the postsynaptic cell's receptors and the quantity of serotonin molecules in the cleft. (3)
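The reuptake story above can be caricatured in a toy kinetic model (my own illustration, not a model from the cited sources): treat cleft serotonin as a single pool filled at a constant release rate and cleared in proportion to transporter activity. Blocking a fraction of the transporters lowers the clearance constant, and the steady-state serotonin level rises accordingly. All rate constants here are arbitrary.

```python
def steady_state_serotonin(release_rate, reuptake_rate, dt=0.01, steps=100000):
    """Euler-integrate ds/dt = release - reuptake_rate * s until it settles."""
    s = 0.0
    for _ in range(steps):
        s += dt * (release_rate - reuptake_rate * s)  # constant release minus transporter clearance
    return s

baseline  = steady_state_serotonin(1.0, 1.0)  # full transporter activity
with_ssri = steady_state_serotonin(1.0, 0.4)  # 60% of transporters blocked
```

At steady state s = release_rate / reuptake_rate, so a 60% blockade multiplies cleft serotonin by 2.5 in this caricature; real synapses add autoreceptor feedback, diffusion, and receptor kinetics on top.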

Although patients treated with SSRIs often reach recovery, the assertion that serotonin deficiency is at the root of depression is not only arguable but most likely false. Several studies have shown that, although the connection between serotonin and depression is evident, it is by no means clear how the level of this neurotransmitter affects the condition, or whether it is even an important factor for all patients. Thus, the tryptophan depletion test, which allows researchers to reduce the level of serotonin in test subjects, shows that only 50 percent of healthy subjects with a prior history of depression suffered a relapse after their level of serotonin dropped. Evidently, this parameter is essential only for about half of depressed patients. Moreover, depression was not induced in the healthy subjects who had never had depression prior to the test, which suggests that serotonin deficiency is not the cause but, most likely, itself one of the effects of depression. (1)

Another interesting dilemma is the fact that symptoms of depression can be alleviated not only by inhibiting the reuptake of serotonin, thus increasing its level in the synaptic cleft, but also by enhancing reuptake, thus lowering the level of the neurotransmitter. For instance, tianeptine, a drug available in Europe, is as effective as most antidepressants, but its mechanism of action is the opposite of the SSRIs'. (1), (9) This once again highlights our current ignorance both of the cause of depression and of the exact pathways of effective treatments.

Yet another SSRI mystery is the time needed for them to take effect. In vitro studies show that they stop the uptake of serotonin into presynaptic neurons as soon as their plasma level reaches the needed mark (which should take 2-8 hours) (4), but the actual therapeutic effect of alleviating depressive symptoms does not show until 2-6 weeks after the start of treatment. (7), (8) This discrepancy may, in fact, provide an insight into various possible causes of depression that either stem from or influence the level of serotonin in the synaptic cleft.

One hypothesis that attempts to account for this time discrepancy proposes that the increased amount of serotonin in the synaptic cleft activates both the postsynaptic receptors and the autoreceptors of the presynaptic cell. The latter decrease the level of serotonin released by the presynaptic cell, not allowing serotonin to build up in the cleft. Over time, the autoreceptors become desensitized, the serotonin release is increased, and the therapeutic effect of the drug is noticeable. (11)
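This desensitization hypothesis lends itself to a toy simulation (my own sketch; the functional forms and rate constants are invented for illustration, not taken from (11)). With reuptake already inhibited from the first dose, cleft serotonin rises only modestly while the autoreceptor "brake" on release is intact, then climbs further as the brake slowly desensitizes over a much longer timescale:

```python
def serotonin_under_ssri(t_end=300.0, dt=0.01, desens_rate=0.01):
    """Track cleft serotonin while autoreceptor sensitivity slowly decays."""
    s, sens = 1.0, 1.0        # cleft serotonin; autoreceptor sensitivity
    trace = []
    for _ in range(int(t_end / dt)):
        release = 1.0 / (1.0 + sens * s)    # autoreceptor brake throttles release
        s += dt * (release - 0.4 * s)       # reuptake already inhibited (low clearance)
        sens += dt * (-desens_rate * sens)  # slow desensitization of the brake
        trace.append(s)
    return trace

trace = serotonin_under_ssri()
early, late = trace[1000], trace[-1]  # shortly after dosing vs. much later
```

The long-run ceiling here is release/clearance = 1/0.4 = 2.5, approached only once the autoreceptor term fades, mirroring the lag of weeks described above even though the drug acts on the transporter within hours.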

Another proposed explanation is that the therapeutic lag may be related to the number and sensitivity of postsynaptic receptors. In depressed patients, 5-HT (serotonin) receptors in the postsynaptic cell are up-regulated to compensate for the lack of serotonin. Studies show that SSRI treatment causes down-regulation of the receptors at first, and then their activity is finally balanced. It is interesting that the time it takes for the receptors to begin functioning normally is consistent with the length of the therapeutic lag. (11) This hypothesis suggests that the causes of depression are connected not to the level of serotonin but, rather, to receptor activity.

But perhaps the most interesting story currently used to account for SSRIs' therapeutic lag is the connection between serotonin and the hypothalamic-pituitary-adrenal (HPA) system, which is involved in the human stress response. Stressful events cause neurons in the hypothalamus to release corticotrophin releasing factor (CRF), a 41-amino-acid neuropeptide, into the blood. CRF affects the anterior pituitary, which responds by releasing adrenocorticotrophic hormone (ACTH), which is then transported to the adrenal gland, which produces glucocorticoids (cortisol in humans). Cortisol, in its turn, influences the anterior pituitary, hypothalamus, and hippocampus through glucocorticoid receptors – a negative feedback process that maintains a normal level of cortisol in the nervous system. In response to stressful events, cortisol levels rise, providing the organism with extra energy and increasing alertness. CRF-producing neurons are found not only in the hypothalamus but also throughout the central nervous system – in the cerebral cortex, amygdala, and the brain stem (including the locus ceruleus and raphe nuclei – the sites of origin of norepinephrine and serotonin neurons). In addition to regulating the release of ACTH, CRF appears to function as a neurotransmitter, mediating the endocrine, immune, autonomic, emotional, and cognitive responses to stress. (1), (10)

Studies document a substantial increase in the levels of CRF, ACTH, and cortisol in depressed patients, along with an anatomical increase in the number of CRF-producing neurons. These constantly high levels cause the downregulation of the glucocorticoid receptors ("glucocorticoid resistance"), and this imbalance may lead to the development of depressive symptoms, although the exact reasons for that are not clear. (1), (10) Imbalanced glucocorticoid receptor activity is also thought to decrease cell resilience, increase cellular death, and decrease neurogenesis, ultimately leading to decreased hippocampal volume (8). Laboratory animal studies show that antidepressants, including SSRIs, normalize glucocorticoid receptor activity and indirectly influence cell survival and cell plasticity. These effects of antidepressants take about two weeks, which may explain their therapeutic lag in humans. (1), (8), (10) Antidepressants, including SSRIs, may not only eliminate depressive symptoms but also help reduce stress vulnerability. However, studies suggest that chronic antidepressant treatment is needed in order for these effects to be stable. (1), (8)
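The HPA cascade and its feedback loop can be sketched in a few lines of code (again my own toy model, with invented linear kinetics and arbitrary units, not a model from the cited studies): CRF drives ACTH, ACTH drives cortisol, and cortisol feeds back to suppress CRF. Weakening the feedback gain, a stand-in for glucocorticoid resistance, leaves the system settling at a higher cortisol level, the qualitative pattern reported in depressed patients.

```python
def hpa_cortisol(feedback_gain, drive=1.0, dt=0.01, t_end=500.0):
    """Settle a linear CRF -> ACTH -> cortisol cascade with negative feedback."""
    crf = acth = cort = 0.0
    for _ in range(int(t_end / dt)):
        d_crf  = drive - feedback_gain * cort - crf  # cortisol suppresses CRF release
        d_acth = crf - acth                          # pituitary: CRF drives ACTH
        d_cort = acth - cort                         # adrenal: ACTH drives cortisol
        crf, acth, cort = crf + dt * d_crf, acth + dt * d_acth, cort + dt * d_cort
    return cort

normal    = hpa_cortisol(feedback_gain=1.0)  # intact glucocorticoid feedback
resistant = hpa_cortisol(feedback_gain=0.2)  # blunted feedback ("glucocorticoid resistance")
```

At equilibrium cortisol = drive / (1 + feedback_gain), so cutting the gain from 1.0 to 0.2 raises steady cortisol from 0.5 to about 0.83 in these arbitrary units.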

Several important lessons can be drawn from the history of the use of SSRIs. First of all, contrary to popular assumptions, the existence of a successful (or partially successful) drug does not imply that medical researchers are clear on the origin of the disease. It is far more likely, as is the case with antidepressants, that effective treatments will be developed as a result of chance observation, and that the existence of appropriate drugs and the ability to monitor their effect on the patient will ultimately lead to an understanding of the causes of the illness. Secondly, the widespread prejudice against "dirty drugs," those that affect not just one or two but several biological processes (often including poorly understood ones), is unwarranted: such drugs are not necessarily inferior to "clean drugs," which exert influence over a much smaller range of biological processes. Indeed, the more "dirty" tricyclic antidepressants may be more effective in some cases than the "clean" SSRIs. Thirdly, the inability of SSRIs to effectively cure about half of patients suggests that there cannot exist a single universal medication, whether for depression or for other illnesses, due to biological differences between humans. Fourthly, the study of the successes and, more importantly, of the failures of SSRIs has led researchers to surmise that what we term depression may be not one but, in fact, multiple disorders, having distinct paths of origin and, consequently, necessitating different treatments. The popular assumption that SSRIs may be a cure-all for depression is thus challenged. Fifthly, and most provocatively, the inquiries into SSRIs' actions suggest that there is no single cause even for individual subtypes of depression, but, rather, that multiple processes – environmental, genetic, intracellular – in multiple parts of the brain combine (and this combination may differ between individuals) to produce depressive symptoms.
Overall, serotonin imbalances may be just one of the "final pathways" (1) that multiple causes of depression take, and it is lucky that modifying secondary serotonin imbalances may affect primary processes involved in depression. All of this taken into account, there is a need to continue searching for new understandings of the causes of depression and to develop new medications that will include serotonin-regulating effects, but whose mechanisms of action will also include directly modifying other imbalances involved in depressive disorders.

References

Web Sources:
1) Noha Sadek, MD, Charles B. Nemeroff, MD, PhD. "Update on the Neurobiology of Depression.", on Medscape site (from a collection of articles from Clinical Update, an online journal for continuing education of medical professionals).
2) All About Depression, a website with a general review of the causes and treatments of depression.
3) Charles B. Nemeroff, MD. "Neurobiology of Depression." Scientific American, June 1998. Web access to the archived article available from BMC campus.
4) Stuart A Montgomery. "Selective Serotonin Reuptake Inhibitors in the Acute Treatment of Depression.", on The American College of Neuropsychopharmacology site.
5) Barbui C. et al. "Treatment discontinuation with selective serotonin reuptake inhibitors (SSRIs) versus tricyclic antidepressants (TCAs).", on Medscape site (from a collection of articles from WebMD Scientific American® Medicine online textbook for continuing education of medical professionals).
6) Thomas AM Kramer, MD. "Mechanisms of Action.", on Medscape site (from a collection of articles from Medscape General Medicine online journal for continuing education of medical professionals).
7) "Depression: Beyond the Catecholamine Theory of Mood.", a comprehensive site developed for a University of Plymouth psychology course.
8) Husseini K. Manji et al. "The Cellular Neurobiology of Depression.", from Nature Medicine online.
9) David Gutman, BS, Charles B. Nemeroff, MD, PhD. "The Neurobiology of Depression: Unmet Needs.", on Medscape site (from a collection of articles from Clinical Update, an online journal for continuing education of medical professionals).
10) Juan F. Lopez, MD. "The Neurobiology of Depression.", on The Doctor Will See You Now website.
11) "Neurobiology of Depression.", a handout for a San Diego State University psychology course with a comprehensive discussion on neurobiology of depression.


An Ethical Minefield: Stem Cells
Name: Allison Br
Date: 2004-03-06 03:04:45
Link to this Comment: 8714


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Stem cells derived from either embryos or adults do not constitute human life. Therefore, stem cells should not be afforded the same protection as human life. The purpose of my analysis is to examine particular ethical questions surrounding stem cell research. Though I am fully aware of the benefits and risks of stem cell research, I am not going to explore the science or the results of various research studies.

Stem cells provide the foundation for every organ, tissue, and cell in the body to develop. Three major types of stem cells exist: totipotent, pluripotent, and multipotent. Totipotent cells contain the complete genetic information needed to manufacture all the cells of the body as well as the placenta. Totipotent cells are present only in the first stage after the egg has been fertilized; after three or four divisions, the cells become increasingly specialized. This second stage of division results in pluripotent cells. These cells are extremely adaptable and have the capacity to develop into any cell type with the exception of the placenta. The further division of pluripotent cells creates multipotent cells. Multipotent cells are far more specialized than the previous two types of stem cells, and therefore can produce only limited cell types. Multipotent cells can generate hematopoietic cells, blood stem cells with the ability to create red blood cells, white blood cells, and platelets, but are unable to develop into brain cells. Terminally differentiated cells are the products of this chain of stem cell divisions. These cells are programmed to serve a specific function. Terminally differentiated cells comprise the embryo. (1)
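The three-tier hierarchy just described can be summarized as a small lookup table (the tier names and capabilities are taken from the paragraph above; the data structure itself is merely an illustrative convenience):

```python
# Potency tiers from the text, ordered from greatest to least developmental range.
POTENCY = {
    "totipotent":  {"can_make": "every cell of the body, plus the placenta",
                    "arises":   "first divisions after fertilization"},
    "pluripotent": {"can_make": "any cell type except the placenta",
                    "arises":   "after three or four divisions"},
    "multipotent": {"can_make": "a limited set of cell types (e.g. blood lineages)",
                    "arises":   "further division of pluripotent cells"},
}

def can_form_placenta(tier):
    # Only the earliest, least specialized tier retains the full developmental range
    return tier == "totipotent"
```

Each division down the table trades developmental range for specialization, which is why the later debate in this paper turns on how much moral weight that range ("potential") should carry.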

Stem cells can be obtained from embryos as well as from the adult body. Embryonic stem cells are derived from the inner cell mass of the blastocyst, an early embryo consisting of approximately 150 cells. Adult stem cells are commonly retrieved from bone marrow (2). Totipotent, pluripotent, and multipotent cells are all present in the embryo, but only pluripotent and multipotent cells can be found in adults. Embryonic stem cells are highly versatile and, scientists assert, have more potential for research than adult stem cells, because embryonic stem cells have the capacity to generate practically every type of cell in the human body (1). I will focus my analysis of human life only on embryonic stem cells because of the more controversial nature of that debate.

After examining the scientific components of stem cells, I can now analyze the bulk of my assertion, the controversy over human life. In a speech to the American public regarding stem cells, President Bush vowed to "foster and encourage respect for life" (3). The President's reason for not granting federal tax dollars for stem cell research is "because extracting the stem cell destroys the embryo, and thus destroys its potential for life" (3). Herein lies the problem: "potential for life" is just that, potential. It is not in and of itself human life. President Bush does not outright declare stem cells to be human life. To use a rudimentary example, a grape seed is not a grape. Under favorable conditions and with an elapsed period of time, the seed will become a grape, but it is simply not a grape when in the seed stage. It is important to differentiate between the two stages. I will concede that stem cells do hold the potential for life, but destroying the embryo ends this potential. Without a potential for life, stem cells cannot constitute human life, regardless of how the potential was destroyed.

In a speech to the Vatican, Pope John Paul II denounced stem cell research on the grounds that it "destroys human life in its embryonic stage" (4). As previously noted, embryonic stem cells are extracted from a 150-cell blastocyst. I do not consider a cluster of 150 cells to be human life. Multipotent stem cells extracted from an embryo are designed to have a prescribed function, but because the cluster of cells as a whole has not developed further, the multipotent stem cells cannot and do not function. Because the multipotent stem cells, the most specialized form of stem cells, are not functioning, the cluster of cells is, in essence, a blank slate lacking the vital characteristics of human life.

Analyzing President Bush's and the Pope's comments raises new questions: Where does human life begin? Does human life begin when "potential" is realized? What is "potential"? What are the characteristics of human life?

I cannot attempt to provide concrete scientific answers to any of these questions, but I can explore my opinions and the implications of different answers. In my opinion, human life begins at birth. Not at a baby's first breath, because I consider stillborn babies to be human. Birth is the act of completely exiting the mother's body; with the exception of conjoined twins, birth implies complete physical separation from other human beings. Having said this, birth is not final until the umbilical cord is severed. "Fetus" is the term used before birth, and for me this distinction elucidates the term potential. A fetus is the potential for life, birth is the realization of this potential, and therefore birth denotes the beginning of human life. From this assessment, I believe the basic characteristic of human life is birth. Given the wide range of physical and mental disorders affecting a small percentage of babies, it would be almost impossible to attribute anything else, such as sight, sensing touch or pain, movement, or cognition, as a basic characteristic of human life.

From my political perspective, it is important that human life be defined as beginning at birth. If human life were categorically defined as beginning before birth, there would be sufficient cause to overturn current federal abortion law. With the exception of state-sanctioned killings, intentionally killing another human being is strictly prohibited in all 50 states. If an aborted fetus were determined to be a human, the doctor who performed the abortion would be subject to a premeditated murder indictment, and the mother of the fetus would be subject to charges of conspiracy to commit murder.

In conclusion, stem cells provide the foundation for the entire body, but alone stem cells do not constitute human life. Human life is characterized by birth. All stages before birth, including totipotent, pluripotent, and multipotent stem cells, have the potential for human life. The potential for human life is realized at birth. Characterizing human life before birth would give the judicial system an adequate basis to overturn Roe v. Wade, and therefore restrict individual autonomy over one's body.


References

1) The Stem Cell Research Foundation.

2) International Society for Stem Cell Research.

3) A Whitehouse press release dated August 9, 2001, on the official government Whitehouse site.

4) A Vatican press release dated November 10, 2003, on the official Vatican website.


Behavioral Response to Smell: the answer may be un
Name: Sarah Cald
Date: 2004-04-05 00:04:33
Link to this Comment: 9157


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Of the five senses, smell is perhaps the least understood, both mechanistically and behaviorally. There are many questions as to why people behave differently, if at all, in response to certain smells. This difference in behavior may be attributed to a physical characteristic of the human body. However, it remains to be seen what is responsible for this difference in behavior: the brain, or another organ?

General conclusions regarding olfaction can be made from observation; however, such conclusions give little insight into the actual mechanisms of olfaction and of behavioral responses to smell. Regardless, they are a good starting point for exploring these issues. First, we can conclude that odors and smells are perceived in humans through a common pathway. We know this because, on some basic level, all humans can agree that certain things smell. For example, we can all agree for the most part that a rose smells—we may not all agree on what a rose smells like, but it does have a scent. Along these lines we can also conclude, generally, that different odors differ somehow in their chemical components, causing them to be received differently. For example, the smell perceived from an orange is easily identified as different from that of gasoline.

In addition to expanding on the aforementioned conclusions, this paper seeks to understand how humans can receive the same odor and yet behave differently in response to it. Gasoline is one example of an odor that elicits different behaviors in different people: many people despise the smell of gasoline, saying it causes feelings of nausea, while others find the smell somewhat pleasant. This particular phenomenon intrigues me. More broadly speaking, what is responsible for the behavioral response to odor?

In order to fully explore this question, a better understanding of the mechanism of olfaction is needed. Odorants are collected in the sensory epithelium of humans, located in the upper regions of the nasal cavity (1). Odorant molecules are absorbed into the mucus layer of the sensory epithelium, where they reach receptor cells whose cilia line the nasal cavity. These cilia carry receptor proteins that are specific to certain odorant molecules (1). Binding of an odorant to a receptor protein activates a second messenger pathway; two such pathways are known to date. The more common one involves the activation of the enzyme adenylate cyclase upon the binding of an odorant molecule (2). This enzyme catalyzes the production of cyclic AMP (cAMP). The increase in cAMP levels causes cyclic nucleotide-gated cation channels to open, depolarizing the membrane. This depolarization results in an action potential (2). These electrical signals are carried by olfactory receptor neurons to the olfactory bulb. The olfactory bulb then relays the information to the cerebral cortex, resulting in the sensory perception of smell (2).

On average, humans can recognize up to 10,000 separate odors (3), yet have only about 1,000 different olfactory receptor proteins (4). Clearly, there is a step in the olfactory pathway at which combinations of odorant signals are organized. This step was found to take place in the olfactory bulb. Within this structure, the combined activity of different olfactory receptors is used to signal the brain about specific smells (4). Richard Axel, M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this processing role of the olfactory bulb best:
The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15 and 54, so that must be jasmine' (4).


Knowledge of the mechanism of olfaction now allows us to explore what is responsible for behavioral responses to odor. My initial answer to this question was the brain. One thing I have learned in our class discussions is that, for the most part, behavior is the result of inputs to and outputs from the brain and how they are processed. Accordingly, the brain should be responsible for the different behaviors observed in response to smell. However, after exploring olfaction at a more detailed level, I now believe that the source of behavioral responses to odor may lie within the olfactory bulb. One role of the olfactory bulb is to receive signals from odorant receptors and relay that information to the brain. In this way the olfactory bulb functions to process and interpret the input signals from odorant receptors and to produce corresponding output signals for the brain to interpret in turn. It seems logical that, in processing the inputs from odorant receptors, the olfactory bulb also produces some type of output that results in a behavioral response.

Further investigation revealed evidence that may support this hypothesis. Signals from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought processes, but also to the limbic system, which generates emotional feelings (5). This leads me to ask whether the signals sent to the cortex and to the limbic system are identical or similar in any way. Also, is there a difference in the number of signals sent to the two locations in response to odorant reception? That is, do more signals go to the cortex than to the limbic system when a person smells oranges? All of these questions are worth pursuing; perhaps it is information in the signals sent to the limbic system that is responsible for behavioral responses to odor.

There is much about olfaction that remains unclear, particularly about the relationship between behavior and olfaction. To date, there is little evidence that suggests what portion of the body is responsible for behavioral response to odors. Further investigations involving the olfactory bulb may prove to be a worthwhile endeavor.


References

1)Monell Chemical Senses Center, an overview of olfaction

2) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.

3)The Mystery of Smell: The Vivid World of Odors

4)The Mystery of Smell: How Rats and Mice—and Probably Humans—Recognize Odors

5)Sensing Smell


Synesthesia and the Human Brain: Questions Answered
Name: MaryBeth C
Date: 2004-04-06 00:48:04
Link to this Comment: 9193

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"It had never come up in any conversation before. I had never thought to mention it to anyone. For as long as I could remember, each letter of the alphabet had a different color. Each word had a different color too (generally, the same color as the first letter) and so did each number. The colors of letters, words and numbers were as intrinsic a part of them as their shapes, and like the shapes, the colors never changed. They appeared automatically whenever I saw or thought about letters or words, and I couldn't alter them."(1)

At some point, most people consider the way they perceive the world and how their perceptions may vary from other people's. We may wonder how the same words sound to different people, or whether colors look the same in everyone's eyes. Though most of these differences will never be resolved, owing to the indescribable nature of sensory observations, one key difference in the perception of the world has been pinpointed: that of the synesthete. Synesthetes experience language and ideas differently from the average person. Ideas, words, letters, numbers and sounds become inherently linked with color associations that manifest differently from one synesthete to the next.

Originally, experts in science and psychology were skeptical of the very existence of this rare condition. A recent British study, however, has shown that synesthetes were able to recall these complex color and shape associations for significantly longer periods of time than nonsynesthetes. Many experts speculate that these associations are simply remnants of early methods of learning the alphabet, numbers, shapes and the like, such as colored letters in a book or multi-colored refrigerator magnets (2). The associations that survive could be evidence of a very visually oriented learner, such as those with photographic or visual memories of learned ideas and concepts.

It seems, however, that the neurological patterns of synesthetes diverge from typical patterns in many more ways than association and visualization. Repeated studies show different aptitudes among synesthetes at particular associative exercises, suggesting distinctly different thought patterns. Some experts now believe that synesthetes actually have a rare "cross-wiring" between the region of the brain that deals with numbers and computation and the region that deals with colors and visual perception, two regions located in close proximity to each other. Dr. Jeffrey A. Gray has also done brain scan research showing greater activity in the color region of the brain among synesthetes hearing words and letters than among the control subjects used in the experiment (3).

Some synesthetes also associate particular colors with emotions or experiences. For "Carol", orange is associated with pain, stress and anxiety. When she was experiencing the pain of a toothache and approached her dentist for a root canal, she immediately regarded the tooth as being "orange". Further, as the dentist was performing the procedure, her vision was flooded with the color orange (4).

The synesthete phenomenon is an important and telling discovery in the field of neurology and brain behavior for many reasons. Firstly, the condition raises the question of what is, in fact, the "normal" perception of letters, numbers, words, and ideas. The very nature of the human senses is called into question. While similar color associations and visualizations may unite synesthetes under a similar experience, does this make nonsynesthete experience similar as well? Also raised are questions concerning the source of these associations and what makes them different from other learned associations. While only one in every two thousand people is regarded as a synesthete, many people of all ages are classified as "visual learners". Many people remember particular facts and experiences by where they were when they learned them or where the sentence was located on the page. Is this mode of memorization related in any way to synesthesia? Are the associations of synesthesia simply a more complex manifestation of this "visual" way of thinking?

Secondly, the condition of synesthesia raises a new notion of so-called "cross-wirings" or "cross-firings" in the brain. Popular theories in neurology have suggested that there are distinct areas of strength and weakness in each individual's brain. One particular theory within this notion is that of "left-brained" and "right-brained" individuals. In contrast to that division, synesthesia suggests cooperation between two or more regions of the brain that are seemingly unrelated. Whereas the left- and right-brained theory separates visual and artistic individuals from number-oriented ones, synesthesia suggests that these areas are not oppositional and, further, that these regions of the brain may in fact work together.

For those with synesthesia, recent research and identification of the condition has provided some answers. The same research, however, has raised countless questions about the nature of the human experience and the differences in individual perceptions of and within this experience. As puzzling as this condition may be, it provides one more unique insight into the individual nature of the human brain.

References

1)Blue Cats and Chartreuse Kittens, A book by Patricia Lynne Duffy that describes her personal experiences with synesthesia.

2)Synesthetes Show Their Colors, An article by Lila Guterman that explores some of the scientific aspects of the condition, as well as some of the recent research.

3)Synesthetes Show Their Colors: Dr. Jeffrey A. Gray's Experiment, A discussion in Guterman's article that describes one of the recent studies and some of the results.

4)Audio Transcripts from Interviews with "Carol", Interviews with another synesthete that describe some of her unique experiences.


Un-Full House: The Story Of Amnesic Syndrome
Name: Akudo Ejel
Date: 2004-04-06 08:55:25
Link to this Comment: 9199

Un-Full House: The Story of Amnesic Syndrome
By: Akudo Ejelonu

Do you remember the series finale of the television sitcom Full House, in which the youngest daughter, Michelle, fell off her horse while trying to jump a log and developed symptoms of amnesia? Luckily for Michelle, her memory was restored and she returned to full functioning. We would all like this happy conclusion to occur in the lives of those we know who suffer from the memory deficit known as amnesia. However, we all know that images on television are sometimes false, methods for producers to draw viewers away from the melancholy of their lives and into the arena of happiness and goodness. Though Michelle's accident made us aware of amnesia, some of us may not understand how one gets amnesia, what its various types are, and how it is treated. This paper will explain all of that. So sit back, relax and enjoy the show.

The brain performs such central functions as storing, processing and recalling memory. "Amnesia is a profound memory loss which is usually caused either by physical injury to the brain or by the ingestion of a toxic substance which affects the brain...memory loss can be caused by a traumatic, emotional event." (1). Memory loss may result from bilateral damage to the limbic system and to the hippocampus in the medial temporal lobe, parts of the brain that are vital for memory storage, processing, and recall. When someone has amnesia, tissues in the temporal lobes of the brain are damaged along the medial borders. Amnesia is a symptom of various neurodegenerative diseases. Individuals who have lost their memory are described as amnesiacs. Conditions such as Wernicke-Korsakoff syndrome and herpes infection can cause amnesia by damaging the brain's memory centers, whether through substances such as alcohol or through infection of brain tissue. Diagnostic tools such as magnetic resonance imaging (MRI) and neuropsychological testing can be very helpful in determining the presence of amnesia.

Amnesia is an inability to form or retrieve memories and is a defect in declarative memory. Declarative memory, a cognitive system, stores facts and events that are accessible to conscious recollection. It depends on the medial temporal lobe, the medial thalamus, and the orbital prefrontal cortex. The hippocampal system, which "contributes to (1) the temporary maintenance of memories and (2) the processing of a particular type of memory representation" (2), is what is affected first during memory loss. It also plays a vital role in memory and learning because it secures the link between immediate memory and long-term storage. Although amnesia usually results from medial temporal lobe damage, there have been cases in which severe amnesia occurred without the hippocampus being damaged, when the cortical areas surrounding the hippocampus were damaged instead.

When someone has amnesia, he or she has difficulty recalling old and/or new information. The three main types of amnesia are anterograde, retrograde and transient global amnesia. Anterograde amnesia is the inability to remember events that occur after an incident; though the ability to form new memories disappears, "victims can recall events prior to the trauma with clarity." (3). Common causes of this type of memory loss are Alzheimer's disease, stroke, and trauma. The patient cannot create new memories and can only recall what he or she knew from before. Why can new memories of the present no longer be stored by the brain? Anterograde amnesia is also called post-traumatic amnesia (PTA) because it usually follows a traumatic injury to the brain. When short-term memories are in the process of becoming long-term ones, they go through consolidation, in which short-term memory is rehearsed and rooted for long-term storage. When one has anterograde amnesia, short-term memories can no longer be consolidated for later access.

Retrograde amnesia is the opposite of anterograde amnesia: it is the inability to remember events that happened before the incidence of trauma, in which the sufferer "cannot remember previously familiar information of the events preceding the trauma." (4). In other words, one cannot recall memories of the past. "A person who experiences physical trauma to the brain or an electroconvulsive shock may forget his past while retaining the ability to create new materials." (5). Retrograde amnesia can also affect memory of one's past emotional states, such as happiness, sadness, or ecstasy. If the hippocampus is damaged, the amnesiac will not be able to form new memories but can recollect older ones. "Usually, when a person has a brain injury resulting in a memory disorder, there is some degree of both anterograde and retrograde amnesia. Often, the anterograde amnesia is more severe and more difficult to deal with." (6). The last type of amnesia is transient global amnesia, a brief cerebral ischemia that produces sudden memory loss lasting from minutes to days. It usually affects middle-aged to elderly people. In severe cases, a person can be extremely confused and may experience retrograde amnesia extending back several years.

What Michelle suffered from was retrograde amnesia; she was not able to remember events that happened before the accident. Significantly, she suffered a concussion after falling off her "wild stallion". A concussion is a head injury that results in temporary loss of consciousness, and amnesia is most often caused by concussion. Michelle's amnesia was retrograde because she forgot only what happened before her head injury. Therefore, if she had had an argument with her father the night before, she would not have been able to remember it unless her father reminded her of it. When Michelle was released from the hospital, her doctor prescribed no treatment, because the best way for her to recall her memories was for her family to familiarize her with the things she loved and to take her through her daily routine. This method works for some people, but others, depending on the severity of their case, may be prescribed a drug called Amytal (sodium amobarbital), which helps them recover some lost memories. "Cognitive rehabilitation may be helpful in learning strategies to cope with memory impairment." (7). In addition, psychotherapy can be used for people whose amnesia was caused by emotional trauma.

Memory is the persistence of learning over time. Memory impairment occurs with a variety of neurological conditions and is associated with cognitive and motor impairments, brain trauma and Parkinson's disease. Once people have amnesia, they have to try to relearn the things they have forgotten while learning new information. "In the medical field, amnesia means a disturbance of long-term memory and a loss of memory caused by brain damage." (8). The prognosis varies, depending on the type of amnesia and the severity of the case. The next time you get the rare opportunity to watch the last two episodes of Full House, you will be able to make links between what I have stated in this paper and how Michelle's behavioral pattern changes before, during, and after the memory loss.

1)Blueprint for Health: Amnesia, A Good Web Source

2)Two Component Functions of the Hippocampal Memory System, A Good Web Source

3)Amnesia., A Good Web Source

4)Blueprint for Health: Amnesia, A Good Web Source

5)Kinder, Annette and Shanks, David R. "Amnesia and the declarative/ nondeclarative distinction: A recurrent network model of classification, recognition, and repetition priming". Journal of Cognitive Neuroscience, July 1, 2001, v13 i5., A Good Book

6)What is amnesia? , A Good Web Source

7)What is amnesia? , A Good Web Source

8)Amnesia, A Good Web Source


Additional Sources:

10) Long. Charles J. PH. D. Physiological Psychology, 24. Memory., A Good Web Source

11)What is amnesia?, A Good Web Source

12)Cohen, Neal J. and Howard Eichenbaum. Memory, Amnesia, and the Hippocampal System. MIT Press: Cambridge, 1993, A Good Book

13)Memory., A Good Web Source


Health: Mind and Society II
Name: Aiham Korb
Date: 2004-04-07 21:01:28
Link to this Comment: 9241


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In the previous paper, Health: Mind and Society I, we argued that many different variables interact to influence health and disease. Through the principles of psychoneuroimmunology and the biopsychosocial model, we showed that the nervous system is the center of the interactions of these multiple factors (1). The connections and interplay between the neuro-endocrine and immune systems is the physiological basis upon which we will continue our study of how psychosocial factors can, and do, promote poor health. In this paper, we shall explore the biological associations between socio-economic status, stress, and disease. The links between these will shed more light on the social structures and atmospheres fostering such stress, rather than on the physical outcome of disease itself. But first, let us take a look at some of the leading causes of death in our society.

"Atherosclerosis, a disease of the large arteries, is the underlying cause of approximately 50% of all deaths in modern western society" (2). In fact, heart disease is the first leading cause of death in the United States, followed by cancer and deaths from iatrogenic causes (such as unnecssary surgery, medication errors, infections in hospitals, etc.) (3). Therefore, heart and artery diseases constitute a major health concern in our society. It is important to note that these diseases are more prevalent among people from low socio-economic classes. This interesting and distressing finding implies inevitable links between the environment and physical well-being. Besides predisposition, and access to health (or the lack thereof), it is clear that social factors contribute to these significant health problems. "There is a marked socioeconomic gradient in the incidence of CAD [Coronary Artery Disease] such that people of low socioeconomic status, as defined by occupation, education or income, have an increased risk of CAD and acute coronary syndromes" (4). These patterns have also been observed in monkeys. Living in dominance hierarchies, monkeys who are more socially subordinate were found to have higher levels of athersclerosis than the dominant monkeys (5). Given the greater complexities of human society, this may serve as an idea of how powerful socio-political and socioeconomic environments can be in influencing health, and even promoting disease. One can not help but ask whether socioeconomic systems based on cooperation might be healthier than those based on competition and hierarchies. This is only one of many hypotheses that attempt to account for the grave failures of the political and economic structures of our society. In any case, let us now turn to an interdisciplinary research study on socioeconomic status, stress, and its physiological outcomes.

The following study was a collaboration between the Department of Epidemiology and Public Health (Psychobiology Group) and the Department of Medicine at University College London, U.K. (2). Participants were divided into high and low SES (socioeconomic status) groups based on occupation grades. They were then administered two short stress-inducing mental tasks. The two SES groups did not differ at baseline. Yet the results showed significant differences in the physiological responses to stress in the two groups. Following the test, those in the low SES group had a slower recovery in blood pressure and heart rate than those in the high SES group (2). "Heart rate increased to the same extent following stress in both groups, however by 2h post-stress, it had returned to baseline in 75% of the high SES group compared with only 38.1% of the low SES group" (2). Another significant difference was the delayed recovery of interleukin-6 levels experienced by the low SES group, as compared with the high SES group. "Stress induced increases in plasma IL-6 in all participants, however, in the low SES group, IL-6 continued to increase between 75 min and 2h post-stress, whereas IL-6 levels stabilized at 75 min in the high SES group" (2).

It should be noted that interleukin-6 (IL-6) is a "circulating cytokine" associated with stress (6). Cytokines are chemical messengers that serve in the "bi-directional communication" between the CNS (central nervous system) and the immune system. However, excessive amounts of cytokines can be toxic to nerve cells in the brain (6). Therefore, frequent and prolonged increases in IL-6 levels would have adverse effects on the body. IL-6 also stimulates the HPA (hypothalamic-pituitary-adrenal) axis. As we saw in the previous paper, overworking the stress and neuro-endocrine responses dampens the immune system, with a negative outcome for health. Moreover, "HPA hyperactivity is associated with central obesity, hypertension, insulin resistance, and dislipidaemia, all risk factors for CAD" (2).

Thus, taking the results and relevant data together, the experimenters came to the following conclusion: people of low SES have a "dysfunctional adaptive response" to psychological stress due to chronic stress-related increases in IL-6 and HPA activity. This chronic stress is understandable if one considers the psychosocial conditions that are more common in low SES groups. The study mentioned such conditions as "the exposure to adverse work characteristics, chronic life stress, social isolation, hostility, depression, and anxiety", all of which have been consistently identified as increasing the risk of cardiovascular disease (2). This again highlights the relevance of the environment and its strong effects on health and the etiology of disease. Moreover, the study adds: "people of low SES tend to be more exposed to sources of chronic stress such as low job control, financial strain, and neighborhood stress, and generally have less social support" (2). Apparently, then, socioeconomic gaps are not such a benign outcome of our capitalist society. This experiment is one of many that have linked SES inequalities to heart disease and other ailments. In fact, longitudinal studies (which follow participants over several years) have also found that chronically stressful environments increase the chances of developing heart disease. Such examples are a small sampling of the accumulating evidence supporting the relevance of psychosocial factors in defining and influencing health.

So far, we have seen that considering environmental factors is essential to a better understanding of their important effects on the origin and progress of pathology. Going a step further, and building on the issues raised thus far, the integration of psychosocial, socio-political, and socioeconomic factors into a broader formula of health should be possible. In the next paper, we will continue to follow the pathological effects of stress-related increases in IL-6; for example, high levels of IL-6 have been associated with age-related conditions, general morbidity, and mortality (2). We will also explore social isolation and its correlation with HIV progression. As we progress in our study, we become more aware of the role of the environment in our bodies and our health. Being "social animals", human beings necessarily exist within intricate political, economic and social systems. It is more and more evident that these systems can be potential catalysts of disease. Therefore, it is our responsibility to create and monitor systems so that they foster a healthy population and society. Indeed, we can build psychosocial protective factors, such as social support and networks. So perhaps we should consider again the question of an environment based on cooperation rather than on competition. Which social structure is more likely to induce malady? And which one would cushion against pathology?


Sources:


1) Psychoneuroimmunology and health psychology: An integrative model, By Erin Castanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

2) Socioeconomic status and stress-induced increases in interleukin-6, By Brydon, Edwards, Mohamed-Ali et al. Brain, Behavior and Immunity 18. 2004. p. 281-290.

3) Is US Health Really the Best in the World? By Dr. Barbara Starfield. Journal of American Medical Association (JAMA). Vol 284, No. 4. July, 2000. p. 483-485.

4) Social class and coronary heart disease, By Marmot, M. and Bartley, M., in Stansfield, S., Marmot, M. (Eds.), Stress and the Heart. BMJ Books, London, 2002. p. 5-19.

5) Social status and coronary artery atherosclerosis in female monkeys. By Shively, C.A. and Clarkson, T.B. Arterioscler. Thromb. 14. 1994. p. 721-726.

6) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.


Schizophrenia
Name: Laura Silv
Date: 2004-04-07 23:38:28
Link to this Comment: 9242


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

One problem with the wonders of modern-day medicine, as a friend of mine in medical school constantly tells me, is that they tend to work so well that those taking the medicines begin to believe they no longer need them, and therefore cease to take them. Thinking about this, I remembered hearing a similar comment about schizophrenia patients, and decided that a paper for this class would be the perfect opportunity to investigate further.


So naturally, the best place to begin is the beginning: what is schizophrenia? It is a brain disease affecting one out of every hundred people. While men and women have equal chances of developing the disease, men tend to develop symptoms earlier, sometimes as early as their late teens (1). Early symptoms are paranoia and emotional indifference, which make schizophrenia hard to distinguish from other brain disorders such as depression or bipolar disorder.


As the disease develops, two types of symptoms emerge: negative symptoms, in which formerly enthusiastic, lively, and social people suddenly become introverted, unemotional, and reclusive, and positive symptoms, which are more forceful and include strong hallucinations and delusions. The positive symptoms mark what is called the "psychotic" or "acute" phase of schizophrenia (2). This is the phase most often portrayed by the media, the phase you are most likely to find in movies or on TV. Many people in this phase are mistaken for being high or drunk, and indeed some patients come to rely on illegal substances as self-treatment to keep some of the stronger symptoms in check.


While the exact cause of schizophrenia is unknown, there are indications that it is hereditary. According to Schizophrenia.com, people with a close relative who has schizophrenia run a higher risk, as high as 50%, of eventually developing it themselves. Scientists are looking for particular genes that may either cause or predispose one to the illness, much as was recently done for heart attacks. But schizophrenia is a brain disease, not a purely genetic one, and is generally thought to be caused by an imbalance between the brain chemical dopamine and other brain chemicals such as serotonin (3) or glutamate (4). Dopamine helps regulate emotion and is also thought to affect attention and motivation. Serotonin regulates sleep and appetite, and also acts as a stimulant of physical movement. Glutamate is the nervous system's main excitatory neurotransmitter, carrying signals between cells.


Schizophrenia has also been linked to certain physical abnormalities within the brain. While this is not a reliable or fool-proof method of predicting who might become schizophrenic, the most common attribute among sufferers is enlarged ventricles. Ventricles are cavities within the brain through which cerebrospinal fluid circulates. The Surgeon General, in his 2002 report on the causes of schizophrenia (5), also cites "environmental factors" as one possible cause and as a reason family members of one sufferer run a greater risk of developing symptoms, but he does not specify what those factors might be.


For those diagnosed with schizophrenia, the most common and effective method of treatment is drug therapy, which treats the chemical imbalances described above and keeps the psychotic symptoms, hallucinations and the like, from returning. Recommended dosages differ from patient to patient, as each case is different. Traditional medications include haloperidol (trade name Haldol), which treats hyperactivity and mania but is known to cause other problems such as lethargy, and trifluoperazine (Stelazine), which treats anxiety and nausea but does not treat the social withdrawal. Other medications include loxapine, perphenazine, and Prolixin, all of which treat only some of the symptoms and none of which cure schizophrenia (6). And, of course, the problem with treating the symptoms rather than the causes of the disease is that patients tend to think they have been cured, and therefore stop taking their medications. The disease is permanent: symptoms may disappear for a while but generally return.


While there is no cure for schizophrenia, a great deal of research is being invested in discovering more about the disease. Doctors, hospitals, charities, and medical societies are all donating time, effort, money, and resources to find better treatments and, perhaps, one day, a cure. Though the end may not yet be in sight, the outlook for schizophrenics and their families is good.


References


1) Schizophrenia.com

2) Mental Wellness Online: www.mentalwellness.com/

3) Mental Wellness

4) Glutamatergic Aspects of Schizophrenia

5) Schizophrenia.com

6) Schizophrenia.com


The Psychometric Approach to Intelligence: How Sma
Name: Bradley Co
Date: 2004-04-09 00:09:39
Link to this Comment: 9253


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Aristotle and Cicero were among the first great minds to contemplate, and even allocate a word to, the phenomenon now referred to as intelligence (1). History is filled with people trying to pin down precisely what the term denotes, driven by the general belief that it is an exact one. Over the past century the psychometric approach has been the primary method of studying intelligence (2). This method is based on the presumption that intelligence is a measurable quantity, and thus the IQ test was born.

Over the past several decades, children across the globe have been given IQ tests at some point in their elementary education. A letter comes home, and parents are given a number that compares their child to all other children of the same age. This number is meant to be a measure of the child's intelligence and is believed to play a role in determining his or her track in life. The downfall of these results, however, is that they have fostered the modern-day belief that intelligence is a finite characteristic. It is universally understood that different people possess varying degrees of intelligence; what intelligence actually is, however, is widely misunderstood.

IQ, or "Intelligence Quotient," was originally obtained by dividing a person's mental age by their chronological age and multiplying by 100 (2). More recently, IQ tests have been designed to measure specific abilities. These abilities include, but are not limited to, verbal ability, problem-solving ability, social competence, knowledge, motivation, dealing with abstract concepts, the ability to classify patterns, to modify behavior, to reason deductively, to reason inductively, and to understand (3) (4). There are literally hundreds of different skills or abilities that can be measured. The measurements are scaled in comparison with many other people of the same age, on a scale set to have a mean of 100 and a standard deviation of 15 (5). The belief behind the psychometric principle of measuring intelligence is, as many psychiatrists and psychologists today hold, that intelligence is essentially what the tests measure (3). However, these measurements are merely data, which are used to draw conclusions and, hopefully, a definition of intelligence.
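The two scoring schemes just described, the original "quotient" of mental age over chronological age and the modern scale with mean 100 and standard deviation 15, can be sketched in a few lines of code. This is only an illustration of the arithmetic; the function names are invented here, and real tests derive the population mean and standard deviation from large norming samples.

```python
# Sketch of the two IQ scoring schemes described above (illustrative only).

def ratio_iq(mental_age, chronological_age):
    """Original 'quotient': mental age divided by chronological age, times 100."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score, population_mean, population_sd):
    """Modern scheme: rescale a raw test score to mean 100, standard deviation 15."""
    z = (raw_score - population_mean) / population_sd
    return 100 + 15 * z

# A 10-year-old performing at a 12-year-old's level:
print(ratio_iq(12, 10))            # 120.0
# A raw score one standard deviation above the population mean:
print(deviation_iq(130, 100, 30))  # 115.0
```

Either way, the number is purely relative: it locates a person within a comparison group rather than measuring some absolute quantity of intelligence.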

The purpose of collecting intelligence data is to better understand the meaning of intelligence itself. Although the notion of intelligence is widely accepted and referred to, its definition is vague. At one symposium, twelve psychologists were separately asked to define intelligence, and twelve distinct responses were returned (1). This vagueness is evident in the fact that there are hundreds of separate IQ tests measuring many different abilities. Granted that an actual IQ test will cover several of these abilities, it is inevitable that many will be left out. To address this problem Charles Spearman, in the 1920s, used the statistical technique of factor analysis to extract what he called the general factor (g) (6). The g factor is a correlation among the varying IQ tests and mental abilities. Its significance is the claim that it explains the differences among ability tests and holds up regardless of the test type or the manner in which the test is administered (6). Many professionals do not doubt that the g factor enters, to varying degrees, into the countless mental activities that guide human behavior (1). The g factor is an extractable statistical factor of intelligence.

The implications of obtaining a correlation among all mental abilities, and therefore intelligence, are immense. It gives us the capability to set levels of intelligence, as well as to make predictions about such things as way of life, success in life, and even happiness. Although a person's intellectual performance can vary from day to day, as well as among abilities (2), it has also been found that intellectual ability is generally stable and unchanged after adolescence (6). This implies that intelligence is a set factor, and therefore so is a person's intelligence-based fate. Studies have shown that a person's g factor and IQ correlate positively with success both in school and out of school (2) (6). Thus, the smart people will be successful and the stupid people will not. Not only will they not be successful, but they cannot be helped. The U.S. Army barred people with IQ scores below the tenth percentile from enlisting during WWII because it felt they could not be taught to be good soldiers (6). These correlations can even imply that, owing to an inability to succeed in life, a person's happiness will be determined by intelligence. The intelligent will be rich and happy while the unintelligent will be poor and sad. These implications are extremes, but not far-fetched, and it is in such extremes that the holes and misconceptions of the g factor and the psychometric principle become clear.

The major fault of the psychometric principle, that intelligence is measurable, lies in its central assumption. An essential aspect of the theory is that even though there are innumerable mental abilities, a general g factor correlates with them all and can be extracted. However, in admitting the multitude of mental abilities that comprise intelligence, the theory undercuts any single extractable characteristic as a sole representation of intelligence. In searching for a finite characteristic of intelligence, the theory itself provides evidence that no fixed attribute exists. In the 1980s these problems sparked new theories of intelligence.

The evolution of intelligence theory began to acknowledge the vastness of what the term intelligence represents. People such as Howard Gardner and Robert Sternberg approached the concept arguing that the psychometric attempt leaves out much of what intelligence is. While analytical mental capabilities were measured, practical and creative aspects were ignored (1) (3) (5). This new school of thought was based on the perception that there are many types of intelligence, only some of which can be measured. Intelligence has so many aspects that it is too complex to pin down; the phenomenon is thus better described than defined, a description portraying a more detailed image than any definition could provide (4). It is this idea that has fueled the search for and study of intelligence in the past few decades. The idea of "painting a clearer picture" is the motivating force behind the research.

Unfortunately, the aspiration of finding a single aspect of intelligence that can be recognized as the primary factor has still fogged modern research. There have been studies both demonstrating and disputing positive and negative effects on intelligence of such factors as knowledge, education, exercise, stress, and even listening to Beethoven (2) (5) (6). Correlations with such things as brain size, gender, and ethnicity have gone through cycles of being published, then disputed, then revoked, and republished (7) (8). The disputes over causes and effects often concern specific characteristics, but they also frequently come down to the issue of environment versus genetics. Which is the dominant factor in intelligence, or is it both? Research on issues such as stress and upbringing clearly emphasizes environmental factors in intelligence, yet genetics is often cited as essentially important. Until recently, the beneficial aspect of this research was that both sides understood, and further proclaimed, that both aspects most likely affect intelligence.

More recently, technological advances, such as the completion of the Human Genome Project and brain-scanning techniques like MRI, have driven research and beliefs about intelligence full circle. We are now in a time when, once again, the search for a specific factor of intelligence is underway with new technology. Twin studies have tried to disentangle the factors of environment and genetics and, even more specifically, to find particular genes related to intelligence (9). Although it is understood that the relationship between genes and behavior is rarely a one-to-one correlation (9), this has not halted the search for the linked genes. New measurable factors, such as the degree of branching in cortical neurons, the rate of brain metabolism, and the number of neural connections, are also being studied with regard to intelligence (2). These studies simply apply new tools to the psychometric approach of measuring intelligence. It is not impractical to predict that the near future will bring new IQ tests that simply take a DNA sample and a brain scan and report a new number for intelligence.

The psychometric approach has provided very good correlations among many varying aspects of intelligence. It has set standards and scales that future researchers can compare with and expand upon. However, it ignores the major flaw in its theory: the enormity of what intelligence really is, a conglomeration of many mental abilities both measurable and immeasurable. This realization has been set aside because it makes validating intelligence studies very difficult, yet every study and researcher makes note of it. In essence, future research would be wise to confront the extensive nature of intelligence rather than continue the futile search for a straightforward, quantifiable conclusion.


References

1) The Evidence for the Concept of Intelligence, A rich source of both history and intelligence theories

2) IQ and Intelligence, A Brain.com article on the relations between IQ and intelligence

3) Genetics of Childhood Disorders, A good article demonstrating disagreements about intelligence

4) The Concept of Intelligence in Cognitive Science, A review of modern theories of intelligence

5) Intelligence: Knowns and Unknowns, An in-depth look at intelligence theories and research

6) The General Intelligence Factor, An explanation of the g factor and intelligence

7) Does Brain Size Matter?, Research relating brain size to intelligence

8) Cranial Capacity and IQ, Research relating cranial capacity to intelligence

9) Our genes, ourselves?, An in-depth look into the role of genetics and environment in intelligence


Laughter: The Glue of Humanity?
Name: Kristen Co
Date: 2004-04-10 19:44:37
Link to this Comment: 9260


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

A sign on a repair shop door in England says: We can repair anything. (Please knock hard on the door - the bell doesn't work) (1).

Did you laugh? If you did, you just expended calories, lowered your blood pressure, and increased the number of immune response cells in your blood. You unconsciously triggered a neural circuit in the brain that resulted in the physical response of laughter. Laughter is an unconscious behavioral response arising from complex interactions in the brain in reaction to a stimulus we deem "funny." The implications of laughter, however, extend much further than expressions of enjoyment. Laughter is a cultural mechanism that has evolved from the need for members of the same species to get along. It is an example of how the unconscious workings of the brain affect the conscious workings of our lives.

Ernst Haeckel, a German Evolutionist, referred to laughter as a kind of reflex or response to "psychological tickling" in which vasomotor nerves were stimulated by either a physical or mental stimulus (2). Modern scientists have made further discoveries regarding the workings of the human brain during laughter, but the general idea that laughter is an uncontrollable response to some kind of mental stimulus remains. Laughter has been found to result from a signal that travels through a loop of connected neurons located in various regions of the brain. Studies have been done that monitor these electrical signals in the brain (3). If enough voltage occurs to create an action potential, the wave of activity will travel through different regions of the brain and will result in a laugh.

Laughter is a combination of three elements: the intellectual "getting" of something humorous, the emotional response, and the physical act of laughing (3). The main part of the brain responsible for the correct interpretation of a joke is the frontal lobe. People with damaged frontal lobes don't laugh or smile as much when shown humorous material, and when tested they often choose the wrong punch line for a written joke (4). Emotional interpretation of something humorous occurs mainly in the limbic system of the brain. This system is composed of several different parts that together serve as the emotional center of the brain. The hypothalamus, in particular, deals with expressions of emotion such as laughter (5). The motor response of laughter arises in the motor cortex, which sends signals to various muscles of the face and body. The link between the motor cortex and laughter was discovered inadvertently while testing an epilepsy patient: when the motor area was stimulated, the patient would smile or laugh uncontrollably (6).

These three portions of the brain, dealing with intellectual comprehension, emotional response, and physical response, work together to create what we know as laughter. The instigation of this event, however, is not under our conscious control. It is very difficult to laugh on cue without a stimulus, and there are several documented instances of pathological laughter in which the subject laughs without intending to (7). The most cited example of uncontrollable laughter is an epidemic that occurred in Tanganyika in 1962 and lasted for six months (2). Laughter can also be brought on by non-humorous stimuli such as laughing gas or alcohol (7). The fact that laughter can occur unconsciously through these neural circuits is significant: it implies that laughter is not necessarily a highly cognitive function and may serve a more basic purpose.

Laughter is not found only in humans; behavior similar to laughter can be observed in other mammals as well. Scientists have identified laugh-like behavior in rats at play: high-pitched vocalizations can be elicited by tickling and seem to indicate whether a rat is playing or fighting. Puppies are also known for a kind of laughing when they play. If a young dog lacks the ability to laugh, its actions will be interpreted as aggressive and it will get beaten up (2). Laughter, in this way, is a tool for survival. Chimpanzees and apes also exhibit laughter, not the definitive "ha ha ha" of the human but more of a breathless panting noise, which they produce only in positive social situations such as physical play or tickling (7). Observing laughter in other species suggests that it evolved as a method of distinguishing friend from foe.

Humans take laughter to a level of sophistication beyond that of our biological ancestors. Other animals possess the limbic system and motor areas of the brain, but lack the highly developed cortex that enables humans to perform more analytical processes. Instead of responding primarily to physical stimuli, humans also respond to visual and aural stimuli. The primary purpose of laughter, however, remains the same in humans and other mammals: it is a form of communication that encourages social bonding within the species.

Laughter begins to develop at a very young age, between three and four months. It is a way in which a baby can communicate without using words (8). As development occurs, laughter is used during everyday speech as punctuation. It sends an additional message to those around us that we are in a good mood and want to "play." Laughter is most often heard in groups of children as they learn how to get along and work with each other (8). Adults continue to use laughter in social situations, improving relationships with those around them. People are 30 times more likely to laugh in groups than when alone and less than 20% of what we laugh at is pre-determined jokes. Most of the things we find funny are simple everyday phrases (7). Using laughter evokes trust and works to inhibit the fight-or-flight response (1).

Have you ever found yourself laughing for no apparent reason just because someone near you is laughing? This is because laughter can be quite contagious. Some scientists believe that humans possess a kind of laugh detector that is triggered by particular species-specific vocalizations. This detector acts as a sensory receptor and sets off the series of neurons that results in laughter (7). The contagiousness of laughter lends itself to social bonding. When we laugh with someone, we feel instantly at ease. People often laugh in nervous situations in order to make others feel more comfortable.

Laughter has a great impact on social dynamics. Scientific observations have found that women laugh more than men, and that they laugh most in the presence of men (7). Also, in the office the boss tends to laugh more than the employees, and when employees laugh, it is generally in response to a joke told by the boss (2). Laughter, in this way, can be used to manipulate and control a relationship. Although most of its effects are positive, laughter can also be used to exclude or alienate. If people are socially ridiculed or laughed at, they might feel the need either to conform to or to leave a particular group (1).

It is clear that laughter has a great impact on our lives. It enables us to build or maintain relationships by releasing social tension. What is significant about laughter is the fact that it is an unconscious response to our social situations. It is an example of one of the many little rules which work together to allow the complicated emergent system of culture to operate. It demonstrates just how little conscious control we have over our lives, and stresses the close ties between brain and behavior.


References

1) LOL website, website dedicated to the health benefits of laughter
2) Our Ancient Laughing Brain by Sylvia H. Cardoso, about the evolution of laughter
3) How Laughter Works, general overview of how laughter works
4) Brain Briefing: "Humor, Laughter, and the Brain", 2001 Society for Neuroscience newsletter
5) "Limbic System: Center for Emotions", by Júlio Rocha do Amaral, MD and Jorge Martins de Oliveira, MD, PhD. Overview of the limbic system and how it affects emotions
6) "Scientists Find Sense of Humor", from BBC News, Feb. 1998
7) "Laughter", American Scientist article by Robert Provine from 1996
8) "A Big Mystery: Why Do We Laugh?", MSNBC article by Robert Provine from 1999


Are you being brainwashed by Muzak?
Name: Debbie Han
Date: 2004-04-11 11:13:00
Link to this Comment: 9263


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

People listen to music for various reasons. Some people use music in order to relax; others use it as a source of energy. Music is heard in cars, in homes, at shopping malls, and in dentists' offices, among many other places around the world. Sometimes a song gets into your head and you find yourself humming the tune all day long, and then you realize that a stranger who passed you hours ago was whistling that song, or that you heard two seconds of it on your radio alarm that morning before pressing the snooze button. This is the idea behind Muzak.

In 1922, General George Squier invented Muzak, a service for delivering music from phonograph records to workplaces via electrical wires. He realized that the transmission of music in the workplace increased the productivity of his employees. Soon after, a study showed that people work harder when they listen to specific kinds of music. As a result, the BBC began to broadcast music in factories during World War II in order to rouse fatigued workers (1).

Muzak's patented "Stimulus Progression," which consists of quarter-hour groupings of songs, is the foundation of its success. Stimulus Progression incorporates the idea that intensity affects productivity. Each song receives a stimulus value between 1 and 6, where 1 is slow and 6 is upbeat and invigorating. A contemporary instrumental song full of strings, brass, and percussion (27 instruments in total) would most likely receive a stimulus value of 5 (3). During a quarter hour, about six songs of varying stimulus values are played, followed by a 15-minute period of silence (2). A 24-hour plan is engineered to provide more stimulating tunes when people are most lethargic, at 11 a.m. and 3 p.m., and slower songs after lunch and towards the end of the day.
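The quarter-hour scheme can be illustrated with a toy scheduler. This is only a sketch of the idea described above: the function, its parameters, and the rising-value pattern are invented here for illustration and are not Muzak's actual proprietary programming.

```python
# Toy illustration of a "Stimulus Progression"-style quarter hour:
# about six songs whose stimulus values (1-6) rise across the block,
# followed by silence. Values and parameters are invented for illustration.

def quarter_hour_block(start_value, songs_per_block=6, max_value=6):
    """Return a list of stimulus values rising from start_value, capped at max_value."""
    return [min(start_value + i, max_value) for i in range(songs_per_block)]

# A more stimulating block for a lethargic mid-afternoon slot...
afternoon = quarter_hour_block(start_value=3)
# ...and a gentler one for just after lunch.
post_lunch = quarter_hour_block(start_value=1)

print(afternoon)   # [3, 4, 5, 6, 6, 6]
print(post_lunch)  # [1, 2, 3, 4, 5, 6]
```

Varying the starting value by time of day is what lets a 24-hour plan push harder at 11 a.m. and 3 p.m. while easing off after lunch.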

Careful programming of Muzak has been shown to increase morale and productivity in workplaces, increase sales at supermarkets, and even deter shoplifting at department stores. Over 20 years ago, numerous department stores in the United States and Canada installed what was called "the little black box," which mixed music with anti-theft messages. The quick repetition of "I am honest. I will not steal." 9,000 times an hour, at a barely audible volume, curbed shoplifting at one of the department stores by 37% during a nine-month trial (4).

More recently, Adrian C. North, a psychologist at the University of Leicester, measured the influence of music on decision-making. He and his colleagues tested the effect of in-store music on wine selections at a supermarket by setting up a shelf of French and German wines. Over a two-week period, French accordion music and German pieces played by a Bierkeller brass band were played on alternating days. Prices were similar, shelf ordering was rearranged daily, and if French music played the first Monday, German music was played the following Monday. To make the nationality of the wines clear, national flags were attached to the display adjacent to the wines. After shoppers made their wine selections, an interviewer disguised as a shopper approached them to fill out a questionnaire about their purchase. The questions asked whether the respondent had a preference for French or German wines before the purchase, to what extent the music made him or her think of France or Germany, and whether the music influenced the wine selection. 82 shoppers bought wine from the display during the two-week period, and 44 agreed to complete the questionnaire (5).

The results indicated that music did indeed influence shoppers' wine selections. When French music played, 40 bottles of French wine and 8 bottles of German wine were purchased. When German music played, 22 bottles of German wine and 12 bottles of French wine were purchased (5).
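The size of this shift is easy to see if we turn the reported purchase counts into proportions. The counts below are taken directly from the study as described above; the helper function is just illustrative arithmetic, not part of the original analysis.

```python
# Purchase counts reported for the North et al. wine study described above.
french_music = {"french_wine": 40, "german_wine": 8}
german_music = {"french_wine": 12, "german_wine": 22}

def french_share(counts):
    """Fraction of bottles sold that were French under a given music condition."""
    total = counts["french_wine"] + counts["german_wine"]
    return counts["french_wine"] / total

print(round(french_share(french_music), 2))  # 0.83
print(round(french_share(german_music), 2))  # 0.35
```

French wine went from roughly five bottles in six when French music played to about one in three when German music played, a striking swing for so unobtrusive a manipulation.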

Researchers Charles Areni and David Kim have established a preference-for-prototypes model, which suggests that the mind is composed of closely packed, interconnected cognitive units relating music to other structures and ideas (6). According to their model, music can stimulate the mind into thinking about ideas similar to the music; French music, for example, conjures up images of France. In addition, the speed of music can influence behavior. Several studies have shown that fast music makes supermarket shoppers move around more quickly. Likewise, fast music causes diners to eat faster, while slow music slows eating down (and leads to more drinks being purchased at the bar) (6).

What is interesting about background music is that it is intended to be just that: noiseless noise. The idea that barely audible tunes can affect one's behavior raises the question of whether one person's behavior can be manipulated by another without the first person being aware of the manipulation. According to the research conducted by North, unobtrusive music selected by store managers, business managers, and companies like Muzak can affect a person's thoughts and actions without the person even knowing.

This is evidence that stimuli below the threshold of conscious awareness can influence thoughts, feelings, and actions without the I-function becoming involved or even knowing about it; in other words, there is unconscious perception. It is likely that a number of other things can produce the same result. Events an individual did not realize were being witnessed could be interpreted by the person's unconscious and translated into behavior (7). This theory diminishes the power of the I-function, because conscious recognition is not necessary to cause action. It suggests that there are lag times in relaying information to the I-function and that only selected information reaches it. As a result, the I-function does not get involved, and people do not consciously recognize that they are being manipulated by music while it is occurring. Therefore, it is the unconscious that is being manipulated. Is it even manipulation if you didn't know it was happening and didn't know it had an effect on you?

If the conscious mind is a sieve, then the unconscious is a vacuum. The influence music has on an individual's actions and behavior is evidence that the unconscious is substantially faster than the conscious mind. Sights and sounds that are not registered by the conscious are likely to be registered by the unconscious. It seems we should not be scared of subliminal messaging through music, but rather be amazed by the power of the unconscious mind.


References

1) Muzak Home Page, The Muzak website with some interesting information on the company's background

2) Article on Stimulus Progression, A good foundation for understanding Muzak's patented Stimulus Progression

3) Muzak Stimulus Progression graph, Interesting graph on how Muzak chooses ratings for different instrumental songs

4) "Secret Voices: Messages that Manipulate," Time, September 10, 1979, 71. ~ Good background on subliminal messaging through music in department stores

5) Adrian C. North's homepage, Fascinating research on the effect of French or German music on the selection of French or German wines

6) Areni, C.S., & Kim, D. "The influence of background music on shopping behavior: Classical versus top-forty music in a wine store," Advances in Consumer Research, 1993, 20, 336-340. ~ Additional research on the influence of music on shopping behavior

7) Committee for the Scientific Investigation of Claims of the Paranormal Webpage, A good resource on different types of subliminal perception

Further Reading

BBCi Web page , Muzak: Past, Present, and Future

University of Waterloo Department of Psychology Web site , Additional background on the influence on subliminal messaging on the unconscious mind


The Oracle at Delphi
Name: Eleni Kard
Date: 2004-04-11 20:51:09
Link to this Comment: 9271



Biology 202
2004 First Web Paper
On Serendip

The city of Delphi, where the ancient ruins of the Temple of Apollo still stand, lies on the southern slopes of Mount Parnassus, 100 miles northwest of Athens. The ancient Greeks considered Delphi the center of the universe. Legend has it that when Zeus released two eagles from opposite ends of the earth to locate its center, they met over Delphi. (1) Delphi was also famous for its oracle: a priestess who could communicate with the gods and predict one's future in exchange for gifts. Both rulers and ordinary people journeyed to Delphi to consult the oracle for her advice. (2)

The oracle worked in the following way: a priestess would sit in a small underground room and breathe vapors from the ground while drinking water or inhaling mist from the warm spring beneath the Temple of Apollo. She would enter an exalted state of mind and give advice as if in a trance. (3) While in this state she would mutter words, in a somewhat cryptic manner, and a priest would translate them for the person seeking advice. Sometimes her words were just ramblings; other times her answers went over her questioner's head. On at least one occasion, the priestess is said to have gone into seizures and died. (3)

The existence of this oracle is not disputed, but how and why it worked has been questioned. How was it possible for this young woman to enter a trance-like state and give people supposedly relevant advice and true predictions of their futures? According to a recent hypothesis, ethylene gas, a common hydrocarbon found in nature that has been detected in rocks and water near the Temple of Apollo, could have been responsible for producing the trance-like state. In high doses, ethylene gas can even cause death, which would account for the fate of at least one of the priestesses.

Ethylene (molecular formula H2C=CH2) is a sweet-smelling gas known to affect the nervous system. It is naturally emitted by fruits, flowers, and other vegetation, and it is the substance that causes fruit to ripen. Among the many changes ethylene brings about is the destruction of chlorophyll; with the breakdown of chlorophyll, the red and/or yellow pigments in the cells of the fruit are uncovered, giving the fruit its ripened appearance. (4) Small amounts of ethylene are also found in volcanic emissions and natural gas. The production of ethylene from inside the earth led researchers to analyze the rocks and springs beneath and surrounding the Temple of Apollo in search of answers about the workings of the oracle.

Geologist Jelle Z. de Boer of Wesleyan University in Connecticut and archaeologist John Hale of the University of Louisville led the research team at Delphi. The team conducted tests on the Delphi rock and on the water of a nearby spring; both contained methane, ethane, and ethylene. (5) They also examined pieces of travertine, a limestone deposited by an ancient spring, and detected measurable amounts of ethane and methane there as well. (3) The team concluded that the Temple of Apollo sits on crisscrossing geological faults. When the faults shift and rub against each other, a large amount of heat is given off, which causes hydrocarbons to vaporize and rise up through fissures in the ground. In this way the gases can seep into nearby springs or become trapped in crystalline rock formations.

The results from the tests indicate the presence of ethylene in the rocks and springs at Delphi, and the geology provides a plausible explanation of how the gases could have reached the surface. This opens the possibility that ethylene gas was present in the chamber where the priestess sat, since it was detected in the same vicinity. If so, did the ethylene gas affect the priestess, was it responsible for her trance-like state, and did it ultimately influence what she said?

The main threat of ethylene gas is that it can displace oxygen in the air, resulting in symptoms associated with oxygen deficiency. A lack of oxygen to the brain can cause rapid breathing, diminished mental alertness, impaired muscular coordination, faulty judgment, depression of all sensations, emotional instability, and fatigue. As asphyxiation progresses, loss of consciousness may result, eventually leading to convulsions, coma, and even death. (6) High concentrations of ethylene, as well as of ethane, propane, and propylene, may have anesthetic-like effects (central nervous system depression), causing drowsiness, dizziness, and confusion. (6)

Historically, ethylene was in fact used as an anesthetic until less flammable compounds were discovered. The process by which general anesthetics work is still unknown. The two main hypotheses are the "lipid theory," which proposes that anesthesia acts directly on cell membranes involved in brain functions (7), and the "protein theory," which suggests that anesthesia blocks sodium channels in the nerve membrane, inhibiting nerve impulses. (8) Biophysicists Wu and Hu have proposed another theory, in which anesthesia works by reducing oxygen to the brain: "In essence, their mechanism holds that anesthetics act as barriers to oxygen transport in both membranes and proteins, reducing oxygen availability to the brain." (7) This idea draws on both of the above hypotheses, in that some function of membranes and proteins is altered (here, through lack of oxygen). The concept of decreased oxygen to the brain can also be applied to ethylene gas, which displaces oxygen as well; this could account for why it worked as an anesthetic.

Attempting to explain how the oracle at Delphi worked through geological findings is one way to try to understand this mystical figure. It seems likely that ethylene gas was present in the chamber where the priestess sat, since it was detected in mineral formations and springs under and surrounding the Temple of Apollo. Was the ethylene gas indeed capable of producing the trance-like state? The strongest support for this argument is that ethylene gas affects the nervous system by displacing oxygen to the brain, and the symptoms of oxygen deficiency described above include loss of consciousness. As one source writes, small doses of ethylene "produce a floating sensation and euphoria. In other words, just what an oracle needs to start having visions." (2) Its former use as an anesthetic also supports this idea, especially if Drs. Wu and Hu's hypothesis that anesthetics work by decreasing oxygen to the brain turns out to be correct, since that would imply that ethylene's displacement of oxygen could produce anesthetic, trance-like effects. Until further research is conducted on anesthetics, however, the ethylene gas hypothesis remains only a possible explanation for what went on thousands of years ago at the oracle at Delphi.

References

1) Greece Taxi Tours information website, good background information on ancient Delphi as well as travel information to modern Delphi.
2) Wikipedia, a free online encyclopedia, provides good links to related concepts.
3) What You Need to Know About website, contains many geology-related articles.
4) Ethylene gas, provides interesting information on ethylene gas, particularly its relevance to the fruit industry.
5) Hallucinogens website, an article from the Washington Post with the history and geological findings at Delphi.
6) The BOC Group website, a material safety and chemical data sheet on ethylene gas.
7) Article from a UPI Science correspondent, describes the possible mechanisms for the workings of anesthesia.
8) Dr. Joseph F. Smith Medical Library online, a useful source to search for information on medical-related terms.


Behavioral Response to Smell: the answer may be un
Name: Sarah Cald
Date: 2004-04-12 01:25:54
Link to this Comment: 9277



Biology 202
2004 First Web Paper
On Serendip

Of the five senses, smell is perhaps the least understood, both mechanistically and behaviorally. There are many questions as to why people react differently, if at all, to certain smells. This difference in behavior may well be due to some physical characteristic of the human body, but it remains to be seen what is responsible for it: the brain, or some other organ?

Some elementary conclusions regarding olfaction can be made using general observations; although such conclusions give little insight into the actual mechanism of olfaction and behavioral responses to smell, they are a good starting point for exploring these issues. First, we can conclude that odors are perceived in humans through a common pathway. We know this because, on some basic level, all humans can agree that certain things smell. For example, we can all agree that a rose smells; we may not agree on what a rose smells like, but it does have a scent. Along these lines we can also conclude, generally, that there are distinct odors that differ somehow in their chemical components, causing them to be received differently. The smell of an orange, for example, is easily identified as different from that of gasoline.

In addition to expanding on the conclusions above, this paper seeks to understand how humans can receive the same odor yet behave or respond differently to it. Gasoline is one example of an odor that elicits different behaviors in different people. Many people despise the smell of gasoline, saying it causes feelings of nausea, and avoid it as much as possible. Yet others find the smell somewhat pleasant and go out of their way to smell more of it, taking longer to pump gas or taking deeper breaths while doing so. This particular phenomenon intrigues me. More broadly, what is responsible for the behavioral response to odor?

In order to fully explore this question, a better understanding of the mechanism of olfaction is needed. Odorants are collected in the sensory epithelium of humans, located in the upper region of the nasal cavity (1). Odorant molecules are absorbed in the mucus layer of the sensory epithelium, where they then travel to receptor cells, whose cilia line the nasal cavity. Each cilium carries receptor proteins that are specific to certain odorant molecules (1). Binding of an odorant to a receptor protein activates a second messenger pathway; two such pathways are known to date. The more common one involves the activation of the enzyme adenylyl cyclase upon binding of an odorant molecule (2). This enzyme catalyzes the production of cyclic AMP (cAMP). The rise in cAMP levels opens cyclic nucleotide-gated cation channels, depolarizing the membrane, and this depolarization results in an action potential (2). These electrical signals are carried by olfactory receptor neurons to the olfactory bulb, which then relays the information to the cerebral cortex, resulting in the sensory perception of smell (2).

On average, humans can recognize up to 10,000 separate odors (3), yet have only about 1,000 different olfactory receptor proteins (4). Clearly, there is a step in the olfactory pathway that allows combinations of odorant molecules to be organized. This step takes place in the olfactory bulb: within this organ, the combined activity of different olfactory receptors is used to signal the brain about specific smells (4). Richard Axel, M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this processing role of the olfactory bulb best:
"The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15 and 54, so that must be jasmine ((4))."


Knowledge of the mechanism of olfaction now allows us to explore what is responsible for behavioral responses to odor. My initial answer was the brain: one thing I have learned in our class discussions is that, for the most part, behavior is the result of inputs and outputs of the brain and how they are processed. Accordingly, the brain should be responsible for the different behaviors observed in response to smell. However, after exploring olfaction at a more detailed level, I now believe that the source of the behavioral response to odor may lie within the olfactory bulb. One role of the olfactory bulb is to receive signals from odorant receptors and relay that information to the brain; in this way it processes and interprets the input signals from odorant receptors and produces corresponding output signals for the brain to interpret in turn. It seems logical that in processing the inputs from odorant receptors, the olfactory bulb also produces some type of output that results in a behavioral response.

Further investigation revealed evidence that may support this hypothesis. Signals from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought processes, but also to the limbic system, which generates emotional feelings (5). This leads me to ask whether the signals sent to the cortex and to the limbic system are identical or similar in any way. Is there a difference in the number of signals sent to the two locations in response to odorant reception? That is, do more signals go to the cortex than to the limbic system when a person smells oranges? All of these questions are worth pursuing; perhaps it is information in the signals sent to the limbic system that is responsible for the behavioral responses to odor.

There is much about olfaction that remains unclear, particularly about the relationship between behavior and olfaction. To date, there is little evidence that suggests what portion of the body is responsible for behavioral response to odors. Further investigations involving the olfactory bulb may prove worthwhile in determining what is responsible for the behavioral response to smell.

References

1) Monell Chemical Senses Center, an overview of olfaction

2) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.

3) The Mystery of Smell: The Vivid World of Odors

4) The Mystery of Smell: How Rats and Mice-and Probably Humans-Recognize Odors

5) Sensing Smell


Artificial Intelligence: Is Data Really 'Fully Fun
Name: Dana Bakal
Date: 2004-04-12 10:34:18
Link to this Comment: 9283



Biology 202
2004 First Web Paper
On Serendip

Every day, as I walk into Park Science Building and round the corner, I am faced with an intriguing poster. This poster asks about the possible personhood of machines. If a computer, robot, or android can pass the Turing test, the poster asks, can it then be considered a person? If it cannot pass, can its personhood be discounted?

Since humans began to develop complex machinery, and recently computers that mimic the human mind in many ways, they have been preoccupied by this question. Consider the science fiction series Star Trek: The Next Generation. One of the major characters is Data, an android. It (I will call Data "it" until we have settled its personhood satisfactorily) is generally treated as a person by its crewmates on the Enterprise, and people relate to it as if it were not only a person but a friend. But is Data really a person, and can we refer to it as "he"?

For this paper, I need to define several terms, or the discussion will be very confusing. I define a "person" as an entity with "a sort of awareness - of self, of interaction with the world, of thought processes taking place, and of our ability to at least partially control these processes. We also associate consciousness with an inner voice that expresses our high level, deliberate, thoughts, as well as intentionality and emotion" (2). I will refer to members of the species Homo sapiens as "humans." "Humans" are not necessarily "persons," but many or most are, and all "humans" deserve the presumption of "personhood."

Alan Turing believed that personhood could be tested for. He devised a test wherein a human subject sits in one room and interacts indirectly, for example through a computer terminal, with two tentative persons. One of these tentative persons is a human, and one is a computer, an artificial intelligence. The subject is allowed to communicate with both tentative persons, to ask questions, state feelings, and so on. If the human subject cannot identify which of the terminals represents the human, or if she determines that the AI is the human, then the AI has passed the Turing test and must be considered a person (2).

Let's say, then, that Data is subjected to a Turing test. If it passes (which it is almost certain to do, based on the way it is treated on the Enterprise), it will be a person, according to Turing. Can we then be sure that Data is a person and should have rights as such? Not really. One major argument against the Turing test providing indisputable proof of personhood is the Chinese Room paradigm.

This thought experiment was suggested by Searle in 1980. He asks us to imagine a room containing one human and a code book. Chinese writing is pushed under the door of the room by humans outside. The human inside does not speak or read Chinese, but the humans outside do. The code book contains a complex set of directions detailing how to "correlate one set of formal symbols with another set of formal symbols" (1). The human in the room can thus provide the correct answers to questions in Chinese without having any understanding either of the questions or of his responses. To summarize, the person in the room has a codebook that allows him to produce output that looks like understood Chinese. Applied to an AI, this experiment claims that an entity like Data could process input and provide output such that its shipmates would perceive it as a person, yet without having any consciousness or understanding of either the input or the output. Data could pass a Turing test, but pass it only because it is running a very convincing code.
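Searle's room can be caricatured in a few lines of code: a lookup table produces fluent-looking replies while nothing in the program understands anything. The phrases below are invented placeholders, not an actual codebook.

```python
# A toy "Chinese room": the program maps input strings to output
# strings purely by lookup. Nothing here understands Chinese.
CODEBOOK = {
    "你好吗?": "我很好,谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "今天天气很好。",    # "How's the weather?" -> "It's lovely."
}

def room_reply(message: str) -> str:
    """Return the codebook's canned answer, or a stock deflection."""
    return CODEBOOK.get(message, "请再说一遍。")  # "Please say that again."
```

To an outside questioner the replies may look competent, which is exactly Searle's point: passing a behavioral test tells us nothing about whether understanding is present inside.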

The Chinese room experiment cautions us not to conclude personhood when none may be present. There are several responses that challenge Searle's conclusion. The Systems Reply claims that while the human in the room cannot understand Chinese, the room and the human taken as a total system can. The Robot Reply says that if we can get a robot to act as if it were perceiving, understanding, and so on, then it would be; this argument is similar to the Turing test. These replies bring up interesting ideas, and there are many more of them to explore and consider.

Back to Data. If we cannot prove it is a person (it might just be a Chinese room), can we assume that it is not one? I would suggest that we must err on the side of caution and assume that he (I will now call Data "he") is indeed a person. I say this out of fear. What would happen if he were a person but were not considered one? What would the ethical implications be? What about humans who do not seem to be persons, who could not pass the Turing test or who show very little intelligence? If an autistic human is unable to pass a Turing test, should we deny that human personhood? The ethics would be appalling, and true persons would be denied their basic rights simply because we cannot prove their personhood.

"I think, therefore I am" is an interesting statement to apply to this discussion. Since we can perceive our own emotions and thoughts, we consider ourselves to be persons. We cannot directly observe the thought processes of other humans or of artificial intelligences, so we cannot prove that they are persons. In order to be safe, in order to keep society running, and in order to remain sane, we assume that other humans are persons, unless proven otherwise. Since we cannot prove that Data is not a person, we have the same evidence of his personhood and of the personhood of humans around us. The response to Searle that I want to emphasize is the other minds response. "If you are going to attribute cognition to other people you must in principle also attribute it to computers (1)."

So Data should be considered a person. But Data is a fictional android created by a fictional mad doctor who took the secret of how to construct a person to the grave. Can we now construct artificial persons? Is it even reasonable to believe that we ever will be able to? If we cannot create artificial persons, even in theory, then their potential personhood is moot.

Perhaps the largest problem in artificial intelligence, and in computing in general, is the frame problem. This problem was described eloquently in 1984 by Daniel Dennett, a leading author in the philosophy of mind. He tells a story in which scientists build a series of robots. The first, R1, fails in its task to survive because it does not anticipate the reactions that will be caused by its actions, or the secondary reactions caused by those. The second robot, R1D1 (robot-deducer), fails because it does consider all implications and is locked in an infinite computation of the possibilities. The third robot, R2D1, is programmed to decide which implications are relevant and which are not, and likewise fails as it sits rejecting the thousands it deems irrelevant. Dr. Westland of the University of Derby provides a more complete explanation of Dennett's story and of the frame problem on his website (4).

Westland explains that with robots, you start at zero. The things that seem obvious to a human, the things you never have to explain, need to be explained in detail to a robot. You do not have to tell a child, to use an example from Professor Grobstein, that opening the refrigerator door will not cause a nuclear holocaust in the kitchen. That possibility never occurs to the child; that is, it is rejected implicitly. With artificial minds, the implicit processing is not there, so the simplest tasks require the processing of impossible amounts of information (4).
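The explosion Dennett's R1D1 runs into can be made concrete with a toy count. Assume, purely for illustration, that every action has some fixed number of direct implications and that each implication spawns the same number of further ones.

```python
# Count the consequence chains a naive deducer must examine if it
# checks every chain of implications up to a given depth.
# The numbers are made up; the point is the exponential growth.
def chains_to_check(implications_per_action: int, depth: int) -> int:
    """Total chains of length 1..depth, branching uniformly."""
    return sum(implications_per_action ** d for d in range(1, depth + 1))

shallow = chains_to_check(10, 2)  # 110 chains: still tractable
deep = chains_to_check(10, 6)     # 1,111,110 chains: already hopeless
```

A human, by contrast, never enumerates these chains at all; the irrelevant ones are, as Westland puts it, rejected implicitly.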

But how do humans solve the frame problem? Where does our implicit programming come from? Nobody really knows. I would claim that since humans have solved this problem, the possibility exists, however remote it seems from here, that AIs could be developed which were not subject to it.

Data, to get back to our original example, seems to have solved it perfectly well, although he does sometimes need to be told simple things, taught like a child. Organisations such as IDSA and CSEM, and projects such as the SWARMBOT EU project, are working toward just that (5): they are developing algorithms and neural networks that allow robots to learn.

Assuming we can develop intelligent robots that can learn and pass the Turing test, we should treat them as if they were people, because we do not know that they are not. To develop them, the frame problem must be mastered, perhaps through the use of learning algorithms. Who knows? One day we may be attending a march for robots' rights!

References

1) The Internet Encyclopedia of Philosophy, description of the Chinese room argument

2) Brain Web Entrainment Technology, introduction to artificial intelligence

3) Internet Encyclopedia of Philosophy, overview of AI

4) Dr. Westland's site, description of the frame problem

5) Learning Robots, site of the IDSA robot project


Hypnotism: Entertainment or Science?
Name: Allison Ga
Date: 2004-04-12 13:53:35
Link to this Comment: 9284



Biology 202
2004 Second Web Paper
On Serendip

Hypnosis has long been referred to and observed as a mode of entertainment. Stage hypnotists make appearances at many college campuses and on television. Good Morning America featured Tom Deluca, a hypnotist who hypnotized a portion of the audience and had them perform his bidding: they laughed when he told them to, and one man was unable to feel the effects of ice water on his hand (1). Is hypnotism only to be used for entertainment purposes? This question has led me to look at the issue of hypnotism and explore its history, how it is perceived in the 21st century, and why it remains so controversial.

Everyone has been hypnotized at some point; becoming fully engrossed in a film or book is similar to a hypnotic trance. Have you ever been on your way home, down a familiar road, and suddenly found yourself at your destination without being totally sure how you arrived there? This experience, too, is similar to a hypnotic trance. The history of hypnosis can be traced back to ancient Egypt, and it even has a part in Greek myths; Greek oracles and soothsayers were said to reach this place of clarity through self-hypnosis (2). Franz Mesmer, a scientist of the mid-1700s, began the foray into the scientific uses of hypnotism through his belief that magnets held healing powers. Many believed that it was his overwhelming presence that induced his patients' trances; in this way Mesmer brought about the resurgence of hypnotism. A surgeon by the name of James Braid followed in the steps of Mesmer and Mesmerism when he deduced a fundamental rule of hypnotism: the success of a hypnotic state comes from within the subject, not the hypnotizer. He also coined the term hypnotism, from the Greek word hypnos, meaning sleep. In 1889 Albert Moll wrote the book Hypnotism, in which he insisted that it was a scientific subject to be included in the growing study of psychology. With the help of these men, the exploration of hypnotism's medical use, as well as the debate over whether it is science or entertainment, found its beginnings. At the same time, the stereotype of hypnotists as evil mind-controlling figures developed and made its way into books and film. In 1894 George du Maurier's book Trilby included the character Svengali, who controlled Trilby with hypnotism. Soon after, in the early 1900s, films began to be made featuring this same Svengali character in the guise of an evil hypnotist (3). The perception that a hypnotized person is under the complete control of the hypnotist is quite a misconception. In a hypnotized state, the person is hyper-attentive and still retains the ability to act freely (4). The imagination is piqued and the subconscious is tapped into, making the person open to things that the conscious self would not normally allow him or her to do or say. The hypnotized are extremely suggestible and open to the ideas of the hypnotizer.

Today, the perception of hypnotism appears to be moving toward increased acceptance of its therapeutic possibilities, but it is still not taken completely seriously. Present-day films are an interesting source of information as to how the 21st century perceives this age-old practice. Two examples will be discussed: one from a comedy, the other from a drama. In the 2001 film Shallow Hal, Jack Black plays a superficial man who does not feel the need to look beyond the surface of the women he encounters. He meets the self-help guru Tony Robbins, who hypnotizes him to see the "inner beauty" of all women: those who were gorgeous are now repulsive, while the 300-pound Rosemary, played by Gwyneth Paltrow (who, according to the film, is automatically unattractive due to her weight), is now stunning and skinny in Hal's eyes. Hypnotism in this example is used as a way to teach Hal a lesson, since he remains unaware of how his perception has been altered for a good portion of the film. This example presents hypnotism as entertainment, since Hal's inability to see things "as they are" becomes funny as well as ironic. Hypnotism becomes a non-scientific enterprise, which is made extremely evident by the fact that the process is prefaced by the phrase "Devils come out!" Hypnotism retains its entertainment value, complete with a comparable Svengali character who runs the show.

Another look at hypnotism in the media comes from the 2003 film The Butterfly Effect. Ashton Kutcher plays Evan, the film's main character, whose childhood has been filled with several traumatic experiences that he has blocked out of his memory. These experiences have shaped him and his childhood friends in different ways, which he tries to remember. He discovers that he is able to revisit the past through self-hypnosis, made possible when he reads his childhood journals; this endeavor backfires for Evan and his friends. While revisiting his past, he relives the moment when his thirteen-year-old self is hypnotized by his psychiatrist in order to recover his hidden memories. The doctor is forced by Evan's mother to bring him out of his hypnotic state when his nose begins bleeding, presumably as a result of the trauma of the memory. In this instance, hypnosis is portrayed as something utilized in a scientific context, but not guaranteed to work. This implies that the process of being hypnotized is in the hands of neither the hypnotizer nor the hypnotized, but is rather an entity of its own. Although this raises the interesting point that in hypnosis the conscious and subconscious are separated, it still does not present hypnotism as a serious and helpful scientific practice.

Hypnotism has a variety of uses: psychiatric hypnotherapy, in which psychiatrists help the hypnotized access memories that are the cause of phobias and mental anguish; forensic hypnotherapy in law enforcement, in which witnesses are hypnotized in order to access memories they have forgotten or blocked out; and medical hypnotherapy, which suggests that people can be cured of illnesses directly as a result of influencing the subconscious to heal the body (5). Forensic hypnotherapy is extremely controversial because hypnotism is a union of memory and imagination, which means that the hypnotizer can lead the witness to form false memories and that the hypnotized can mix reality and imagination together. These doubts suggest that hypnotism used in this sense is highly unreliable and thus should not be used. Medical hypnotherapy is also extremely controversial, since many people believe that the cure for various illnesses should not be left to something as unknown as the subconscious. Two important questions arise from the concerns over forensic and medical hypnotherapy. First, how can it be possible to separate the conscious and the unconscious? Second, how is it possible to remain in control of your actions and thoughts if you are in an extremely imaginative and suggestible place in your consciousness? Similar to the latent desires and associations revealed in dreams, the subconscious area of the brain is closed off to our conscious mind, perhaps because the impulses and hidden memories that our brain buries deep in the subconscious are buried for a reason. The possibility of altering disturbing memories makes hypnosis seem unreliable in the medical context, and hypnotism's ability to access memories that the subconscious has buried remains an issue that science cannot yet explain.

Throughout its history, hypnotism has been thought of as a means of mind control or as pure entertainment. Yet hypnotism is important in its various medical and scientific uses, even though the controversies and questions over its effectiveness and actual use are understandable. The simplistic idea of hypnotism as merely entertaining detracts from its numerous other uses, of which many people remain unaware.


References

1) ABC News, article titled "Is Hypnotism Science or a Sideshow?" about Deluca on Good Morning America

2) History of Hypnotism, a helpful website detailing the origins of hypnotism

3) History of Hypnotism

4) How Hypnotism Works, informational website on different aspects of hypnotism, how it works, its background, and what it can be used for

5) How Hypnotism Works


Fibromyalgia, Pain and What It Means
Name: Erica Grah
Date: 2004-04-12 20:08:32
Link to this Comment: 9294


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Introduction
Fibromyalgia is a musculoskeletal syndrome whose main symptoms are chronic pain and fatigue. The pain is generally widespread, but diagnosis depends on the existence of 11 of the 18 specifically designated tender points on the body (1), which are generally located where various muscles and tendons intersect (2). A tender point is generally defined as an area of hyperalgesia, which is another way of saying that a painful stimulus is perceived as much more painful than it normally would be. In the case of those suffering from fibromyalgia, even slight pressure to a tender point or any surrounding area has the capacity to cause intense pain. The fatigue experienced in fibromyalgia is quite similar to that suffered by patients with chronic fatigue syndrome; it is the occurrence of excruciating pain in response to moderate stimuli that distinguishes the two. Although widespread body pain is sometimes a symptom of chronic fatigue syndrome, it tends to be more intense and localized in origin (i.e., tender points) in cases of fibromyalgia. Thus, it is this aspect of the illness on which I will focus.

A Sensory Experience
There are two components of physical pain in humans. The first component is sensory. Our nervous system provides a way in which different parts of the body can recognize and react to pain. Potentially damaging input to the body is referred to as a noxious stimulus. Noxious stimuli are detected by pain receptors, called nociceptors, which are located in the somatic or visceral tissues of the body. Nociceptors are chemical, mechanical or thermal in nature, and they function by transmitting impulses to the brain, notifying it of the existence of pain (3). This, in theory, will trigger natural responses that remove the pain over time and permit healing, restoring the body to its natural, pain-free state of equilibrium.

Deficiencies occur when the brain's ability to free the body of pain is lacking or slowed. Intuitively speaking, persistent painful stimuli do not allow the brain sufficient time to respond effectively to the nociceptive impulses. Thus, as the impulses are created and travel to the brain, the stimuli build up and consequently amplify the intensity of pain. Neurologically, this idea helps to explain the concept of central sensitization, which occurs when the nociceptors' response to noxious stimuli is greatly amplified. Under normal circumstances, about one-fifth of nociceptors are triggered to regulate everyday pain impulses. However, when injury or inflammation of tissue occurs, the majority of them are activated. Even after the injury has healed, the amplified state of both nociceptive (i.e., noxious) and non-nociceptive impulses can remain. Hence, continuous pain can result in the sensitization of nociceptive-specific neurons in the presence of such input. Another type of neuron, the wide dynamic range (WDR) neuron, responds to both painful and painless stimuli. Because both types of impulses are heightened upon tissue injury, WDR neurons can also be sensitized, reducing the individual's threshold for pain, as any stimulus can be treated as a noxious one. This is especially significant with respect to WDR neurons, since their response to noxious stimuli is greater than that of nociceptive-specific neurons (3). This process is one proposed cause of chronic pain in fibromyalgia patients, particularly since the locations designated as tender points and the surrounding areas are more prone to minor, almost insignificant, injury (2).

Contributing to the body's (dys)regulation of pain impulses are neurotransmitters. More specifically, people suffering from fibromyalgia generally have much greater quantities of substance P, a chemical that excites pain responses in the nervous system and further works to sensitize neurons receiving nociceptive information (2),(4). In addition, these individuals tend to have lower amounts of serotonin and norepinephrine, both of which play a partial role in the reduction of pain (3),(5).

Determining the origins of chronic pain is a relatively murky process, particularly when injury or other easily noticeable factors cease to exist. I think that the notion of setpoints would be put to good use in this circumstance. Because our bodies are used to operating at equilibrium, anything and everything they can do to maintain such an equilibrium puts us at ease (4). However, everyone operates at a different equilibrium, and the pain threshold for fibromyalgia sufferers is perhaps even lower than the average person's, with respect to pain processing in particular. Given this reasoning, an individual operating at the fibromyalgia pain equilibrium, versus one at "normal" equilibrium, will have an increased number of nociceptors reporting pain at any given point. This is interesting given the results of a study in which people with the syndrome, upon application of mild pressure to their thumbs, exhibited increased brain activity in twelve areas, whereas control subjects had activity in two (6). Given a specific threshold setpoint for pain, it is rather plausible that the results of this study can be generalized to the wider population of people with fibromyalgia. If our bodies possess a pain setpoint which regulates the minimum number of nociceptors that are activated at any given point, it follows that given a higher setpoint in a fibromyalgia patient, any noxious stimulus, regardless of severity, would activate a higher number of nociceptors. This would increase the magnitude of the pain signals being sent to the brain, and therefore activate more parts of the brain. As a result, even at rest, there is an increased state of pain response in comparison to that in an individual without chronic pain.

The same logic may apply to the existence of greater amounts of substance P in the system. The setpoint for the substance is higher in suffering patients, and as a result, internal signals that excite its release fire more often. Similarly, if the equilibrium amounts of serotonin and norepinephrine are lower, pain impulses will not be inhibited as easily.

A Perceptual Experience
Pain is not only a sensory experience in humans, but a perceptual one as well. Simply put, people feel pain. This feeling of pain is as important as the existence of the pain itself. Like the sensory impulses, awareness of pain, in addition to the subsequent physical and emotional effects that it has on the individual, is contingent upon past experiences with pain, genetic factors and cognitive dispositions. In other words, there are many other factors, both conscious and unconscious, that can affect the intensity and magnitude of the pain impulses and the individual's awareness of them. This concept is reflected in the gate-control theory of pain, in which a person's thoughts or emotions at the time of pain processing can either reduce or amplify the perception of that pain (2),(3).

I find it very important to separate the overall pain experience into two separate components. The main reason for this is the I-function. Because the I-function is the part of the brain that consciously experiences, it is rather easy to differentiate between pain exclusive of the I-function and pain inclusive of it. Sensory pain being the former, if the same chronic pain were to occur in an individual with a disconnection between the I-function and the affected parts of the body (which remain linked to the rest of the brain), the intensity of the existing pain would not be reduced; rather, the awareness of that pain and the emotional consequences that frequently accompany it would be. Therefore, it goes without saying that pain perception varies on an individual basis. However, in the case of those suffering from fibromyalgia, because the equilibrium threshold for pain is lowered, the conscious experience of the pain's amplitude is greater, and it is the resulting emotions stemming from this experience that determine whether that conscious state of pain remains consistent, is lessened or is exacerbated. It seems that people in general experience a feedback loop of emotion when it comes to pain, to the extent that the I-function goes through a process of assessing pain and developing a proportional response, which in turn may or may not benefit the individual and their perception of what is occurring. This is true even more so in cases of persistent and intense pain.

Observations
I think the concept of chronic pain, and pain in general, is an interesting one. In gathering information for this topic, I have come to question the true origin of pain. Observing what is happening on a neurobiological level, and the impact that it has, raises the question of where one should look to solve the problem. What exactly is pain if it cannot be felt? I think that the I-function plays a major role in identifying any kind of pain, arguably more so with chronic pain; it is, after all, the I-function that recognizes the pain not just as a continuous stream of impulses but as a problem. I think the gate-control theory would be put to good use here, specifically because it seems to account for those cases in which a person can experience serious injury and not feel any pain. If the I-function is preoccupied, or focused elsewhere, the statement "there is a problem" does not exist in consciousness. I am sure that we all, although maybe not consciously, possess the ability to eliminate the perceptual effects of noxious stimuli. This capability, with respect to chronic pain, could possibly be beneficial. However, our awareness of pain is necessary to the extent that the idea "there is a serious problem" must be available in consciousness for us to seek medical attention or its equivalent before extreme damage is done to our bodies. As a result, I believe we are notified of any pain, chronic or otherwise, as a warning to protect ourselves, bodies and I-functions included, from existing threats.


References

1) An Overview Of Fibromyalgia , from the Mayo Clinic

2) Understanding Chronic Pain and Fibromyalgia: A Review of Recent Discoveries , from the National Fibromyalgia Association

3)The Neurobiology of Pain , from The National Pain Foundation

4) The Neuroscience and Endocrinology of Fibromyalgia , report from a workshop held at the NIH, from the National Institute of Arthritis and Musculoskeletal and Skin Diseases

5) Fibromyalgia: Not All in Your Head, Newsweek article written on the subject, posted by the National Fibromyalgia Association

6) New Brain Study Finds Fibromyalgia Pain Isn't All in Patients' Heads, from Science Daily


The Effects of Methamphetamine on the Brain
Name: Amy Gao
Date: 2004-04-12 20:38:42
Link to this Comment: 9296

<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

When the word "meth" is mentioned, what is the image that immediately flashes into your mind? Perhaps a picture of individuals huddled together in a dark alley somewhere, inhaling substances that give off repugnant odors? Or drug cartels waging bloody warfare upon each other on the Mexican-American border over control of the drug's supply? These stereotypical impressions may have been correct years ago, but methamphetamine, whose street names include speed, chalk, ice, crystal, crank, and glass, has long moved beyond being the dominant drug of choice in the San Diego, CA area.(1) The use of this drug has spread to rural and urban areas of the Midwest and the South. Thus, the problem of methamphetamine is no longer confined to a certain geographical area; it has become a nation-wide problem.(3)

Methamphetamine, a powerful synthetically produced stimulant of the central nervous system (CNS), has effects on the human body similar to those of cocaine. Under federal regulations, it is a schedule II drug, which means that it has a high potential for abuse with severe liability to cause dependence.(2) The drug, like many illicit substances, may be injected, ingested, snorted, or smoked. However, unlike cocaine, it has a longer-lasting effect on the human body. In animal models, methamphetamine has been shown to cause the release of high levels of dopamine, a neurotransmitter that stimulates brain cells, which in turn elevates mood and increases body movement. Consumption of this compound also has a neurotoxic effect on the brain cells that store dopamine and serotonin, another neurotransmitter.

Even minute consumption of methamphetamine will induce wakefulness, increased physical activity, decreased appetite, increased respiration, hyperthermia, and euphoria. Effects of methamphetamine on the CNS also include irritability, insomnia, confusion, paranoia, and aggressiveness. Since it is difficult for nerve cells to regenerate after having been damaged, use of this drug, in small or large quantities, can cause irreversible damage to the CNS. This observation was reported in a study by the National Institute on Drug Abuse (NIDA), which also found that individuals with a long history of methamphetamine abuse have reduced levels of dopamine transporters, which are associated with slowed motor skills and weakened memories.(4) Abusers who remained abstinent for at least nine months were found to have recovered from damage to their dopamine transporters, but their motor skills and memories were not found to have significantly recovered.

"Methamphetamine abuse is a grave problem that can lead to serious health conditions including brain damage, memory loss, psychotic-like behavior, heart damage, hepatitis, and HIV transmission," says Dr. Nora D. Volkow, director of the National Institute on Drug Abuse (NIDA).(5) In another study, done by Drs. Ernst and Chang at the Harbor-UCLA Medical Center in Torrance, CA, it was found that methamphetamine users had abnormal chemistry in all parts of their brains. According to Dr. Chang, "In one of the regions, the amount of damage was also related to the history of drug use: those abusers who had the greatest cumulative lifetime methamphetamine use had the strongest indications of cell damage."

There have been more than two decades of research focused on the effects of methamphetamine upon the body, especially the damage that the compound does to the brain. Even though the substance may bring about extreme pleasure, these "flashes" only last for a few minutes. It is well known that users can become addicted very quickly, and that the drug tends to be used with increasing frequency and in increasing doses.(6)

Like drug addiction of any kind, methamphetamine addiction may be successfully treated. Treatment usually includes counseling, psychotherapy, support groups, and family therapy. Medications prescribed to individuals assist in suppressing the withdrawal syndrome, reducing craving for the drug, and blocking the effects of the drug upon the body. It has been found that the more treatment given, and the longer its duration, the more likely the addict is to stay abstinent.(7)

The use of methamphetamine has been shown repeatedly to be associated with irreversible damage to the brain. Even though the neurotransmitter systems in the brain may recover once the individual has abstained from the drug, the damage has already been done and its effects cannot be reversed. With each consumption of the substance, the individual sinks lower into a never-ending spiral of drug abuse. A few moments of pleasure in exchange for permanent damage to the most important organ in the body (after all, the brain is the only organ that can never be transplanted): is it really worth it?

References

1. National Institute on Drug Abuse

2. Street Drugs, an informational site about drug abuse

3. U.S. Drug Enforcement Administration

4. NIDA, NIDA Research on Withdrawl from Methamphetamine

5. National Institute of Health, NIH News

6. NIDA, NIDA information about methamphetamine

7. NIDA, Information about drug addiction treatment


Tourette's Syndrome and Education
Name: Nicole Woo
Date: 2004-04-12 20:46:13
Link to this Comment: 9297


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Tourette's syndrome, though better known as the cursing disease, often manifests itself in much less extreme expressions. Though the media has created a sensationalistic portrayal of those individuals with TS who suffer from coprolalia, whose symptoms include excessive swearing and foul language, those who suffer from this symptom are only a small minority of individuals with TS (4). In fact, less than ten percent of people with TS are thought to have coprolalia (3). As those who suffer from Tourette's syndrome are usually diagnosed in childhood, around ages five to eleven, the varying tics and abnormalities which TS encompasses can greatly impact their social development and education. In addition, tics, which often result in alienation, can directly or indirectly cause psychological damage. The educators and parents of today must then address the question of how to teach and socialize these children despite their disorder.


Tourette's syndrome, identified by the French physician Georges Gilles de la Tourette in 1885, is defined generally as a neurological disorder that results in repeated, involuntary body movements (known as tics) and uncontrollable vocal sounds (5). Tourette documented in his research nine individuals who had experienced involuntary movements and compulsive rituals of behavior since childhood. The criteria for diagnosis of Tourette's syndrome, as defined by the Tourette's Syndrome Association (5), are as follows:
1) Both multiple motor and one or more vocal tics are present some time during the illness although not necessarily simultaneously.
2) The occurrence of tics many times a day (usually in bouts) nearly every day or intermittently throughout a span of more than one year and
3) Periodic changes in the number, frequency, type and location of the tics, and waxing and waning of their severity. Symptoms can sometimes appear for weeks or months at a time.
4) Onset before the age of 18.


Though the average age at which TS begins is 6-7 years old, and though almost all cases of TS emerge before the age of 18, there are exceptions. The most common tics among those who are diagnosed with TS involve movements of the neck, mouth, and eyes. Tics, particularly in childhood, vary in their severity and frequency, following what is known as a waxing and waning process (2). Often, tics are particularly noticeable for a finite period of time, after which they may subside for weeks or months. As a result of this waxing and waning, parents or educators may either dismiss such actions as a phase, or else attribute them to physical problems. A child who, for instance, continually sniffs his or her nose may be thought to have a cold or an allergy to something in the environment. However, when the child is brought to a physician, the tics cannot be attributed to an illness or allergy. After the tic has waned, parents tend to think that either the phase or the unidentifiable sickness has run its course.


The urge to act out a tic is experienced as irresistible and, similar to the urge to sneeze, eventually must be expressed (3). Both the severity and frequency of tics increase as a result of tension and stress, and decrease during relaxation or when the individual is focused on a particularly absorbing task. A continual source of frustration for parents is the fact that their children are sometimes able to remain tic-free, something which would seem to suggest that they have a certain amount of control over the tic. It is not uncommon for children with TS to be "free" of their tics when engrossed in a particular task, such as playing Nintendo. This is generally misinterpreted as children having more control over their tics than they in fact do (4).


Tourette's syndrome appears to be a genetic, inherited predisposition, although outside factors do appear to have some effect upon the severity of the symptoms (4). Recent research presents a convincing case demonstrating the relationship between a parent's own status with TS and that of their children. In 2003, researchers compared the onset of TS in children whose parents had TS with that in children whose parents did not have the disorder (4). Children who were considered to be "at-risk," or prone to TS, and "control" children, whose parents did not have TS, were observed between the ages of 3 and 6 years and followed with yearly structured assessments over intervals of 2-5 years. The results of this study, conducted by McMahon, Carter, Fredine, and Pauls (2003), seem to indicate a definite genetic component to the onset of TS:


"Of the 34 at-risk children who were tic-free at baseline, 10 (29%) subsequently developed a tic disorder; 3 of those 10 met criteria for TS. None of the 13 control children developed a tic disorder" (4).


Research also suggests that gender is a factor in who is prone to develop TS, as males are affected 3-4 times more often than females. While the transmission of Tourette's syndrome does appear to be genetic, the "basic underlying defect" which causes TS remains unknown (2). A number of researchers speculate that TS results from abnormalities of neurotransmitters, more specifically, of dopamine activity within the basal ganglia. This conclusion has been tentatively made after observing biochemical brain analyses of those diagnosed with TS. Researchers observe that dopamine-blocking agents often suppress tics in patients.


While TS, and the tics that result from it, are serious in and of themselves, often the most serious problems for those with this disorder are not caused by TS itself. Clinical populations of those who suffer from TS also have other behavioral problems, especially obsessive-compulsive behaviors (2). As many as sixty percent of children treated for TS have symptoms associated with attention deficit hyperactivity disorder (ADHD). Other conditions known to occur simultaneously with TS are mood disorders such as depression and bipolar disorder (4). In the previously mentioned study, conducted by McMahon, Carter, Fredine, and Pauls (2003), the authors noted that:


"Obsessive-Compulsive Disorder (OCD) or features of OCD emerged in 11 of the at-risk cases, but not in any of the controls, while Attention Deficit Hyperactivity Disorder (ADHD) occurred in 14 at-risk children but not in any of the controls" (4).


Tourette's syndrome may result in difficulties in the child's education, learning disabilities that may encompass, but are not limited to, difficulty reading or writing, problems with mathematical computations, or perceptual problems (5). Knowing this information, what is it that teachers can do to help and encourage their students with TS?


While there is no cure for TS, and though there are numerous options which attempt to chemically combat the effects of Tourette's syndrome, the parents of children who suffer from TS, as well as their teachers, are required to think beyond the scope of chemicals. It is of great importance that TS is diagnosed early on. Because tics can alienate children from their peers, it is just as important for the parents to recognize the problem as it is for the teacher to nurture understanding in the classroom. Generally speaking, those diagnosed with TS have the same intelligence level as those who are not affected by the disorder; thus, students with TS should be held to the same standards as other students. That being said, additional measures should be taken to lessen stress and anxiety. Untimed exams and/or a separate room for exams help in reducing stress for the student. It is also helpful for the teacher to give directions in stages, as too much information at one time may be overwhelming. As the urge to express the tic may at times become unbearable, teachers should make it clear that the student can leave the class, possibly to go to a "safe place," where the tic can be freely expressed. Perhaps most importantly, teachers and parents alike need to give positive feedback when the child performs well in a social or academic setting. For children whose actions often seem out of place, positive feedback is invaluable. While there is still hope for a cure for Tourette's syndrome, until one is found, both parents and teachers must realize that Tourette's syndrome, if understood and dealt with lovingly, does not have to be a debilitating disorder.


References



1) Health: Diseases, Database of various illnesses


2) MDVU Library, Good discussion of the causes of TS


3) SCoTENS, discusses special education needs


4) Tourette's Syndrome, Very good website for general as well as more in-depth information on TS


5)Tourette's Syndrome Association , Helpful in reference to TS and education


Alcohol and Impulse Control
Name: Elizabeth
Date: 2004-04-13 00:36:04
Link to this Comment: 9311


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

One of the most visible ways alcohol affects an individual is the loss of inhibitions observed in those with blood alcohol levels as low as .01% (1). Most college students have experience with the behavioral effects of alcohol. Friends become more outgoing and appear to lose all inhibitions as they continue to drink. A normally shy individual may be table dancing, or a quiet friend may be the center of attention. This paper will explore the possible causes of this outgoing and sometimes outrageous behavior, as well as why drinkers so often consume alcohol beyond their limits.

The prefrontal cortex, located at the anterior end of the frontal lobes, is specifically responsible for normal control of impulses. The prefrontal cortex has been linked to impulse control because damage to this region of the brain can lead to loss of inhibitions (2). One particular example of prefrontal cortex damage is the injury suffered by Phineas Gage, who had an iron tamping rod penetrate his brain. He survived the incident but showed poor impulse control that had not been part of his personality before the injury (5).

Individuals who consume alcohol can show impulsive and reckless behavior similar to those with frontal lobe damage. Since the frontal lobes have been previously linked to impulse control through studying individuals like Gage, I hypothesize that alcohol may act on these same regions to cause a loss of inhibitions. Additional evidence that alcohol acts on the frontal lobes was discovered when chronic alcoholism was linked to structural and neurophysiologic abnormalities that can be observed on functional magnetic resonance imaging scans (8). Ethanol must be working on the frontal lobes in order to inflict this damage over time. Further study of ethanol's effects on the frontal lobes led to alcohol's specific interactions with two neurotransmitters.

Neurotransmitters are released into the synaptic cleft between neurons and can cause an excitatory or inhibitory response. An excitatory response is produced when a neurotransmitter from the pre-synaptic neuron depolarizes the post-synaptic neuron, making it more likely to fire and release its own neurotransmitter (3). An inhibitory response is produced when the pre-synaptic neurotransmitter suppresses the activity of the post-synaptic neuron, inhibiting the release of its neurotransmitter (3).

Two neurotransmitters, gamma-aminobutyric acid (GABA) and dopamine, are responsible for the loss of impulse control in those who consume alcohol. Dopamine causes an excitatory response at dopamine receptors in the frontal lobes (7). Alcohol increases the amount of dopamine acting on receptors and enhances the normal feeling of pleasure associated with the dopamine system (7). Alcohol may function like cigarette smoke to inhibit the action of the enzyme monoamine oxidase, which is responsible for breaking down dopamine in the synaptic cleft (7). Since dopamine is not broken down as efficiently when ethanol is present, it can act on the post-synaptic neuron for a longer period of time. The feeling of pleasure is thereby increased, and the individual will want to keep drinking to maintain the sensation. The response of ordering another drink when one is already visibly intoxicated can thus be explained by the pleasurable effect that an increased alcohol concentration has on the brain.

Alcohol also enhances the effects of the neurotransmitter GABA on GABA receptors in the prefrontal cortex (4). GABA inhibits the release of other neurotransmitters from post-synaptic neurons. Ethanol co-binds with GABA to GABA receptors on chloride ion channels (6). Ethanol causes the prolonged opening of the chloride ion channels and greater uptake of chloride ions by the post-synaptic cell. The influx of chloride ions hyperpolarizes the post-synaptic neuron so that it cannot conduct an action potential and initiate a response to a stimulus (6). Since the post-synaptic neuron cannot release a signal, the ability of the neurons in the frontal lobes to inhibit socially unacceptable behavior is reduced. Decision-making is also impaired, and the impulsive, uncontrolled behavior of intoxicated individuals results. Dr. Richard Olsen conducted research on specific GABA receptors: receptors with beta-3 and delta subunits remain open for an extended period of time when exposed to low levels of alcohol (1). This subunit combination probably has a higher affinity for the binding of ethanol. The GABA receptors Dr. Olsen studied respond to much lower levels of alcohol than GABA receptors with gamma-2 subunits, so nervous system control over behavior can be altered after one drink (1). The varying binding site shapes of GABA receptors may explain the progressive loss of control that alcohol causes. Some receptors respond to lower levels of ethanol, and as alcohol concentrations increase, more GABA receptors are affected. The loss of inhibitions results because the post-synaptic neurons are progressively less able to conduct an action potential and elicit a response.

The effect of alcohol on the GABA and dopamine systems causes the loss of control that can be observed when individuals drink. Through excitatory and inhibitory synapses, the actions of certain neurotransmitters alter the behavior of an intoxicated individual. Further study of the specific binding of ethanol to receptors may lead to better treatment of intoxicated individuals. Studying the effects of alcohol will also lead to a greater understanding of the role GABA and dopamine play in altering observable human behavior.

References

1)Even a little alcohol Affects the Brain, This online article written by Steven Reinberg contains a summary of the Research done by Dr. Richard Olsen on various GABA receptors.

2)Executive Functions and Frontal Cortex, This is a website containing information on the function of the frontal lobes and specifically the prefrontal cortex.

3)Synapses, This article provides good background information on the structure and function of neurons as well as a description of excitatory and inhibitory responses.

4) Neural Activity and GABA/Glutamate in Prefrontal Cortex: A Combined fMRI/MRS-Study, This site confirms that GABA functions in the prefrontal cortex and describes a study of the specific amounts of GABA in individuals using fMRI technology.

5)The Story of Phineas Gage, This is the story of Phineas Gage, including his background and specifics about his injury.

6) How Drugs Affect Neurotransmitters, This site provides text and a diagram about the effects of GABA on a post-synaptic neuron.

7) Tobacco, Alcohol and Dopamine, This site discusses the many impacts that alcohol and tobacco have on the brain, including their effects on the dopamine system.

8) FRONTAL LOBE CHANGES IN ALCOHOLISM: A REVIEW OF THE LITERATURE, This site presents research data from those who studied the effects of alcoholism on the frontal lobes.


Would you like fries with that?
Name: Erin Okaza
Date: 2004-04-13 01:25:40
Link to this Comment: 9314


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Exhausted, you return home from work with a bag of McDonald's and flip on the television. Halfway through your juicy Big Mac, crispy fries and your 44 oz. Coke, a public service announcement for the American Heart Association comes on at the tail end of the commercial break to tell you how you are currently sucking down enough saturated fat to harden the arteries of an elephant.

Whether or not you realize it, you are probably one of the millions of Americans bombarded by the anti-cholesterol revolution. Most people are aware of the well-publicized relationship between high cholesterol and the risk it poses to the heart. However, the vast majority of individuals are unaware of cholesterol's surprising connection to behavior. This paper will investigate this rather interesting connection by first laying out the platform of the current cholesterol movement. Then, it will look at studies supporting cholesterol's impact on behavior. Next it will examine how these two viewpoints combine to provide a way of looking at "set-points" and the nervous system. Finally, it will consider why most people wouldn't anticipate this connection and the implications such a discovery might have for understanding ourselves.

It is well known that too much cholesterol in our blood is not a good thing – but is that the whole picture? For most people, the scare of coronary artery disease and atherosclerosis – in which the insides of the arteries become hard and narrow due to cholesterol plaque buildup – is enough to make anyone shudder at any mention of cholesterol (1),(2). However, that does not mean that all cholesterol is bad. Cholesterol is carried through the bloodstream by two types of lipoproteins: LDL (low density lipoprotein), which causes buildup in the arteries, and HDL (high density lipoprotein), which carries cholesterol to the liver. Higher levels of LDL, or "bad cholesterol," increase your chance of getting heart disease, whereas higher levels of HDL, or "good cholesterol," do the opposite (2). There are healthy levels of both in our bodies; however, high cholesterol has no symptoms, so its only indicator is a blood test (1). In May 2001, the National Cholesterol Education Program (NCEP) revised the 1993 cholesterol guidelines (1),(2) by lowering the range of acceptable "normal" cholesterol levels. As a result, 13 million more Americans were advised to make dietary changes to lower cholesterol (3). The good side is that this measure heightens people's awareness and generally improves overall health. The bad side, however, is that it indirectly projects the mentality that "lower cholesterol is better." With the media and campaigns pushing an "a.s.a.p." lowering of cholesterol, are there consequences? Possibly ones we are not aware of?

We know that physiological deviation from what is considered "normal" can cause drastic results – high levels of bad cholesterol stymie the operation of our heart and cardiovascular system (1). But now let's challenge the completeness of this picture and ask: what about the other way around? How else does deviation from acceptable levels of cholesterol affect our body? Is there a consequence of having cholesterol levels that are too low?

While the negative effects of cholesterol keep us on low-fat diets for the benefit of our physical health, several studies raise suspicions that taking this obsession too far might come at a sacrifice to our mental health. Prompted by a Yale study proposing a cholesterol-serotonin hypothesis of aggression, Dutch researchers revealed consequences of low cholesterol by providing evidence linking low cholesterol levels to increased depression in men (5). Subsequent studies support a connection between low or lowered cholesterol levels and adverse behavioral outcomes such as aggressive behavior and depression (4),(7). It is believed that low cholesterol negatively affects the metabolism and activity of the brain neurotransmitter serotonin, which is known to be involved in the regulation of mood. Other explanations target a certain type of fatty acid, omega-3, found in large quantities in the brain (6); it is speculated that low levels of omega-3 could impact behavior through mechanisms still unknown. The point of this information is not to undermine current wisdom about the treatment of high cholesterol in heart disease, but rather to focus on the possible connection between low cholesterol and mental health consequences. The other significant consideration of such findings is how cholesterol might help us better understand alterations in mood and behavior. More generally, these findings underline the notion that the nervous system is more interconnected with, and impacted by, known physiological mechanisms than we were previously aware.

It is established that too much cholesterol is not good for you; however, it is incorrect to assume that the lower your cholesterol, the healthier you are. When we put the two pieces together, evidence advocating either side of the cholesterol argument suggests that the body operates at maximum efficiency at an optimal level – a certain cholesterol set-point (8). Alteration of cholesterol levels below the set-point disturbs the consistency of serotonin metabolism and other, unknown mechanisms that might act as a regulatory loop for behavior; an interruption of this process results in the behavioral outcomes noted above. Cholesterol is not something we can consciously sense or manage – there are no symptoms of high or low cholesterol – which implies that its regulation does not happen in our I-function. We have no direct control over our arteries clogging with plaque or over the metabolic rate of our serotonin; the "other part" of our nervous system must account for these mechanisms. In effect, we can extend this notion of the "other part" of the nervous system (the I-functionless nervous system) to account for behavioral phenomena. Such reasoning can explain how cholesterol plays a role in behavioral outcomes such as violence and depression, by way of set-point irregularity, without the I-function.
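To make the set-point idea concrete, here is a toy negative-feedback loop (a hypothetical sketch with made-up numbers, not a physiological model): deviations from the set-point are corrected step by step, but changing the set-point itself changes the steady state the loop defends, which is the kind of regulatory disturbance the paragraph above speculates about.

```python
def regulate(level, set_point, gain=0.3, steps=50):
    """Nudge a regulated quantity back toward its set-point on every step."""
    history = [level]
    for _ in range(steps):
        level += gain * (set_point - level)  # correction proportional to the error
        history.append(level)
    return history

# A perturbation away from the set-point is gradually corrected...
trace = regulate(level=150.0, set_point=200.0)
# ...but lowering the set-point itself shifts the steady state the loop defends.
shifted = regulate(level=150.0, set_point=160.0)
```

Both runs converge, just to different values: the loop does not care what the "right" level is, only what its set-point says, which is why altering a set-point (rather than the level) could have system-wide consequences.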

Why did it not seem that these two sides could be put together to reach the above conclusion? Perhaps it has something to do with the fact that it is hard to bring to consciousness that which one is unaware of. For example, medical professionals might think they have a full explanation of the impact a certain molecule has on the body (in this case cholesterol), yet be unaware of other existing pathways, loops or interactions. Studied from a physiological standpoint, cholesterol offered a very reasonable explanation for one particular set of medical outcomes. However, when approached from the standpoint of the nervous system, a new, previously unknown explanation manifests itself, offering further information about the linkage between cholesterol and behavioral variation. In turn, we might question the true extent of our knowledge and ask whether what we know really stops there.

Though this paper investigated the lesser-known connection between cholesterol and behavior using set-point variation and aspects of the nervous system, it raises concern over the completeness of our current knowledge of physiological processes – of what we think we know for sure. The nervous system offers an additional explanation of the connection between our bodies and behavior. If such connections were previously overlooked due to a lack of awareness of the mechanisms between the nervous system and the molecular workings of our bodies, how might we become "aware" of mechanisms in our body that do not go through the I-function but nonetheless exist and impact mental and physical outcomes? Another question arising from this discussion is whether the feedback loops that regulate set-points can be permanently altered without the possibility of long-term negative consequences.

In this discussion, cholesterol is more than a culprit in heart attacks. As it turns out, we can use information from both sides of the cholesterol debate to shed unique light on how cholesterol can influence behavior through set-point alteration without our being conscious of what is happening. Next time, don't be so quick to replace your #4 extra value meal with a soy burger, baked potato and a jug of OJ. Don't just do it...think for a second. Odds are, cholesterol goes to both the heart and the head.


References

1)Medicine Net, Site provides good general information about cholesterol, especially LDL and HDL

2)National Heart, Lung and Blood Institute, good information about anything having to do with the heart, heart conditions, cholesterol and heart disease

3) Dr. Mercola web page , a doctor's commentary about change in cholesterol guidelines, includes JAMA citations

4)Skali homepage, a good compilation of articles documenting the bad effects of having low cholesterol, good citations to articles

5)The Brain , This site showcases an article reported by Reuters News about Depression linked to low cholesterol summarizing a study by Dutch researchers in Psychosomatic Medicine.

6)New Century homepage, This site displays an article about the connection between mood and food with good references.

7)Science Daily, article about the price of low cholesterol among women, from the Center for the Advancement of Health

8)Dr. Mercola web page, a doctor's commentary about the link between low cholesterol, aggressive behavior and depression, includes Journal of Behavioral Medicine


Psychological Components of Chronic Pain
Name: Natalie Me
Date: 2004-04-13 01:53:03
Link to this Comment: 9315


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

My sister suffers from a chronic autoimmune disease associated with chronic pain and fatigue. Over the years I have witnessed her struggle with the disease and observed her symptoms fluctuate with her mood. Am I suggesting she is faking her symptoms, forgetting them when happy or exaggerating them when down? Certainly not; I have merely noticed a common trend: mental state affects physical state. What is this unique mind-body connection? And how can one's mental state affect one's physical perceptions?

In my online investigation of chronic pain, I discovered that a distinction is constantly drawn between ACUTE and CHRONIC pain. Acute pain is typically natural and 'healthy' pain. Pain normally serves a very useful function: to warn us of danger and to protect our bodies. "Without pain we would have no way of knowing that something was wrong" and would "be unable to take action to correct the problem or situation that is causing the pain" (1). Acute pain is short-term and involves a physically observable or physiologically provable source. Chronic pain, however, is persistent or recurrent, lasting for "at least three months and most probably for several years" (2). This is not considered a healthy body response, especially when the stimulus is apparently nonexistent.

The problem is that chronic pain has no specific etiology. There is no diagnostic test that can prove you suffer from chronic pain (though studies comparing the brain states of patients with heightened pain sensitivity to those of patients with 'normal' sensitivity, given equally pain-inducing stimuli, have found that the two groups' experiences produce similar brain states). There is also no proof of the exact psychological processes involved in the experience and management of chronic pain. Does chronic pain cause a bad mood, depression, anger, and anxiety, or do those states cause chronic pain? It seems that no one really knows; "the exact medical causes of the chronic pain condition are unknown or poorly understood" (2). Research suggests that the relationship is reciprocal: Thomas A. reports that "pain and psychological illness have reciprocal psychological and behavioral effects," implicating a co-morbidity of depression and pain (3).

Again, though, there does not seem to be a discernible cause for chronic pain, nor for its association with depression. Perhaps this is why, in all my reading, chronic pain is constantly being defended. Consider the following examples:


"Emotional stress and negative thinking can actually increase the intensity of the pain, but the presence of psychological factors does not mean that the pain is imaginary" (1).

"We've all heard it before: 'It's in your head'" (4).

"Sometimes those with chronic pain are blamed for their condition or made to feel like they were making it all up..." (2).

Where does the need come from to defend chronic pain against accusations that it is 'imaginary,' in one's head, or just a lie? Why is it necessary to declare chronic pain real? Apparently this question is the real one. Dr. Nortin M. Hadler reports that the "escalating discordance between feeling miserable and possessing no demonstrable primary pathophysiology" is a byproduct of a brand of medical science and the real problem with treating chronic pain (5). The western biomedical approach, with its focus on diagnosis and labeling as well as its symptomatic definition of health, has produced a pathological focus in healing that mis-socializes patients and doctors into defining disease in a detrimental way.

Western medicine is based on a specific duality that has pervaded culture since Descartes first separated mind and body. By treating the mind and body as separate, one is forced into having either a physical or a mental ailment. The "reductionistic clinical thinking that has enslaved western physicians for generations" induces physicians to diagnose and label a disease along those specific and separate lines – mind or body (1). Patients begin to feel as though their disease must be one or the other, and for chronic pain sufferers, without a specific etiology to blame, western medicine turns to the other source: the mind.

I realize I am quite a distance from where I started. I began wanting to know how mood might affect pain or disease in general and have ended with a critique of our medical culture. The problems with our conceptualization of disease are numerous, and I could spend volumes discussing the issue. Dr. Bennett argues that we should avoid using labels that, once culturally defined, stigmatize the patient. However, this process is ingrained in other spheres of our life as well; certainly it is something we cannot avoid without a great deal of social change. "To understand the language of pain, we must learn to listen to how the pain echoes and reverberates between the physical, psychological, and social dimensions of the human condition," and this is not easy to do for patients and doctors alike (1). As a sociologist currently looking at social movements, I can't help but wonder what sort of collective behavior would be needed to change the way we define health, science and ourselves as both social and biological agents of action.


References


Works Cited:

1. http://www.addiction-free.com/pain_management_&_addiction_psycho_components_of_pain.htm

2. http://www.aboutarachnoiditis.org/website_captures/chronicpainhandbook/

3. http://rockhawk.com/chronic_pain_and_depression.htm

4. http://webhome.idirect.com/~readon/pain.html

5. http://www.rheuma21st.com/archives/cutting_edge_fibromyalgia.html

Works Consulted:

1. http://www.mindpub.com/art203.htm

2. http://www.pearsonassessments.com/resources/painprofile.htm




Monkey See, Monkey Do?
Name: Lindsey Do
Date: 2004-04-13 02:13:41
Link to this Comment: 9317


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the words of the 18th-century poet Edward Young, "we were all born originals, why is it that so many die copies?" (1). Indeed, if the recent discovery of "mirror neurons" in monkeys suggests a similar pre-existing brain structure for imitative behavior in humans, the question becomes: what does it mean to emulate others to the extent that we adopt observable behavior as our own? How can we define imitation as a conscious or unconscious aspect of human behavior from both a social and a neurological standpoint? Can human behavior ever resemble "true imitation"?

"Mirror neurons" are considered one of the most exciting and controversial new developments, thought to have potentially widespread implications across the natural and social sciences. Mirror neurons were discovered in the Macaque monkey's ventral pre-motor cortex, which controls hand and mouth movements. Neurons in this area, labeled F5, were found to fire both when the monkey observed an action performed by another (perhaps conspecific) creature – seeing another monkey or human grasping a nut – and when the monkey performed the same or a similar action (grasping a nut) (2). The implications of this discovery become even more meaningful because the F5 area is homologous to Broca's region, which is thought to be involved in speech control as well as in the pre-linguistic analysis of others' behavior in humans (2).

Mirror neurons imply that we may not have to physically execute an action in order to imitate; rather, our motor system becomes active (as observed in neural activity) as if we were executing the very action we are observing, on an unconscious level. Clearly, we should not be too quick to translate the neural activity found in Macaques to our own neural behavior. Humans have a higher consciousness, which implies we have the ability to imagine ourselves acting, or to internally simulate a vision of an action. However, envisioning ourselves imitating others does not necessarily translate into actual imitation of these actions. Perhaps this suggests that, unlike the Macaques, we can consciously choose to imitate, if and when we do. How do we then distinguish between actions/observations that become internally integrated (conscious processing akin to learning) and those merely echoed (unconscious processing) in imitation?

First, let us define what we mean by imitation. Imitation is defined as "to be, become, or make oneself like; to assume the aspect or semblance of; to simulate: intentionally or consciously; unintentionally or unconsciously" (3). Another description from the psychologist Thorndike, who was possibly the first to provide a clear definition of imitation within a social context, is given as "learning to do an act from seeing it done" (4). In other words, he suggests that we learn new behaviors by copying others.

These definitions suggest that imitation is not merely an unconscious, automatic reflex as suggested by mirror neurons, but a mechanism that involves a certain amount of integration and perception similar to learning. Gallese theorizes that mirror neurons allow us to implicitly perceive an action as equivalent to internally simulating it (2); in humans, however, imitation seems to be inextricably linked to our higher consciousness.

For some, imitation means an exact copy of behavior, which is most commonly found in animals (i.e. the Macaque monkey, bird song, etc.; see (5) for examples). If mirror neurons constitute what Vittorio Gallese proposes as a "neural mechanism [that] enables implicit action understanding," then we have the capacity to represent and recreate the mental states of others (Theory of Mind) as part of our behavioral imitation. This notion implies that mirror neurons might also provide us with the ability to distinguish our self from others (6), which is relevant in a social context. If we follow this idea, a dysfunction of mirror neurons would interfere not only with our imitative abilities, but also with our awareness of our relationship to others around us on an observable level.

Patients experiencing anosognosia deny not only their own paralysis but also the paralysis of others (7). This suggests that an individual's lack of awareness of his or her own physical capability is intrinsically connected to a similar physical ability observed in another. Echopraxia is another example of a possible impairment of mirror neurons and the imitative reflex. This disorder is described as the "impulsive tendency to imitate other's movements. Imitation is performed immediately with the speed of a reflex action" (8). In this case, imitation is involuntary and spontaneous, suggesting that it is a behavior autonomous from the I-function. Unlike those with echopraxia, however, individuals with imitation behavior do not copy the movements of the acting individual, but rather perform an action identical to the observed one; it is "the goal rather than the movement" that is imitated in this pathology (8). Although these actions cannot simply be reduced to a defect in mirror neurons, there is a certain imitative aspect inherent in these behaviors that suggests an unconscious connection between mirror neurons and how we act.

Laughing and yawning are also given as examples of imitative actions, although they are thought to be "contagious" behaviors resulting from a stimulus. We can suppress these actions voluntarily if we choose to, but we can't deny that observing them will often generate a similar response in others. Regardless, laughter and yawning are not examples of "true imitation" because they are innate behaviors, not actions that we have learned to execute by observing others.

Clearly, imitation involves a certain degree of intentionality and goal-orientation that is inherent to our I-function. On the unconscious level of our "copycat" behavior, mirror neurons are said to function in the recognition and representation of specific actions/behaviors between others and ourselves. In order to get it "less wrong," then, let me suggest a hypothetical situation: if we isolate ourselves in a vacuum, it is likely that we lose the ability to regulate our mind and body (without input to inhibit the action potentials generated by the brain). Therefore, taking a different stance, it seems logical to me that imitation might, on a larger social scope, act as a regulatory mechanism. Mimicry enables individuals to know that what they are doing is "ok" because they are acting like and along with others, creating a bond (9).

Perhaps mirror neurons evolved in humans to inhibit corollary discharges, serving as a reference point for "correct behavior" in a negative feedback loop – a homeostasis within a social context. Although my hypothesis may be reductionist, it might be helpful to think of mirror neurons as homeostatic because observation (input) seems to be directly related to performance (output) in neural activity.

Imitation occurs at all ages. It might be interesting to research imitative behavior as age-specific: if children imitate more than adults, this might provide more evidence that mimicry can act on an unconscious level, since children are not endowed with the same cognitive processing as adults. We often see young children imitating their parents, integrating innate behaviors with their observations (walking, talking, etc.). But we also see adults watching others and adopting similar behaviors. For example, I often observe people who may not know how to properly lift weights watch, then copy, others (although not always correctly). Imitation evidently serves as a quick way to learn a new behavior that might serve us well – by watching others first, we can assess how these actions will be received and whether they "work" or not. Whether we choose to imitate, and whether we correctly assimilate these behaviors in our understanding and interpretation of them, may be questioned.

Returning to my original, and perhaps unanswerable, question: can humans ever truly "imitate" if we are subject to so many internal and social forces? Imitation allows us to both consciously and unconsciously ape others' behavior, although it seems to occur more commonly on a level of self-awareness. Mirror neurons may suggest a neurological explanation for mimicry, but until we can pinpoint their exact function in learning and adopting behaviors observed in others, whether consciously or unconsciously, we must be careful about drawing conclusions about their capabilities. If mirror neurons do indeed follow an involuntary "monkey see, monkey do" rule, then we must alter our concept of brain=behavior, in that our behavior is more a reflection of our external perceptions of the world and our relation to those around us.

References

1)Quotes

2)What Mirror Neurons Can and Cannot Do, a different take on Mirror Neurons

3)Online Version of the Oxford Classical Dictionary, definition of Imitation

4)Imitation and the Definition of a Meme, Susan Blackmore

5)Animal Imitation

6)The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity, Vittorio Gallese

7)Ramachandran, Bio 202 lecture notes link

8)Shared Manifold Hypothesis from Mirror Neurons to Empathy, Gallese

9)Nature Magazine, an interesting link on "copycatting"


Fact--or Fantasy? The Truth Behind Munchausen Syndrome
Name: Shadia B
Date: 2004-04-13 02:15:32
Link to this Comment: 9318


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

You don your white overcoat and grab a stethoscope, expecting to conduct a routine examination. Your young, female patient enumerates a variety of complaints, including painful swelling over her right breast. You notice multiple scars on her torso, question her about her medical history, and learn that "she has a history of similar recurrent swellings over the abdominal wall, which needed repeated surgical drainage on about 20 occasions". Her problem had started at the age of 17, when she was first diagnosed with immune deficiency. Soon after medication was administered, she developed symptoms suggestive of deep vein thrombosis in one leg. Since the medication was given under supervision, she was thought to have developed a resistance to the drug. "She soon complained of bilateral painful swellings associated with weakness of the lower limbs and consistent with bilateral femoral nerve palsy and hematoma. Surgical evacuation was rapidly carried out but recurrent abscesses remained a problem." The list continues, each item more spectacular than the last. And yet the cause of her illness remains undiagnosed. Baffled and confused, you consult your fellow doctors and order a battery of tests, determined to detect the cause. Would it ever occur to you that your patient is really a pretender? In his case report (2), summarized above, Aamer Aleem, a doctor in the UK, illustrates the typical scenario a Munchausen patient presents.

Munchausen Syndrome is an extremely disturbing medical condition, often going undetected for decades. Not to be confused with hypochondriacs, who experience physical symptoms of illnesses and visit doctors truly believing they are ill (4), those with Munchausen's make a habit of "capitalizing on, exploiting, exaggerating or feigning illness, injury, or personal misfortune" (1) in order to gain the attention they feel cannot be gained by any other means. Named after a German soldier renowned for exaggerated tales, the disease is deemed a factitious disorder and is predominant in females (71% of cases) (5). The disorder is relatively rare and incredibly difficult to treat; awareness and early detection are crucial. Those afflicted with Munchausen Syndrome rely on the fact that a doctor will trust the history and symptoms reported, in order to fabricate an intricate web of deception.

Aleem's report goes on to illustrate the difficulties faced by the medical community in properly diagnosing and treating the disorder. Although routine questioning reveals that the patient's mother had suffered from breast cancer and that no near relatives were involved in the medical field, "suspicion is raised regarding a possible factitious nature of her problem because of an inability to explain the cause of her abscesses and the growth of multiple organisms from the lesions" (2). A high level of suspicion is required to detect Munchausen's, and doctors need to be on the lookout for one of these essential features: "pathologic lying (pseudologia fantastica), peregrination, and recurrent, feigned or simulated illness" (2). Supporting features include borderline and/or antisocial personality traits, deprivation in childhood, knowledge of or experience in the medical field, multiple hospitalizations, and multiple scars coupled with an unusual or dramatic presentation (2).

Ironically, those with Munchausen Syndrome really are sick, yet they rarely seek the right kind of medical advice. When confronted, they vehemently deny any claims, and ingenuity is required to catch them. In this patient's case, a psychiatric consultation was conducted (without giving the patient any hints about the suspected factitious disorder), during which she was judged very defensive and conflicted when responding. Soon after, while the patient was out of bed, the nurses found a syringe full of fecal material along with needles – the source of the mysterious swellings and cultures. When the patient returned, she was informed and became very hostile. Finally, against medical advice, she left the hospital and was lost to follow-up (2).

In researching this intriguing disease, I was struck by the realization that Munchausen's highlights many issues of neurobiological importance. It is very much an extension of the mind-body riddle, for within the seemingly physical nature of the victims' symptoms lies a neurological cause. What could any individual possibly gain by harming themselves? Research suggests that women who have led emotionally deprived childhoods, and who may themselves have been physically abused or even victims of Munchausen's, are the most likely to be afflicted. Presenting oneself as a false victim is very much a Munchausen trait. Often suffering from "narcissistic tendencies, low self-esteem, and a fragile ego" (1), they crave the attention and sympathy that a grave illness, or a seriously ill child, immediately elicits. Sufferers also relish the status of power and control that accompanies being the only person who "knows" while an intellectual medical community remains baffled. The real question remains: do they knowingly deceive, or are they themselves deceived?

A related disease, Munchausen Syndrome by Proxy (MSBP), is illuminating because in this case the victim is not the MSBP sufferer. In fact, in this more dangerous variation of the disease, it is usually a very young child who is targeted. Often the MSBP sufferer will assume a caregiver role, working as a nurse, perhaps in a ward for sick children, in a home for the elderly, or with severely handicapped people – "the common thread is a victim who is vulnerable, whose verbal skills or emotional state or mental condition prevents them from explaining what the MSBP person is doing to them and whose hold on life may already be precarious" (1). It has been estimated that one in five cot deaths (SIDS) is really a murder resulting from a mother with MSBP (1). Sufferers become adept at inflicting harm upon others in a manner that leaves little or no forensic evidence. Methods employed include restricting breathing by "placing a hand over the mouth, lying on top of the baby, smothering, placing plastic or cling film over the person's face, withholding food and medicine, over-medicating or medicating when unnecessary, or delaying calling for medical assistance when an emergency arises". Then, "when the victim reacts with a fit, breathing difficulties, collapse, etc the MSBP sufferer can-after ensuring the condition is sufficiently life-threatening-rush to the rescue and later be hailed as a hero for being such a wonderful, kind, caring, compassionate person for having saved this person's life" (1). Sadly, MSBP is rarely suspected because very often the abuser appears to be an ideal caretaker – attentive, knowledgeable about the child's condition, and extremely interested in the medical field.

In closing, the calculating mentality needed to perpetrate a crime on a child in order to elicit sympathy suggests that the perpetrator is "conscious" of their actions. It is clear that premeditation is needed to research medical data and falsify symptoms, all the while outwardly placing oneself in a sorrowful situation. However, many symptoms reveal a psychiatric origin. Those with Munchausen's illustrate how very fine the distinction between pleasure and pain really is. Often they exhibit sadistic/masochistic behaviors, exploiting their victim's pain for their own pleasure. Must the I-function be involved, or is this behavior pathological and uncontrollable? These are questions that the medical and legal communities have yet to grapple with. The debate over deliberate child abuse vs. psychological disorder remains unresolved. Together with the need for early detection and appropriate treatment, these issues remain a priority.

References

1) Bully Online, a detailed report on the two syndromes.

2) Case Report: Munchausen Syndrome, a very comprehensive case report and review of the literature surrounding Munchausen's.

3) The Merck Manual site on psychiatry in medicine.

4) Page Wise, an overview of the syndrome.

5) WebMD, article by Daniel DeNoon entitled "Some Kids Cry Out in the Language of Illness".

6) Village Voice, "Cybersickness", an article on Munchausen and the Internet.

7) Feldman, Marc, MD. "Munchausen by Internet", Southern Medical Journal, Vol. 93, No. 7, July 2000.


Genetic basis for Violence
Name: amar patel
Date: 2004-04-13 02:56:43
Link to this Comment: 9319


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Aside from the strict science and reporting in gene-based cases, one of the major points stressed in all studies of genes for behavior is the minimal effect they have compared to environmental factors. All scientists agree that behavior stems from the nervous system, but the real question has been the degree to which the nurturing environment plays a role in initiating certain behaviors. The easiest comparison is between genes for violence and genes for alcoholism. Even if alcoholism is assumed to be genetic, the genes cannot take effect until someone is exposed to alcohol. In the same way, violence must be initiated by a case of abuse before the cycle can be perpetuated. No matter what chemicals or genes are found to be related to violence, all cases start from the impact of a patient's surroundings.

Although violence was traditionally thought to be in the realm of sociology or psychology, we are now finding increasing evidence of its biological initiation. Many recent studies support the notion of a genetic "deficiency" causing aggressive behavior. These genes code for certain enzymes that are responsible for the metabolism or synthesis of neurotransmitters. This genetic analysis will show that the genes coding for the monoamine oxidase A (MAOA) and tryptophan hydroxylase (TPH) enzymes (catalyzing proteins) have been linked to specific cases of violent behavior.

Each of these enzymes works on neurotransmitters to control activity in the brain. A neurotransmitter is essentially a chemical that carries a signal across a synapse between neurons. The primary neurotransmitters associated with the onset of aggression or violence are serotonin (5-HT), norepinephrine, and dopamine, three of the most common signaling chemicals in the brain. Serotonin helps regulate mood, appetite, sexual activity, homeostasis, and sleep. Norepinephrine is affected by stress and mood, and is also involved in the sympathetic nervous system. (2) Dopamine helps regulate emotion, the "pleasure center" of the brain, and motivation. (3)

In order to better comprehend the function and relation of neurotransmitters, one must understand the way in which neurons communicate across synapses. A nerve cell at rest holds a negative charge relative to its outside environment. Various channels in the membrane allow positive sodium ions to flow into the cell, and each sodium channel opens when the membrane around it has been sufficiently depolarized by its neighbors. This "domino effect" of letting positive ions into the cell creates what is known as an action potential. When the action potential reaches the end of the axon, it triggers an influx of calcium ions and the release of synaptic vesicles, which contain the neurotransmitter chemicals. These vesicles fuse with the axon's cell membrane and release their neurotransmitters into the synaptic cleft, where the neurotransmitters bind to receptors on the dendrites of the next cell.
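The threshold-and-release mechanism described above can be sketched in a few lines of code. This is only an illustrative toy model (a so-called leaky integrate-and-fire neuron) with arbitrary, made-up units; it is not drawn from any of the studies cited here:

```python
# Toy sketch (not from the cited studies): a leaky integrate-and-fire
# neuron. Incoming positive current depolarizes the cell; once the
# membrane potential crosses a threshold, an "action potential" fires
# (where neurotransmitter release would occur) and the cell resets.
# All numbers are arbitrary illustrative units.

def simulate(inputs, threshold=1.0, leak=0.9, v_rest=0.0):
    """Return the time steps at which the model neuron 'fires'."""
    v = v_rest
    spikes = []
    for t, current in enumerate(inputs):
        v = leak * v + current      # decay toward rest, add input
        if v >= threshold:          # depolarized past threshold:
            spikes.append(t)        # fire an action potential
            v = v_rest              # then reset toward rest
    return spikes

print(simulate([0.05] * 20))        # weak input: never fires
print(simulate([0.5] * 20))         # strong input: repeated firing
```

Weak, steady input leaks away before the threshold is reached, while stronger input accumulates faster than it decays, giving the all-or-nothing firing pattern described above.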

The MAOA enzyme operates on the molecules left over in the synapse. Monoamine oxidase A metabolizes the neurotransmitters serotonin, norepinephrine, and dopamine: any leftover neurotransmitters are broken down by MAOA, which limits their continued activity. (4) Since this enzyme is translated from a gene located on the X chromosome, of which women have two copies and men only one, men have a greater probability of suffering a deficiency of the enzyme.
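The X-linked reasoning can be made concrete with some simple arithmetic. The allele frequency below is invented purely for illustration; the studies cited do not report one:

```python
# Hypothetical illustration (the 5% allele frequency is invented, not
# taken from the cited studies). For a recessive deficiency carried on
# the X chromosome, a man has one X, so one deficient copy is enough;
# a woman needs a deficient copy on both of her X chromosomes.

def x_linked_deficiency_risk(q):
    """Return (male_risk, female_risk) for a recessive X-linked
    allele carried at frequency q."""
    return q, q * q

male_risk, female_risk = x_linked_deficiency_risk(0.05)
print(male_risk)        # 5% of men would be affected
print(female_risk)      # roughly 0.25% of women would be affected
```

With these made-up numbers, men would be affected about twenty times as often as women, which is why X-linked deficiencies show up disproportionately in males.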

Another interesting aspect of the study conducted on MAOA is that the link between violence and mutations eliminating the MAOA gene proved inconclusive across an entire population. (4) The reason these results are not conclusive for the population as a whole relates to the nature-versus-nurture battle: on the whole, the majority of the population has not experienced abusive situations. After narrowing the search criteria, the researchers did eventually find links between the MAOA enzyme and aggression. Such results further the notion that genetic predispositions are not expressed without a behavioral initiator.

The most cohesive link was found between MAOA enzyme activity and adolescent conduct disorder in 'maltreated' males. (4) The conclusions drawn from these studies show that although there are instances of the MAOA enzyme being completely absent, these cases are rare. There is, however, a large portion of the population with low MAOA enzyme activity. (4) In such individuals, neurotransmitters released in response to fear and similar triggers linger in the synaptic cleft and promote more aggressive behavior. In previously abused children, this activity bolsters violent behavior by disrupting normal serotonin activity. (4)

The other enzyme that has been equally promoted as a cause of violence is TPH, which limits the rate of synthesis of the neurotransmitter serotonin. (5) Because TPH is the only catalyst in the reaction producing serotonin, it can limit serotonin production. (1) Many studies have shown altered serotonergic activity in males with suicidal tendencies and aggression issues. (6) Any deficiency in the amount of TPH produced creates a dearth of serotonin in the areas of the brain that use it to inhibit impulsive behavior. Many published experiments suggest that in order to better understand the prevalence of cases of TPH deficiency, one must look at the genetic basis of the enzyme's production.

One widely studied variant of the TPH gene is the A218C allele. One study of TPH showed that people with a single-nucleotide substitution in the TPH gene, the A779C variant, had more issues with aggression. (5) The presence of A779C leads to a deficiency in the amount of TPH present in the brain, (1) which in turn causes lower-than-normal serotonin production. The low serotonin level leads to difficulties in inhibiting impulsive behaviors.

As with the MAOA enzyme, a lack of the TPH enzyme is not found in the majority of the population. When examining the various scientific studies, one cannot help but conclude that genetics is not the sole factor in violent behavior. The scarcity of cases of violent behavior linked to deficient enzymes shows that not all violent behavior can be accounted for through genetics. This is not to say that there is no genetic basis for behavior, but one can safely maintain that outside influence plays the larger role in a person's behavior.

References

1) Dysfunction in the Neural Circuitry of Emotion Regulation: A Possible Prelude to Violence

2) Definition of norepinephrine

3) Definition of dopamine

4) Role of Genotype in the Cycle of Violence in Maltreated Children. Science Magazine, August 2, 2002, Vol. 297.

5) TPH synthesis

6) Biology of Violence presentation

7) Serotonin description


Shifting Realities through Vipassana Meditation
Name: Hannah Mes
Date: 2004-04-13 05:06:49
Link to this Comment: 9321


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Shifting Realities through Vipassana Meditation

It has been suggested in class that a disconnect of information exists between the "I-function", an individual's experience of being, and the "unconscious", discussed as the behaviors controlled by the central nervous system of which one is unaware. I found this concept intriguing, as my own experiences with Vipassana, a Buddhist meditation technique, allowed me to make the gap between my conscious and unconscious less sharp. This is an attempt at comparing and contrasting the relationship between these two realms, drawing upon both my own experiences and those of others who have already explored this field, such as Pilou Thirakoul and Dr. James Austin.

My experience with Vipassana meditation began with a 10-day course that required several serious commitments from all potential students. All meditators accept five precepts for the duration of the course: abstention from killing, stealing, lying, sexual misconduct, and the use of intoxicants. In order to maintain an environment conducive to intense meditation, all students take a vow of "Noble Silence", abstaining from any type of verbal or gestural communication. All precepts are taken in an effort to preserve a sense of shila, or morality, which creates a state of mental purity that aids meditation.

The course began with a 3-day instruction in anapana, which focuses the mind on becoming increasingly aware of the flow of natural respiration. As one observes the subtleties of unadulterated respiration, the mind becomes calmer and sharper, ready to enter the field of panna, or wisdom. Vipassana, literally translated as "seeing clearly", allows one to change the habit patterns of the mind at the deepest level. (1) Through a process of self-observation and sustained equanimity, an individual is able to change the general flow of sensory information: inputs that were previously only recognized by the unconscious can now be processed through the I-function. I realized that consciousness is a subjective state that exists at different levels of awareness for every individual. There are sensations constantly arising and passing away that are too subtle for our I-function to perceive without intentional observation. This only becomes clear when one observes the chaotic habit patterns within one's own mind.

Vipassana teaches that there is a strong connection between mind and body, and that by focusing on bodily sensations one can understand the concept of constant change, or anicca, at an experiential level. One observes a variety of sensations on the body while remaining equanimous and detached, observing each sensation without any feeling of craving or aversion. As I maintained the balance of my mind, my old habit patterns of "blind" reaction grew weaker and weaker. I realized that my concepts of pain and pleasure were states that, with practice, I could observe as an outsider. My self-awareness had reached new heights, and I felt a deep connection to the ways in which my body responded to sensory inputs.

A similar thought pattern in regard to the principles of "mindfulness" is reflected in Pilou Thirakoul's essay "Buddhist Meditation and Personal Construct Theory". (2) On my last retreat I delved deeper into the practice of Vipassana. I no longer felt the need to change my posture during meditation periods. Sensations existed everywhere in a constant state of flux and flow. Although I could identify sensations as uncomfortable or pleasant, my mind focused less on a physical reaction. The neurologist and Zen meditator James Austin describes a similar experience of detachment from sensation during a meditation session. He states, "Awareness was steering itself toward a vague layer beyond thought. Here, pain alone could be turned off, pain in and of itself." (4) He concludes that there exist both opioid and non-opioid mechanisms for changing the way one interprets pain.

I began to observe my body as an objective outsider, examining each individual part of my body. Eventually I experienced a complete dissolution of mind and matter with the experiential realization that all my sensations were just an amalgamation of impermanent vibrations. As Thirakoul explains, "Indeed, an understanding of identity as essentially a flow of psychic processes avoids any notion of a discrete, absolute, metaphysical self. This Buddhist doctrine of the non-existence of the self, or annata, is important to understand; for the self, or rather the illusion of self, is the primary factor which keeps individuals in the cycle of suffering." (2) Thirakoul touches upon the concepts of self-dissolution at a physical level. At this stage, information enters the central nervous system and the conscious mind simultaneously, resulting in a deepened awareness that is partnered with a mental equanimity.

Although the perspectives of other meditators such as Austin and Thirakoul prove helpful in drawing parallels between my own experiences and those of others, the specific mechanisms for achieving increased awareness or a higher level of consciousness still remain unclear. Our neurobiology and behavior class has attempted to explain the connection (and at times disconnection) between our mind and body, our consciousness and unconsciousness, our I-function and our central nervous system. My own perspective on meditation and understanding of my own consciousness have shifted as a result of our class discussions. At an experiential level I felt this shift from a pervasive "unconsciousness" to an awareness that is generalized to many aspects of my life. This occurred without any understanding of where information was being processed or through what specific methods new information became available to me.

In future discussions of consciousness and the I-function, I would encourage an even more detailed description of how this information is transformed at a chemical level within the brain. When information passes from the unconscious to the I-function, what changes can we observe at a gross, physical level and at a more subtle, chemical level? These are questions that would allow for a more dynamic discussion of this exciting topic.

References


1. http://www.spiritual-learning.com/meditate-mind.html

2. http://serendipstudio.org/bb/Pilou.html

3. http://serendipstudio.org/sci_cult/bridges/matspirit.html

4. Austin, James. Zen and the Brain. Cambridge, MA: MIT Press, 1998.


Parasomnias & the I-function
Name: Jennifer
Date: 2004-04-13 09:23:21
Link to this Comment: 9328


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Dreaming permits each and every one of us to be quietly and safely insane every night of our lives."
William Dement, MD

For centuries people have been fascinated with altered states of consciousness. Through sleep, illness, or chemicals, people are awed by the actions of a person who appears to be not himself. Most sleep occurs separate from the waking world, safely and quietly in one's bed, with the I-function appearing to be turned off. But some sleepers are able to perform complex activities while in the non-rapid eye movement (NREM) stages of sleep and have no recollection of this activity the following day. It appears that the body moves without the I-function, and the I-function therefore has no recall of the previous night's events. (1) This phenomenon is called somnambulism or, more commonly, sleepwalking. Another parasomnia, one that occurs less frequently, is REM behavior disorder (RBD). During the REM sleep cycle, a person has vivid, lifelike dreams. In people with RBD, the body is not paralyzed during the dream, and they act out their dreams, which are often violent. Unlike true sleepwalkers, those with RBD will remember their dreams clearly the next day, and think they were doing some logical task in their dream while they were actually doing something quite different. For example, one man ran head-on into his dresser while dreaming he was tackling an opponent in a football game. (2)

To understand the anomalies in sleep among those who sleepwalk or have REM behavior disorder, it is useful to examine the five stages of sleep. The first four stages constitute non-REM sleep, which is markedly separate, in terms of the level of consciousness, from the fifth stage, REM sleep. During the first stage of sleep, brain scans have shown rapid, small brain waves, and people have reported fragmented visual images often mixed with visual and auditory input from their surroundings. Between the first and second stages of sleep, people may experience hypnic myoclonia, the rapid contraction of muscles often preceded by the feeling of falling. Sleep paralysis also occurs in the early stages of sleep: (3) the I-function appears to wake up, but the body has not released the chemicals to counteract the paralysis that is normal while asleep. In stage two, eye movement stops and brain waves become sporadic. Stages three and four are considered deep sleep, and people are often hard to wake while in these stages. During stage three, longer delta waves begin to predominate over the shorter, sporadic waves. Stage four is characterized by the presence of only delta waves and no eye movement. Somnambulism and night terrors occur during the third and fourth stages of sleep, and a person in these stages will have no memory of the events during this time period. About 75% of the night is spent in NREM sleep. The brain wave patterns display such a dramatic difference between REM and NREM sleep that the two are thought to be entirely different levels of consciousness, as different from each other as they are from the fully awake conscious state. (4)

In NREM sleep, the I-function appears to be turned off. People have no memories or explanations for events that occur during these sleep stages. The night terrors and somnambulism that occur during the NREM stages are not recalled by the patient. The only way to know that these events are occurring is through the observations of family members or injuries incurred while sleepwalking. (5)

During REM sleep, the I-function seems to be at a different level of consciousness, but not entirely absent. While the person in the REM stage of sleep is normally paralyzed and appears to be lying silently, their mind is quite active. This is evident from brain scans, and also from the patients' subjective experiences. It appears to be an alternate world for the I-function; a world that is not affected by external stimuli. Yet the I-function is alert, as evidenced by a person's recollection of "events" that seem to be occurring to them in their dreams. It's this sense of consciousness that allows a person in the REM stage of sleep to make more concerted movements. Frequently, the activity of those who experience RBD is much more violent and directed than the behavior of those with somnambulism.

During REM sleep, which is thought to be the most restorative stage of sleep, there are several key physiological changes. (6) The eyes move rapidly, the heart rate, breathing rate, and blood pressure become elevated, and breathing becomes shallow. The body is also unable to adequately regulate temperature while in the REM stage of sleep. REM sleep allows the I-function to temporarily exist in a world without corollary discharge, and in most individuals, without motor pattern generation. It is not well understood why the body seems to let its homeostatic settings shift during REM sleep, nor is it clear why this change appears to be restorative.

When people acquire a sleep debt, they do not cycle through the five stages of sleep normally. In severe sleep debt, they will advance directly to REM sleep from being fully awake. This causes several problems. The sleep stages allow a person to transition from awake and conscious to dreaming; without the gradual change, a person may experience dreams that appear as hallucinatory images while still partially awake. Or the person may not be fully paralyzed before entering REM sleep, which can result in REM behavior disorder. (4)

The sleep disorders mentioned have been extremely useful in understanding the workings of the brain during sleep. By noting the difference between true sleepwalking and REM behavioral disorder, it can be inferred that a person is aware of his brain activity during REM sleep, but not during NREM sleep.

The psychological explanations for sleepwalking and RBD vary. RBD patients almost universally have mild-mannered, amiable personalities during their waking hours. These patients report vivid, violent dreams of being chased or attacked, and often injure themselves or their bed partners while acting out such dreams. Previously, psychologists and physicians had suggested that repressed anger caused these nighttime outbursts, but as more has been discovered about the neurochemistry, this idea has faded. Patients who exhibit classic somnambulism frequently lead stressful lives, and depression and anxiety both disrupt a person's natural sleep cycle. Stress management, cognitive behavioral therapy, and other psychotherapeutic treatments for these underlying disorders have proven moderately effective in eliminating somnambulism. (6)

New research has identified a gene that may be partially responsible for somnambulism. Some neurochemicals, such as dopamine and acetylcholine, are present in lower amounts in individuals exhibiting ambulatory parasomnias, but not enough data exist to show a causal relationship. (7) Researchers have postulated that those who are ambulatory during REM or NREM sleep lack a certain neurochemical necessary to induce paralysis during sleep. This chemical imbalance has not been pinpointed, and it seems unlikely that there is one direct cause of sleepwalking.

Much of the literature attempts to make a distinction between sleep disorders caused by problems of the brain and behavioral problems. This distinction does not seem helpful in understanding the nature of disease, as the separation of brain and behavior is really only indicative of our current perception and knowledge of the human nervous system. What is classified today as a biological disorder is classified as such because we can demonstrate clear biological causes for it. Unless the brain can be fully understood, the distinctions made to organize it will remain biased toward our perception and current knowledge. The biologically based problems seem to be more socially acceptable. For example, parents are told not to worry about their sleepwalking children, as this is a normal biological process. (8) However, only 15% of children exhibit any sleepwalking, so it is not by simple majority that the behavior is perceived as normal; rather, a biologically based reason seems to justify the sleepwalking. The 6% of adults who sleepwalk are often advised to seek professional help. Adults with RBD or sleepwalking have benefited from cognitive behavioral therapy and the use of medication. (3) Clearly, what we think of as brain and behavior are not separate, but intertwined in a complex relationship. Both the neurochemical approach and the behavioral approach result in a change in behavior. Administering small doses of tranquilizers, such as clonazepam, frequently relieves all RBD symptoms. (9) Learning stress management techniques and other psychodynamic therapies also affect the frequency and severity of the parasomnias. (4) Studies demonstrating the effects of psychological approaches to parasomnias on the neurochemistry could help explain the relationship between brain and behavior in this case.

Sleepwalking, though fascinating, is a benign problem in most of the 15% of children it affects. RBD is more serious, though less common, because of the violent outbursts often seen in these patients. Treatment for RBD using medication has been effective, which has given patients hope and led more patients to seek treatment. It is also worth noting that many patients with RBD will later develop Parkinson's disease; although this relationship is not well understood, it is being studied in depth. (9)

Research considering the chemical changes during puberty in adolescents who stop sleepwalking might help explain the chemical differences responsible for creating ambulation during sleep. Brain activity during the REM sleep of those with RBD could also be analyzed, comparing data from nights with ambulation against nights without, to observe the differences between still nights and active ones.

The sleeping and waking mind continue to raise interesting questions about our perceptions of life, reality, and free will. The law has wavered on the consideration of the free will of a sleeping person, sometimes acquitting those who commit crimes while asleep. (10) Science has yet to identify the point in the brain, if such a place exists, where what we think of as free will, or the I-function, is physically housed, but the sleep disorders have demonstrated that consciousness is more variable than it was once believed to be. A vast continuum exists, encompassing the fully awake brain, the deeply sleeping and apparently unaware brain, and many unknown levels in between.

References

1) Sleepwalking Disorder Article

2) Sleep Disorders May be Linked to Faulty Brain Chemistry

3) Sleep Paralysis and Associated Hypnopompic Experiences Article

4) Yahoo Stress Health Center

5) Parasomnias: Sleepwalking, Night Terrors, and Sleep Related Eating Article

6) REM Behavioral Disorder Website

7) ABC Science Website, Article on the genetics of sleepwalking.

8) A to Z Answers for Parents, an article on sleepwalking in children.

9) New York Times Article, an informative website with a reprinted New York Times article on RBD.

10) Sleepwalking- Insanity or Automatism , an interesting compilation of legal cases involving sleepwalking and RBD.


What is the Function of Dreaming?
Name: Ghazal Zek
Date: 2004-04-13 09:52:34
Link to this Comment: 9329


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Plutarch, a Greek biographer and author (circa 46-125 AD) (1), is credited with having said, "all men whilst they are awake are in one common world: but each of them, when he is asleep, is in a world of his own." (2) Plutarch is essentially speaking of the phenomenon of dreaming. The idea that the mind creates its own world while asleep is quite thought-provoking. What is it about sleep that takes us to another world? Where does this other world come from? What purpose, if any, does dreaming serve? One school of thought suggests that dreaming is a product of random electrical activity that the cortex tries to interpret (3) and that really serves no purpose, (4) while another insists that dreaming serves a purpose shaped by evolution. (4) Which story is right? Or rather, which is less wrong?

It will first prove helpful to understand the process of sleep. Sleep is a dynamic activity controlled by neurotransmitters acting on different neurons in the brain. We sleep in cycles of five stages: 1, 2, 3, 4, and rapid eye movement (REM). Light sleep occurs during stage 1, when a person can easily drift in and out of sleep. People waking up from stage 1 sleep often experience flashbacks of fragmented images and/or sudden muscle contractions called "hypnic myoclonia", which are usually preceded by the sensation of starting to fall. In stage 2 sleep, brain waves slow down and eye movement stops. Stages 3 and 4 are collectively called "deep sleep", as it is usually very difficult to wake someone in either stage. During stage 3, delta waves (very slow brain waves) appear, interspersed with smaller, faster waves, which disappear altogether during stage 4. During the REM stage, we experience shallow, irregular, and more rapid breathing, our eyes move rapidly in various directions, our limb muscles become temporarily paralyzed, our heart rate and blood pressure increase, and males develop penile erections. When someone wakes up during the REM stage, they often describe outlandish, unfounded tales: those which we call dreams. (5)

REM sleep begins with signals being sent from the pons to the thalamus, which then relays them to the cerebral cortex. The cerebral cortex is the part of the brain used for learning, thinking, and organizing information, an important point for what follows. Infants tend to spend much more time in the REM stage than adults, possibly for this very reason: the REM stage stimulates the brain regions used in learning. (5)

Many scientists believe that the random electrical activity is just that: random. They then assert that the cortex creates stories in order to make sense of the signals being generated. (6) In late 2000, Antti Revonsuo published a paper in Behavioral and Brain Sciences asserting that the content of our dreams is not as disorganized as the aforementioned theory claims and that there is an evolutionary explanation for dream content. In essence, Revonsuo is suggesting that dreaming was selected for during our evolution (7), but why would this happen? Noting that waking experiences have a consistent and profound effect on dream content, Revonsuo hypothesizes that dreaming has a biological function: to simulate threatening events and rehearse the perception and avoidance of threats. Revonsuo argues that the ancestral human lifespan was short and full of threatening situations; therefore, any mechanism that would simulate these situations and play them over and over in different combinations would be advantageous for improving threat-avoidance skills. Finally, Revonsuo asserts that this ancestral mechanism has left traces in the dream content of the present human population.

Since one cannot be certain of the validity of a hypothesis, it will prove helpful to discern which hypothesis seems "less wrong." Revonsuo's idea about the original purpose of dreams simply provides us with a more complete look at the story behind dreaming. That is to say, it is by no means a complete idea on its own. While it is interesting to think that some of the content of our dreams may have had an evolutionary function, it should be noted that dreams are not predictable. (8) Each person experiences life differently, and through dreaming, can create experiences that will be unique to them, therefore entering a "world of his own" as Plutarch suggested.

As modern-day humans, we are not faced with the same limitations as our ancestors; our survival and chances of reproduction have little to do with our threat-avoidance capabilities. So, if we assume that dreams initially served an evolutionary function, what function, if any, does dreaming serve in humans presently? On the one hand, we could revert to the original theory, with a twist, and suggest that dreaming serves no real function at present. For example, people who have suffered through traumatic ordeals often complain of nightmares; dreamless nights would in fact be helpful in these situations, as far as mental health is concerned. So while dreams are sometimes a welcome escape from reality, at other times reality is a welcome escape from our dreams. On the other hand, dreams perhaps serve a more fundamental purpose nowadays: in recalling our dreams, we are able to learn about ourselves using a broader spectrum of information. Above all, it is important to keep in mind that we are all different. We therefore experience the world differently, react differently, and dream differently.


References

1)E-classics.com background on Plutarch

2)A website containing famous quotes about sleep

3)HowStuffWorks.com : Sleep, A simple explanation of the process of sleep.

4) The reinterpretation of dreams: An evolutionary hypothesis of the function of dreaming., Abstract from Behavioral and Brain Sciences, Dec 2000 v23 i6 p877.

5)Brain Basics: Understanding Sleep, A detailed explanation of sleep and dreaming from the National Institute of Neurological Disorders and Stroke.

6)HowStuffWorks.com: Dreams, A simple explanation of the process of dreaming.

7)Dreaming and Consciousness: Testing the Threat Simulation Theory of the Function of Dreaming, More on the Evolutionary basis of dreaming from Revonsuo, et al. PSYCHE, 6(8), October 2000.

8)From Genomes to Dreams, an essay by Paul Grobstein, Winter 1991, from the Serendip website of Bryn Mawr College.


Not Just the Baby Blues: The Tragedy of Andrea Yat
Name: Elissa Set
Date: 2004-04-13 11:28:39
Link to this Comment: 9331


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Many of us envision motherhood as a joyous time in a woman's life. Holding one's own newborn baby must bring great happiness to a mother. However, what happens when those feelings subside, and those feelings of happiness are replaced with anger, hate, guilt, and loneliness? On June 20, 2001, Andrea Yates let those feelings overcome her and killed all five of her children by drowning them in the bathtub (8). Yet, as disturbing and shocking as the event was, what surprised many people is that there were many mothers who understood or sympathized with Yates.

"As I was changing my son on his changing table, an intrusive thought started running through my head, 'What if I push him off the table?'" (3)

"I would look at the baby and just say, oh, how vulnerable it is. I could put a pillow of the top of it. Its neck was so tiny, it could break so easily." (3)

Up to eighty percent of women suffer from the baby blues after they have children (6). Ten percent suffer from postpartum depression (6), and about one in 500 have the most serious condition, postpartum psychosis (6). Andrea Yates suffered from postpartum psychosis, and it led her to kill all of her children. Her illness began after she had her fourth child and tried to commit suicide. After her fifth child, she tried to commit suicide again and was hospitalized twice (9). However, both times she was released from the hospital while she was still ill. Finally, the postpartum psychosis overtook Yates when she drowned all of her children. While Yates was afterward able to recognize that killing her children was a horrendous thing to do, at the time she was not in a stable state of mind. This event exemplifies how serious postpartum psychosis is. Though it happens rarely, the baby blues can escalate into postpartum depression, which can in turn become psychosis if left untreated. Psychosis is not a mental illness that can be cured with a few visits to a therapist or a prescription for an antidepressant. More research must be conducted in order to understand the nature of the disease, and how to help the women and families who suffer from it.

The least detrimental of the three illnesses is postpartum blues, also known as the baby blues. The baby blues usually occur in the first few weeks after childbirth and can include mood swings between happiness and sadness. New mothers can feel irritable, stressed, and lonely. These feelings may last only a few hours or persist for multiple weeks (6). It has been shown in many cases that women can overcome the baby blues without receiving professional counseling or medication (5).

Postpartum depression is more serious than the baby blues. The feelings of sadness, anxiety, irritability, and stress are also present, yet far more acute than in the baby blues (5). The woman's ability to function every day is affected, and she may neglect the care of the baby (5). Other symptoms include fatigue, exhaustion, confusion, and changes in appetite (3).

The gravest case of postpartum illness is postpartum psychosis. Though extremely rare, it is the most dangerous, and requires medical attention for recovery (5). In addition to the symptoms of postpartum depression, postpartum psychosis also includes visual and auditory hallucinations (5). Frequent thoughts of hurting the baby may enter the mother's mind, and she may actually carry out those thoughts (3).

The exact cause of depression is still not known, because it may vary with each individual. The term "depression" can describe a variety of moods, from mild feelings of sadness to deep, severe melancholia (4). There are theories pointing to biological, genetic, and environmental factors. The biological factors involve hormones such as cortisol, which controls the body's response to stress, anger, and fear. When people are depressed, cortisol peaks in the morning and does not decrease later in the day, as it does in people who are not depressed (1).

A possible neurobiological factor is that there may be an imbalance of neurotransmitters in the brain (1). Neurotransmitters are chemicals that help the brain cells communicate with each other. Two neurotransmitters linked to depression are serotonin and norepinephrine. When there are deficiencies in neurotransmitters, impulses sent between nerves are decreased (4). Deficiencies in those neurotransmitters cause changes in sleep habits, increase irritability and anxiety, and may make individuals feel sadder and fatigued (1).

Postpartum depression may also have causes beyond those of regular depression. When a woman is pregnant, her hormone levels change dramatically. Estrogen and progesterone increase during the pregnancy, and after childbirth their levels decrease rapidly back to pre-pregnancy levels (5). These fluctuations are similar to those a woman experiences before menstruation, when she can be more irritable and depressed. With postpartum depression, the levels of estrogen and progesterone may not decrease at a normal rate, causing an imbalance in the system. This may lead to symptoms of the various forms of postpartum illness.

While forms of postpartum depression were recognized in Yates, she never completed any treatment for her depression or psychosis due to insurance limitations (3), and her husband and her doctor did not recognize the seriousness of the situation. Her husband, Russell, reportedly said to a friend, "I'm not going to coddle her, I'm not going to hold her hand. She needs to be strong, she needs to help herself." (2) However, when depression is as deep as Andrea Yates' psychosis, the ability to help oneself is drastically diminished. Evidence that Andrea was suffering from postpartum psychosis is that she heard voices in her head telling her to hurt other people, including the children (3). Even so, Russell did not see Andrea as a threat to their children, despite two suicide attempts, including one after the birth of her fifth child (9). Alarmingly, neither did Andrea's doctor, who, two days before she killed the children, did not believe that Andrea needed to be hospitalized (3).

Unfortunately, it has taken the deaths of all of one family's children to shed light on the gravity of postpartum depression. Postpartum illnesses can affect any mother, whether she has had one baby or four, and they can recur, as seen in the case of Andrea Yates. Since neither her husband nor her physician was able to recognize that Andrea was suffering from a serious illness, more research must be done in order to understand the disease, and how to recognize it. These events, though rare, can be prevented. Psychosis is not something that people can simply snap out of; it must be treated with great care, as it is a disease with obviously severe consequences. Although the jury in Andrea Yates' trial did not believe that she was insane at the time of the killings, it is clear that she had suffered from postpartum depression and psychosis. Her illnesses do not excuse the atrocities she committed, but learning more about these illnesses will help people understand why she did it, and how to prevent other situations like this.

References

1) Causes of Depression

2) "I Could Just Kick Him"

3) More Than the Baby Blues

4) The Neurobiology of Depression

5) The Postpartum Depression

6) The Postpartum Depression

7) Postpartum psychosis: a difficult defense

8) Postpartum Psychosis to blame for murdered Houston Children?

9) Russell Yates describes wife as a victim


Music, Emotion and the Brain
Name: Geetanjali
Date: 2004-04-13 12:18:05
Link to this Comment: 9335


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

There is a beautiful passage in a book called "Home of the Gentry", by Ivan Turgenev, where the protagonist of the novel listens to a piece of music being played on the piano that touches him to the very depths of his soul. I will quote part of this passage, since it describes very eloquently the almost mystical power that music wields over the human mind, a power which I find fascinating.

"The sweet, passionate melody captivated his heart from the first note; it was full of radiance, full of the tender throbbing of inspiration and happiness and beauty, continually growing and melting away; it rumoured of everything on earth that is dear and secret and sacred to mankind; it breathed of immortal sadness and it departed from the earth to die in the heavens." (10)

The tremendous ability that music has to affect and manipulate emotions and the brain is undeniable, and yet largely inexplicable. Very little serious research had gone into the mechanism behind music's ability to physically influence the brain until relatively recently, and even now very little is known about the neurological effects of music. The fields of music and biology are generally seen as mutually exclusive, and to find a Neurobiologist also proficient in music is not very common. However, some do exist, and partly as a result of their research some questions about the biology of music have been answered. I will attempt to summarize some of the research that has been done on music and the brain in recent years. I will focus in particular on music's ability to produce emotional responses in the brain.

One great problem that arises in trying to study music's emotional power is that the emotional content of music is very subjective. A piece of music may be undeniably emotionally powerful, and at the same time be experienced in very different ways by each person who hears it. The emotion created by a piece of music may be affected by memories associated with the piece, by the environment it is being played in, by the mood and personality of the person listening, by the culture they were brought up in: by any number of factors both impossible to control and impossible to quantify. Under such circumstances, it is extremely difficult to deduce what intrinsic quality of the music, if any, created a specific emotional response in the listener. Even when such seemingly intrinsic qualities are found, they often turn out to be at least partially culturally dependent.

Several characteristics have been suggested that might influence the emotion of music. For example, according to one study (11)(12), major keys and rapid tempos cause happiness, whereas minor keys and slow tempos cause sadness, and rapid tempos together with dissonance cause fear. There is also a theory that dissonance sounds unpleasant to listeners across all cultures. Dissonance is to a certain degree culture-dependent, but also appears to be partly intrinsic to the music. Studies have shown that infants as young as 4 months old show negative reactions to dissonance. (3)(6)(9)

It is possible to both see and measure the emotional responses created by music in the brain by using imagery techniques such as PET scans. However, as these emotional responses would generally be caused by factors out of the experimenter's control, the data collected would be very difficult to interpret.

A recent experiment dealt with this problem by attempting to minimize subjectivity, by measuring responses to dissonance. (1) Dissonance can consistently create feelings of unpleasantness in a subject, even if the subject has never heard the music before. Music of varying dissonance was played for the subjects, while their cerebral blood flow was measured. Increased blood flow in a specific area of the brain corresponded with increased activity. It was found that the varying degrees of dissonance caused increased activity in the paralimbic regions of the brain, which are associated with emotional processes.

Another recent experiment measured the activity in the brain while subjects were played previously-chosen musical pieces which created feelings of intense pleasure for them. (2) The musical pieces had an intrinsic emotional value for the subjects, and no memories or other associations attached to them. Activity was seen in the reward/motivation, emotion, and arousal areas of the brain. This result was interesting partly because these areas are associated with the pleasure induced by food, sex, and drugs of abuse, which would imply a connection between such pleasure and the pleasure induced by music.

Experiments such as these are not able to answer such questions as how or why the emotional responses were created in the first place. However, their results can still be informative. These two experiments both show that music has the power to produce significant emotional responses, and they localize and quantify these responses within the brain.

Another quantifiable aspect of the emotional response to music is its effect on hormone levels in the body. (5)(7) There is evidence that music can lower levels of cortisol in the body (associated with arousal and stress) and raise levels of melatonin (which can induce sleep). (5) This is outwardly visible in music's ability to relax, to calm, and to give peace. Music is often played in the background in hospitals to relax patients, or in mental hospitals to calm potentially belligerent patients. It can also cause the release of endorphins, (7) and can therefore help relieve pain.

Love for and appreciation of music is a universal feature of human culture. It has been theorized that music even predates language.(8) There is no question that music has grown to be an important part of human life, but we can only guess why. It has been theorized that music is important evolutionarily, (8) but all such theories are at this point conjecture. No concrete evidence has been found that music is evolutionarily beneficial. There are many questions one could ask about the powerful link between music and the brain, but very few answers exist. How does music succeed in prompting emotions within us? And why are these emotions often so powerful? The simple answer is that no one knows. We are able to quantify the emotional responses caused by music, but we cannot explain them.


References

1) Blood, A.J., Zatorre, R.J., Bermudez, P., and Evans, A.C. (1999) "Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions." Nature Neuroscience, 2, 382-387.

2) Blood, A.J. & Zatorre, R.J. (2001) "Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion."Proceedings of the National Academy of Sciences, 98, 11818-11823

3) Harvard Gazette Archives. Cromie, William J. (2001) "Music on the brain: Researchers explore biology of music."

4) Harvard Gazette Archives. Cromie, William J. (1997) "How Your Brain Listens to Music."

5) Musica Humana. Heslet, Prof. Dr. Lars. "Our Musical Brain"

6) transcript of episode of Closer to Truth. "What Makes Music So Significant?" Interview with Jeanne Bamberger, Robert Freeman, and Mark Tramo, conducted by Robert Kuhn.

7) Time Reports. Lemonick, Michael. (2000) "Music on the Brain: Biologists and psychologists join forces to investigate how and why humans appreciate music."

8) Levitin, Daniel J. "In Search of the Musical Mind", (2000) Cerebrum, Vol 2, No 4

9) Tramo, Mark Jude. "Biology and music: Enhanced: Music of the Hemispheres." (2001) Science, Vol 291, Issue 5501, 54-56

10) Turgenev, Ivan. Home of the Gentry.

11) "The Biology of Music.", (2000) The Economist

12) "Exploring the Musical Brain", (2001) Scientific American

13) The Power of Music


Cocaine Addiction
Name: Shirley Ra
Date: 2004-04-13 12:54:53
Link to this Comment: 9337


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Addiction to cocaine is among the most severe health problems facing the United States. For example, in 1997 there were approximately 1.5 million Americans twelve years and older who were chronic cocaine users (1). The question presented, then, is this: given how damaging cocaine is to a user's health and to society as a whole, why do people addicted to cocaine have great difficulty quitting? One possible answer to this question is biological; namely, that cocaine alters the normal state of the brain, making it difficult to quit. Two properties that make cocaine one of the most addictive popularly used drugs are that it is reinforcing when administered acutely and that it produces obsessive use when administered chronically (2). However, arguments can be made that cocaine addicts' perpetual abuse of the drug is, at least in part, a result of social factors. In other words, it is not only cocaine's biological effects on the brain that make it difficult for addicts to give up the drug. If it were better understood and more widely accepted that perpetual cocaine abuse results from both biological and social factors, we would be better able to help cocaine addicts quit using the drug.

In order to combat cocaine addiction we must first understand what addiction is. The World Health Organization defines addiction as a "behavioral pattern of compulsive drug use characterized by overwhelming involvement with the use of the drug, the securing of its supply, and a high tendency to relapse after withdrawal" (3). There is an established sequence of events that defines addiction. First, there are the euphoric effects that the drug of abuse produces. Second, tolerance develops, meaning that the addict needs more and more of the drug to produce an effect. Finally, there is physical dependence, in which addicts feel they need the drug to survive; they are addicted. Under this definition a person can theoretically be addicted to almost any substance, for example chocolate. However, while it may be difficult for a person to refrain from eating chocolate, all would concede that it is much more difficult to quit cocaine once addicted. The question is why this is the case.

Like chocolate, cocaine is associated with the nucleus accumbens. The nucleus accumbens is known as the brain's pleasure center, since several studies demonstrate that pleasurable stimuli, such as sex, food, and other drugs of abuse, cause an increase in activity in this area of the brain (4). The mesoaccumbens dopamine (DA) pathway, which extends from the ventral tegmental area (VTA) of the midbrain to the nucleus accumbens (NAc), has been linked to the reinforcing effects of cocaine. This was found through intracranial self-stimulation, a procedure in which electrodes are implanted into different regions of an animal's brain, which demonstrated that reinforcement of the behavior increased when dopamine was involved (2). In essence, this shows that pleasurable events, such as sex, chocolate consumption, and cocaine use, are accompanied by a large increase in the amount of dopamine released in the nucleus accumbens.

Given the similarities in which pathways are activated, why is cocaine more difficult to quit than the other aforementioned pleasurable events? A person addicted to chocolate and a person addicted to cocaine will both have excess dopamine released in the nucleus accumbens, but each will react to that biological circumstance differently. The initial effect of both the chocolate and the cocaine will be euphoria, but after these pleasurable stimuli are removed, the individual addicted to cocaine will experience very severe physical withdrawal effects, whereas the individual addicted to chocolate will be able to cope with its loss. Is that because the excess dopamine is derived from different pathways? Is the initial euphoric effect stronger with cocaine? It is clear that at least part of the answer lies in the way that cocaine biologically affects the brain.

In the dopamine pathway of individuals not addicted to cocaine, dopamine is released by a transmitting neuron into the synapse, where it binds to receptors in the postsynaptic neuron, propagating a signal. After the binding has occurred, the dopamine reuptake transporters (DAT) of the presynaptic cell reuptake the remaining unused dopamine back into the cell (5).

As mentioned earlier, cocaine's major effects are thought to be due to its action on dopaminergic systems. In addicted individuals, cocaine binds to the dopamine reuptake transporters (DAT), blocking them from reuptaking dopamine and consequently causing an accumulation of dopamine in the synapse. This accumulation of dopamine causes continuous stimulation of the post-synaptic neuron, resulting in the euphoria commonly reported by cocaine abusers. Cocaine also acts on serotonin and norepinephrine reuptake transporters, enhancing the levels of these neurotransmitters in the synapse (6). The latter is important since researchers speculate that more than one neurotransmitter is responsible for the pleasurable feeling cocaine provides. In addition, cocaine stimulates the "fight or flight" response by increasing activity of the sympathetic nervous system, due to its action on norepinephrine transport (7). Some of this increased activity is illustrated by constricted blood vessels, dilated pupils, and increased heart rate and blood pressure. In other words, cocaine has a great variety of biological effects on the brain, which lead to a very strong addiction.

Cocaine's biological effects on the brain also make it very difficult for an addict to quit abusing it. When an individual becomes addicted to cocaine, the repeated euphoric responses to the drug alter the brain, creating a dependency within the addict's brain. The individual will therefore continue to take cocaine to re-experience its extreme euphoric effects. Addicts also continue to take cocaine because, after cocaine administration, dopamine levels decrease significantly compared to normal pre-consumption levels. The addict therefore feels a "low," and the immediate response to ease this low is to administer more cocaine to raise dopamine levels again. It is clear, then, that a significant reason addicts find it difficult to quit cocaine is that their brains have been biologically altered. In a sense, it could be said that the brain is no longer biologically whole, in that it no longer produces dopamine levels in the way it once did.

The fact that addicts develop tolerance or sensitization to cocaine also makes it difficult to quit abusing the drug. After chronic administration of cocaine, the brain reduces the number of dopamine receptors on the dendrites of neurons. As a result, there is less stimulation of the nerves in the dopamine pathway. This physical change in the brain alters the way it responds to different doses of cocaine. This is where tolerance develops in many addicts: a larger dose is needed to attain the same euphoric effects initially experienced. Other addicts experience sensitization, in which the user becomes more responsive to cocaine without increasing the dose. Recent research has investigated why some addicts experience sensitization and others tolerance. Is it due to differences in brain make-up, or to the manner of administration of the drug? In either case, both of these phenomena present yet another biological hurdle that a user must overcome when quitting cocaine.

However, the obstacles a cocaine abuser faces when trying to quit are not exclusively biological. The cocaine abuser also faces several psychological and social barriers on the path to becoming drug free. After constant administration of cocaine, the phenomenon known as place conditioning comes into play. The place conditioning theory suggests that the environment in which a person administers cocaine becomes associated with the act of cocaine use (8). For example, if a drug addict purchases cocaine at a specific grocery shop and experiences the drug's effect shortly thereafter, eventually the grocery shop becomes linked in the mind of the drug addict to the rewarding effects of cocaine (8). This has been extensively demonstrated in animal models, where rats return to the environment in which they administered cocaine. In humans, place conditioning might cause addicts to overdose. This is because addicts are accustomed to administering the drug in a particular environment and begin to associate the rewards of the cocaine with the environment itself. Therefore, in a different environment, not associated with the administration of cocaine, the same dose will produce a larger effect because the familiar environmental cues are absent. Perhaps more significantly, this demonstrates that the environment itself can deepen addiction to cocaine, since environmental stimuli will constantly remind the user of the drug's pleasurable effects.

It follows, then, that the difficulty in quitting cocaine cannot be 100 percent biological. If addiction were only biological, then place conditioning would not be an issue. All too often the view that brain = behavior, meaning that the brain elicits behavior, is accepted as complete. However, in this instance an environmental stimulus has the power to elicit brain activity ending in craving for cocaine, demonstrating that biological and social/environmental factors are deeply intertwined and each plays a role in rendering cocaine incredibly addictive.

Most relapses occur when an individual returns to the environment where he or she used to administer the drug. Exposure to such cues and stimuli reminds the addict of the feeling and taste of cocaine, and the addict begins to crave it. But how exactly do environmental stimuli trigger the drug craving? Recent research suggests that the extended amygdala might play a major role in "in context" craving. The extended amygdala is part of the limbic system, a region of the brain associated with memories and emotions. Researchers at the National Institute on Drug Abuse believe that it is in the extended amygdala that memories relating to drug administration are converted into craving for that specific drug (8). As mentioned earlier, memories give rise to craving when the environment where cocaine is abused becomes a conditioned stimulus. This is the reason why so many people relapse and continue to be addicted.

Unfortunately, to date there is no one treatment that will eliminate addiction or all the characteristics associated with it. Perhaps finding a treatment for cocaine addiction has not been very successful because researchers do not fully account for the biological and social factors that make it extremely difficult to quit using this drug. It follows that a combination of medical treatment (to address the biological factors) and counseling (to address the social factors) would be most beneficial to addicts. Drugs are being developed that aim to block cocaine from binding to the dopamine transporters, allowing for the reuptake of dopamine, which may prove very effective at addressing the biological factors. However, such medication alone will not suffice. In terms of combating the social factors, the most effective counseling is "cocaine-specific skills training," which consists of identifying the environments and stimuli that trigger craving in order to control and avoid such stimuli (4).

The problem of cocaine addiction in America will not disappear overnight. However, a greater understanding of why cocaine addiction is so uniquely strong will lead to a better understanding of how to combat it. It is important to understand that biological and social factors work together to form cocaine's powerful addiction, so any effective treatment must aim to counteract both. We must first fully understand cocaine addiction and its properties before we can hope to eradicate it.

References:
1) NIDA Home Page, various information about cocaine, specifically statistics.

2) Article on Addiction, describes the biology of addiction.

3) Substance Abuse Facts

4) National Institute of Health, various information about cocaine addiction, health hazards, treatments, etc.

5) Effects of Cocaine Biologically

6) More information about cocaine and how it affects your neurotransmission

7) Research on Cocaine

8) Amygdala and Memories


Munchausen By Proxy
Name: Emma Berda
Date: 2004-04-13 17:09:58
Link to this Comment: 9343


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Although Munchausen By Proxy was first described less than thirty years ago by Dr. Roy Meadow (1), it has garnered much press in recent years with appearances in "The Sixth Sense" and "Law and Order" and in high-profile court cases such as that of Kathy Bush. But what exactly is Munchausen By Proxy, and why does it occur? How could something that destroys our image of a mother exist in society?

"MBP is sometimes called Munchausen Syndrome by Proxy, Munchausen by Proxy Syndrome, or Factitious Disorder by Proxy. All of these terms apply to a well-established variant of maltreatment (abuse and/or neglect) in which caregivers deliberately feign or produce ailments in others. The perpetrator deliberately misleads others knowing that there is no reason to believe the victim has an underlying physical and/or psychological-behavioral problem. The signs and symptoms perpetrators falsify or create are usually physical."(2) In 98% of the documented cases the mother is the one with MBP. There are several ways that Munchausen By Proxy in manifested in behavior. Sometime the perpetrator will falsly report that their child has an illness, other times the perpetrator will create evidence of a problem but will conceal their role in it. The perpetrator may also exagerrate a real medical problem that the child has. Finally, the perpetrator may worsen an already existing ailment or cause a problem in the child on purpose.(2) These final two manifestations are the most intrigueing from a behavior point of view because the directly contradict our ideas of maternal instinct.

There is no set profile for MBP. (2) But there are some basic facts that usually apply to MBP perpetrators. Perpetrators usually seem to be "normal" and have loving relationships with their victims. However, MBP perpetrators are usually good at deceiving and manipulating people and may have a history of feigning problems in themselves.(2) They often have a dramatic flair and sometimes falsely accuse others of wrongdoing; if charged with wrongdoing themselves, they will vehemently deny it. "MBP perpetrators do not necessarily have to have extensive health care knowledge or be particularly intelligent. It does not take special knowledge to engage in many kinds of MBP maltreatment."(2) This fact is especially important to remember, since many people dismiss accusations of MBP by saying that Ms. X couldn't possibly outwit a team of doctors. All information reported in this paragraph is preceded by the word "usually" or "often": there could easily be an MBP perpetrator who possesses none of these characteristics, or an innocent mother who has all of them.

Munchausen By Proxy is extremely difficult to diagnose. Each case must be taken by itself, and while past information can be useful it should not be used to determine whether MBP is the cause of a child's illness. There is a very broad range of characteristics that are attributed to MBP.(3) This can lead to misdiagnosis of MBP in mothers of severely ill children. Even if MBP is suspected, it is hard to get physical data to prove it. Sometimes a hidden camera can catch the perpetrator inducing illness, but most of the time it's just the word of other people against the perpetrator. Kathy Bush was found guilty of aggravated child abuse and put in jail without any definitive evidence.(4) It was her word versus that of her daughter's doctors, nurses, and the police. Because Kathy Bush had previously been named mother of the year by Hillary Rodham Clinton, the case garnered national attention. MBP perpetrators can vary greatly in their behavior. Another suspected MBP perpetrator is Marie Noes, whose 10 babies successively died between the 1940s and the 1960s.(5) Unlike Kathy Bush, who was a doting mother, Marie Noes seemed to have little interest in her children. When one child was in the hospital for two months she visited only twice.(5) Without delving deeper one would never guess that these two women were perhaps committing similar acts. MBP seems to be the only thing they have in common.

Perpetrators of MBP can come from vastly different backgrounds. Some are rich, some poor. Some were abused as children, some were not. There is no set of conditions that MBP seems to arise from. Because of this we cannot know what drives these women to do this. Most of the literature says that MBP perpetrators do it so that they can get attention and sympathy from doctors and other medical staff. This is a reasonable conclusion, but it leaves unanswered why these women would need attention so desperately that they would harm their children. It is important to note that MBP perpetrators do not seem to possess any sort of homicidal tendencies. Although MBP can lead to the death of the child, that death is often an accident caused by miscalculation on the part of the MBP perpetrator. (2) These women are not looking to rid themselves of their children; the children are merely a means to an end.

How could somebody's need for attention be so great that they would go so far as to harm their own children? We tend to think of maternal behavior as a natural occurrence, but maybe it is not. Perhaps maternal behavior is a product of human society. It could be something we perceive that does not have any neurobiological/behavioral foundation. This would make MBP much easier to account for, because if there is no behavioral basis then it is not so strange that something like this could occur. MBP shocks us to our core because we cannot imagine a mother harming her innocent children. But what if this is just what society tells us? What if there is nothing in our genes that tells us to nurture our children? Do perpetrators of MBP have a psychological problem, or do they merely deviate from our society's norm?

Since the perpetrators of MBP are so different, it is unlikely that the behavior derives from a single, specific psychological problem. These perpetrators may well have other psychological problems that contribute to MBP, but mostly they simply deviate from what we expect of mothers.

References

1)Basic MBP Information
2)A rich MBP resource
3)General MBP Information
4)News Articles about the Kathy Bush Trial
5)An Article about the Noes Family


Angsty Teenage Depression
Name: Amanda
Date: 2004-04-13 18:38:18
Link to this Comment: 9346


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Depression wreaks havoc on nearly one in ten people in the United States (1). It is not a discriminatory disease, affecting all races, ages, and both genders, but it is becoming a more common diagnosis among children and teenagers. Despite the barriers, eighty to ninety percent of medically treated patients improve (1). Before treatment, though, depression can cause people to feel "tired, listless, hopeless, helpless, and generally overwhelmed by life. Simple pleasures are no longer enjoyed, and their world can appear dark and uncontrollable" (1). These symptoms can be especially devastating to a teenager.

Depression is diagnosed when a person experiences a number of symptoms. First, the person needs to be persistently sad or anxious for at least two weeks. Secondly, the person needs to have several of the following symptoms: appetite changes (not because of a diet), insomnia or oversleeping, fatigue and energy loss, restlessness, guilty or worthless feelings, concentration and thinking difficulty, or thoughts of death and suicide (1). While this may seem like a long list, each symptom has important implications for the victim's life.

Depression can be caused by a number of things, including biochemistry, genetics, personality, and the environment. Any of these, or a combination, can form a dark cloud over a person. Biochemistry is implicated because "deficiencies in two chemicals in the brain, serotonin and norepinephrine, are thought to be responsible for certain symptoms of depression, including anxiety, irritability, and fatigue" (1). Depression also runs in certain families, which leads scientists to believe that there is a genetic component that encourages it. For example, in my family, seven of the twelve people among my father's parents, siblings, and their children have been depressed at one point or another in their lives. There is a definite genetic link. A person's character can also lead to depression. People who normally have low self-esteem or are pessimistic are more likely to become depressed than those with high self-esteem who look at the world optimistically. Lastly, the environment can lead to depression. "Continuous exposure to violence, neglect, abuse, or poverty may make people who are already susceptible to depression all the more vulnerable to the illness" (1). A stressful environment encourages a downward spiral. At http://www.teachhealth.com/#stressscale a person can take a stress test to see what his level of stress is.

While all sorts of people suffer from depression, it can be exceptionally difficult for teenagers. Teenagers have all four factors (biochemical, genetic, personality, and environmental) in depression's favor, especially the last two. Teenagers are a group of people who already have poor self-esteem; of all age groups, teenagers statistically have the lowest. Puberty encourages even the most popular of adolescents to become shy and nervous. "The vast hormonal changes of puberty are severe stressors. A person's body actually changes shape, sexual organs begin to function, new hormones are released in large quantities. Puberty, as we all know, is very stressful," states the Health Education website (2). Girls begin to grow body parts they are unaccustomed to and feel they must hide, while boys' voices deepen and they become hairier. Everyone gets acne and a lot of people get braces. Most teenagers do not know how to deal with the raging hormones and thus become shyer about themselves. The middle or high school environment does not help. Cliques form and exclude people. If a girl does not wear the right outfit, she can be the outcast for the rest of the year, or if a boy cannot catch the Frisbee, he can be "out". Socially, in middle and high school, people can be brutal. Teenagers may also be pressured into substance abuse, which becomes more accessible at this age. All these factors can lead into depression.

Teenagers may show specific warning signs of depression that others should notice. They may have academic problems because of skipping classes, poor concentration, lack of interest, or low energy. This can even lead to teens dropping out. The resulting low grades add to the self-criticism and then encourage low self-esteem. This can not only cause anger, depression, or indifference, but also a change of social scene into one that encourages drugs and alcohol. Depression should also be looked for in teenagers who have eating disorders, extreme feelings of ugliness, and in those who cut themselves (3).

Once someone is depressed, it is extremely difficult to break free. As with all illnesses, the further along it is, the harder it is to cure. While there are things that a person can do for herself, such as exercising or relaxing, if it is clinical depression, the cure will probably take more. There are support groups such as the National Foundation for Depressive Illnesses, Inc. and the Depression and Bipolar Support Alliance (4). While these support groups are beneficial, most people will probably need the guidance of a psychiatrist, who will also do the diagnostic evaluation. If the doctor feels that psychotherapy alone will not be the most effective cure, she can also prescribe antidepressants. Antidepressants, which usually take about three to six weeks for full effect, help correct chemical imbalances in the brain (1).

The psychiatrist will also encourage "talk therapy." Because of this, it is important for a patient to choose a doctor he or she is comfortable with. One thing to look at is how much the doctor includes the patient in making decisions. The UK Depression Alliance website encourages asking, "How do you go about deciding which treatment is right for me?" (4). This helps enable a patient to find a comfortable doctor. There are specific doctors for adolescents with depression. These are exceptionally helpful as "grown up" doctors might forget about the extremely tough time that teenagers go through.

Depression is a life-altering illness that affects not only the patient but her friends and family. If left untreated, it can cause problems throughout life and may even end in suicide. While depression is recognized in adults, it has also become a distinct problem in children and teenagers. In some communities, especially middle- to upper-class suburbia, depression in adolescents is being diagnosed and treated, but it is being neglected in other areas. It is just as important for teenagers to have their depression treated, especially as they are "the future of America."

WWW Sources
1) American Psychiatric Association, Founded in 1844

2) Health Education: Stress, Depression, Anxiety, Drug Use, For Classes

3) Kids' Health, Depression

4) Depression Alliance, UK Alliance



The Implications of Bilinguality and Bilingual Aph
Name: Prachi Dav
Date: 2004-04-14 00:17:10
Link to this Comment: 9354


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The bilingual, and the polyglot for that matter, is an individual whose status as the speaker of two or more languages has been widely discussed. The past has seen a disdainful view of bilinguals transform slowly into a more positive and currently even glowing regard for their language flexibility. One point of view concerning language learning among infants holds that a baby is born with the innate capacity to learn any language in which it is immersed (1) (furthermore, exposure to multiple languages allows the infant to learn as many), and that the baby is therefore a universal language speaker, giving support to Chomsky's conception of a "universal grammar" that forms the basis of every existent language. Bilingual ability and status have implications for the individual's identity and for neurolinguistic study involving a wider academic interest. In particular, bilingual aphasia is fast becoming, for the study of the bilingual brain, what aphasia was for the now heavily studied monolingual brain.

Human language has captured both the artistic and scientific imagination for, perhaps, centuries. Bilingualism therefore appeals equally to the same imagination for both its theoretical and practical implications. Bilingualism as a cognitive state that supposedly requires the sharing of cognitive resources has been openly frowned upon, and a dated and ignorant assertion by Laurie (1980, in Wei, 2000) (3) declared:

"If it were possible for a child to live in two languages at once equally well, so much the worse. His intellectual and spiritual growth would not thereby be doubled, but halved. Unity of mind and character would have great difficulty asserting itself in such circumstances."

This point of view, among other myths (2), was very popular and eventually blended into a purist, monolingual view of bilinguals, which accepted into the bilingual category only those who are absolutely proficient in both their languages, while all other speakers of more than one language were relegated to one of a long list of subordinate categories (alingual, semilingual, covert bilingual) (3), (4). This view has been emphatically refuted by Grosjean (1989) (4), who asserts the need to define bilinguals according to the contexts of language usage. The former view has to a great extent been obliterated, and bilingualism is now believed to be advantageous in cultural, social, cognitive and even transnational domains. These successive realisations have broadened the scope of the field's study.

Researchers have lately been very interested in understanding the cortical representation of both a native and a second language. In particular, curiosity as to whether or not these languages converge upon similar brain areas has been piqued, given various contradictory findings that indicate both shared and divergent representation of language in the bilingual brain. Those who want to understand the regions involved in language processing and production have looked both to neuroimaging studies of normal bilinguals and to studies involving clinical populations of bilingual aphasics. In support of the view that propounds anatomical overlap between first and second languages, Chee et al. (1999) (5), (6) showed in an fMRI study that word-stem completion among Mandarin-English bilinguals resulted in similar activation of the left prefrontal region, involving the inferior frontal gyrus, the supplementary motor area, and the occipital and parietal areas bilaterally, in both languages. These results argue for shared lexicons between first and second languages. The structural dissimilarity between Mandarin and English provides a rigorous test of the hypothesis of shared cortical representation for bilinguals' languages, for it is surprising that two such divergent languages overlap in terms of lexical representation.

Additionally, Illes et al. (1999) (7) provided support for the above findings: in another fMRI study, they also showed inferior frontal gyrus activation among Spanish-English bilinguals performing semantic judgment tasks in both languages. These studies concern a question integral to the study of language: to what extent do overlapping cortical representations for vastly differing languages imply similarity between them in terms of personal identification and comfort with the languages? This is a question to which we will return in the following paragraphs. These studies, although showing what seem to be reliable findings, are confounded: although age of language acquisition (a factor thought to affect language lateralisation in the brain) (8) remains stable at approximately age twelve, level of language proficiency (a further issue impacting upon language lateralisation) (8) is unreported or reported only as "moderate." The lack of attention paid to such intervening variables must be corrected if reliable results are to be obtained. Certainly, a swift scan shows that the frequently cited literature supporting shared anatomical correlates between native and second languages is limited (9), (10) and often barely comparable, owing to the variety of linguistic tasks employed to study either language comprehension or production in the bilingual brain. The findings that second language learners may display shared cortical areas between their languages are nevertheless interesting, for they implicitly refute the classic assertion of a critical window of time for language learning (by implying similar proficiency in both first and second languages) and they reiterate the phenomenon of brain plasticity.

The latter statement a propos plasticity is particularly relevant in terms of Obler's stage hypothesis (11), which asserts that language learning moves from right hemisphere lateralisation in the early stages to left hemisphere overlap with the native language as proficiency increases. However, a test of this hypothesis requires some knowledge of findings whereby L1 and L2 are seemingly separately localised in the brain.

In 1997, Kim et al. (12) used fMRI to examine cortical activation among a range of bilinguals who were proficient in various languages. The participant pool was divided into two groups, early (L2 acquisition before the age of five) and late (after the age of twelve) bilinguals. The results suggested anatomical variation between early and late bilinguals: early bilinguals showed similar activation in both Broca's and Wernicke's areas during a silent sentence generation task, while late bilinguals displayed common activation in Wernicke's but not in Broca's area. These results indicate a role for age of acquisition, suggesting that the "critical period" concept cannot be discarded and that, to some extent, language learning after a certain age is differentially represented in the brain. This finding complicates the conclusions that can be drawn from the studies cited earlier, but the study has confounds of its own. It employed sentence generation tasks, which can barely be compared to the earlier single-word generation studies, for the former require additional and more complex linguistic processing. Additionally, a silent sentence generation task is a measure whose accuracy is difficult to verify across participants. However, other reports do support the finding that L1 and L2 may be anatomically separate in the bilingual brain (13), both in scientific terms and in experiential terms: the difficulty of becoming proficient in a second language beyond a certain age suggests, intuitively, that some corresponding anatomical difference must exist.

Not only do observations from various experimental studies provide a source of information regarding the interaction between different languages in terms of cortical representation, but recovery patterns among bilingual aphasics (14), (15), (16) also allow for the construction of hypotheses regarding the anatomical correlates of language. The patterns of recovery observed in previous cases of bilingual aphasia (selective, parallel, differential, antagonistic, blended and successive) (17), when combined with the knowledge gleaned from neuroimaging studies, should allow for a more comprehensive assessment of the processes involved in maintaining the two languages in the brain. The scientific examination of bilingual aphasics must be combined with studies concerned with the impact of this impairment on the identity of aphasics (18), for identity is often attached to one's language, and damage to this ability may have a devastating effect on the aphasic himself.

The study of bilinguals and bilingual aphasia holds a great deal of promise both for the study of identity as attached to language and for the mapping of multiple languages in the brain. Studies of bilingual aphasics and the recovery patterns observed within and among their languages have challenged existing accounts of language representation in the brain. A consolidation and analysis of the various findings is ongoing and will perhaps lead to a growth in knowledge regarding the various aspects of bilingualism.


References

1)Timothy Mason's Site

2)A Note on Myths about Language, Learning, and Minority Children

3) Wei, L. (2000).The Bilingualism Reader. Routledge: London ; New York.

4) Grosjean, F. (1989) Neurolinguists beware! The bilingual is not two monolinguals in one person. Brain and Language, 36, 3-15.

5)Nature: Science Update

6) Chee, M. W. L., Tan, E. W. L., & Thiel, T. (1999). Mandarin and English single word processing studied with functional magnetic resonance imaging. The Journal of Neuroscience, 19, 3050-3056.

7) Illes, J., Francis, W. S., Desmond, J. E., Gabrieli, J. D. E., Glover, G. H., Poldrack, R., Lee, C. J., & Wagner, A. D. (1999). Convergent cortical representation of semantic processing in bilinguals. Brain and Language, 70, 347-363.

8) Obler, L. K., Zatorre, R. J., & Galloway, L. (2000) Cerebral lateralization in bilinguals: methodological issues, pp. 381-394. In Wei, L.The Bilingualism Reader. Routledge: London ; New York.

9) Klein, D., Milner, B., Zatorre, R. J., Zhao, V., & Nikelski, J. (1999). Cerebral organization in bilinguals: A PET study of Chinese-English verb generation. NeuroReport, 10, 2841-2846.

10) Chee, M. W. L., Caplan, D., Soon, C. S., Sriram, N., Tan, E. W. L., Thiel, T., & Weekes, B. (1999). Processing of visually presented sentences in Mandarin and English studied with fMRI. Neuron, 23, 127-137.

11)Acquisition of second languages

12) Kim, K. H. S., Relkin, N. R., Lee, K. M., & Hirsch, J. (1997). Distinct cortical areas associated with native and second languages. Nature, 388, 171-174.

13)Study sheds light on how brain processes languages

14) Junque, C., Vendrell, P., Vendrell-Brucet, J. M., & Tobena, A. (1989). Brain and Language, 36, 16-22.

15) Nilipour, R., & Ashayeri, H. (1989). Alternating antagonism between two languages with successive recovery of a third in a trilingual aphasic patient. Brain and Language, 36, 23-48.

16) Paradis, M., & Goldblum, M. (1989). Selective crossed aphasia in a trilingual aphasic patient followed by reciprocal antagonism. Brain and Language, 36, 62-75.

17)The Neurocognition of Recovery Patterns

18)Bilingualism and Identity


In the Mind of a Serial Killer
Name: Chevon Dep
Date: 2004-04-14 00:57:09
Link to this Comment: 9355


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

The movie "Natural Born Killers" did not simply explore the subject of serial killers. It also dealt with the mentality and personal background that influence many of the real-life serial killers of society. The release of such movies and documentaries dealing with this topic shows that there is a fascination with serial killers. But why? Ted Bundy, Charles Manson, David Berkowitz, John Wayne Gacy Jr., Jeffrey Dahmer, and Jack the Ripper are all infamous serial killers whose behavior and personal backgrounds have been 'studied' by the media and psychiatrists. In many cases of serial killings, the behavior is influenced by either the past experiences/backgrounds or the psychological processes of the serial killers. However, there is a difficulty in understanding the psyche of a serial killer, which means that only interpretations can be made regarding this topic.

The way in which the term "serial killer" came into existence is interesting. During the mid-1970s, the FBI agent Robert K. Ressler coined this phrase after serial movies. As Lippit argues, "Like each episode of a serial movie, the completion of each serial murder lays the foundation for the next act which in turn precipitates future acts, leaving the serial subject always wanting more, always hungry, addicted."(1) Serial killers' 'addiction' to killing does not cease after the first time but instead increases. In fact, the FBI estimated that at any given time between 200 and 500 serial killers are at large, and that they kill 3,500 people a year. (2) These high estimates show that killing becomes a pattern that is difficult to break.

The inability to break such a pattern can be attributed to the brain function of the person. Since the frontal lobe deals with decision-making, this could possibly be an explanation as to what is going on in the mind of a serial killer. If there is frontal lobe damage or abnormal activity in this region of the brain, there is an inability to make rational decisions. This in no way serves as a justification for such behavior. Instead, it serves as a possible distinction between the mind of a serial killer and a 'normal' mind.

The interviews of the serial killer John Wayne Gacy address some important issues that are useful to understanding the relationship between his brain and behavior. During Gacy's childhood and adolescence, his father expressed contempt for his illness, psychomotor epilepsy, and for the pampering by Gacy's mother. (2) This form of epilepsy can cause a clouding of consciousness and amnesia of an event, because it occurs in the temporal lobe, which deals with visual processing. Along with this, the behavior of a person can be altered, and bursts of anger, emotional outbursts, and fear may be displayed. (3) Symptoms such as these could have been a factor in Gacy's behavior in his adulthood. Also, Gacy's father continuously said that John was going to be a queer and called him a "he-she". (2) Gacy internalized this verbal abuse from his father and applied it to his victims. He referred to his victims as worthless little queers and punks. (2) Did his father's verbal abuse lead Gacy to develop a homophobia and thus to rape, sodomize, torture, and strangle to death thirty-three young men over the course of more than a decade? This is a strong possibility. However, it could also be a mixture of his psychological makeup with his childhood experiences and background. As Simon says, "Although character has a genetic component, much of it is shaped by the nature and quality of our early relationships and experiences."(2) Therefore both good and bad experiences become embedded in the child's developing personality and also have an influence on adult character, as in the case of many serial killers.

Even though the brain could be instrumental in determining the mind of a serial killer, it is important to point out that most serial killers have not lost their grip on reality and thus have some control over their decisions. For example, when the police interrogate serial killers, many of them are not willing to talk. Instead, they tell you what they want you to know, and to some of them it is a mind game. (4) The serial killers realize that the police want the information and that the answers can only come from them. Therefore, many serial killers play games, which increases their 'appetite' to kill more people. These mind games leave the police even more puzzled.

The strategy the serial killer develops can be equated with the idea of being labeled a Dr. Jekyll and Mr. Hyde. This is an interesting concept to explore. Dr. Jekyll represents the 'normal' lives of serial killers, which include working, having a family, and paying taxes. (2) On the other hand, there is the extreme, Mr. Hyde, who represents the dark side of humanity that tortures and kills victims. The ability to keep the 'normal' and 'sinister' lives as two separate entities shows that serial killers have control of their decisions to a certain extent. In fact, this ability furthers the yearning to kill more people until the authorities catch them. Simon argues, "Suspension of empathy is necessary for someone to intentionally harm other people, and it is usually accompanied by the psychological mechanism of devaluation and projection."(2) In order to carry out such an act, serial killers must not only disregard the feelings of their victims but also project their insecurities onto their victims in order to have control. For example, Ted Bundy referred to his victims as "cargo" and "damaged goods." (2) Oftentimes, serial killers have to place their victims in sub-human categories to execute the act with little or no remorse.

The pattern of killing is not the same for all serial killers. Believe it or not, they have specific targets. For example, Ted Bundy stalked young women with dark hair. There is no exact explanation for the specificity of victims. However, the history and experiences of the serial killer can provide some insight for such profiling. In the Jeffrey Dahmer case, African-Americans were the majority of the victims. Many psychiatrists have attributed this to Dahmer's job as a chocolate mixer. (1) This may sound a little far-fetched, but studies have been done to draw a connection. Since he worked at a chocolate factory, Dahmer combined his hatred of blacks with consuming dark food. (1) Another example of profiling occurs with the female serial killer, Aileen Wuornos, who killed truck drivers. Due to her experiences and background, Aileen targeted the truck drivers. It was not necessarily a psychological process in her case.

It is difficult to pinpoint what exactly causes serial killers to become serial killers. There are numerous factors that can influence such behavior. This leaves us with the questions: What makes a serial killer? Could anyone become a serial killer? According to Simon, everyone has trivial evils that involve the same failures in empathy and devaluation of others.(2) Since these are characteristics of serial killers, does that mean everyone is a potential serial killer? If this is the case, are serial killers just translating these feelings and emotions by killing people?

References


1)Lippit, Akira Mizuta. "The infinite series: fathers, cannibals, chemists..." Criticism. Summer 1996: 1-18, A Good Article

2)Simon, Robert. "Serial Killers, Evil, and Us." National Forum. Fall 2000: 1-12, A Good Article

3)Psychomotor Epilepsy, A Good Web Source

4)Warning Over Mind Games of Serial Killers." European Intelligence Wire. 21 Feb. 2004: 1-2, A Good Article


Forget About It: The Quest to Forget Bad Memories
Name: Millicent
Date: 2004-04-14 21:50:35
Link to this Comment: 9372


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Nightmares, violent flashbacks, and an inability to simply forget painful memories for even a moment: these are some of the consequences of experiencing a trauma. The haunting nature of the memories is often so horrible that erasing the memory altogether is desirable. While in the past the idea of erasing memories existed only in movies, scientists are getting closer to methods of doing so. The process is referred to as "therapeutic forgetting" (5). As research advances, so do the debates on the ethics of the process. Therapeutic forgetting has opened a discussion among policy makers, scientists, and those who suffer from horrible memories. If successful, drugs that alter memories could help sufferers of Post Traumatic Stress Disorder. However, if abused, the same medication could change the way we process emotional pain and hinder our ability to work through bad memories.

Post traumatic stress disorder, a disorder which often occurs in people who live through a traumatic experience, can be debilitating at its worst. Individuals with the disorder oftentimes relive their traumatic experience through dreams and flashbacks of their violent memories (6). The psychiatric disorder is often associated with veterans of war, but its effects are felt by many survivors of traumatic experiences. In the most severe cases PTSD can be incapacitating because of the frequent flashbacks. Currently those who suffer from the disorder are treated with various types of therapy, sleeping pills, and antidepressants. While these treatments can help ease the pain associated with awful memories, none of them solves the problem fully.

Recent research has shown that the ability to erase, or at least decrease the intensity of, painful memories may soon be possible. One study, led by Roger Pitman of Harvard Medical School, tests the ability of the drug propranolol to affect the hormones involved with painful memories (5). The theory is based on the concept that painful memories become predominant in the mind. Adrenaline and norepinephrine are produced during a stirring experience. These hormones are believed to increase the ability of the brain to grab onto and hold a memory. Because a traumatic experience is particularly stirring, memories from such experiences can be particularly haunting. The drug propranolol, once used to treat heart problems, goes to the brain and interferes with adrenaline and norepinephrine production (5). Pitman believes that when used immediately after a trauma the drug could help people deal with negative experiences. If Pitman is right, PTSD could be avoided by many people who live through trauma. The study is currently incomplete, but it still raises interesting questions for scientists, policy makers, and PTSD sufferers.

At first the benefits of therapeutic forgetting for victims of PTSD seem overwhelming. With a drug that decreases the ability to remember traumatic experiences, we could prevent the disorder altogether. However, many bioethicists believe that using medication to forget is unethical (5). Life is about overcoming obstacles, and by overcoming or learning to deal with the disorder, individuals learn how to adjust to a problem. How can individuals have empathy for and understanding of emotional pain if their painful memories are dulled? Moreover, the effects of propranolol, for example, are not limited to negative memories: a stirring memory that is positive could also be faded (5).

As research into memory-erasing treatments advances, policy makers have been forced to respond. As recently as October of 2003, the President's Council on Bioethics commented on drugs meant to help people forget trauma, saying, "they could also be used to ease the soul and enhance the mood of nearly anyone". The Council then argues that such drugs would open a new market to help people avoid unpleasant thoughts, allowing "our pursuit of happiness and our sense of self-satisfaction [to] become increasingly open to direct biotechnical intervention" (1). For these reasons the Council opposes the pursuit of therapeutic forgetting until further regulations are established.

Despite the opposition to therapeutic forgetting, it is difficult to explain why individuals should be forced to relive negative experiences through debilitating flashbacks and dreams. Certainly the risks the President's Council on Bioethics cites are valid, but the benefits for PTSD patients are crucial. In addition, many veterans argue that those with PTSD resulting from combat in a war supported by the government should be supported by politicians. Some of this support can come in the form of encouraging researchers to continue studying therapeutic forgetting. Arguments that memory alteration is unethical and dangerous because it could be used unnecessarily have some validity. However, some memories, such as those that cause PTSD, are so harsh that no person should have to live through them once, let alone multiple times in flashbacks.
Altering memories is a particularly interesting subject because its possibilities are so varied. In the case of therapeutic forgetting, however, it is important to remember that this treatment could not completely remove a memory; it could only soften the intensity of the traumatic experience. Studies on these treatments should continue so that we can one day look forward to preventing PTSD. No one is suggesting that the drugs be used to avoid all emotional pain, but rather that they be used to aid those people who would otherwise be disabled by traumatic memories.

References


1) Government's Report on Bioethics

2) Center For Cognitive Liberty and Ethics, article on memory

3) Exploratorium Memory Site, an interactive memory site

4) Infinity Web Site, an informative web source

5) New York Times Web Site, New York Times Magazine article

6) National Center for Post Traumatic Stress Disorder

7) Serendip, "Forgetting to Remember: The Source of your Symptoms?" by Kristine Hoeldtke


The Punch behind the Peck: A Behavioral and Physio
Name: Ginger Kel
Date: 2004-04-15 12:03:02
Link to this Comment: 9383


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

Robin Hood, reeling in the corner from nearly being skewered, gazes across the room to his Maid Marian. Disheveled from worry about her love, she lets out a sob. Then with joy she rushes forward to embrace her man. Miraculously, a second wind innervates Robin as he jumps up to... shake hands with his lady fair. Just doesn't have the momentum without the dramatic lip lock, does it (1)?

The kiss is arguably the most popular franchise of all time. Butterflies, Eskimos, and the French each have their own brand. Hospitals are fitted with equipment to bestow "kisses of life" to their patients (2). Teenagers, taking advantage of the dark, awkwardly embrace on front porch steps. Poets muse about it. Cher sings about it. Movies revere those who die for it. To make a long story short, mankind is batty for this simple act. Yet, there are organisms in nature that reproduce and thrive without smooching. Why is the kiss such a vital part of the human experience? What is the origin of this kissing behavior? Are people the only living creatures that find merit in the deed?

The average human being will spend two full weeks of his or her life kissing (3). For an organism to focus that much energy on any endeavor, there must be some advantage to negate the cost. Kissing is a positive-reinforcement behavior. To promote habit formation, participants in the activity are rewarded with pleasurable sensations. The organs involved in the kiss are well suited to this function. The lips and the area around the mouth happen to have the highest concentration of sensory nerve endings of all the tactile senses (4). As icing on the cake, the lips are also outfitted with a very thin layer of skin, making them the most sensitive part of the body (5). So, could one claim the structure of the mouth was patterned by the kissing function? No; most likely, the lips are ultra-sensitive to make humans more discriminating critics regarding what they should ingest. The pleasure potential of the mouth is a parallel role. What then causes a co-mingling of two sets of lips to be pleasurable? The warm and tingly feelings associated with pleasure are the outcome of a potent surge of dopamine, norepinephrine, and phenylethylamine in the brain (6). This "cocktail" of neurotransmitters, which is triggered by electrical signals from the lips, is received by the emotional portions of the brain (5). Almost immediately, the brain responds by producing feelings of elation similar to those induced by certain drugs--kisses: the ultimate anti-depressant?

The euphoria experienced from a kiss has a purpose. To repeat the above assertion, the body is not an altruistic entity. There is a catch to every gift. So, why does the body encourage the act of kissing? It can be a risky activity after all: a single kiss can exchange 278 species of helpful and harmful bacteria in the saliva, not to mention diseases (e.g. herpes and mononucleosis) and viruses (7). There are health benefits to kissing too. Studies have shown that kisses assist in the prevention of tooth decay, stress relief, weight loss, and can raise self confidence (8). However, it is possible that a few of these results may fall more directly under the Placebo Effect sphere. For example, my previous statement implies that kissing is a direct treatment for stress. This could be true, but it could also be faith in the treatment that really yields the desired result.

Have I answered the question I posed in the previous paragraph? Is dental hygiene reason enough to necessitate the use of satisfaction hormones? Contrary to how my dentist may feel, I'm inclined to say no. Evolutionarily, nature favors organisms that can survive to perpetuate the species. Thus, most resources in the body are devoted to bettering the odds of producing viable offspring. It is logical then to assume kissing would have a reproductive function as well. Kissing is oftentimes a precursor to sexual activity, so the act of kissing could serve as a trigger for the release of sexual hormones. One of the theories behind the development of the kiss builds on this procreation principle. Many philematologists (people who study kissing) feel that the mouth kiss is a derivation of the "Eskimo" kiss. In this genre of kissing, companions rub noses as an act of greeting (9). This meeting of noses creates a proximity that allows olfactory neurons to detect the other person's pheromones (5). Pheromones are an organism's unique scent. They reveal the mood, health, disposition, and recent exploits of the particular individual (9). Thus, pheromones could be used as an evaluation of compatibility as a mate. It is important to note that the "Eskimo" kiss is not exclusive to human beings. In fact, many animals practice this exchange of information (10). When your cat rubs his face against yours, he's sizing you up.

Is it plausible that mouth kissing could have evolved as a means of further testing genetic fitness? Perhaps; body fluids are a pretty intimate aspect of a person, after all. In addition to bacteria, saliva contains immunoglobulin (a compound that binds to bacteria to mark them for disposal by the immune system). Stress and anxiety levels can also be measured in saliva by monitoring the breakdown of noradrenaline (11). In other words, a person can make a pretty educated guess about a potential mate's health just by swapping spit.

Kissing is somewhat of an enigma. In comparison to other aspects of life, scientists know relatively little about the embrace. The theory about kissing originating as a means of data collection (as explained above) is only one of many. Some experts feel that the kiss's roots are more superstitious. There was a belief, at one time, that "the human breath carried the power of one's soul (9)." Thus, kissing was a way for loved ones to exchange this power and merge their souls forever. Although it is tempting to toss this theory aside, it has as much credibility as any other. Remnants of this faith are still seen today: after all, why do you think the bride and groom kiss at the end of a marriage ceremony? Another theory asserts that kissing descends from a prehistoric feeding practice. Frequently, mothers would do as the birds do—chew up food, then push it into their children's mouths (10). Kissing later developed as a way the mother could convey her love for the child. This theory is interesting because it allows for the association of emotion with kissing (10). What began as a symbol of the mother-child connection may have evolved to become the poster child for fondness in all relationships.

Why do these theories vary so vastly from one another? Is kissing that challenging a concept to pin down? It appears so, based on what I have presented to you. The real issue in philematology, however, is whether kisses have a genetic or a cultural origin; the classic "nature" versus "nurture" argument once again rears its ugly head. Scientists cannot formulate accurate "source" hypotheses without knowing whether to look in science or anthropology. Most modern research is taking a step backward to try to solve that conundrum. A German researcher, Onur Hunturkun, spent two years documenting trends in how couples kiss. He found that most couples lean to the right when kissing; he interpreted this as evidence of genetic asymmetries of motor and sensory functions (12). However, he also noted that cultural identity affects the way couples kissed (12). Hunturkun's findings give more insight into kissing's mysterious parentage, yet we are still left in an uncomfortable place. Most couples lean right, which implies a genetic predisposition for kissing to the right. However, whatever codes for this asymmetry probably codes for all motor functions (e.g., right-handedness); kissing just happens to fall under its jurisdiction. Hunturkun also mentions the effect of cultural patterns on kissing. This is further evidence that kissing probably comes from a mostly "nurture" background. Yet, so as not to exclude the theories of origin previously mentioned, it is possible that the "nurture" act may have stemmed from a "nature" need. Without conclusive evidence, philematology continues to be a study of near leads and suggestion.

You don't usually walk out into a field and see two horses engaged in a passionate embrace. So are humans the only species that practices the kooky art of kissing? Surprisingly, no, we are not. Although you will never see two horses making out, you will oftentimes see them smelling one another's heads—the "Eskimo" kiss. Lawrence Katz, a neurobiology researcher who studied mice, found that pheromones are critical for animals to receive information (13). Mice and other creatures have developed very powerful vomeronasal organs to read pheromones in detail. Humans have this ability as well, but to a much lesser extent; our evolution placed emphasis on the sense of sight over the sense of smell. Katz also deduced that pheromones are nature's prevention against inbreeding: "When mice met their genetic twin, certain neurons fired. When they encountered mice from a different strain, different neurons activated (13)." Thus, in animals, kissing serves a vital reproductive function: finding both a responsive and a genetically different mate. Could this research be further evidence that kissing in humans has a reproductive basis as well?

Ingrid Bergman mused that "a kiss is a lovely trick designed by nature to stop speech when words become superfluous (14)." In other words, a kiss is an act that communicates unmistakably without words. The bulk of this paper has been devoted to the scientific aspects of kissing. Yet, there are volumes of emotional and psychological implications behind the practice that I haven't even touched upon. Kissing may have evolved as a way to increase the fitness of a species, but it quickly became intertwined with emotion. It has since become a physical embodiment of intangible qualities like love, camaraderie, and devotion. Because kissing means so much in human culture, we owe it to ourselves to understand it fully. Maybe then we'll understand why marriages that lack kissing usually result in divorce (7).

References

1) Self Written Scene Parody of Robin Hood: Prince of Thieves, prod. by Morgan Creek, dir. by James G. Robinson, 144 min. , Warner Brothers, 1991, videocassette.
2)Kissing, Written by Peta Heskell
3)Fun Facts About Kissing, from HiCards
4)Kissing, from Barbelith Underground Community
5)Can a Kiss be Better than Sex?, Written by John Triggs
6)News: The Science of Kissing, Written by Rob Bhatt
7)Science of a kiss, Written by Raj Kaushik
8)Reasons Why Kissing is Good for You, from CoolNurse
9)First Kisses, Written by David Templeton
10)The Science of Kissing, Written by Edward Willett
11)Kissing—how it all began..., from NZGirl
12)Your Kiss is All Right With Me, Written by Amanda Gardner
13)There's No Mistaking Mouse Lust, Written by Jennifer Thomas
14)Quotations About Kisses, from The Quote Garden


Principles of Neurological Signaling
Name: Jean Yanol
Date: 2004-04-15 20:22:03
Link to this Comment: 9394

<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

In order to understand how the nervous system works, we must research how parts of this system communicate with each other. The nervous system is the most essential part of our anatomy: it influences organ systems, controls motor function, houses our consciousness, and affects many cellular components. Many of this system's actions are carried out through signaling from one part of the nervous system to another. Through an understanding of these signaling methods, we can hope to fix problems that arise in neurological signaling, along with gaining a basic grasp of how our bodies function. Without research on signaling in the nervous system, neurological biomedicine would be at a nearly complete standstill. Neurological signaling has a variety of different components and acts in substantially different ways. Here I will discuss some properties of signaling and why they are important.

Probably the most important signaling apparatus in the nervous system's activities is the ion channel: a water-filled pore in the membrane that can be opened or closed chemically or electrically. An ion channel usually lets only one type of ion flow, such as sodium, owing to the charge of the ion and the charge of the surrounding environment. Chemically, these pores can be opened to permit ion flow by ligands, small molecules that attach to receptor proteins in a membrane and cause a change in pore conformation. For example, in fast synaptic transmission these ligands are the molecules glutamate or gamma-aminobutyric acid (GABA), which allow certain ions to flow through the pore but not others; they are involved in many types of ligand-gated ion channel activity. In other signaling cases, however, ion pores are controlled electrically: a change in the electrical potential across the membrane opens and closes the pore to control ion flow. As ions flow down their concentration gradients through the pore, electrical potentials are changed.
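The link between ion gradients and membrane potential can be made concrete with the Nernst equation, which gives the voltage at which an ion's chemical and electrical gradients exactly balance. The sketch below is illustrative and not drawn from this paper's sources; the concentration values are approximate textbook figures for a mammalian neuron.

```python
import math

# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
# E is the equilibrium potential at which diffusion of an ion down its
# concentration gradient is exactly balanced by the electrical gradient.

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # body temperature, K (37 C)

def nernst(conc_out_mM, conc_in_mM, valence):
    """Equilibrium potential in millivolts."""
    return 1000.0 * (R * T) / (valence * F) * math.log(conc_out_mM / conc_in_mM)

# Approximate textbook concentrations (mM) for a mammalian neuron:
print("E_K  = %+.0f mV" % nernst(5.0, 140.0, +1))   # potassium, near rest
print("E_Na = %+.0f mV" % nernst(145.0, 12.0, +1))  # sodium, well above rest
```

The two results bracket the resting potential: opening potassium channels pulls the membrane toward E_K, while opening sodium channels pulls it toward E_Na, which is why changing a pore's state changes the electrical potential.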

The use of controlled ion permeability to change the membrane's electrical potential and send a signal is called an action potential. Action potentials can send signals very quickly over relatively long distances in the body via the projections of neurons (1). In myelinated projections, the current skips from one node of Ranvier to the next (2). Among other things, action potentials control immediate muscle movements. They are the most immediate form of signaling and are used constantly by the nervous system for somatic control. Action potentials occur due to depolarization of the membrane; therefore the speed of action potential propagation depends on the speed of depolarization. Action potentials also travel better when there is less membrane capacitance, which is the ability of the membrane to store charge (2). When discussing action potentials it is important to discuss resting potential and synaptic potential. Resting potential goes hand-in-hand with action potential: it is the normal potential of the membrane, which can change at any time but is the membrane's baseline state. If the membrane did not have a resting potential, action potentials could not occur. Synaptic potential describes the change produced when a neurotransmitter is released across the synapse once an action potential has reached the end of a neuronal projection. The neurotransmitter released from one neuron travels across the synapse and, if it comes in contact with another neuron, causes changes in that neuron.

While action potentials are clearly a very important signaling method in the nervous system, there are other neurological signaling events that can have a great impact on the body as well. These are generally slower, or have a lesser impact on other systems than action potentials do, and rely more heavily on chemicals in their signaling mechanisms. They also tend to act more locally, meaning that they do not travel as far as an action potential might. These signaling methods include more chemically based approaches, such as interactions between proteins associated with neurons. Chemicals can interact with cells to change the concentration, conformation, activation, or formation of proteins, ions, etc. (3) Different receptors associate with different ligands and produce different responses in the cell. Countless cellular interactions occur in this way, and disruptions in these reactions can cause neurological diseases such as Alzheimer's disease and perhaps schizophrenia, among others.

In short, signaling and chemical interactions are important in understanding how parts of the nervous system interact with each other and how the nervous system interacts with other somatic systems. By gaining knowledge about the types of signaling that occur in the nervous system, we can assess problems with the nervous system and possibly develop treatments, among other uses for this information. However, processes of the nervous system are much more complicated than this, and each reaction and process is unique, which is why the nervous system is still somewhat of a mystery to us today.

References

1)Neurobiology and Behavior, 2004

2)Nelson Lecture 4

3) Helmreich, Ernst J.M. The Biochemistry of Cell Signaling. New York: Oxford University Press Inc., 2001.


Mind Over Body: Studying the Placebo Effect
Name: Mridula Sh
Date: 2004-04-15 22:38:57
Link to this Comment: 9396


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"The power which a man's imagination has over his body to heal it or make it sick is a force which none of us are born without. The first man had it; the last one will possess it." -Mark Twain, 1903 (4).

You've been experiencing shooting pains down your shoulder blades and along the sides of your back for a week, ever since you woke up the morning after that intense rugby game. You decide it's time to see the physical therapist. Twenty minutes into the consultation, he has made note of your history, asked you to perform some maneuvers, and given you instructions to ice and stretch those constricted muscles. "If the pain persists for a couple of days," he says, "take some ibuprofen." On the drive back home, you're already feeling better. Two days later you're back on the field feeling invincible. The answer to this miraculous recovery could lie in the mysterious, highly controversial benefits of the placebo effect.

The placebo effect is "the measurable, observable, or felt improvement in health not attributable to treatment." This effect is caused by administering a treatment that has no intrinsic therapeutic value in the healing process (1),(5). The placebo effect first caused waves in the medical community in the 1950s, when Henry K. Beecher of Harvard University published experimental findings suggesting that a significant number of patients (30-40%) suffering from chronic ailments improved after taking a placebo (5). Over the decades, astounding advancements in medical technology and efficacious procedures have enhanced the quality and longevity of people's lives. Concurrent studies investigating the placebo effect have yielded unexpected results, often lessening the legitimacy of drug treatments, with serious implications in the ethical, scientific, and medical worlds. My research in this area stems from an interest in the neurobiological implications of such studies, with a view towards understanding the psychological vs. physiological aspects of the human brain-body relationship. This paper will investigate the placebo effect, outline some plausible causes, examine the ethical dilemma surrounding these inert substances, and attempt to gain an understanding of the phenomenon through a study of the complex brain/body relationship.

In a study conducted by Tor D. Wager and colleagues, participants were exposed to a series of electric shocks (7). At some point during the experiment, a placebo skin cream was administered to a certain proportion of the participants. About 70% of these participants claimed to feel less pain after the application of the skin cream. Analysis of data collected with a functional magnetic resonance scanner gave evidence of reduced activity in parts of the brain associated with pain perception in participants who felt reduced pain (7),(8). So what triggers these results? The answer might lie in the brain=behavior theory. While the placebo has no visible pharmacological effects, its psychological effects generate expectations in the mind of a patient of a certain consequence (a reduction in pain). This expectation in turn influences the perception of feeling (in this case, analgesia) (7). The brain relies on the sensory input it receives; thus the brain is continually changing in response to (supposedly) altered stimuli. These neurological changes in turn cause behavioral modifications. Thus one might understand the placebo effect as essentially a biological change that results from a change that is largely psychological.

These findings also showed that the 70% who reported a reduction in pain displayed a pattern of brain activity different from those who did not (9). This result suggests a degree of plasticity in the human brain. Studies conducted by Walter A. Brown confirm that depressed patients who respond to placebos differ in their biochemical pathways from those who do not respond to placebo treatments (5). This could explain why only 70% of those treated with placebos in the shock experiment responded with a reduced feeling of pain whereas the other 30% did not.

Neurologists in British Columbia used PET scans to determine the amount of dopamine activity in the brains of patients with Parkinson's disease when given either a placebo or a drug that mimicked dopamine. Results showed that the brains of patients given the placebo responded by releasing as much dopamine as those that got the active drug. (3) If a placebo can induce the same result as an active drug then is there something more powerful than the chemical substance that causes this change? It turns out that the effectiveness of treatment is also a function of the mental approach of the patient receiving the treatment as well as the attitude of the physician administering it. (5)

Studies have shown that the biochemical responses to anti-depressant medication largely depend on the faith and mental outlook of the patient towards the treatment (1). A patient who has high expectations of improvement is more likely to feel better than a skeptic. A thorough consultation with the doctor and the act of undertaking a therapeutic process boost the confidence of the patient and give one a sense of control over a condition that previously seemed hopeless. The alleviation of anxiety and the generation of positive emotions trigger physical changes, such as the activation of endogenous pain-control centers that release endorphins to reduce symptoms of illness (1). This explanation could give one a better understanding of the high success rate of homeopathic (and other alternative) treatments that use natural remedies to cure ailments. Charismatic practitioners make use of the trust and beliefs of their patients to induce the body's own healing processes to bring about a change (1).

If placebos are seen as broadly effective therapeutic devices, then why is their use so controversial? The mysterious power of the placebo effect is responsible for the ethical dilemma it causes. The placebo essentially makes use of the fact that if the brain can be "tricked" into thinking that the brain/body is being treated for an illness then it will trigger the necessary natural biochemical processes to bring about the change. If this is indeed how the placebo works then why are patients administered active drugs? The answer lies in the fact that the use of therapeutic placebos involves deception on the part of the practitioner. (1),(2),(4) It violates the fundamental principles of trust and faith that the doctor-patient relationship is based on. Yet it is ironic that it is this very violation that is responsible for the success of the placebo effect. Studies have shown that patients exhibiting the placebo effect stop doing so once they are told that they are on a placebo. Using the brain=behavior theory to interpret this result, it seems as though the brain in response to new conflicting information sets off a negative psychological feedback process which in turn induces the biological change that is seen as a resistance to the placebo.

Conflicting research results regarding the legitimacy of the placebo effect have also been a cause for concern. Recent research conducted at the University of Copenhagen produced results that call its authenticity into question (1),(3). While some trials showed results, these successes were not significant enough to prove the powerful clinical effects claimed of these inert "drugs." Thus, at present, the dearth of scientific knowledge regarding placebos, the lack of awareness of the manner in which they bring about change, and other ethical issues surrounding their use have left the phenomenon of the placebo effect in a shroud of controversy. Yet this need for caution must not negate the genuine findings of clinical research or dismiss entirely the therapeutic placebo procedures used by practitioners.

We have become a pill-popping society that has succumbed to the manipulations of large pharmaceutical companies. Intriguing phenomena such as the placebo effect reveal the healing power of the brain and environment (4). A comprehensive understanding of this phenomenon requires in-depth knowledge of the complex brain and its neural mechanisms, a process that is still in its infancy. Yet the results of studies are promising, and who knows: sometime in the future we just might be able to make those dummy pills work as well as drugs.

References

1)The Skeptic's Dictionary, the placebo effect.
2)Tamar Nordenberg, The Healing Power of Placebos. FDA Consumer magazine, January February 2000
3)W. Wayt Gibbs, All in the Mind. Fact or Artifact? The Placebo Effect May be a Little of Both . Scientific American, 2001
4)Kenneth E. Legins, Is Prescribing Placebos Ethical? Yes. American Council on Science and Health (1997 & 1998)
5)Walter A. Brown, The Placebo Effect. Scientific American (1997)
6)American Psychological Association press releases, Placebo Effect Accounts for Fifty Percent of improvement in Depressed Patients Taking Antidepressants.
7)Tor D. Wager, James K. Rilling, Edward E. Smith, Alex Sokolik, Kenneth L. Casey, Richard J. Davidson, Stephen M. Kosslyn, Robert M. Rose, Jonathan D. Cohen, Placebo-Induced Changes in fMRI in the Anticipation and Experience of Pain. Science Magazine (February 2004)
8)Dennis O'Brien, Brain response to placebo found: Study says pain reaction differs if patient believes treatment is effective. The Baltimore Sun Company (February 20, 2004)
9)Jerome Burne, Cured by an Imposter. The Times (London) (April 10, 2004)


Creutzfeldt-Jakob's Disease - The Misunderstood Di
Name: Katina Kra
Date: 2004-04-19 10:20:58
Link to this Comment: 9433


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


It comes inconspicuously: through family genes, random mutation of bad luck, or even contaminated body masses. Creutzfeldt-Jakob disease (CJD), named after the two men who identified the pattern of its symptoms in the 1920s, is a disease in which the brain rapidly deteriorates and holes begin to form within its structure (1,2,3). It is the long incubatory period of CJD, though, which makes it particularly hard to track and treat. The relation CJD has to the European epidemic of Bovine Spongiform Encephalopathy (BSE) is crucial: BSE caused the deaths of nearly 200,000 head of cattle, and a new variant of the disease emerged (3). Despite new scientific understanding of both diseases, a great deal of misinformation has been spread, and the fear of Mad Cow Disease has permeated the culture of Europe and many other countries. There is a difference between these two neurological degenerative disorders - BSE-related vCJD and CJD - but this distinction is rarely clear to those without medical training; the relative lack of information, combined with the disease's grim diagnosis, has led many citizens and governments to a skewed perception of what it truly is.


In the early 1990s, when scientists and doctors began to notice a pattern of spongiform illness in patients in England, they looked at CJD as a possible cause of this believed "outbreak." However, when former cases of CJD were compared to the new cases, a difference was noticed. The age of the afflicted had dropped drastically: those with the "new" illness were young, primarily under the age of 40, whereas CJD typically occurred in older populations, predominantly those over fifty years of age (1, 2, 3, 6). Even the course of the variant was different; it would typically last longer within a patient than CJD, though it shared similar neurological symptoms. In the initial stages, CJD patients typically developed neuromuscular coordination problems as well as personality changes. In vCJD, by way of contrast, psychological symptoms such as depression and anxiety appear first, later progressing to neurological degeneration (2). Both have similar neurological consequences, but researchers determined that this vCJD was not necessarily the natural or identical genetic type as classic CJD.


Neurologically, CJD is one of the most devastating diseases of the brain. Photographs of CJD patients' brains at autopsy paint a grim picture: a brain that has literally been eaten away, pockmarked with holes. (9) Symptoms begin suddenly after the disease has remained dormant anywhere from 5 to 40 years, and the brain and body degenerate rapidly. The early stages bring mild dementia and possible psychological problems, with confusion and memory loss the most common, along with loss of muscular control. The disease quickly proceeds to dysesthesia, a condition which causes pain sensations in areas such as the face or limbs, along with severe mental impairment and the muscle spasms known as myoclonus. Often, CJD patients lose their ability to speak and much of their immunity, becoming susceptible to illnesses such as pneumonia. At the peak of the symptoms, those suffering from the disease lose the ability to control themselves physically, suffer severe dementia or mental impairment, and commonly lapse into comas. (2) It is this rapid downward progression that makes the disease so lethal: death generally occurs within one year of the onset of symptoms for CJD, and approximately two years for vCJD.


Treatment for the disease remains elusive; there is no physical way to prevent the brain from deteriorating, and all medical professionals can do is lessen the symptoms and the pain associated with them. Even diagnosing CJD or vCJD is still primitive: a brain biopsy, or physical examination of the brain at autopsy, has been the only proven method. A promising new method of detection using spinal cord fluid has emerged, but it detects the disease only after visible symptoms have developed, by which point preventative treatment is already too late. In a recent study, scientists found at autopsy that nearly 13% of patients diagnosed with Alzheimer's disease were actually suffering from CJD, showing that similar symptoms can arise from many neurodegenerative diseases. (8) Even with modern technology and medicine, CJD and its variants can still not be cured, nor can they always be detected or diagnosed.


The disease itself, though, is not what one typically associates with transmissible diseases; it is neither a virus nor a bacterium. In early studies it was believed to be a "slow virus," meaning there existed an extended period of time between infection and the appearance of visible symptoms. More recently, scientists have found that abnormally shaped prions, protein structures within the body, are the probable cause of transmissible spongiform encephalopathy (TSE) diseases. (4) It is when a prion becomes mutated and misfolded, and thereby infectious, that a TSE occurs. It is now believed that these prions, random mutations of normal proteins, create the degeneration within the brain, forming the holes and gaps that are so indicative of TSE. (1, 5)


CJD has progressed relatively rapidly through human populations. There are three different ways in which one can become infected. In 85 to 90% of all cases, a sporadic and random mutation leads to the disease. There is also a genetic link to CJD: approximately 5 to 10% of reported cases occur in someone with a familial connection to the disease, though the altered prion must be present in the gametes for it to be passed along. The last and most uncommon mode of transmission, known as iatrogenic, accounts for about 1% of CJD cases. (1) This occurs through direct contact with CJD-contaminated instruments or body matter. Formerly, cornea or dura mater grafts, the use of natural growth hormones, and unsterilized surgical tools could transmit the disease. Now, because of more controlled and sanitary conditions in medical facilities and the use of synthetically derived hormones, this route of transmission has become increasingly unlikely. Even counting all of these possibilities, health associations still estimate that CJD occurs in only approximately one out of every million people. (3)
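The arithmetic behind these figures is easy to make concrete. The sketch below uses only the incidence and route percentages cited above (midpoints for the ranges); the 60-million example population is a hypothetical round figure, not a statistic from any source.

```python
# Rough expected-case arithmetic using the figures cited in the text.
# The population size is hypothetical and chosen only for illustration.

ANNUAL_INCIDENCE = 1 / 1_000_000   # ~1 case per million people per year (3)

# Share of cases by transmission route (midpoints of the cited ranges)
ROUTE_SHARE = {
    "sporadic": 0.875,    # 85-90% of cases
    "familial": 0.075,    # 5-10% of cases
    "iatrogenic": 0.01,   # ~1% of cases
}

def expected_cases(population: int) -> dict:
    """Expected yearly CJD cases by route for a given population."""
    total = population * ANNUAL_INCIDENCE
    return {route: total * share for route, share in ROUTE_SHARE.items()}

# For a hypothetical country of 60 million people:
for route, n in expected_cases(60_000_000).items():
    print(f"{route:>10}: ~{n:.1f} cases/year")
```

Even with every route counted, the totals stay in the tens of cases per year for a large country, which is the sense in which CJD is rare.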


For variant CJD, however, the suspected method of transmission is far more frightening and unusual. When an outbreak of the prion-related disease Bovine Spongiform Encephalopathy (BSE) occurred in the mid-1980s in Great Britain, the only concern was removing the cattle diagnosed with the disease. But when a cluster of human TSE cases appeared in early 1996, scientists and residents became increasingly worried about the health of the population. As the two forms of spongiform disease were compared, a strong similarity between their prions was seen, and a theory developed as to how the variant could be explained: consumption of beef products from cattle tainted with BSE could result in the formation of a spongiform illness in humans. The cattle, in turn, could develop BSE from eating feed containing scraps, unfit for human consumption, from the nervous systems of ruminants and other large grass-eating mammals. (3, 6) Despite the absence of significant or direct proof that this was the cause of the illness, Great Britain and Europe erupted into a frenzy of fear. Imports of British cattle were banned, and people stopped eating beef for fear of contracting "Mad Cow Disease." Yet according to the statistics, only 140 to 155 cases of actual vCJD have been confirmed or suspected in the population. (7)


Yet even with these relatively low numbers, comparable to the infection rate of classic CJD, Europe, Japan, and the United States have all taken extraordinary measures to "prevent" transmission of the disease. Sanctions on the importation of beef and on blood donations have been put into place to "reduce the number of cases." By restricting donations from people who have spent three or more months in Europe since 1980, the United States and other countries are putting themselves at risk for a blood shortage, even though CJD and vCJD have not been proven transmissible through blood transfusion. (2) And because of the long incubation period of both forms of CJD, and the inability of scientists to detect the disease before symptoms appear, these preventative measures still may not protect populations from the small risk the disease poses.


The undetectability and lethality of CJD and vCJD are what have created the uproar within many cultures and countries, leaving behind fear and confusion about what the disease truly is. With so much misinformation still circulating freely, public fears have not subsided, prompting drastic actions such as the mass slaughter of cattle, and many restrictions remain on the cattle industry and on blood donation because of the risk of CJD or vCJD. By understanding what the diseases are, the differences between the original and variant forms of CJD, and the transmission and symptoms of both, the public's misconceptions can be corrected and its worries calmed. This goal of the medical community, to inform the public, has been a difficult task, however. Many people are still not ready to accept what CJD and vCJD are and how they occur, and will not change their fixed beliefs regarding CJD, allowing the disease to remain an issue of global politics, the media, and what the public would like to believe.


References


1)NORD Site; The National Organization for Rare Disorders site, with extremely detailed information.

2)Creutzfeldt Jakob's Voice; The CJD Voice's site, with information and many interesting links.

3)WHO Site; The World Health Organization's site, with information regarding the global impact.

4)Kuru and TSE; A site describing Kuru, another TSE associated with cannibalism.

5)Encyclopedia definition; The encyclopedia definition of what prions are.

6)Massachusetts Health Department; The Massachusetts state information guide to CJD and Mad Cow Disease.

7)Statistics and Numbers of CJD; The statistics of CJD prevalence.

8)Alzheimer's and CJD; A site describing the misdiagnosis of CJD as Alzheimer's.

9)Photograph comparing brains; A photograph of a brain suffering from a TSE compared to a normal brain.


On Heroin
Name: Ariel Sing
Date: 2004-04-19 17:43:00
Link to this Comment: 9443


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Heroin: one of the most potent drugs on Earth, addiction, poverty, tweaking, Mingus, track marks, creativity, dealing, poetry, blow, Warhol, needles, glamour, oblivion, AIDS, music, hallucinations, Coltrane, junkie, anesthesia, lines, joy, smack, the Velvet Underground, rush, brain dead, genius. Death. Life.

Heroin: no matter who you are, how you have been raised or what experiences you have had in life, when someone mentions it, you have a reaction, a brief flood of words and accompanying images. Maybe you are a recovering addict, maybe your best friend does it, maybe you campaign against it, or maybe you listen to the music and watch the movies. Even if you are none of these, or all of them, you cannot have escaped the inevitable stereotype that surrounds this deadly miracle drug: try it once and you become an addict for life.

In case you have not heard people talking about the instant addictive powers of heroin for yourself, here are a few examples perpetuating the myth:
"And once you try heroin, it's almost impossible to get off it without help." (1)
"After only one 'try', a heroin user can become addicted immediately." (2)
"All they have to do is try heroin once and that's all it takes." (3)
"Even trying heroin once can spell addiction"(4)

Before explaining how heroin affects the brain, it seems necessary to describe the symptoms of use: rush, pleasure, euphoria, nausea, comfort, lack of pain, happiness, drowsiness, warmth, heaviness, constipation, floating, blurriness, contentment. (5)

"...the cold wash of anesthesia hit me it swept over me, a wave that started at the tip of my, rushing across my face to my head, running down my neck to my chest, crashing into a warm golden explosion in my stomach, my groin, a blessed sensation beyond the peak of orgasm and relief of nausea, as every muscle in my body relaxed and my head lolled gently my shoulder, every sense unwinding, unburdened of the crushing weight of pain I never even knew that I had: the rush, the wave, death, heaven, completion. For hours and hours. The hit. Sensual ultimatum...." (6)

And the symptoms of withdrawal: goose bumps, watery eyes, runny nose, tremors, hallucinations, panic, chills, nausea, cramps, diarrhea, vomiting, (7) drug craving, kicking spasms, bone pain, insomnia. (8)

"Relinquishing junk. Stage one, preparation. For this you will need one room which you will not leave. Soothing music. Tomato soup, ten tins of. Mushroom soup, eight tins of, for consumption cold. Ice cream, vanilla, one large tub of. Magnesia, milk of, one bottle. Paracetamol, mouthwash, vitamins. Mineral water, Lucozade, pornography. One mattress. One bucket for urine, one for feces and one for vomitus. One television and one bottle of Valium. Which I've already procured from my mother. Who is, in her own domestic and socially acceptable way also a drug addict. And now I'm ready. All I need is one final hit to soothe the pain while the Valium takes effect." (9)

And now for some science: heroin is derived from Papaver somniferum, the opium poppy plant; as many people probably know, another of its products, the poppy seed, is commonly found on morning snacks. (As an interesting note: somni derives from the Latin somnus, sleep, and ferum from ferre, to bear or bring.) When the secretion from this poppy is dried it becomes opium, whose major component is morphine. When this alkaloid is combined with acetic acid it forms heroin, technically diacetylmorphine, by "acetylation of the phenolic and alcoholic OH groups." (10)

Contrary to popular belief, heroin itself has very little effect on the central nervous system. It is primarily a transportation mechanism for the highly potent morphine at its core. Imagine: you have a tourniquet wrapped around your biceps so that your veins will rise. The hypodermic needle has been filled with the heated heroin. You insert the spike into the vein and the heroin rushes through your bloodstream. By the time it hits your blood-brain barrier, it has already been converted through hydrolysis to 6-mono-acetylmorphine (MAM). This compound, unlike pure morphine, is lipid-soluble and races into your brain with almost no delay. Now the MAM rapidly breaks down into morphine, and the rush is over, but the high has just begun. (11)

Once the morphine is in the brain it can go to work. One of the primary ways in which heroin creates its effect is by mimicking the natural opiate-like neurotransmitters in the brain (the endogenous opioids, which include the endorphins). Receptors for these natural opiates, which will accept both the natural and artificial varieties, sit on neurons that release the neurotransmitter GABA. These GABA neurons are involved in inhibiting the release of dopamine.

Normally the GABA neuron receives a signal and releases a large number of neurotransmitter molecules; these bind to receptors on the dopamine neuron and allow the Cl¯ waiting in the synaptic cleft to enter it. This inhibition keeps the neuron releasing only a small, specific amount of dopamine, which in turn binds to another neuron and leads to "normal" feelings of contentment or pleasure. (12)

The presence of morphine alters this pattern. When morphine binds to the opiate receptors on the GABA neuron it represses the release of GABA; this in turn reduces the amount of Cl¯ allowed into the dopamine neuron. Without the Cl¯ to inhibit it, the neuron releases a large amount of dopamine, leading to the feeling of euphoria and supreme contentment. (13)
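The disinhibition chain just described, morphine suppresses GABA, so less inhibition reaches the dopamine neuron, so more dopamine is released, can be caricatured in a few lines of code. This is a deliberately crude toy model: the functions, the baseline value of 0.8, and the unitless numbers are all invented for illustration and have no physiological meaning.

```python
# Toy sketch of opiate disinhibition: GABA activity normally caps dopamine
# release; morphine suppresses GABA, so dopamine release surges.
# All numbers and the linear/hyperbolic forms are illustrative only.

def dopamine_release(gaba_activity: float, max_release: float = 100.0) -> float:
    """Dopamine released, scaled down by GABA inhibition (0..1)."""
    assert 0.0 <= gaba_activity <= 1.0
    return max_release * (1.0 - gaba_activity)

def gaba_activity(morphine_level: float, baseline: float = 0.8) -> float:
    """Morphine binding to opiate receptors on the GABA neuron lowers its
    activity; at high morphine levels the inhibition approaches zero."""
    return baseline / (1.0 + morphine_level)

normal = dopamine_release(gaba_activity(morphine_level=0.0))  # baseline state
high = dopamine_release(gaba_activity(morphine_level=4.0))    # opiate present
print(f"normal release: {normal:.0f}, under morphine: {high:.0f}")
```

The point of the sketch is only the direction of the effect: lowering GABA activity raises dopamine release, which is the "more dopamine, more euphoria" step in the paragraph above.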

The reason that coming down after taking heroin is so painful is that you have used up a huge quantity of dopamine in one rush; your body must make more before it can begin to release it normally again.

When a person becomes an addict, this problem only becomes worse, each use of heroin adding to the last. Finally, when the cells that create dopamine are put under a significant amount of stress, they will start to shut down, producing less dopamine. This is one of the reasons that withdrawal from heroin is so extreme. (14)

As with most experiences, once is not enough to make you an addict. The technical definition of an addict is "someone who is physiologically dependent on a substance [and] abrupt deprivation of the substance produces withdrawal symptoms." (15) To become "physiologically dependent" means that your body needs the drug to function; without it you will go through withdrawal. It seems that the chemical changes that cause withdrawal come only when heroin has been used so much that the body cannot function when supplied with only its normal level of dopamine. If heroin is taken just once, the user will suffer a "low" afterward, because a large amount of dopamine has been used up; but their neurons have not become damaged or adjusted to the drug and do not require it to work, so the person is not addicted.

None of this analysis includes psychological need. It seems quite possible that a person might try heroin just once and then continue to take it, eventually becoming addicted, because they believe that they cannot live without the feeling it creates for them. However, it is important to realize that just because a person feels a need for the drug, it does not follow that the body has become addicted to, or dependent upon, that drug.

Thus, while many people will assert that heroin can be instantaneously addictive, they are incorrect. Heroin is highly addictive and can cause serious problems for people who become addicts. This, however, does not justify the spreading of incorrect information. All people should be fully educated, and then allowed to make their own decisions. We cannot protect people from the truth; they will learn it, and, as adults, they must make their own choices.

While researching the effects of heroin, it seemed that no one was fully able to describe how using heroin feels, except for Lou Reed and the Velvet Underground in the song, aptly titled

Heroin:

I don't know just where I'm going
But I'm gonna try for the kingdom, if I can
'Cause it makes me feel like I'm a man
When I put a spike into my vein
And I'll tell ya, things aren't quite the same
When I'm rushing on my run
And I feel just like Jesus' son
And I guess that I just don't know
And I guess that I just don't know

I have made the big decision
I'm gonna try to nullify my life
'Cause when the blood begins to flow
When it shoots up the dropper's neck
When I'm closing in on death
And you can't help me not, you guys
And all you sweet girls with all your sweet talk
You can all go take a walk
And I guess that I just don't know
And I guess that I just don't know

I wish that I was born a thousand years ago
I wish that I'd sail the darkened seas
On a great big clipper ship
Going from this land here to that
In a sailor's suit and cap
Away from the big city


Where a man cannot be free
Of all of the evils of this town
And of himself, and those around
Oh, and I guess that I just don't know
Oh, and I guess that I just don't know

Heroin, be the death of me
Heroin, it's my wife and it's my life
Because a mainer to my vein
Leads to a center in my head
And then I'm better off and dead
Because when the smack begins to flow
I really don't care anymore
About all the Jim-Jim's in this town
And all the politicians makin' crazy sounds
And everybody puttin' everybody else down
And all the dead bodies piled up in mounds

'Cause when the smack begins to flow
Then I really don't care anymore
Ah, when the heroin is in my blood
And that blood is in my head
Then thank God that I'm as good as dead
Then thank your God that I'm not aware
And thank God that I just don't care
And I guess I just don't know
And I guess I just don't know


References

1 ) Heroin: How Big is the Problem? , Channel 6 News: WJAC, 2003.

2 ) Goal One - Education , A Heroin Dealer a Day, February 23, 2004.

3 ) Fighting Drugs: Mother' Fears, Sorrows, Regrets , Chesterton Tribune, March 22, 2004.

4 ) Heroin Reaches the Well-To-Do Adolescent Population , Medscape Special Report, November 12, 2002.

5 ) Heroin , Drug Info Clearinghouse.

6 ) Carnwath, Tom, and Ian Smith, Heroin Century, London and New York: Routledge, 2002, 98.

7 ) Heroin Withdrawal , Narconon Southern California.

8 ) Info Facts: Heroin , National Institute on Drug Abuse, June 25, 2003.

9 ) Memorable Quotes from Trainspotting , International Movie Database.

10 ) Platt, Jerome J., and Christina Labate, Heroin Addiction: Theory, Research and Treatment, New York: Wiley-Interscience Publications, 1976, 48.

11 ) Platt, Jerome J., and Christina Labate, Heroin Addiction: Theory, Research and Treatment, New York: Wiley-Interscience Publications, 1976, 52-53.

12 ) How Drugs Affect Neurotransmitters: Opiates , The Brain from Top to Bottom.

13 ) How Drugs Affect Neurotransmitters: Opiates , The Brain from Top to Bottom.

14 ) The Science of Addiction , Somerset Medical Center, February 2003.

15 ) Addict , Dictionary.com, 1997.


Right Brain, Wrong Body:
The (Trans)Sexual Hy

Name: Emily Haye
Date: 2004-04-20 11:03:16
Link to this Comment: 9479


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Transsexuality, like homosexuality before it, is defined as a disorder by the American Psychiatric Association. The DSM-IV lists various criteria that one must meet to be diagnosed with what it calls "Gender Identity Disorder." (1) But transsexuality, also known as "gender dysphoria," is not a medical condition. It is not a disease, nor a malfunction of the body. The body of a transsexual operates normally, just out of sync with itself: the brain operates as one gender while the body operates as the other. The result is a fully functional individual who feels trapped in the wrong body. This is quite a phenomenon; for some reason, the brain and body do not correlate. Why? What can this disjunction teach us about the brains and bodies of people who are not transsexual? In what ways can transsexuality inform our thinking and understanding of the brain in general?

Gender and sex are often used interchangeably, but their distinct meanings are important in the study of transsexuality. Sex is a label assigned at birth based upon one's genitalia, and further physically defined by genetics (XX v. XY) and the gonads: male or female. Gender, or gender identity, on the other hand, is a self-assigned condition: one identifies as a man or as a woman. Usually the assigned sex and experienced gender are one and the same. A person with a penis and testes identifies as a man, whereas a person with a vulva, vagina, and ovaries identifies as a woman. For transsexuals, however, this is not the case. They are one gender trapped in the body of the opposite sex.

What does this mean, "trapped in the body of the opposite sex"? It implies that the self is not the body, because the self feels right while the body feels wrong. The self must be the brain, then, the means by which the body is experienced. If the brain is the right sex, does this mean that male-to-female transsexuals (MTFs), individuals with a male sex who identify as women, have female brains? Was the wrong one grabbed off the shelf during assembly?

If only it were that simple. It's not, of course. The brain is overwhelmingly complex, not just in its function but also in its development. Countless factors affect the adult brain, from the womb to the present moment. In the past two decades, however, neuroscientists have begun to uncover conclusive information about the brain's gender that begins to explain transsexuality.

If brain=behavior, then the brain must be sexually dimorphic. Men and women don't act the same, so naturally their brains aren't the same. Overall, the brains of men and women are similar; the dimorphism is of particular structures. A series of studies published between 1985 and 2001 by scientists of the Netherlands Institute for Brain Research document data on the sexual dimorphism of one of these structures, the hypothalamus, and its implication in transsexuality.

In 1985, D.F. Swaab and E. Fliers published their data on the volume and cell number of a region of the human hypothalamus known as the sexually dimorphic nucleus of the pre-optic area, or SDN-POA. Post-mortem analysis of the brains of 13 men and 18 women, between the ages of 10 and 93, concluded that the male SDN-POA is on average 2.5 times larger by volume than its female counterpart and contains an average of 2.2 times as many cells. No function was attributed to the SDN-POA at the time of publication, but the authors noted that "it is located in an area essential for gonadotropin release and sexual behavior in other mammals." (2)

Conclusive studies were published in 1995 and 2000 correlating the size of another hypothalamic region, the central nucleus of the bed nucleus of the stria terminalis (BSTc), with gender identity. The 1995 study determined that, regardless of sexual orientation, the BSTc was significantly larger in individuals identifying as men than in those identifying as women. For the first time, MTFs were also studied, and their BSTc sizes fell in the female rather than the male range. (3) The small sample size (only six MTF transsexuals) combined with the clear results implies that the trend of female-sized BSTc's in MTFs is a strong one, as a subtler trend would not have appeared in so small a sample. (4) The 2000 study confirmed the female-ness of MTF BSTc's. Again hetero- and homosexual men, heterosexual women, and MTFs were studied, this time with attention paid to the quantity of a particular cell type in the BSTc, and again the MTF data fell within the female range. That study also published the first data on a female-to-male transsexual (FTM) hypothalamus, which fell within the male range. (5) These studies answer, at least in part, the question of what it means to have the right brain but the wrong body: the twelve MTFs and one FTM all had BSTc's that correlated with their gender identity rather than their physical sex.

A myriad of important and useful questions are raised by these findings. There are major implications for two central ways that we, as a class, think about the brain. The first of these is the thus-far supported hypothesis that brain=behavior. (6) The second is the existence of an entity known as the I-function, one of the many interconnected "boxes" within the brain. (6)

At first, my thinking was that because psychotherapy has no effect on transsexuality, it is a case in which brain does not equal behavior, or perhaps one in which the brain is larger than the reaches of behavior. This, however, is not the case. The behavioral practices of psychotherapy act on the cerebral cortex, so of course they would have no effect on transsexuality, which lies, we think, in the hypothalamus. Yet other behavior, outside the setting and practices of psychotherapy, has no effect on transsexuality either. Oftentimes transsexual people attempt to assume the gender role, the expected social and cultural practices, associated with their assigned physical sex. Some are able to come to terms with their feeling of gender dysphoria and live in the opposite gender role, while others cannot. I found nothing to suggest that a transsexual person has ever been "cured," that is, able to change their gender identity; it is always the gender role and/or physical sex that is modified to match the gender identity. Regardless, altered behavior is not changing the brain: assuming the opposite gender role does not alleviate the gender dysphoria and therefore does not alter the hypothalamus. Why not? Why does brain=behavior not apply here? At this early point in the study of the brain it is not possible to answer this question.

Another brain/behavior question is one of the chicken and the egg: Did the brain come first, or did the behavior? In other words, does the structure of the brain lead to the experience of transsexuality, or does the experience of transsexuality, created by social and other non-biological factors, influence brain structure? (7) The 1995 and 2000 studies discussed above lend some insight. Brain material from postmenopausal women, as well as from castrated and non-castrated MTFs, was used, and these samples did not differ statistically from their groups, implying that adult levels of sex hormones do not influence the structure of the BSTc. (3), (5) The structure must then have been determined developmentally, and it was some altered exposure or sensitivity to androgens (male sex hormones) in prenatal and early postnatal development that caused the smaller, female-like BSTc in the MTFs. (8), (9)

Our understanding of the I-function is challenged by these studies as well. We recently concluded that the I-function must reside in the neocortex: animals with neocortexes seem to display some degree of I-functionality, whereas animals without do not. (6) But we never determined what exactly the I-function is. It is the part of Christopher Reeve that cannot move his leg, although the leg can move. But how much of Christopher Reeve is it? Is it all of Christopher Reeve, as a self? The role of the BSTc in (trans)sexuality says "no." The BSTc is part of the hypothalamus, not the neocortex, and so is not a part of the I-function. And yet it is responsible for a huge part of the self, a part so large that people undergo sex-change operations to bring their bodies in line with what the BSTc says. So self, that elusive entity, must be more than the I-function.

Transsexuality, though incomprehensible to those of us for whom gender and sex are aligned, is telling us a lot about our aligned selves. From recent studies, we know that our behavior can't always change our reality, whatever many shrinks say. We know that our gender identity is established at a very young age, even, to some degree, in the womb. We know that our gender identity, a huge part of who we are, lies at least somewhat in a tiny bundle of nerves in a structure so deep in our brains we can't even point to it. We know that our neocortex, which we so prize as a sign of human intelligence and consciousness, the seat of the psyche and the mind, isn't the sole seat of our selves. Our selves are more spread out, residing in part in a region of the brain that we share with rats, gerbils, and guinea pigs. And while we know these things, we don't know everything. In short, those of us for whom the distinction between gender and sex is not immediately important, for whom the two correlate, take a lot for granted. And we are all a lot more complicated than we think.

References

Cited References

1)Diagnostic and Statistical Manual of the American Psychiatric Association, Fourth Edition, 1994.

2)"A Sexually Dimorphic Nucleus in the Human Brain.", Swaab, D.F., and E. Fliers. Originally published in Science, New Series, Vol 228, No 4703 (May 31, 1985), 1112-1115.

3)"A Sex Difference in the Human Brain and Its Relation to Transsexuality.", Zhou, J.N. et al. Originally published in Nature 378: 68-70 (1995).

4)genderpsychology.org

5)"Male-to-Female Transsexuals Have Female Neuron Numbers in a Limbic Nucleus.", Kruijver, Frank P.M. et al. Originally published in The Journal of Clinical Endocrinology and Metabolism, Vol 85, 2034-2041 (2000).

6)Grobstein, Paul. Class Lecture/Discussion, Biology 202: Neurobiology and Behavior. Bryn Mawr College, Spring 2004.

7)"The Chicken-and-Egg Argument as It Applies to the Brains of Transsexuals: Does It Matter.", Breedlove, Marc. At genderpsych.org

8)"A Role for Ovarian Hormones in Sexual Differentiation of the Brain.", Fitch, Roslyn Holly and Victor H Denenberg Originally published in Behavioral and Brain Sciences, 21: 311-352 (1998).

9)"Sexuality: Gender Identity.", Ghosh, Shuvo et al.

Other Interesting Resources

Gender Identity Research and Education Society (gires), Britain.

Sex Differences in the Brain, by Dr. Doreen Kimura.

Transsexuality: An Introduction.

A collection of links to transsexual information, both scientific and personal

Notes on Gender Identity Disorder by Anne Vitale, Ph.D.

"Structural and Functional Sex Differences in the Human Hypothalamus.", Swaab, Dick F. et al. Originally published in Hormones and Behavior 40: 93-98 (2001).


Irritable Bowel Syndrome and Hypnosis
Name: Kimberley
Date: 2004-04-20 12:50:24
Link to this Comment: 9485


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Irritable bowel syndrome (IBS) is a disorder fraught with controversy. Its cause is unclear, and a cure for all those who suffer from the syndrome has yet to be found. Yet it cannot be ignored: it affects 10% to 17% of the population, (1) (2) and billions of dollars go to physicians' visits, prescriptions, and lost workdays every year because of IBS. (3) Diagnosis of IBS mostly involves eliminating any organic cause for the symptoms, which include relief after defecation, diarrhea, and constipation. (1) Once enough testing has been done to decrease the likelihood that some other infection or disease is the cause of the symptoms, IBS is diagnosed.

As there is no known outward cause for the disorder, it must involve the body itself. This is apparent from the current modes of treatment: avoiding foods that may aggravate symptoms, taking medications, and hypnotherapy. (3) Avoiding certain foods may be related to individuals' allergies to those foods, which may have a genetic link. Drugs that reduce constipation, such as Tegaserod, or that reduce diarrhea, such as Alosetron, have also proven effective in certain trials, as have tricyclic antidepressants. (1) (3) The literature shows, however, that hypnotherapy is the most effective form of treatment for those diagnosed with IBS. (1) (2) (3) (4) (5) (6)

The fact that hypnotherapy shows positive results may be related to the high comorbidity of IBS and stress in patients. (1) Stomach discomfort is a common symptom of nervousness and anxiety. The author can recall many instances when she was in a stressful situation, such as before an interview or before presenting a project in class, and feelings of an upset stomach would arise. It is through such examples that one becomes aware of how close the connection between thoughts and physical responses is. The upset stomach was a direct result of the nervousness. She could only get rid of that symptom by reassuring herself and decreasing the number of worrisome thoughts. The reduction of anxiety reduced the discomfort in the bowels.

Those with IBS may have a hypersensitivity to this symptom of stress. (6) Just as some feel hungry when nervous while others lose their appetite, people with IBS may have more severe bowel problems connected to stress. It is unclear whether this is true of all IBS patients, especially those who do not see a doctor for treatment. However, for those with stress-related or stress-induced IBS, hypnotherapy has proven to be an effective means of treatment. (6)

Research on hypnosis has reduced much of the mystery surrounding the process. As Galovski and Blanchard (2) report, all participants in their study accepted hypnosis as a satisfactory treatment option for IBS. This overall acceptance may be generalized to the comfort level of the United States as a whole with regard to hypnosis. Today there are standardized ways to hypnotize an individual, with many free scripts and instructions available both in print and on the internet. (7) The basic theme underlying most of these methods is attaining a relaxed state. Generally the person stares at a fixed point and listens to the hypnotist (either an actual person or a recording) recite a script, which induces relaxation. (7) (9)

The Stanford Hypnotic Susceptibility Scale (8) is a way to determine how deeply a person can be hypnotized. There are 12 items, or tasks, the hypnotist asks the person under hypnosis to perform. In one example, the hypnotist tells the person that he or she has no sense of smell. If the person were very susceptible to suggestion under hypnosis, he or she would not react to a putrid smell; someone who is not as easily influenced under hypnosis would draw back from the source of the smell. (9) The more times a person under hypnosis reacts in line with the given suggestion, the more hypnotizable the person is. A person who is very hypnotizable would score a 12, meaning he or she acted in accordance with every suggestion, while a person who is not at all susceptible to hypnosis would score a zero. The general population scores in the range of 5 to 7. (9)
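The one-point-per-suggestion tally described above can be sketched in a few lines of code (a hypothetical illustration only; the function name and example responses are invented and are not drawn from the actual published scale):

```python
# Sketch of Stanford-scale-style scoring: one point for each of the
# 12 suggestions the subject acts in accordance with, for a 0-12 total.
def hypnotizability_score(responses):
    """responses: list of 12 booleans, True = acted in line with suggestion."""
    if len(responses) != 12:
        raise ValueError("The scale has exactly 12 items")
    return sum(responses)

# A subject who complied with 6 of the 12 suggestions falls in the
# typical population range of 5 to 7.
example = [True, False, True, True, False, True,
           False, True, False, True, False, False]
print(hypnotizability_score(example))  # 6
```

The scale itself involves trained administration and scoring criteria per item; this sketch only captures the arithmetic of the final score.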

Hypnotherapy is used as a way to make people more aware of their bodies so that they may have better control over their general functioning. Hypnosis has been studied in acute pain management and has been shown to reduce perceived pain in moderately and highly hypnotizable people. (10) It has also been used to relieve chronic pain such as that experienced by cancer patients by improving distraction techniques such as visualization. (6) (11) The person focuses attention away from pain and on to pleasant images thus reducing the experience of pain.

Hypnotherapy for sufferers of IBS seems to have the least effect on those whose symptoms involve diarrhea. (2) (5) This may be due to the types of suggestions given to patients under hypnosis. For example, in the study conducted by Galovski and Blanchard, imagery such as easily flowing water, equated with digestion and intestinal function, was used. (2) This sort of imagery would be useful if the patient had symptoms of constipation; for someone suffering from diarrhea, however, the digestive tract already functions too much like this image. Listening to such suggestions would not reduce symptoms and could potentially exacerbate the problem.

One study showed that most of the physiological effects of IBS remained, despite self-reports of improved symptoms during and after hypnotherapy. (5) This result is in line with other studies that looked at perceived distress on the body during hypnosis. (10) (12) Hilgard (10) showed that acute pain thresholds could increase under hypnosis but that the physiological responses to stimuli, such as heart rate, are similar to those not under hypnosis and experiencing the same painful stimuli.

Likewise, Williamson et al. (12) carried out an experiment involving bicyclists and perceived physical effort. They found that hypnotized bicyclists had increased blood pressure and heart rate when they were cycling under the suggestion of going up an incline. However, under the suggestion that they were going downhill, their blood pressure and heart rate were the same as under the suggestion that they were cycling on a flat surface. Under all three conditions their speed and work done were the same. The participants reported that their perception of the work done was less while going downhill, but their physiological responses did not reflect that. (12) These are examples of perceived pain or distress on the body being less than physical indicators would suggest.

This might indicate that in IBS, hypnosis may not cure all, most, or any of the actual symptoms but rather reduce or eliminate the perceived discomfort and pain associated with them. This conclusion would correspond with findings that those with IBS are hypersensitive to pain in the bowels. It would also be consistent with the high comorbidity of IBS and anxiety disorders. (2) (3) One is then left wondering whether the hypnotherapy is simply treating the anxiety disorder. And if symptoms do actually remit, were they just caused by the anxiety? If so, did the person really have IBS, or just an anxiety disorder with physical symptoms?


References

1) Talley, N. J. & Spiller, R. (2002). Irritable bowel syndrome: a little understood organic bowel disease? [Electronic version]. The Lancet, 360, 555-564.

2) Galovski, T. E. & Blanchard, E. B. (1998). Treatment of irritable bowel syndrome with hypnotherapy [Electronic version]. Applied Psychophysiology and Biofeedback, 23, 219-232.

3) Farthing, M. J. G (1995). Irritable bowel, irritable body, or irritable brain? [Electronic version]. British Medical Journal, 310 (6973), 171-176.

4) Houghton, L. A., Calvert, E. L., Jackson, N. A., Cooper, P., & Whorwell, P. J. (2002). Visceral sensation and emotion: a study using hypnosis [Electronic version]. Gut, 51 (5), 701-704.

5) Nash, M. R. (2004). Salient findings: pivotal reviews and research on hypnosis, soma and cognition. The International Journal of Clinical and Experimental Hypnosis, 52 (1), 82-88.

6) Vickers, A. & Zollman, C. (1999). Hypnosis and relaxation therapies [Electronic version]. British Medical Journal, 319, 1346-1349.

7) Hypnosis Script Library - Suite 101.com, free scripts to induce hypnosis

8) Weitzenhoffer, A. M., & Hilgard, E. R. (1959). Stanford hypnotic susceptibility scale, forms A and B. Palo Alto, CA: Consulting Psychologists Press.

9) Nash, M. R. (1997). The truth and the hype of hypnosis [Electronic version]. Scientific American. 277, 47-55.

10) Hilgard, E. R. (1967). A quantitative study of pain and its reduction through hypnotic suggestion [Electronic version]. Proceedings of the National Academy of Sciences of the United States of America, 57 (6), 1581-1586.

11) Reed, W. H., Montgomery, G. H., & DuHamel, K. N. (2001). Behavioral intervention for cancer treatment side effects [Electronic version]. Journal of the National Cancer Institute, 93 (11), 810-823.

12) Williamson, J. W., McColl, R., Mathews, D., Mitchell, J. H., Raven, P. B., & Morgan, W. P. (2001). Hypnotic manipulation of effort sense during dynamic exercise: cardiovascular responses and brain activation [Electronic version]. Journal of Applied Physiology, 90, 1392-1399.


Falling Down- Multiple Sclerosis, Proprioception a
Name: Michael Fi
Date: 2004-04-20 17:33:57
Link to this Comment: 9492


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Many patients suffering from Multiple Sclerosis have difficulty maintaining balance and walking and many suffer falls. These falls and other motor skill impairments result not only from the deterioration of motor neurons, but also from consequent decline in proprioceptive capacity. The inability of a Multiple Sclerosis patient to effectively process reafferent feedback amplifies the neuronal impairment found elsewhere in the patient's Central Nervous System.

Multiple Sclerosis (MS) is a demyelinating syndrome affecting roughly half a million Americans. MS degenerates myelin into plaques (known as scleroses), which impair electrical conductivity along axons. The underlying mechanisms of this process are not well understood at present. However, it is known that as MS eats away at a patient's myelin sheaths, muscles may gain in average tension or become weak and difficult to mobilize. One of the basic diagnostic tests for MS involves the electrical measurement of evoked potentials, which estimate the time it takes the CNS to transmit action potentials; demyelination slows evoked potentials. (1)

While MS erodes the body's ability to transmit sensory information and send motor instructions, the links which enable the body to marshal its muscles in response to a stimulus decay as well. This decreases the precision, and subsequently the utility, of concerted motor functions. For example, while driving an automobile, difficulty in properly transmitting the sight of a car veering dangerously in traffic results in an increased braking time. If instead of an oncoming car one were to see one's foot about to step onto a loose rock on a slope, the lag time could prove equally dangerous. This affirmation of body position and subsequent integration is known as reafferent feedback.

While carrying out complex motor functions such as walking, a person must perceive his position and movement in order to maintain balance, plot a course of action (or inaction) relative to this information, and vary muscle movements appropriately. This complex phenomenon is known as proprioception. The sensory organs responsible for detecting changes in muscle movement and orientation are known as proprioceptors. These proprioceptors, which involve myelinated neurons known as gamma motor neurons, are susceptible to sclerosis. (2)

Two common symptoms associated with all types and severities of MS are optic neuritis, a term describing plaque interference with optic nerve function, and vertigo, a dizziness often associated with partial or irregular sensory input, inner-ear problems, or brainstem damage. (3) Motor impairment, or ataxia, is also symptomatic of MS.

Ataxia related to MS comes in several forms and is well documented. MS patients suffering from upper limb ataxia have greater difficulty than healthy individuals in pointing at objects in varying states of motion. (4) Patients with vestibular ataxia have difficulty maintaining a normal gait when they are not deliberately visually monitoring their movements.

While both upper limb and vestibular ataxia are manifestations of direct effects upon topographically specific regions of the CNS and brain pertaining to balance or coordination, proprioceptive ataxia results from a malfunction or deterioration of the gamma motor neurons themselves. (5)

No comprehensive studies have been able to quantify the deleterious effect of sclerosis upon gamma motor neurons and coordination nor have any studies been able to figure out why and how MS makes people lose their balance. But we do know that MS can destroy gamma motor neurons and optic nerves and cause people to fall and injure themselves.

Falling down (especially repeatedly) is a sign of major nervous or muscular system malfunction. Standing upright is perhaps the most significant evolutionarily derived trait that humans possess, on par with the opposable thumb. Thus we can likely assume that sclerosis-induced damage to the nervous system is rather extensive once a patient begins to fall down with regularity. A fall represents either a localized failure of a portion of a balance-related system or a broad failure of an entire system. Either way, a fall represents a general failure of the CNS and other systems to properly maintain an erect state.

In the event of a localized failure within a system, an apt analogy for the resultant general malfunction (a fall) is the propagation of error through a system of equations or a computational model. Consider an MS patient with optic neuritis. Distorted visual imagery of a set of stairs represents an error in perception that is passed along when several other parts of the system attempt to act on the imprecise information. There may or may not be a direct relation between the degree of optic distortion and the degree to which the subject's foot deviates from its normal path down the stairs. Now consider a subject experiencing minor distortion of a proprioceptive signal from the foot combined with a minor distortion of visual input due to optic neuritis. Each individual distortion may be slight enough that alone it could be compensated for; together, however, the two distortions may be enough to send the subject tumbling down the steps, his foot unable to feel its way onto the step.
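The compounding of sub-threshold distortions described above can be illustrated with a toy numerical sketch (the threshold value, error magnitudes, and function name here are all invented for illustration; they are not clinical quantities):

```python
# Toy model of the error-propagation analogy: each sensory channel
# (vision, proprioception, ...) contributes some distortion, and the
# body can compensate as long as the combined distortion stays under
# a hypothetical compensation threshold.
COMPENSATION_THRESHOLD = 1.0  # arbitrary units, purely illustrative

def fall_risk(distortions):
    """Return True if the summed distortion exceeds what can be compensated."""
    return sum(distortions) > COMPENSATION_THRESHOLD

print(fall_risk([0.6]))       # one minor distortion alone: compensable (False)
print(fall_risk([0.6, 0.7]))  # two minor distortions together: a fall (True)
```

The point of the sketch is only that two individually tolerable errors can sum past a compensation limit; real postural control is of course nonlinear and far more complex.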

Assessment of the cause of a fall can be difficult. The root problem may be ataxia or vertigo. One way to potentially rule out the malfunction of reafferent feedback and the optic nerves is through the Romberg Test for balance. This test simply consists of nudging a free-standing subject whose eyes are closed. (6) Perhaps in the future, cheap and effective scans for sclerosis may be able to reveal the site-specificity of plaque buildup. With this knowledge we could determine the correlations between site-specific buildup and proprioceptive malfunctions.

So what does it mean to fall? What does it mean to lose track of your own body? Falling is not just the result of a loss of proprioception or a malfunction of the reafferent feedback system. Falling down is a fundamental failure of the body to either locomote properly or protect itself. When one falls, one regresses into a more childish state, where walking cannot be taken for granted. Falling represents a physical (and mental) devolution, an unlearning of one of the most fundamental motor skills. Falling can be discouraging in the same way physical deterioration can be anguishing. MS patients can be depressive due to their inability to properly control their bodies. Scientists at the British MS Research Centre in Bristol have developed a machine to measure gait irregularities and inform patients if they are in danger of falling. (7) This machine will also serve as a collector of vital data on MS patients' falls and balance problems, enabling development of more comprehensive preventative means.

References

1) Canadian MS Society, diagnosis. Evoked potentials: the time it takes the CNS to send and receive signals; demyelination slows this time.

2) Proprioception literature review by Darryn Sargant from Australasian Journal of Podiatric Medicine.

3) National MS Society, MS and vertigo

4) Quintern J, Immisch I, Albrecht H, Pollmann W, Glasauer S, Straube A. "Influence of visual and proprioceptive afferences on upper limb ataxia in patients with multiple sclerosis." J. Neurol. Sci., Feb. 1, 1999, 163(1): 61-9. Department of Neurology, Ludwig-Maximilians University, Klinikum Grosshadern, Munich, Germany. MS patients have more difficulty than non-patients pointing at objects in varying states of motion.

5) MS encyclopedia, vestibular ataxia- gait disorder. Result of improper visual proprioception.

6) MS encyclopedia, Romberg test

7) BBC: Bristol study in brief, "people are unaware of how bad their balance is..."


Technology and the Written Word
Name: Maria Scot
Date: 2004-04-22 08:12:36
Link to this Comment: 9540


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The history of the development of written language reflects the parallel relationship between the human capacity for communication and technology. The written word allows populations to develop a shared, cumulative body of knowledge based on the experiences and records of previous generations. The human mind is ill suited to serve as a passive vessel for knowledge or ideas. In the process of acquiring new knowledge we cannot help but twist and revise the information in light of our own story, perceptions, and opinions. In creating a written language humans found a way for information to exist independently of a human host, allowing it to remain free of the distortion and bastardization that it would inevitably undergo. Our capacity for language is perpetually evolving to allow for better communication, in order to gain new information more quickly and effectively and to modify behaviors accordingly. To this end, the recent development of computers and the creation of the Internet provide human society with an unprecedented forum in which written language, instead of serving as a means of preserving knowledge, is an immediately accessible source of information and a method of communication to larger and more diverse groups than ever before. Such developments are altering our language structure as it now exists and making significant new demands on our methods of communication in the future.

Language has always been a fairly fluid entity, evolving and shifting to get the intended point across. As cultures emerged and faded out or merged into different groups, so with them went a variety of different systems of written language (3). The earliest languages tended not to be solely phonetic, ideographic or pictographic, but combinations thereof (3). Our modern languages, at least since the invention of the printing press, have tended towards phonetic alphabet based systems. The invention of computers drastically alters the situation. Computers offer a graphical interface that allows the use of graphical languages in a way that the printing press and typewriters did not.

Computers fundamentally alter the content, timing, and manner of written communication. They remove many of the technical difficulties that limited the ways in which text could be created and used effectively. Since the advent of the printing press in the mid-fifteenth century, texts and documents have been more easily created using a phonetic alphabet: it is more practical to rearrange a limited set of phonetic symbols to create all possible words than to maintain thousands of individual ideograms and pictograms. In the Japanese language, for example, there are two distinct systems of writing. One of these, katakana, is a phonetic system using 71 graphic symbols, which when read must be comprehended syllable by syllable (unlike English words, for example, which are easily identifiable just by looking at them). The second system, kanji, is mostly ideographic, represents both sound and meaning, and comprises over 40,000 ideograms (3). Obviously, it is easier to create a typewriter, printing press, and even word processor working with 71 phonetic symbols than with 40,000-plus ideograms; the first Japanese typewriter, produced in 1915, had a flat bed of 3,000 keys (4). As books became the preferred method for conveying ideas, stories, and knowledge, practical concerns continued to encourage the use of alphabets over ideograms. Libraries faced the practical challenge of cataloging and indexing texts: it is reasonably straightforward to index works written using an alphabet, and significantly less straightforward to index ideograms. Computers remove most of these technical issues. Their graphical interface allows the use of graphical languages, presenting society with a new possibility for creating a hybrid method of communication between ideographic and phonetic writing systems. 
Computers can potentially utilize the strengths of both writing systems to address the difficulty that various individuals have with phonetic processing, by using other means to communicate information usually only available in a phonetic writing system. Dyslexic children, for example, can learn to read when the words are represented by single characters rather than a series of phonemes (1). Computers have the potential to use this distinction between phonetic and non-phonetic processing to the child's advantage by converting texts. Similar possibilities exist for victims of stroke or other brain damage whose phonetic processing abilities have been injured (1). When speakers of Japanese, for example, suffer certain damage to the left hemisphere, their ability to read kana is profoundly disrupted, while their ability to interpret kanji, or ideographs, is relatively undisturbed (1). Even in individuals without brain injury, the simultaneous integration of phonetic and non-phonetic processing presents new communicative methods.

Beyond the technological possibilities presented by computers, the Internet is a space that has profoundly altered human communication. It provides a new forum in which texts composed of a mixture of pictographs, ideographs, and the written word are immediately made available to a broad audience. Unlike books, much of what is written on the internet is not intended for posterity; rather, the internet is meant to facilitate immediate communication. This development is a fundamental shift in the intention of written language, and as a result, the way in which the language is used is evolving to best suit its new purpose. To that end it has evolved its own 'shorthand' in an attempt to allow written communication to take place at the same speed as spoken language while retaining many of the subtleties of speech. 
In this context ideograms have come back into use, because they communicate a larger idea or sentiment in a single character. 'Emoticons', small graphical depictions of faces making a variety of expressions (smiling, winking, frowning), for example, have come into frequent use in Instant Messaging programs, because the written word alone does not provide the recipient with enough information about the exact meaning behind a message (was it intended sarcastically, jokingly, seriously, etc.). In some ways this is literally a form of 'written speech' and as such is based in the phonetics of the spoken language: 'want to' on the internet often becomes 'wanna,' 'going to' becomes 'gonna'. The technology of computers is allowing humans to continue to develop systems of written communication consistent with their ability to process and exchange information.

Humans are moving away from texts that are purely phonetic and evolving a writing system that makes use of a variety of communicative methods. Many of the forces that previously determined the nature of our language no longer apply, and technological advances allow us to make use of the many ways in which humans can receive information and express their knowledge and thoughts. Joseph Brodsky once wrote that "...apart from pure linguistic necessity, what makes one write is...the urge to spare certain things of one's world-of one's personal civilization-one's own non-semantic continuum. Art is not a better, but an alternative existence; it is not an attempt to escape reality but the opposite, an attempt to animate it. It is a spirit seeking flesh but finding words." Perhaps what this more diverse, integrated system of writing moves us toward is not finding words so much as meaning. It again allows us more freedom in using written language to express ideas instead of forcing ideas through the sometimes inhibiting paradigms of language.


References

1) Kandel, Eric R. Principles of Neural Science. Simon & Schuster, 1991.
2) Birth of a Writing Machine, discusses the development of a Japanese typewriter.
3) History of Cuneiform. As the title of the page might suggest, this is a history of cuneiform.
4) Early Office Museum, images of early typewriters, etc.
5) Website of the International Dyslexia Foundation.


Dreams and the Unconscious
Name: La Toiya L
Date: 2004-04-28 04:15:09
Link to this Comment: 9658


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The complicated yet interesting connection between dreaming and our brains is one that many scientists, philosophers, and psychologists have grappled with. Dreams alone are strange and obscure and the human mind is intricate and complicated, but their relationship, when examined, is fascinating. A predominant figure in the history of dreams is Sigmund Freud, who, although he did not originate the concept of dream interpretation, was integral in developing methodologies for utilizing the dream as a means of deciphering the psyche of the dreamer, particularly in uncovering and analyzing the dreamer's psychological problems. Freud's analysis of dreams and their connection to our unconscious was one of a kind compared to other beliefs held during his time. Freud delved into the human mind in ways that many before him hadn't.
In his book "The Interpretation of Dreams," Freud describes five distinct processes which are brought into play during dreamwork. (1)

• Displacement: This is where the dreamer represses an urge, and then redirects
that urge to another person or object.

• Condensation: This is the process whereby the dreamer disguises a particular
urge, emotion or thought by condensing, or contracting, it into a brief dream
image.

• Symbolization: This is where the repressed urge is played out in a symbolic act.
For instance, in Freud's methodology the act of inserting a key into a keyhole
would have sexual meaning.

• Projection: This is the projection of the dreamer's repressed desire onto other
people, but should not be confused with displacement as it does not involve
objects. In projection, instead of dreaming about sleeping with their co-worker,
the individual would dream of their boss in bed with the desired sexual partner,
projecting the urge onto the boss rather than literally dreaming themselves in
the bed.

• Secondary revision: This is the expression Freud uses for the final stage of
dream production. After the individual undergoes one or more of the other four
dreamwork processes, they then undergo the secondary processes of the ego in
which the more bizarre components of the dream are reorganized so the dream has a
comprehensible surface meaning. This surface meaning, once arrived at through
secondary revision, is called the manifest dream.

Freud makes use of psychological techniques to interpret dreams, and in doing so the dreams reveal themselves as psychological structures full of significance. Through psychological interpretation, the structure of a dream can also be attributed to a specific place in the psychic activities of the waking state. By looking into the unconscious there is plenty to learn about the structure of the human mind. Freud argues that the structure of the human mind largely comes from the unconscious, while others argue that the human mind is based solely on the conscious. To assume that our mental structures are based on that which we experience consciously is to ignore a large portion of what contributes to our minds: the unconscious. Some argue that certain behaviors are simple, natural, and can't be explained, but in actuality all human behaviors, complex or not, can be better understood by exploring the convoluted world of the unconscious. Freud believed dreams were a door into the human psyche and a crucial part of understanding the mind.

There are other interesting viewpoints regarding dreams, some of which support Freudian thought and others of which negate it. It is widely accepted that we dream of what we have seen, said, desired, feared, or done. "Experience corroborates our assertion that we dream most frequently of those things toward which our warmest passions are directed." (2) There are also theories on dreams and how they function in both the conscious and the unconscious. Theories like those of Franz Joseph Delboeuf state that the full psychic activity of the waking state continues in our dreams; here the psyche does not sleep. The theory of partial wakefulness, on the contrary, argues that in dreaming there is a diminution of psychic activity, a loosening of connections, and an impoverishment of the available material. The theory of partial wakefulness did not escape criticism even by the earlier writers. Dr. E. Friedrich Burdach wrote in 1830: "If we say that dreaming is a partial waking, then, in the first place, neither the waking nor the sleeping state is explained thereby; secondly, this amounts only to saying that certain powers of the mind are active in dreams while others are at rest. But such irregularities occur throughout life..."

The different concepts of Dreamwork explain how dreams are produced and how they function. Dreamwork occurs while we are asleep, at the point when the mind is about to start dreaming. The Dreamwork operation condenses the mind's ideas and thoughts into latent and manifest dreams, with the objective of making dreams as unintelligible and incomprehensible as possible. Dreamwork produces dreams by using information from our conscious and unconscious; it is how so many ideas and thoughts get condensed in our dreams into little story-like films. Dreamwork employs operations such as condensation, dramatization, displacement, and word play as tools in creating dreams.

Repression occurs in the mind when the human conscious feels threatened. Repression is a way the human mind protects itself from anything it cannot handle by blocking it out of the conscious psyche. It is a shielding method by which the conscious blocks something out, which then usually recurs in the unconscious state. Freud believed that even though a person may not be mindful of repressed information, it is still present and is accessible through psychological techniques. It should be understood that the conscious mind doesn't always repress information or thoughts purposely; rather, the psyche does so unconsciously. Freud believed that desires and ideas that society deems unacceptable are dumped into the unconscious by means of repression.

Freud argues that the psyche is made up of the conscious and the unconscious mind, and when analyzing the mind it is imperative to know that these two sectors are distinctly different. Mental awareness takes place in the conscious mind, where reality and experience have shaped how we react, learn, and think. When we use our logic and learned behavior we are drawing on our conscious mind. Yet even though the conscious mind is referred to most often, it is minimal in size compared to the unconscious, which makes up the majority of the psyche. The unconscious, often considered rarely used, is actually used the most: although our conscious is directly linked to our behaviors, the reasons behind why we do the things we do derive from our unconscious. The unconscious is storage for repression, emotions, memories, thoughts, and feelings that have already happened to a person. Dreams are unique. No other individual can have your background, your emotions, or your experiences. Every dream is connected with your own "reality". Although this much is true, neuropsychology is making rapid strides in helping us to understand the aspects of self and society that affect our dreams.


References


1)Interpretation of Dreams

2)The Interpretation of Dreams (3rd edition) by Sigmund Freud

3) "Glossary of Freudian Terms."

4) "Dreamwork," Dream Library.


Nature vs. Nurture: Are We Asking the Right Question?
Name: Natalie Me
Date: 2004-04-29 10:30:59
Link to this Comment: 9682


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


I began this paper with the intention of writing about the possible connections between memory and computers, and mind and technology. Recently, a class discussion on memory spilled over into a lunch-table debate with a friend. I can't even begin to explain how the discussion maneuvered from computer chip memory, to individual and collective memory and knowledge, to the controversial issues surrounding nature v. nurture. I realized that I was much more interested in investigating the question of nature v. nurture. No one can deny that both one's genes and environment impact a personality, so other issues must fuel the debate. This is the conclusion I came to as I read through online materials. Let me outline my discovery:

I was taught growing up that nature played the major role in determining one's behavior. Not that my parents are profoundly positivist scientists, but my beliefs have much to do with theirs. I am one of eight children and the distinct differences in each one of our personalities convinced me that we must have been born with many personality traits already established, or with the groundwork for a predisposition to develop. How could one child be outgoing and social while another is extremely introverted? How could one be so devoted to studies while another is deeply committed to football? For the most part all eight of us have grown up with a similar upbringing and like experiences, but our behaviors, attitudes and intelligence vary wildly. Certainly this is due to some specific set of predispositions.

And then there are issues of personality that seem to manifest themselves at very early ages, before many psychologists will admit that a personality can even be formed. My father, at the age of four, one day after his parents had left for a dinner party, told his babysitter that he wanted to die, convinced that his parents were never returning. Where did this fatalistic disposition come from at such a young age? He has continued to suffer from depression his whole life. Hearing him discuss his condition and looking at the lack of precipitating factors always led me to believe that this, for sure, was caused by his genetic makeup; and because this disease has impacted his life so dramatically, it has certainly shaped much of his personality and subsequent behaviors.

First, I want to review some of the arguments behind each viewpoint. Let me start with the major biological processes involved in the nature argument. I know genes are responsible but what exactly are those? Beyond what I remember from eighth grade health-science, I realize I know embarrassingly little about how a gene might actually impact who I am. The scientists who set out on the Human Genome Project probably felt similarly and though their work has taught us a lot more, we still have much to learn. We do know that humans have about 30,000 genes. It was noted over and over again on the web that this is only twice as many as the simple fruit fly. So what is it that makes humans and human behavior so complex?

Genes contain coded instructions for building proteins, and it is in these combinations that our humanity and individuality really come out. "Proteins are the chemical tool of every cell...the chemical environment inside the cell is controlled by what proteins are present [and] in turn the chemicals in the cell control what genes are active at what time" (1). With a virtually infinite potential for variation in these genes, it seems conceivable to me that we could all be the result of mere fluctuations in a chemical process.

However, some scientists and philosophers disagree. According to Davies (2), Craig Venter, a scientist who worked on the Human Genome Project, doesn't believe that we have enough genes to buy into biological determinism. A twin study done by Swedish scientists and published in the New England Journal of Medicine found that environmental factors were a much stronger predictor of cancer (3). Psychological and philosophical behaviorists (like the notable Skinner and Watson) believe that we are born with a blank slate and that our personality and subsequent behavior are developed only through experience.

Once again, though, I have already conceded that both nature and nurture play a part. Are we trying to crack the puzzle of how much environment and genes exactly contribute? Twin studies (the most common method of studying this issue) have usually resulted in "assigning percentage values [that implicate] both genes and environment" (4). Is this really what we are searching for? I am beginning to feel as though I am missing the point.

Let me return to the natural acknowledgement that nature contributes some unknown amount and environment the rest. But the fact that people continue to make declarations on the side of nature or nurture leads me either to want to throw the whole thing out, or to figure out why it remains such a compelling question. Why haven't people accepted a shared source for behavior? Why the continued discussion and debate? What makes this topic such an omnipresent force in science and society? I determined that cultural and social features make this issue such a salient one.

Let me now consider those social implications of this debate. If we are going to buy completely into the biological determinism argument, what are the possible consequences of doing so? First of all, the legal and ethical consequences are far-reaching. We haven't even begun to contemplate all of the subsequent results if we declare a 'criminal' gene. Genetic discrimination might become rampant. Though some states have passed genetic privacy laws, once testing, identification and record keeping begin, what is to keep us from becoming unemployable and uninsurable (2)?

Many are concerned that criminal acts could be justified by a 'bad' or 'criminality' gene. This could also render those individuals unemployable and uninsurable. The possibility of a 'gay' gene also concerns many conservative and gay-rights groups. Labeling sexuality as a purely biological (or non-biological) process overrides an individual's autonomous choice of lifestyle. But it also might establish an imperative for equal rights (5). Both sides of all these issues seem to have some moral claim to proving or disproving biological determinism.

If we could link many behaviors and personality traits to particular genes, would that allow us to manipulate those genes and processes on an as-desired basis? There are far-reaching bioethical considerations here. The link between man and machine is even implied in this situation. Even if it is possible, can we allow ourselves to depend on technology to rid the human race of bothersome diseases and conditions?

One article I read linked this issue to memory and collective or individual knowledge. I was somewhat surprised at this, not recognizing the impact this debate has on those arenas of discourse. LeDoux (6) argues that synaptic plasticity plays a large role in mediating between different types of memory that allow us to learn, while at the same time it depends on our inherent traits or processes. He says that explicit memory, the "ability to consciously recall past events" interacts with our programmed implicit memory, or "our ability to see the world the same way other humans do." An example of this would be that "we are born with the ability to act afraid, but we usually have to learn precisely what to fear" (6).

LeDoux believes that "the nature/nurture debate operates around a false dichotomy: the assumption that biology, on the one hand, and lived experience, on the other, affect us in fundamentally different ways" (6). For LeDoux, implicit memory and explicit memory allow us to learn from two sources concurrently. Might this false dichotomy be the assumption upon which we are basing our discussion that is leading us around in circles?


Rothman (3) reports that:
"to label a disease as genetic-only is to propagate the idea that an individual is doomed to live with his or her genetic makeup. Conversely, classifying disease as environmental only does not explain the role of genetic variations that increase susceptibility to environmental factors. These labels serve no purpose and are misleading" (3).

It seems we have stumbled upon either a roadblock or an exit from a never-ending discussion. What might be a better question to ask? Or, what other factors and issues should we be looking at? The conclusion that we have been giving the wrong issues undue concern is the idea I came to, and began my paper with. I found an article on sociobiology that seemed to offer a glimpse into just how narrow the nature v. nurture debate is.

The article analyzes 1993 NORC General Social Survey data in an attempt to discover people's beliefs about what determines how their lives turn out. It looked at five different factors: God, genes, society, individual work and effort, and chance (7). I was most interested in the society, or culture, option. The article argues that "distinguishing nature vs. nurture in terms of determinism vs. free will is probably erroneous when one considers the extent to which enculturation patterns minds, selves and behavior" (7).

What my research has done is to connect many different issues to the one I was originally concerned with. I now have a sort of mental map of controversy in my head. I do agree that perhaps we are asking the wrong question, or maybe we are just attempting to simplify concerns that have far-reaching implications. Whatever the case, I think that we could look to socialization processes to help us identify new questions to ask. Socialization processes taken beyond the realm of 'nurture' may provide an alternative to the old dichotomous debate.

As I finish up, I do have one final thought: through my reading I have been reminded that biology is not a value-free field. Looking back at my experience and many others', I wonder about the extent to which religious values have impacted this ongoing public, political and academic dialogue. Certainly religious concerns are implied here, invoking ideas such as a pre-Earth life or a purely scientific universe. Perhaps this is why many people cling to a science-based answer, or one that invokes a more mystical world... and come to think of it, where does this behavior come from? Nature, nurture or society? Might this be the real question we should be investigating?


References

1)http://environmentalet.hypermart.net/psy111/naturenurture.htm

2)http://www.pbs.org/wgbh/nova/genome/debate.html

3)http://www.cancer.org/docroot/NWS/content/NWS_1_1x_Nature_vs_Nurture_The_Debate

4)http://www.cdc.gov/genomics/info/factshts/nvsn.htm

5)http://members.aol.com/leolighterx/orientation.html

6)http://home.att.net/~xcar/tna/ledoux.htm

7)http://www.trinity.edu/mkearl/socpsy-2.html


Some Thoughts on Smoking
Name: Tegan Geor
Date: 2004-04-30 16:54:14
Link to this Comment: 9708


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

If I were to wake up tomorrow without arms or a mouth, I would light a cigarette with my feet, and smoke through my nose.

This might be what a person could consider a serious problem.

Cigarette smokers, often more than others, can tell you what is bad about smoking cigarettes. Smoking cigarettes puts a person at risk for bronchitis and emphysema. Heart disease. Strokes. Ulcers, cataracts, and osteoporosis. Pretty much every cancer a person can get, highlights including lung, mouth, kidney, uterine, cervical, prostate, and colon cancer (1). Cigarettes make you smell funny, turn your teeth yellow, make it harder to smell or taste things, decrease the elasticity of your skin, and cause people to scowl at you as they walk in and out of buildings.

These are things I know.

And yet, I—along with 46 million some-odd other Americans—still smoke cigarettes. A third of us try to quit every year, and only about 3 percent of those will succeed without outside treatment by way of therapy or prescribed drugs. (2).
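To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the approximate numbers cited above (2); the exact counts are illustrative, not fresh data:

```python
# Rough arithmetic on the quit-rate figures cited in the text (2).
smokers = 46_000_000          # ~46 million American smokers
attempt_rate = 1 / 3          # about a third try to quit each year
unaided_success_rate = 0.03   # ~3% of attempters succeed without treatment

attempts = smokers * attempt_rate
unaided_successes = attempts * unaided_success_rate

print(f"Quit attempts per year: ~{attempts:,.0f}")       # ~15.3 million
print(f"Unaided successes:      ~{unaided_successes:,.0f}")  # ~460,000
```

In other words, of the roughly fifteen million smokers who try to quit in a given year, on these numbers fewer than half a million manage it without outside help.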

So why is it so hard to quit? The answer to that question may be found in some new research on the matter.

The nucleus accumbens is an area of the brain connecting the ventral tegmental area and the prefrontal cortex. The pathway from the ventral tegmental area through the nucleus accumbens to the prefrontal cortex is often referred to as the reward pathway (3): this is the area of your brain which reinforces rewarding behavior—eating, having sex, and the numerous other things human beings do, and do often, because they feel good. The neurons along this pathway use the neurotransmitter dopamine. When a person lights and begins smoking a cigarette, dopamine production is stimulated in this area of the brain, having a calming effect on the smoker (dopamine is also the neurotransmitter responsible for the high one gets using cocaine, opiates, and alcohol). Elsewhere in the brain, acetylcholine and norepinephrine are released: neurotransmitters that regulate mood, attention, and memory (2). And if a smoker is anxious, aroused, or stressed, smoking can effect an increase in neuromodulators—chemicals which act to counterbalance neurotransmitters, effectively calming the smoker down.

It isn't hard to get cigarettes. Compared to other addictive substances, they aren't expensive. They are far more socially acceptable than many other addictive substances, too: alcohol may be more widely used, but socially it is far more acceptable to go to work having just smoked a cigarette than, say, having had a few beers. And one of the most troublesome things about nicotine—at least so far as being able to quit—is just how easy and effective cigarettes are at administering the drug. Smoking a drug is very nearly as efficient a way to deliver it as injecting it. And what might be the most remarkable thing about cigarettes is the precision with which a smoker can regulate her intake. Nicotine content is around .1 to .2 mg per cigarette, depending on the brand (4): and at about 10-12 puffs per cigarette—after each of which a smoker could theoretically decide she had had enough—the ease with which one can administer tiny doses of nicotine is really quite remarkable.
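Taking the cited figures at face value, the dose per puff this implies can be worked out in a few lines (the input ranges below are just the article's approximate numbers from (4), not independent measurements):

```python
# Per-puff nicotine dose implied by the figures in the text:
# 0.1-0.2 mg delivered per cigarette, at roughly 10-12 puffs each.
nicotine_per_cig_mg = (0.1, 0.2)
puffs_per_cig = (10, 12)

low = nicotine_per_cig_mg[0] / puffs_per_cig[1]   # least nicotine, most puffs
high = nicotine_per_cig_mg[1] / puffs_per_cig[0]  # most nicotine, fewest puffs

print(f"Approximate dose per puff: {low:.4f}-{high:.3f} mg")
```

That works out to somewhere around a hundredth of a milligram per puff: each puff is a tiny, individually chosen dose, which is exactly the kind of fine-grained self-administration the paragraph above describes.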

And at some point, the ease and precision with which one can smoke will permanently alter one's brain structure (5). A tolerance develops for the stimulated neurotransmitter activity. And according to at least one study (2), attention, memory, and reasoning ability decline within just four hours of a smoker's last cigarette, and do not recover for days, even with no further use. Current evidence seems to indicate that brain function altered by nicotine may never return to pre-addiction levels.

All of these things are not looking good for my plan to eventually quit someday. There is of course always hope: nicotine replacement therapies seem to work well, especially when combined with therapy.

References

1)NIDA, Site on Nicotine Addiction

2)Yahoo Addiction Center

3)Neurobiology and Addiction, Over-simple, but there are some cool pictures.

4)Info on Nicotine Content of Cigarette Brands, From the Vaults of Erowid, a pretty comprehensive if kind of flaky site about the chemical structure and history of tobacco (and other things). Lots of information.

5)Forget the Eggs: Here's Why Your Brain Gets Addicted to Drugs

Also:

The Truth, a flashy anti-tobacco website for kids, I think.


Seasonal Affective Disorder: A Look at How the Win
Name: Sonam Tama
Date: 2004-05-04 00:35:20
Link to this Comment: 9754


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Most people I know here at Bryn Mawr College feel as though they are still experiencing a bit of the "winter blues" even though it is officially spring. Having lived in the Philippines most of my life, I did not have to worry about the "winter blues" but as I experienced my first East coast winter here, I could feel my moods change along with the seasons. Then I heard about Seasonal Affective Disorder (SAD) and although I know a lot of us get the winter blues, SAD is not just about having the "winter blues." It is a severe form of depression affecting possibly as many as 6 out of every 100 people in the United States and the long duration of SAD symptoms distinguishes it from the "winter blues."

SAD is a mood disorder in which people who are diagnosed suffer from symptoms of depression during the winter months, with symptoms subsiding during the spring and summer months (1). The episodes of depression are related to seasonal variations of light (shorter days in fall and winter) (1), and studies showing that Arctic populations suffer more from depression (4) give strength to this idea and dispute the notion that SAD is a made-up disease.

The most difficult months for SAD sufferers are January and February. Depending on the person and the geographical location, depression can last for several months with the following symptoms (2):

• mood fluctuations
• excessive eating and sleeping
• weight gain
• loss of interest in sex
• a craving for sugary and/or starchy foods
• fatigue
• social withdrawal
• seasonal episodes that substantially outnumber nonseasonal depressive episodes
• full remission from depression in the spring and summer months

These symptoms have a damaging effect on sufferers, who are often unable to function without continuous treatment.

SAD was first noted before 1845, but it was not officially named until the early 1980s, when Norman E. Rosenthal, M.D. observed a correlation between depression and season change after mapping the mood patterns of a group of people for a year. He found that many of the people in the group started to become depressed in the fall, with their depression worsening through the winter and decreasing in the spring (3).

So how does SAD work exactly? Sunlight affects seasonal activities in animals, such as reproductive cycles and hibernation (1). As the seasons change and sunlight patterns are altered, there are shifts in our "biological internal clocks," or circadian rhythms, which can fall out of step with our daily schedules. Learning about SAD helps us learn more about the relationship between our body and the environment we are in. More specifically, it is the relationship between sunlight, melatonin (the sleep-related hormone secreted by the pineal gland in the brain (1)), and serotonin (the neurotransmitter associated with wakefulness and elevated mood). As night falls, melatonin levels increase, and as sunlight emerges, melatonin levels decrease. Serotonin levels increase when you're exposed to bright light, which is why moods tend to be elevated during the summer, while the opposite occurs in winter since shorter, darker days produce more melatonin (6).

Despite controversy over studies suggesting that phototherapy, or bright light therapy (BLT), works through a placebo effect, this treatment has been shown to suppress the brain's secretion of melatonin, and studies have shown that exposure to bright light may help those who suffer from SAD (2). The device most often used today is a bank of white fluorescent lights on a metal reflector and shield with a plastic screen (1).

A 1999 study by Dr. Timo Partonen and his colleagues at the University of Helsinki's National Public Institute in Finland suggests that more exposure to the sun during the summer may help decrease mood problems months later in the winter. Partonen and his team found that blood levels of cholecalciferol naturally peak in the fall months, suggesting that during the fall we use the cholecalciferol we store up during the summer. Light stimulates the production of cholecalciferol, which the body transforms into vitamin D, which then helps the body maintain higher levels of serotonin during the winter months (6). So if we get even greater exposure to the sun, we may store enough cholecalciferol, and then produce more vitamin D, to have higher levels of serotonin during the winter months than we might usually have. The study concludes that the amount of serotonin you have in the winter is determined by your exposure to light the previous summer, so greater summer exposure may prevent or reduce depression during winter.

If phototherapy doesn't work, an antidepressant drug may prove effective in reducing or eliminating SAD symptoms, though there may be unwanted side effects to consider. Selective serotonin re-uptake inhibitors (SSRIs) are the most successful antidepressant drugs; they work by allowing naturally produced serotonin to remain active in the brain longer, keeping your mood and energy levels higher (7). However, there is also a condition called serotonin syndrome, wherein the brain contains too much serotonin, generally caused by interactions between serotonergic drugs, for example by the concurrent use of MAOIs (a class of antidepressant drugs used less frequently than other classes because of potentially serious dietary and drug interactions) and SSRIs (6).

So, how do SSRIs and light therapy compare to one another? A study led by Dr. Daniel Kripke of the Circadian Pacemaker Laboratory at the University of California, San Diego concluded that light therapy benefits not only SAD patients but also people suffering from other forms of depression (6). The study, which was published in the Journal of Affective Disorders, also concludes that light therapy may help to alleviate SAD symptoms faster than antidepressant drugs and that patients who undergo both light and drug therapy could receive the greatest benefits (6).

Alternative therapies to combat depression include proper nourishment, with intake of vitamin B complex, vitamin C, folic acid, calcium, potassium, and magnesium, as well as regular exercise and breaks under the morning sun (avoiding sunburn) to stimulate circulation and the release of serotonin in the brain, along with meditation. One study found that an hour's walk in winter sunlight was as effective as two and a half hours under bright artificial light (1).

Finally, the label "seasonal" disorder may give the wrong impression that SAD is purely environmental. In fact, SAD is a great example of the interplay between nature and nurture, biology and the environment. My most pressing question about SAD has been about the biological aspects of the disorder. Are some people biologically more inclined to have SAD? What happens when a person with SAD moves to an area with lots of sunlight? Also, there are statistics showing that younger persons (between 18 and 30) and women are at higher risk (2). But there are some websites suggesting that women may simply be more ready than men to admit to depression and ask for help (2). I think that the same could be true for younger people. Perhaps people living in areas with less sunlight are more aware of disorders like SAD and are quick to assume that they have it. What I am suggesting is not that SAD is not real, but that more people may be diagnosed with, and report having, SAD in areas where more people are aware of the disorder.

But going back to the biological aspects of SAD, there are also studies showing that people with SAD often have the disorder in their family history, and are more likely to have alcoholism in their families than people who do not have the disorder (3). The Society for Light Treatment and Biological Rhythms has recently published extensive studies on SAD. More studies, including ones on SAD and puberty as well as SAD and thyroid function, are showing the biological inclination of certain individuals towards the disorder (3). Maybe some people build it up by not getting enough sunlight, and others already lack the serotonin levels needed. All in all, the various forms of treatment and the many conflicting studies available on the internet indicate that there are currently diverging conclusions regarding SAD and that future study is necessary. But they also suggest that each individual patient may have a different solution to their problem.

References

1) What is Seasonal Affective Disorder?, on the National Mental Health Association website

2) Seasonal Affective Disorder, on the Healing Deva Alternative Therapies website

3) Seasonal Affective Disorder, Augsburg College Server

4) Seasonal affective disorder in an Arctic community, on the Blackwell-Synergy journals website

5) About light, depression & melatonin, on New Technology Publishing website

6) Summer Sun for the Winter Blues, news article on CNN website

7) on Wikipedia, the free encyclopedia


Pain, Pain, Go Away: Complex Regional Pain Syndrome
Name: Amy Gao
Date: 2004-05-04 17:31:22
Link to this Comment: 9757

<mytitle>

Biology 202
2004 Final Paper
On Serendip

It is two in the morning. The paper is still four pages from being completed, the presentation patiently waits upon the printer to be rehearsed just once more, and you blink your sleepy eyes awake—or as functionally awake as you can be after one and a half pots of coffee—to realize that you still have eight hours left to work on all these demonstrations of your intelligence. Eight hours sans sleeping, of course, because by now that activity sounds about as familiar as last Spring Break.

And then you feel the throbbing. The pulsation of your heartbeat grows stronger by the minute at the sides of your temples, and you start to realize that this is the beginning of something so well known to yourself and many others. It has been given many names throughout the ages in different languages, but none of them sounds quite like its present incarnation, because by the mere pronunciation of the syllable you feel its effects. You call it....

Pain. We have all experienced this unpleasant sensation at some point in our lives, some more than others, some stronger than others, some lasting longer than others, whether in the form of headaches, muscle strains, stomachaches, or back pains. We have all developed creative ways to make ourselves comfortable when in pain: with cold or hot pads wrapped around the affected area, with essential oil massages, or simply by sleeping the pain off. Suffice to say, wherever the nervous system extends, there is a possibility for the sensation of pain to occur. For most of us, a Tylenol or an Advil is the most medication we would take for pain; for others, however, it may take a lot more than a simple pill to alleviate the pain.

Complex Regional Pain Syndrome, or CRPS, also called Reflex Sympathetic Dystrophy Syndrome (RSDS), is a disorder characterized by severe burning pain, pathological changes in bone and skin, excessive sweating, tissue swelling, and extreme sensitivity to touch (1). It typically occurs at the site of injury after high-velocity impacts such as those from bullets or shrapnel; however, it may also occur without apparent injury to the individual. CRPS is divided into two types: CRPS I is the clinical term for patients who suffer the symptoms of CRPS with no nerve injury, while CRPS II describes patients who experience the same symptoms with nerve damage.

CRPS is indicated by the gradual change of the warm, shiny red skin characteristic of wounded flesh into skin that is cool and bluish. The pain that is experienced is out of proportion to the injury sustained and becomes progressively worse. In the more severe cases, when the symptoms fail to subside after treatment, the joints eventually become stiff from disuse, and skin, muscles and bones atrophy. There may also be periods of remission and exacerbation that last for weeks, months, or years. The cause of CRPS is unknown, and the various symptoms attributed to the onset of CRPS vary in their severity and duration.

What sets this disorder apart from other conditions collectively characterized as "pain" is that it concurrently affects the skin, muscles, nerves, blood vessels, and bones. It is most often observed in individuals between the ages of 40 and 60; however, it can strike individuals in any age category. The diagnosis of CRPS is to be made in the following context, according to the guidelines of the Reflex Sympathetic Dystrophy Syndrome Association of America: a history of trauma to the affected area associated with pain that is disproportionate to the inciting event, plus one or more of the following: abnormal function of the sympathetic nervous system, swelling, movement disorder, and changes in tissue growth (dystrophy and atrophy) (4).
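The guideline pattern above amounts to a simple decision rule, which can be sketched as a toy function. This is only an illustration of the logic as described in the text (4), not clinical guidance, and the function and argument names are my own invention, not the association's wording:

```python
# Toy sketch of the RSDSA diagnostic pattern described in the text:
# trauma with disproportionate pain, plus at least one of four signs.
def meets_crps_criteria(trauma_with_disproportionate_pain: bool,
                        abnormal_sympathetic_function: bool = False,
                        swelling: bool = False,
                        movement_disorder: bool = False,
                        tissue_growth_changes: bool = False) -> bool:
    """Return True if the guideline pattern described in the text is met."""
    signs = [abnormal_sympathetic_function, swelling,
             movement_disorder, tissue_growth_changes]
    return trauma_with_disproportionate_pain and any(signs)

# Example: disproportionate pain after trauma, with swelling only.
print(meets_crps_criteria(True, swelling=True))   # True
print(meets_crps_criteria(True))                  # False: no accompanying sign
```

The structure makes the two-part nature of the criteria explicit: disproportionate pain after trauma is necessary but not sufficient on its own.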

There is no single, stand-alone test that can be used to detect CRPS; however, there are laboratory diagnostic instruments that may be used in conjunction to detect the presence of CRPS. The two main techniques used to detect CRPS are thermography and X-ray. Thermography is employed to detect the changes in body temperature that are common in those afflicted with CRPS, and X-ray is used to observe any damages or changes in the bone structure.(4) In many cases physicians may order EMG, Nerve Conduction Studies, CAT scan and MRI along with the two aforementioned tests; and the results of these tests may be normal for the patients. These studies are often done to identify if there are any possible sources of pain.

The disorder is believed to affect millions of people in this country, and statistics compiled suggest that it occurs after 1% to 2% of various fractures, after 2% to 5% of peripheral nerve injuries, and in 7% to 35% of cases in prospective studies of Colles fracture (2). Due to the nature of this disorder, the diagnosis is often not made early; mild cases may resolve with no treatment, while others may progress through the stages and become chronic or even debilitating. It has been noted to spread in three major patterns: in the continuity type, the symptoms migrate from the initial site of pain to another part of the body; in the mirror-image type, the symptoms spread from one limb to the opposing limb; and in the independent type, the symptoms may even jump to a distant part of the body (5).

CRPS is defined by some experts to progress in three stages involving physiological changes in the affected area; however, this claim has yet to be independently verified (3). During the first stage, the affected area suffers from severe pain along with muscle spasm, joint stiffness, rapid hair growth, and changes in the blood vessels that result in changes in the color and temperature of the skin; this stage is believed to last about three months. The symptoms marking the second stage of CRPS include intensifying pain, swelling, and decreased hair growth, among others; the symptoms that characterize stage one are expected to worsen during stage two. The final phase of CRPS is when irreversible changes occur in the skin and bone; the limbs have very limited movement and may even become contorted, and there is noted atrophy of the muscle.

Even though current research has not yet demonstrated a definite mechanism relating injury and CRPS, scientists have formed a plausible explanation for its occurrence. The fight-or-flight response mechanism is very important for survival. Once it is initiated, the sympathetic nervous system is activated as a response to injury. The firing of the sympathetic nerves causes the contraction of blood vessels in the skin, which forces blood deep into the muscle, enabling the victim to use his or her muscles after an injury to escape from danger; the decreased supply of blood to the skin also reduces blood loss. As this is an "emergency" response activated when the individual is in danger, it is typically shut down fairly shortly after an injury. In individuals who develop CRPS, however, the sympathetic nervous system does not shut down even after an extended period of time. As a consequence of the unregulated sympathetic activity at the site of injury, there is an inflammatory response that causes blood vessels to spasm, which leads to more pain and swelling, which in turn leads to even more pain and even more response (5).

There is no single magic pill for CRPS; therefore early detection and treatment are very important. Treatments for this syndrome involve pain relief and rehabilitation of the limbs (or other body parts) affected. Initially, if the symptoms are mild, non-steroidal anti-inflammatory drugs such as naproxen sodium may be the suggested medication; prescription painkillers may be required for more serious cases. Physical therapy, when employed at an early stage, may improve the mobility and strength of the body parts affected; the earlier the detection of CRPS, the more effective the therapy may be. Psychotherapy may also be considered as part of the patient's treatment, as individuals afflicted with this syndrome may suffer from depression or anxiety, which can make rehabilitation more difficult. Sympathetic nerve blocks are another consideration; they may do wonders for relieving the pain of CRPS, and they may be administered in two ways: the direct blocking of sympathetic receptors, or the placement of an anesthetic next to the spine to block the sympathetic nerves.(3)

CRPS patients have a higher chance of rehabilitation if the syndrome is diagnosed early. Many treatments are often used in conjunction to alleviate the pain caused by the symptoms and to regain mobility of the body parts that have been rendered dystrophic or atrophic by the syndrome. As there is no defined biochemical pathway linking injury and the onset of CRPS, there are no precautions (aside from being extremely careful to avoid injury) that one can take to lower the possibility of its occurrence.

Further investigation into the cause of CRPS and the association between injury and CRPS (and into the spontaneous onset of CRPS without evident injury) is necessary. Drugs also need to be developed to meet the needs of patients with CRPS, as they may be more prone to painkiller addiction than others because of the unclear nature of their disease and their constant need of painkillers to alleviate the pain. Many patients and their families suffer from this debilitating syndrome, and with more research and clinical trials we may develop techniques that can be employed for those in need.

References

(1) National Institute of Neurological Disorders and Stroke on RSDS

(2) Reflex Sympathetic Dystrophy Syndrome

(3) NINDS RSDS fact sheet

(4) Reflex Sympathetic Dystrophy Syndrome Association of America

(5) The Mayo Clinic on CRPS


Hypnosis
Name: Laura Silv
Date: 2004-05-04 22:40:20
Link to this Comment: 9759


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip


In this class, we've talked a lot about the blurred line between reality and what lies beneath. We've talked about dreams and daydreams and fantasy and memories as vivid as if they were being relived. As I thought about all these truths and manipulations thereof, I started thinking about hypnosis as yet another way to confound the mind and further blur the line between reality and not-reality. Most people think of hypnosis as the media presents it: the dark, stuffy living room of an eccentric professor waving a watch in front of your face, or the fringed tent of a gypsy at a carnival. But like most things in the media, this is probably a farce, an extreme exaggeration of a misunderstood practice. People use hypnosis to stop smoking, lose weight, get over their phobias and recover repressed memories, yet so little is known about the process. So I resolved to try to find out the truth about hypnosis – if it works, why it works, and how it works.


The common view is that a hypnotized subject is in a half-asleep state – even the words "hypnotism" and "hypnosis" were taken from hypnos, the Greek word for sleep.(1) Many people think that the subject is controlled by the hypnotist in a completely dominated situation, like Laurence Harvey's character in The Manchurian Candidate. Rather, the Skeptic's Dictionary uses three descriptive qualities to convey the state of hypnosis: "(a) intense concentration, (b) extreme relaxation, and (c) high suggestibility"(2), as well as heightened imagination.(3) HowStuffWorks.com relates the hypnotized state to driving a car, reading a book or watching a movie: "You focus intently on the subject at hand, to the near exclusion of any other thought."(4) Such activities, and the ability to immerse one's self so completely in them, are called by some experts a form of "self-hypnosis".


Subjects are alert and have free will throughout the hypnotic process, but their brains work differently: the brain waves that operate at high levels when one is fully conscious are less active, and those that are active in dreams are more active than in normal consciousness.(5) From this evidence, a school of thought has arisen which holds hypnosis to be an "altered state" of consciousness. Objectors, who believe in the "unconscious reservoir" theory of hypnosis, say that a change in brain chemistry and behavior is not enough evidence to suggest an "altered state". Sneezing causes such changes from normal consciousness, yet it is not considered an alternative state. Rather, "reservoir" followers believe that hypnosis crosses the divide between the conscious and the subconscious, where you have an entire lifetime's (or more) worth of memories at your disposal, ones which are not available to the conscious mind.(6) People under hypnosis can recall past lives or repressed and forgotten memories, even ones from before one had the faculties for conscious long-term memory.


Most commonly, one will hear about people going to hypnotists to be cured of certain minor psychiatric diseases – phobias, cigarette cravings, over-eating, et cetera. This is called hypnotherapy.(7) In this practice, the hypnotist also acts as a therapist, helping you find the subconscious root of your fears or cravings and work through them, as one would in a normal therapy session, so that when you "wake up", you will no longer be afraid of snakes or want to smoke.


The power of the mind is very important to hypnotism. One must be able to access the information "reservoir" yet still cooperate with the hypnotist. A vivid imagination helps with one's ability to be hypnotized and the results it yields, and one who does not believe in hypnosis cannot be hypnotized at all.(8) The hypnotist's suggestions can guide the subject but not control him. However, when imagination and suggestibility are so central to the process, one's imagination can sometimes go outside the scope of the subconscious into falsehood. A subject can recall false memories or past dreams which never really happened. Because such cases of false recollection are impossible to completely distinguish from cases where true memories are recalled, hypnotism is still regarded as a non-science and viewed with skepticism by many.(9) Also, the recovery of repressed traumatic events can be detrimental to the subject. For these and other reasons, few companies and establishments use hypnotism, which is why it remains a primarily private enterprise, yet one relatively easy to find. There are billboards, radio commercials, over one million websites – even a touring program where you can become a certified hypnotist.(10) Clearly it's a process which remains in demand, no matter what the skeptics have to say about it.

References


1) How Hypnosis Works


2) The Skeptic's Dictionary


3) How Hypnosis Works


4) How Hypnosis Works


5) How Hypnosis Works

6) The Skeptic's Dictionary


7) How Hypnosis Works


8) The Skeptic's Dictionary


9) The Skeptic's Dictionary


10) Hypnosis.com


Army of Barbies: The New Culture of Narcissism
Name: Michelle S
Date: 2004-05-05 09:18:21
Link to this Comment: 9794


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Beauty has no obvious use; nor is there any cultural necessity for it. Yet civilization could not do without it."
-Sigmund Freud (Civilization and Its Discontents)

As Freud expressed so eloquently, beauty is integral to the functioning of our society. Beauty is one of the factors on which we base our preference for mates. Recently, there has been a deluge of products, programs, surgeries, and machines to help people enhance their beauty. Mainstream culture has become obsessed with conforming to a "beauty ideal" perpetuated by the media's preoccupation with aesthetic perfection. Plastic surgery is one of the major architects of this beauty ideal. The inflation of beauty ideals through media and culture, driven by plastic surgery, may produce an unrealistic beauty standard and shrink the pool of possible mates. This altered standard will in turn affect mate selection and thus reproduction and overall population.

In recent years, the human assessment of attractiveness, and specifically human beauty standards, has been studied extensively. However, few academics have analyzed the importance of sexual selection in the determination of beauty. Sexual selection arises from competition among individuals for mates. Antlers on stags, peacock tails, and frog air sacs are among the traits that have evolved through such competition (2). These traits do not enhance the survival prospects of individuals, but improve the likelihood of obtaining a mate. They support the idea that aesthetically pleasing characteristics may be directly correlated with the endurance and physical health of animals (2). Therefore, reproductive success and parasite resistance could be related to the relative attractiveness of many living organisms (3).

Current empirical and theoretical studies reveal that mate preference is founded upon visual, vocal, and chemical cues (2). Many secondary sexual characteristics, such as facial and body features, develop under the influence of sex hormones. In analyses of female beauty, it appears that youth, fertility, and health symbolize ideal beauty (2). Characteristics that influence attraction assessments include form of face and body, structure, beauty ideals, voices, age, decoration, cosmetics, body scent, hair color, hairstyles, and both cultural and temporal dynamics (2). Neotenous facial features in females, such as a small lower face, lower jaw, and nose, and full large lips, are some of the most highly admired traits (2).

Other significant physical traits associated with reproduction and the capacity for survival include levels of estrogen, amount of body fat, and measurements of hips, waists, and breasts. High estrogen levels are beneficial because of the ability to cope with toxic metabolites, which indicates stronger general immunity to illness (3). Waist and hip ratios are also indications of health because they are known to be a measure of a woman's ability to produce male offspring. The size of the female breasts, positive pelvic tilt, and waist-to-hip ratio are so favorable because of their strong indication of a woman's reproductive success. A fat to body mass ratio of 1:4 was found to best maintain stable female sex steroids. In order to strengthen this signal, fat must be distributed over specific areas, such as the buttocks and breasts. Overall, the optimal female phenotype is a symmetric one, since these characteristics reinforce the idea of reproductive performance. Bilateral body symmetry represents quality of development (3). Lack of balance and evenness in proportions is considered less than ideal since it may cause health and performance problems in the future. The combination of these characteristics establishes the physical aspect of the ideal beauty standard.

The chemical perspective of beauty is much more subtle. Pheromones are chemical signals emitted by an individual that influence the physiology or behavior of other animals of the same species. Pheromones have been shown to influence the perception of beauty. Specifically, human body odor has been shown to influence mate choices because of its association with the immune system (3). Chemical signals carried by odor travel through brain pathways and can directly affect emotional awareness. Odors have the ability to produce a positive or negative mood or feeling, thereby directly modifying social perception of others (3). The possible effect of pheromones on mood makes body odor a likely mechanism through which attraction assessments can be made (3). A study of pheromones showed a positive correlation between women's attraction to male body scent and the symmetry of the male's features: women preferred the body scent of males who possessed more symmetrical features (3). Because secondary sexual characteristics, such as facial and bodily attractiveness, are health certifications, pheromone signals likewise signify phenotypic and genetic quality.

Clearly, beauty is an important feature of human life on many levels. It is embedded in culture and society. Beauty, and the pursuit of ideal beauty as shaped by what is deemed attractive, has become an obsession in popular culture. Mainstream media has determined what traits and physical features are most attractive, many of which are valued for a brief moment before the public tires of them and a new feature is publicized.

In order to increase beauty, humans have incorporated the use of human decoration, which can manipulate or alter the perception of beauty. An extreme form of human decoration is cosmetic surgery. The fascination with drastic feature alteration, combined with advancing technology, has triggered an increase in the popularity of cosmetic surgery. These surgeries restore function and/or improve the appearance of tissue (4). Cosmetic surgery provides individuals with a method to alter characteristics and obtain larger breasts, better waist-to-hip ratios, fuller lips, or improved facial symmetry. These procedures effectively produce the illusion that an individual possesses the reproductive and immunity attributes desired by mates. This "created" beauty gives the appearance of health to members of the opposite sex.

Outward appearances are increasingly important in society. This emphasis on physical allure creates an elaborate quest for beauty. Beauty magazines spotlight famous actors and models who embody complete beauty. Reality television shows have begun to capitalize on the popularity of plastic surgery. Music Television's "I Want a Famous Face" documents a new and disturbing phenomenon: the desire of Generation X to mimic and behave like favorite celebrities who are adored by the public. Fox Television's "The Swan" invites 40 average-looking women to undergo severe physical alterations through a combination of plastic surgery, severe dieting, and personal training, monitored by doctors, dieticians, and physical trainers. The show establishes a Darwinian perspective of beauty: contestants are eliminated each week, and the finalists compete in a beauty pageant. "Extreme Makeover," from American Broadcasting Television, is a similar program which interviews average individuals and changes their unflattering features with the help of cosmetic dentistry, plastic surgery, and exercise.

The problem with these shows is that they make plastic surgery seem commonplace. Viewers come to feel the changes are quick, uncomplicated, and achievable with little pain. The American Society of Plastic Surgeons is concerned that these shows are creating unrealistic expectations of plastic surgery. Television creates a dangerous audience bias because individuals appear more secure, contented, and satisfied with their lives because of these cosmetic changes. Cosmetic plastic surgery increased by 32% from 2002 to 2003 (4), with more than 8.7 million procedures conducted within the United States alone (4).

As a result of the media's emphasis on women's beauty and enhanced features, men and their sexual selection will also be manipulated (2). The "Farrah effect" is the phenomenon in which men raise their personal beauty standard after observing the media's portrayal of beautiful women (2). If beauty does symbolize health, and humans can artificially create beauty (through plastic surgery, extreme dieting, and personal training), how will this affect sexual selection? The artificial construction of beauty will influence it directly. As more individuals use synthetic processes to improve their appearance, the beauty standard will adapt by absorbing enhanced feature upon enhanced feature. The media will raise the beauty standard until it becomes impracticable to achieve. As the beauty standard increases at an increasing rate, the pool of possible mate choices diminishes, which decreases the chance of finding a possible mate.

Society is constantly adjusting its beauty standards. Modern media's emphasis on beauty and its relationship with status, success, and quality of life creates today's beauty standards. The media has saturated the public with the processes and effects of plastic surgery. As beauty enhancements generate status and become more popular and commonplace, a particular enhancement will lose its advantage once too many people adopt it. Therefore, a negatively reinforced cycle is created. Once too many people enhance a specific feature, the original advantage is lost through overuse, and another enhancement is celebrated. This will continue until individuals have undergone so much cosmetic surgery that they no longer retain any of their natural features and are primarily composed of synthetic materials.

These plastic people will contribute to ever-rising beauty standards. Attractiveness standards will be raised by idealizing beauty enhancements, and consequently these synthetic individuals will regulate beauty. Ultimately, this will generate unreal expectations of mates. If the beauty standard, which is largely manufactured by the media, is perceived as more beautiful than naturally produced traits, mates will no longer base selection upon realistic standards. This will eventually create a higher proportion of single individuals, possibly resulting in a significant reduction in offspring production. As the media continues to generate an unrealistic ideal, and a portion of the public continues to dedicate themselves to this ideal, mate selection will eventually incorporate impractical preconceived notions and prejudices.

It is important to remember that these expectations are based only upon the visual element of beauty. A multitude of other dimensions are involved in mate selection. Pheromones, movements, and vocalization all play a part. On a less scientific level, personality, similar interests, hobbies, and social status can also influence the selection process considerably. Consequently, although the beauty standard may evolve drastically as a result of a collective change, the desire for perfect physical beauty will be balanced by other, less scientific dimensions of mate selection, such as personality, disposition, and a shared future outlook. In addition, the discrepancy between plastic beauty and pheromones may also lower the elevated beauty standard on a more subconscious level. A recent study found that the symmetry of men's faces was directly proportional to how attractive females judged their respective body odors to be. This supports the idea that non-visual, fixed characteristics such as pheromones and body movement counter the rising assessment of beauty. The inability of humans to change certain aspects of themselves helps to keep the beauty standard more constant. It prevents mating standards from rising to a level that would negatively affect procreation.

It seems that, in theory, the media's push for perfect people will lead to a saturation point. There will come a point at which no higher level of beauty exists and no further enhancement can be created. By the cycling theory mentioned previously, beauty enhancement is based on gaining desired features, with each desired enhancement replaced by the next one. As enhanced features become commonplace, natural beauty becomes the only option for the next fad. The media has inadvertently created a pattern encompassing the original cycle theory: once synthetic beauty is too widespread, rare natural features will be desired. Natural features will begin to gain popularity, characteristic by characteristic, thereby lowering the beauty standard. This lowering will progress to a low saturation point, and the media's portrayal of these new standards will eventually result in the desire for enhanced features once again. An equilibrium between these opposing states will be established. Sexual selection and the desire for procreation will remain intact. The media's initial role in promoting synthetically enhanced features will eventually morph into one promoting natural beauty. Although society will continue to be exposed to the extremities of plastic surgery, the combination of chemical and personality characteristics will counteract the influence of media and society, and effectively stabilize mate selection and procreation.

References

1) The Culture of Beauty , General Site about Plastic Surgery Rise

2) Darwinian Aesthetics: sexual selection and the biology of beauty. , Journal article concerning beauty (General qualitative discussion)

3) (Homo Sapien) Facial Attractiveness and Sexual Selection: The Role of Symmetry and Averageness. , More quantitative discussion of beauty

4) Cosmetic Surgery on the Rise , Updated information about plastic surgery

5) Physical Beauty Involves More than Good Looks

6) The Psychoanalytical Construction of Beauty

7) 10 Cosmetic Plastic Surgery Predictions for 2004-From the American Society for Aesthetic Plastic Surgery

8) 2003 Cosmetic Surgery Statistics Show Strong Increases , Statistics about plastic surgery


Latah
Name: Amanda Gle
Date: 2004-05-05 18:09:22
Link to this Comment: 9796


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Culture-bound syndromes are found in numerous societies around the world, from America to Africa to the Pacific. These syndromes are recognized through the social and cultural patterns of their various episodes. "The political nature of deviance designations [are] relative to whose interests are being served by such labels." Also, "the conspicuous gender aspects of those defined as deviant" are important in determining culture-bound syndromes (1).


In Malaysia, a culture-bound syndrome called latah is found. Latah is a "behavior complex built around hyperstartling" (2). Latah is the globally experienced syndrome of hyperstartling taken to an extreme. The word latah means "ticklish" in Malay (3) (pg. 13). Latah is a culture-bound syndrome as there are "particular cultural conditions that are necessary for the occurrence of that syndrome" (4). While it is a culture-bound syndrome, I feel that evidence shows it is also a neurological illness.


Latah occurs in the performance-oriented Malaysian society as a result of repeated scares and poking. Pawang Lumun, an indigenous healer, stated, "If we don't startle them with pokes in the ribs, they don't become latah. If we keep poking a normal person like that, he'll become a latah. It doesn't take long: five days of poking over and over, little by little a person gets quite flustered." (5) (pg. 1) While anyone can become latah through this sort of repeated irritation, more women than men are latah. Specifically, these women tend to be lonely older women whose husbands work often or have passed away. Perhaps the women who become latah are pushed to the edge of society and are subconsciously looking for attention and a place of importance. That more women are latahs could also be because people are more afraid to poke men than women, for if a strong man were to lash out he could hurt someone badly; thus it is safer to startle women.


The Malaysian villages in which latah occurs are rural, though there are a few unexpected cases of latah in the city. These villages' societies are based on manual labor, specifically boat life and farming, and the villagers look for entertainment in performance, including latah. There are three types of latah: immediate response (when one is startled), attention-capture latah (mimicking others' actions and obeying others), and role latah (a combination of the two) (5) (pg. 5). The three types demonstrate various forms of subconscious and conscious attention-seeking. Latah allows people to be the center of attention in the entertainment world. "Its manifestations take two main forms, a startle reaction often accompanied by coprolalia, and a compulsive mimicry, persisting despite the victim's conscious desire to stop" (3) (pg. 3). Despite the desire to stop, there might be an unconscious desire to perform and gain attention.


To understand why latah is a culture-bound illness, one must understand a few key points. One is that everyone startles. If a loud alarm goes off suddenly or a gun is shot, one's heart will race. Sir Hugh Clifford stated, "...anyone who desires to really account for this affliction must, I am convinced, begin by analyzing and examining and explaining the pathology of the common start or 'jump' to which we are all in a lesser or greater degree subject" (5) (pg. 195). Startling is a natural reflex that can lead one to protect oneself. Startle reactions happen to everyone. They can include the "swearing or the repetition of a purposeless phrase [that] can occur in startled normal persons" (3) (pg. 5). Not only can a startle reaction come from a noise such as a loud bang or a shot, but also "the startle stimulus can be a sudden realization of a social situation," such as one's fly being undone or something green in one's teeth (3) (pg. 5). These social situations can cause people to swear or make repetitive startled motions.


These startled motions can be found in other neurological illnesses, such as catatonic schizophrenia and Gilles de la Tourette's syndrome, also known as maladie des tics. People with brain damage also have startle reactions; theirs are the largest, which might be "due to loss of a sense of wholeness" of the self (3) (pg. 5). Schizophrenics are similar. Tourette's syndrome is the most similar to latah and used to be thought of as the same condition. They differ in that Tourette's begins to manifest itself in childhood with tics and verbal outbursts, whereas latah usually starts later in life; also, Tourette's syndrome does not require a startle for an outburst. Despite these similar conditions, latah is not found elsewhere in the world.


A second point is that, in comparison to the Western world, the Malaysian culture allows one to become latah. The continuous poking and startling makes a person jumpy. If other cultures encouraged such aggravation, including the torment of poking, and if almost any person were startled repeatedly, in the end he or she would share traits with those who are latah. For example, an American will swear or jump if surprised, as that is one of the automatic responses to an external stimulus; but those who are not startled repeatedly will not perform when startled, or even continue to swear. Despite this, there is "no obvious connection between its manifestations and the beliefs of the people among whom it mainly occurs" (3) (pg. 3). The beliefs do not cause the disease, but they can influence the culture that allows it. Doctors say it is a "psychiatric syndrome that psychoanalytic teaching would view (like most other syndromes) as a form of regression to an earlier developmental phase" (3) (pg. 6). Does the Malaysian culture encourage children to play games that make them become latah?


Hyper-suggestibility may occur in places that encourage hypnotism and games relating to it. One game played by Malaysian children is this:
In the game known as main hantu musang [the polecat spirit game] the principal player goes on hands and knees, is covered by a white sheet, and is said to be hypnotized into unconsciousness by the others who march round and round him, stroking and patting him and repeating the following words [words omitted]. After, the player is said to be possessed and is quite unconscious of his humanity. He chases the others, climbs up trees, leaps from branch to branch and so far forgets himself as to run the risk of injury by venturing on boughs too frail to bear his weight. In the end he is called to his senses by being addressed repeatedly by his name (3) (pp. 6-7).

This form of hyper-suggestibility probably has implications later in life, such as latah.


One must also examine performance latah to see the evidence that latah is culture-bound. There is reason to believe that in role latah, people self-stimulate using an "ick" sound when they are not fulfilling expectations and feel devalued; they use the latah to perform to others' standards. There are many instances of latah behavior that show no signs of startling, and yet because the person is a latah, she will be excused for whatever is done. Western societies have other ways of doing this, such as being the class clown in school or the person everyone goes to for a joke. In America, the people who fulfill the "latah role" are of every age, from misbehaving children to crazy old ladies.


Though there are cases of those under latah becoming completely obedient, doing such acts as undressing or doing work for another, these cases are extreme and rare. They are also completely understandable; such episodes of obedience would be explained in America as temporary insanity. For example, there was a latah woman in Malaysia who, after being startled, was told to kill. She was holding a knife at the time, and her immediate reaction after the startling was to kill the woman next to her. In America, a good lawyer would have had the woman plead temporary insanity. When the case came to court, the judge had a plank with sharp nails sticking out of it.
The judge said, "Now we'll test whether you're a real latah." A policeman came up behind the latah and poked her in the ribs, and he shouted, "Slap those nails!" Right away the old lady slapped down on those nails, and blood began to gush from her hand. The judge had to agree: "Truly, this woman is a real latah. This old woman is not guilty; the guilty one is the person who poked her." So the woman who poked the latah was the one who was sentenced to be hanged (1).

In Malaysia, her defense included the fact that she was a latah and that the hyperstartling syndrome caused her to commit the murder.


Latah is a neurological illness. Doctors have found seven factors that contribute to the development of latah, especially in women.
1. Repressed wishes, probably of an infantile sexual character, adequately cathected and seeking an outlet.
2. Stimulus generalization leading to nonsexual stimuli being misinterpreted as sexual.
3. A masochistic tendency resulting in a failure to defend against the provocative stimuli and perhaps provoking such stimuli instead.
4. Dissociative child-rearing practices conducing to hyper-suggestibility.
5. The rewarding of hyper-suggestibility in adults, as by the introduction of beneficial but little-understood knowledge that could most rapidly be mastered by rote learning.
6. Suppression of lengthier dissociations or trance states through which the repressed wishes could obtain fuller expression.
7. An inflexibility of impulse control that leads to exaggerated startle reactions (such as occur also in catatonia) and thence to temporary suspension of inhibitions when startle occurs (3) (pp. 16-17).

Some of these factors would be expected to produce neurological disorders or neuroses in the West and other countries; others would not, which is what makes this neurological disorder culture-bound. Latah also has a list of symptoms, which frames it as medical. Symptoms include "coprolalia and coprophrasia [that] describe verbal obscenity, mimesis (mimicry), echolalia for verbal mimicry, echomamia and hyperimitation (mimicking the general behavior of another), and echopraxia, echopraxis, or echokinesis (body mimicry)" (1). These symptoms, along with fatigue, are all medical symptoms found in other disorders. This leads me to conclude that latah is a neurological disorder.


Each case of hyperstartling around the world is different, largely because of the culture in which it is found. "Who may startle who, how it happens, where and when depend on the culture" (2). In this sense, latah itself is culture-bound: it is the Malay way of dealing with hyperstartling persons.


We, as Westerners, need to ask: should we classify latah, a culture-bound syndrome, in Western medical terms? The clinical evidence for calling it an illness strikes most Western doctors as weak. And while latah involves mental abnormality, there must be a reason that those in Malaysian culture do not classify it as such, but instead treat it as a socially acceptable illness.

References

1) Bartholomew, Dr. Robert E. Exotic Deviance: Medicalizing Cultural Idioms—From Strangeness to Illness. 2000.


2) Simons, Ronald C., prod. & direct. Latah: A Culture-Specific Elaboration of the Startle Reflex. Indiana University Audiovisual Center, Bloomington, Ind: 1983.


3) Lebra, William P., ed. Culture-Bound Syndromes, Ethnopsychiatry, and Alternative Therapies: Volume IV of Mental Health Research in Asia and the Pacific. 1976.


4) Pashigian, Melissa, PhD. Class notes, Medical Anthropology. Bryn Mawr College, 11/5/02.


5) Simons, Ronald C. Boo! Culture, Experience, and the Startle Reflex. 1996.


Behavioral Response to Smell: A closer look at the
Name: Sarah Cald
Date: 2004-05-05 19:30:26
Link to this Comment: 9797



Biology 202
2004 First Web Paper
On Serendip

Humans utilize several senses to gather information about the world around us: sight, hearing, smell, taste and touch. Of the five senses, smell is often viewed as of lesser importance. In fact, people spend most of their lives trying to cover up and hide smells. Case in point, witness the rampant success of the deodorant business. Smell cannot be avoided; it is ubiquitous. Although research has furthered our knowledge of the mechanism of olfaction, very little is known about how smell and behavior are linked. This paper will investigate two questions. Firstly, how is olfaction linked to behavior, and secondly what is responsible for the behavioral responses to smell?

One would think that behavior and smell are closely related; what else could explain the varied responses to certain smells? Some people like the smell of gasoline, while others are repulsed by it. Is there any scientific evidence to support this intuition? Indeed, research has shown that smell and behavior are closely linked. It is well documented that airborne chemicals influence our behavior without our being aware of smelling anything at all. Researchers have recently obtained brain-scan images showing that certain brain structures, including those that control emotion and memory, become activated in response to an airborne compound at a concentration so low that we have no conscious awareness of it (1). Growing evidence for the existence of an unconscious "sixth sense" comes from the study of pheromones.

Pheromones are airborne chemicals produced by one animal and detected by another of the same species, and they are a common form of communication between animals. Pheromones influence the behavior of the animal that senses them. For example, male pigs release androsterone when they breathe, which makes female pigs eager to mate (1). Although research suggests that humans have lost the ability to receive such odor molecules through the main olfactory system, there is evidence for another organ, specialized for pheromone detection in other animals, that may function in humans as well. This organ, the vomeronasal organ (VNO), is connected to the nasal passage by a small opening about an inch behind the nostrils. Recent evidence suggests that the VNO in humans serves to send pheromone-carried signals to the brain: women rate androsterone, the male hormone, as more "pleasant-smelling" near ovulation than at the beginning or end of their cycle (1).

A study conducted at Brown University also yields evidence that odor and behavior are closely linked. More specifically, this study suggests that emotions can become conditioned to odors and subsequently influence behavior. In the study, 63 female undergraduates were asked to play a computer game that, unbeknownst to them, was designed so that they could not win. During that time, the students were exposed to a novel odor. They were then given a 20-minute break and sent to a different room to take a set of word tests. There were three rooms: one with the same scent as the room where they played the game, one with a different novel scent, and one with no scent. Participants who performed the word tests in the room with the same scent as the computer-game room exhibited increased frustration compared to the groups exposed to different smells (2).

Perhaps the greatest evidence that smell affects behavior is found in menstrual cycle synchrony. First reported in a college dormitory, this phenomenon involves the involuntary synchronization of menstrual cycles among women who live together as a result of pheromone signals (3). While it is highly likely that humans exude pheromones, no one has ever isolated and identified one.

All odorants in humans are processed through the same olfactory pathway. Odorants are collected in the sensory epithelium, located in the upper regions of the nasal cavity (4). Odorant molecules are absorbed into the mucus layer of the epithelium, where they reach receptor cells whose cilia line the nasal cavity. These cilia each bear receptor proteins that are specific to certain odorant molecules (4). Binding of an odorant to a receptor protein activates a second-messenger pathway, of which two are known to date. The more common one involves activation of the enzyme adenylyl cyclase upon odorant binding (5). This enzyme catalyzes the production of cyclic AMP (cAMP); the rise in cAMP levels opens ligand-gated sodium channels, depolarizing the membrane and triggering an action potential (5). These electrical signals are carried by olfactory receptor neurons to the olfactory bulb, which then relays the information to the cerebral cortex, resulting in the sensory perception of smell (5). On average, humans can recognize up to 10,000 separate odors (6) yet have only about 1,000 different olfactory receptor proteins (7). It is within the olfactory bulb that combinations of activated receptors are organized into signals for specific smells (7).
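
The cascade just described can be caricatured as a short program. This is only a toy sketch, not a biophysical model: the function name, the affinity values, and the millivolt numbers are all invented for illustration, and the real pathway is continuous chemistry rather than discrete steps.

```python
def olfactory_transduction(odorant, receptor_specificity, threshold_mv=-55.0):
    """Caricature of the cAMP second-messenger cascade described above.

    Steps: odorant binds a receptor protein -> adenylyl cyclase activates ->
    cAMP rises -> cAMP-gated channels open -> the membrane depolarizes ->
    an action potential fires if threshold is crossed.
    All numbers are invented for illustration, not physiological values.
    """
    resting_mv = -70.0
    if odorant not in receptor_specificity:
        # No matching receptor protein: no cascade, no depolarization.
        return {"camp_level": 0.0, "fired": False}
    affinity = receptor_specificity[odorant]      # binding strength, 0..1
    camp_level = 10.0 * affinity                  # arbitrary units of cAMP
    membrane_mv = resting_mv + 2.0 * camp_level   # channel-driven depolarization
    return {"camp_level": camp_level, "fired": membrane_mv >= threshold_mv}

# A hypothetical receptor that binds octanol strongly and octanoic acid weakly:
receptor = {"octanol": 0.9, "octanoic_acid": 0.2}
print(olfactory_transduction("octanol", receptor)["fired"])        # prints True
print(olfactory_transduction("octanoic_acid", receptor)["fired"])  # prints False
```

The point of the sketch is only the shape of the causal chain: binding strength sets cAMP, cAMP sets depolarization, and only a sufficient depolarization produces a spike.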

Can knowledge of the mechanism of olfaction help identify what accounts for behavioral responses to smells? Research involving the olfactory bulb and the memory of smells suggests that this organ may play some role in the relationship between behavior and smell. It is well known that sensory cells in the nose die and are replaced by new nerve cells every 30-60 days (8). This fact led investigators to question how smells, like that of an apple pie baking in the oven, are remembered by animals. Scientists concluded that new nerve cells send out long extensions that find their way to the same spots in the olfactory bulb where the preceding nerve cells connected. In this way, the "road map" of odors remains constant throughout life (8). Although this finding is fundamental to understanding how odor information is encoded to the brain, it remains unclear how this information is decoded.

As I learn more about the role of the olfactory bulb, I become more convinced that it plays some part in the behavioral response to smell. So far, the olfactory bulb has been shown to function as the center where combinations of odorant signals are formed before being sent to the brain; it has also been linked to the memory of smells, serving as the "endpoint" of the road map of olfaction. Further research has shown that signals from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought, but also to the limbic system, which generates emotional feelings (9). These findings, as I interpret them, suggest that the olfactory bulb's role in behavior may be to organize input signals from odorants and re-format them into new output signals that the rest of the brain can interpret. If so, the olfactory bulb must have some consistent method of analyzing odors; otherwise, how could the smell of an orange be perceived the same way each time? For example, octanol, an ingredient in natural gas and petroleum, has an orange- and rose-like smell. Changing one atom in the molecule's structure yields octanoic acid, which smells rancid and sweaty. The olfactory bulb must be highly specific to each of these odorant combinations in order to elicit such drastically different smell descriptions, which means it must reliably form the same combination code from a given set of input signals. Richard Axel, M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this concept best:

"The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15, and 54, so that must be jasmine (7)."

The olfactory bulb must be able to consistently process signals 1, 15 and 54 as those of jasmine.
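
Axel's description amounts to a combinatorial code: a set of active receptor positions maps to one perceived odor. A minimal sketch of that idea follows; the jasmine combination is taken from the quotation, while every other receptor set and odor label is hypothetical.

```python
# Map a combination of active receptor positions in the olfactory bulb to a
# perceived odor. The jasmine entry follows the quoted example; the other
# entries are invented for illustration.
ODOR_CODE = {
    frozenset({1, 15, 54}): "jasmine",
    frozenset({1, 15}): "orange",      # hypothetical assignment
    frozenset({2, 15, 54}): "rancid",  # hypothetical: one receptor swapped
}

def perceive(active_receptors):
    """Decode an activation pattern; order is irrelevant, unknown sets fail."""
    return ODOR_CODE.get(frozenset(active_receptors), "unrecognized")

print(perceive([1, 15, 54]))  # prints jasmine
print(perceive([2, 15, 54]))  # prints rancid: one changed receptor flips the percept
```

One attraction of the combinatorial account is capacity: with roughly 1,000 receptor types, even small combinations yield vastly more distinct patterns than the 10,000 odors humans reportedly discriminate.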

A finding by one of Axel's colleagues, Linda Buck, suggests that odorant concentration may also shape the response. When indole, a substance found in both coal tar and perfumes, is concentrated, it smells horrible; when diluted, it gives off a fragrance similar to that of jasmine (8). This suggests to me that the olfactory bulb may be overloaded with signals when odors become highly concentrated, and that this excessive flood of input may prevent accurate olfaction. Just think of how you feel after walking down the perfume aisle at a department store: you are unable to smell accurately. Along these lines, the olfactory bulb may serve as the first step in behavioral response. Perhaps different people have different thresholds for receiving olfactory signals, so that people respond differently to the same level of an odor depending on how overloaded their olfactory bulbs are. It also seems reasonable to propose that differences between individuals' olfactory bulbs produce different interpretations of smell; in fact, the full composition of the olfactory bulb remains unknown, and it may well differ among individuals. While these are just hypotheses formulated from what I have learned thus far about olfaction, they are worth exploring further.

Our sense of smell is nothing to sniff at; it serves many functions in our lives and affects our behavior, emotions and memory. While investigation into the mechanism of olfaction has proved beneficial in understanding how signals are sent to the brain, there is still very little known about how those signals are interpreted and received by the brain. I am certain that by understanding how olfactory signals are interpreted in the brain, we can learn more about why certain odors elicit different behaviors in people. For now, it seems as though the olfactory bulb may have a larger role in behavior than initially suspected. Not just an organization center, the olfactory bulb may also function as the beginning phase of behavioral response to smell.

References

1) A Secret Sense in the Human Nose: Pheromones and Mammals
2) Odors Summon Emotion and Influence Behavior, New Study Says
3) Pheromones: The Smell of Beauty
4) Monell Chemical Senses Center – An Overview of Olfaction
5) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.
6) The Mystery of Smell: The Vivid World of Odors
7) The Mystery of Smell: How Rats and Mice – and Probably Humans – Recognize Odors
8) Researchers Sniff Out Secrets of Smell
9) Sensing Smell


Night Shift Effects
Name: Elizabeth
Date: 2004-05-05 22:19:57
Link to this Comment: 9798



Biology 202
2004 First Web Paper
On Serendip

In emergency services, the night shift from 19:00 to 06:00 hours must be staffed. Those who work these shifts are familiar with the consequences of staying up all night and then having to go to class or to another job during the day. EMTs who work all night experience drowsiness, which is believed to coincide with poor reaction times (1). This paper will examine the causes of the slowed reaction times and sleepiness found in those who work the night shift.

Circadian rhythms are cycles in physiological functions such as sleep (2). Most circadian rhythms are controlled by the suprachiasmatic nucleus, or SCN (2). The SCN is a pair of brain structures that together contain about 20,000 neurons, located in the hypothalamus just above where the optic nerves cross (2). Light reaching photoreceptors in the retina generates signals that travel along the optic nerve to the SCN (2). This clock mechanism controls sleep patterns in response to light cues through cyclical changes in the concentration of the hormone melatonin (3). Neurons from the SCN connect to the pineal gland, where melatonin is produced (4). When signaled, melatonin is released into the blood and acts on receptors throughout the brain and body to alter functions associated with the sleep/wake cycle. The body's melatonin level normally rises after the sun sets and acts to inhibit systems in order to promote sleep (3). Melatonin can affect multiple body systems, including the reticular activating system (RAS), which is responsible for an individual's alertness.

The RAS is composed of parts of the medulla oblongata, the pons, and the midbrain (5). This region receives sensory signals and stimuli from the environment and other regions of the brain and coordinates these signals to produce an output (5). In the reticular activating system, gamma-aminobutyric acid (GABA) receptors are inhibitory (6). Molecules binding to these receptors cause chloride ion channels to open; the influx of chloride ions hyperpolarizes the postsynaptic neuron and prevents a signal from being passed on (7). When regions of the RAS are active, nerve impulses travel to other areas of the brain, increasing activity associated with consciousness, such as faster reaction times. Among other areas, the RAS sends signals to the motor cortex (8). The motor cortex coordinates the movement of muscles in response to the sensory inputs the brain receives (9). If the RAS is slow to excite its postsynaptic neurons projecting to the motor cortex, because molecules are stimulating its GABA receptors, the central nervous system will take longer to send action potentials to motor neurons and produce a movement.
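
The inhibitory mechanism described above can be caricatured with a toy threshold neuron. All the millivolt values here are invented for illustration; the only point is that opening chloride channels moves the membrane away from the firing threshold, so the same excitatory input no longer gets through.

```python
def relay_fires(excitatory_mv, gaba_bound, threshold_mv=-55.0):
    """Toy threshold neuron for the RAS relay described above.

    Excitatory input depolarizes the resting membrane; if a ligand occupies
    the GABA receptor, open chloride channels hyperpolarize the membrane,
    pulling it away from the firing threshold. Millivolt values are
    invented for illustration, not measurements.
    """
    resting_mv = -70.0
    chloride_mv = -10.0 if gaba_bound else 0.0  # inhibitory chloride current
    membrane_mv = resting_mv + excitatory_mv + chloride_mv
    return membrane_mv >= threshold_mv

# The same sensory input passes the relay when GABA receptors are free...
print(relay_fires(excitatory_mv=18.0, gaba_bound=False))  # prints True
# ...but fails to fire when they are occupied (as proposed for melatonin).
print(relay_fires(excitatory_mv=18.0, gaba_bound=True))   # prints False
```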

Melatonin may allosterically bind to GABA receptors, producing an inhibitory effect in the RAS that causes the increased drowsiness and poor reaction times observed in night shift EMTs (10). At night, the SCN signals the increased production of melatonin because no sunlight is present. Because of this greater concentration, enough melatonin is present to bind to GABA receptors in the RAS (10). During the night shift, individuals remain awake but the SCN continually signals to sleep by the presence of melatonin. This continued presence of melatonin inhibits neurons and leads to a drowsy feeling and diminished response times.

To test the theory that drowsiness over one of my night shifts corresponds to a decrease in my reaction times, I slept approximately eight hours and then tested my reaction times every two hours from the time I woke up through the end of my shift. The day I chose to collect data was sunny, so that my retinas would be able to send the sensory information of sunlight to my SCN. I stayed up all night for my eleven-hour shift (19:00-06:00) and tested my reaction times on an internet test site (11). The test involved hitting a stop button with a mouse cursor upon seeing the screen change color from white to dark pink. The screen changed at varying times, to reduce the possibility of conditioning to a fixed interval. Results are included in Table 1. I believe that if a sensory signal, such as a changing screen color, is received by the neurons in the retina at night, the signal is processed through the RAS more slowly due to the presence of melatonin: the postsynaptic neuron may not signal the motor cortex as quickly, and the motor cortex must then send an action potential to muscle neurons to produce the observed output of clicking the mouse button.
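
Since the web test itself is not reproduced here, the following is a hypothetical stand-in for its logic: a random delay before each stimulus defeats anticipation, and reaction times are averaged per sitting. The function names and the sample numbers are invented for illustration, not data from the study.

```python
import random

def run_session(respond, n_trials=5, seed=0):
    """Hypothetical stand-in for the color-change reaction test described above.

    Each trial waits a random interval before the 'screen changes color'
    (so the subject cannot be conditioned to a fixed delay); `respond` stands
    in for the subject and returns a reaction time in milliseconds. Returns
    the session's mean reaction time. This is not the actual web test's code.
    """
    rng = random.Random(seed)
    times_ms = []
    for _ in range(n_trials):
        rng.uniform(1.0, 4.0)  # random stimulus delay in seconds; defeats anticipation
        times_ms.append(respond())
    return sum(times_ms) / len(times_ms)

# A hypothetical alert sitting versus a drowsy 02:00 sitting:
alert_mean = run_session(lambda: 250.0)
drowsy_mean = run_session(lambda: 340.0)
print(drowsy_mean - alert_mean)  # prints 90.0
```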

Table 1.

As this test shows, poorer reaction times are associated with working a night shift. The RAS receives the visual stimulus of the screen change via the optic nerve, but its neurons take longer to relay this sensory stimulus to the motor cortex due to the presence of melatonin. Reaction times appear to remain constant until an increase around 02:00, which was also associated with a feeling of sleepiness. This observation is consistent with the hypothesis that the RAS controls both alertness and the relaying of sensory signals to the motor cortex: if the RAS is responsible for both, increased melatonin at GABA receptors in this region should affect both at approximately the same time. Once the sun has set, reaction times and alertness may not be immediately affected, because my rhythms are set to my normal sleep pattern of falling asleep between 01:00 and 02:00; melatonin release may be delayed until this time. There may also be a delay of several hours before sufficient melatonin accumulates in the bloodstream to affect the GABA receptors. Reaction time improvements between 06:00 and 08:00 coincide with sunrise and an increased feeling of alertness. This may be because the SCN represses melatonin production when the optic nerve signals the presence of sunlight, so less melatonin reaches the RAS; in its absence, RAS neurons can signal the motor cortex more efficiently. Overall, this data set provides some evidence that night-shift workers' reaction times are slowed by the interaction of the SCN, melatonin, and the RAS.

The data set collected seems to support the hypothesis that emergency medical technicians have slowed reaction times when working at night, and that these correspond with drowsiness. To obtain more reliable results, my study would need to be repeated multiple times. Further fMRI data could also be used to study the specific brain regions involved in drowsiness and slowed reaction times. Through my informal study and the observations made by others, the cyclic melatonin concentration controlled by the SCN appears to affect neurons in the RAS, causing the feeling of drowsiness and slowed reaction times.


References

1) A Literature Review on Reaction Time, a good site for resources on the study of reaction times.

2) Sleep and Circadian Rhythms, Background information on the SCN and circadian rhythms.

3) Learn about fireflies, biological clocks, and using VCR codes, A USA TODAY online article that discusses the role of melatonin and the effect of light on the SCN.

4) Third Eye - Pineal Gland, Diagrams and lots of information on the Pineal Gland.

5) Sleeping Disorders , This site provides a discussion of the reticular activating system.

6) CNS Depressants: Sedative-Hypnotics, The role of GABA receptors in sleepiness.

7) Synapses , A discussion of the different types of synapses.

8) ADD ADHD: Reticular Activating System, Evidence that the RAS signals the motor cortex through studying ADD.

9) Probe the Brain, Pictures and description of the motor cortex.

10) The optic tectum of the salmon: site of interaction of neurohormonal photoperiodic and neural visual signals. Discusses the GABAergic neuronal system, melatonin receptors, and the interaction of GABA and melatonin.

11) Test your reaction time , This is the test I used when determining my reaction times.


Understanding Meditation Through the Central Nervo
Name: Hannah Mes
Date: 2004-05-06 18:22:39
Link to this Comment: 9803



Biology 202
2004 First Web Paper
On Serendip


It has been suggested in class that a disconnect of information exists between the I-function and the central nervous system, a concept that I found intriguing as my experience with Vipassana, a Buddhist meditation technique, allowed me to make the gap between my conscious and unconscious less sharp. The class discussions challenged me to think about meditation in terms of a series of physiological responses that could be observed, documented, and analyzed. The broader implications of this explanation meant that the mental state achieved through meditation did not require discipline, but could feasibly be induced by other chemicals. While attractive for a variety of reasons, I resisted this methodology as overly simplistic. Part of my experience with meditation has been mental transformations that were only achieved through determination, persistence, and patience. Individual commitments to these mental states are not factored into an explanation that draws primarily from transformations in the central nervous system. Ultimately, the idea that the central nervous system could describe a meditative state was incomplete because it could not quantify the mental changes I experienced.

Before this course, I had only considered meditation in terms of mental transformations that had occurred in my conscious mind, or I-function. This explanation made sense because the most noticeable changes were in my general attitude, my interactions with other people, and a pervasive feeling of balance. I knew that meditation had affected my mental state, as I felt extremely relaxed, but what occurred in my central nervous system, and how much it influenced my altered state of consciousness, remained unclear. I researched these changes because they challenged my understanding of Vipassana as a purely mental practice. After learning about the physical changes in my central nervous system, I stopped thinking of my I-function and central nervous system as acting independently of each other. Instead of working autonomously, the two share information and are deeply influenced by each other.

In a preliminary discussion on this topic, I explored the similarities and differences between my experiences with meditation and the experiences of other meditators, such as Pilou Thirakoul and Dr. James Austin. (See "Shifting Realities Through Vipassana Meditation" (1).) I ended with the following question: "When information passes from the central nervous system to the I-function, what changes can be observed at a gross, physical level as well as at a subtle, chemical level?"

This paper will delve deeper into the specific transformations occurring in my central nervous system and how they shift my understanding of meditation in a broader sense. These physiological changes, knowingly induced by the meditator, have real consequences for the individual's mental state. By examining these changes at the physical level, where information is quantified and concrete, one can begin to theorize about the abstract changes at a psychological or mental level.

Through research I discovered that these sensations of relaxation include a "generalized reduction in multiple physiological and biochemical markers, such as decreased heart rate, decreased respiration rate, decreased plasma cortisol (a major stress hormone), decreased pulse rate, and increased EEG (electroencephalogram) alpha, a brain wave associated with relaxation."

I realized that there was a change in my brain waves when I was meditating but could not conclude whether this had an effect on my I-function, or conscious mind. Alpha brain waves, oscillating in the range of 7.5-13 cycles per second, occur in meditation and hypnosis (2). Although some scientists have argued that the presence of alpha waves suggests deeper thinking and a propensity for creativity, others argue that these waves occur when there is little visual or sensory input. I argue that even if these waves do not themselves induce an altered state of consciousness, they have a direct relationship with the mental states of a meditator. I know from my own experiences with Vipassana that the combination of these physiological markers was linked to states of deep relaxation.

For example, on my 10 day Vipassana course all students take a vow of "Noble Silence" in which one abstains from any type of verbal or gestural communication so as to maintain an environment that is conducive to intense meditation. By limiting the amount of sensory input that the I-function receives, an individual can focus on internal changes with less distraction. By closing one's eyes and "self" off from the rest of the external world, the workings of one's "internal" world become more apparent.

In an experiment done by Benson and Wallace at the Harvard Medical School in 1963, the researchers found that certain physiological changes occurred during meditation. The meditators all showed a fall in metabolic rate, demanding approximately 20% less oxygen after a few minutes of meditation. General activity in the central nervous system slowed, as shown by the predominance of the parasympathetic branch, which is responsible for the sensation of relaxation. They concluded that the state experienced by the meditating subjects could be described as "wakeful and hypometabolic" and that meditation produces a "complex of responses that marks a highly relaxed state." (4)

I began with the premise that an altered state of consciousness experienced during meditation could be understood as a series of physiological transformations in the central nervous system. With that said, I began to wonder whether this state could simply be a series of chemical responses that might be systematically induced by other chemicals, or whether the repetition and work that are part of the traditional method of meditation are just as important. If my mental state could be accounted for in terms of physical changes, then did any of my mental changes occur independently? What was my role in pro-actively changing my state of consciousness? Was my experience actually the effect of chemicals in my parasympathetic nervous system that were the by-product of my mental states, or was my mental state the by-product of these chemical transformations?

Although I know that there is a relationship between these physical and mental changes, I make no claims as to the order in which they occur or how deeply each influences the other. I can confidently assert that although meditation can be understood at a physical level in terms of the central nervous system, this provides only a superficial understanding of it. The physiological answers do not describe my experience of changes in my mental state.

Meditation has produced mental states of deep relaxation and increased awareness. Situations that were previously confusing or hard to analyze became clearer. As a result, I felt more emotionally, mentally, and physically balanced. After working through a particularly difficult meditation session I felt a sense of achievement and mental strength. This could not have been achieved through chemical means, but only by the experience of long meditation sittings over a substantial period of time. I argue that even if this desirable state could be reached in an easier way, (by chemicals), it would not be recommended.

When one has practiced this method of meditation, the response becomes learned over a period of time. One can choose to experience that same sensation of relaxation and balance without any chemical prompt. The discipline involved in meditation is great, but the result is a mental transformation with far-reaching consequences. The physiological changes that occur with meditation should not be mistaken for meditation itself or for the experience of enlightenment. Instead, the central nervous system should be seen as changing due to the will of the I-function. Enlightenment does not occur spontaneously; it is a final goal that must be worked towards.

The relationship between my central nervous system and I-function has always existed, but it is only recently that I have become more aware of it. I was surprised to find that there was so much activity in my central nervous system that I had been "asleep" to. This has led me to believe that my CNS is another part of my mind that is already awake unto itself, and that through an "awakening" of my bodily sensations, I can "awaken" to the sensations (or states) of my mind as well. While these physical changes help describe the physical sensations associated with meditation, they fail to describe the mental sensations that occur simultaneously. Inducing a physical state can create an environment in which a specific mental state can be achieved. Other mental transformations must occur for this to happen otherwise all beings could achieve enlightenment through a series of different chemical reactions.


References


1) First Biowebpapers

2) Alpha Waves

3) Austin, James. Zen and the Brain. New York: Yale University Press, 1999.

4) Holistic Meditation

5) Organization of the Central Nervous System


Health: Mind and Society III
Name: Aiham Korb
Date: 2004-05-06 20:55:33
Link to this Comment: 9804



Biology 202
2004 First Web Paper
On Serendip


In the previous papers, Health: Mind and Society I and II, we established that the interactions of many different variables are responsible for health and disease outcomes. We also saw how psychosocial factors influence physiological homeostasis by affecting the neuro-endocrine and immune systems. These connections, noted in the biopsychosocial model, have helped us understand the importance of environmental influences on the body. Drawing from the links noted in the last paper between stress and socio-economic status, we will continue to focus on the biological effects of stress on disease progression. Some of the studies we will mention further suggest how societal forces and structures are most often the source of stress. For now, let us take a step back and recall the experiment about stress and socio-economic class.

In the last paper, we looked at a study by Brydon et al., in which the experimenters found significant differences in heart-rate and Interleukin-6 (IL-6) recovery between two SES groups (high and low) after exposure to a stress-provoking task in the laboratory. The conclusion was that people of low SES have a "dysfunctional adaptive response" to psychological stress due to chronic stress-related increases in IL-6 and HPA activity (1). This is to say that, as a result of chronic stress, those of low SES have problems maintaining and recovering bodily homeostasis. The article tells us that "IL-6 is sensitive to psychological stress" (1). This is a key point, as we have already spoken of some of the harmful effects of increased IL-6 levels. Yet, in addition to these negative effects, stress-related increases in IL-6 may also have considerable implications for general morbidity and mortality, particularly in old age (1). Besides the risk of cardiovascular disease, elevated IL-6 levels are associated with a spectrum of other age-related conditions, including osteoporosis, cancer, stroke, arthritis, dementia, type 2 diabetes, frailty, and functional decline (2). Such findings are of great relevance to our study of stress. They imply, for example, that the chronically stress-inducing environments to which people of low SES are usually exposed may actually increase the likelihood of future morbidity and mortality. These chronic stress-related increases in IL-6 levels may in fact be accelerating the aging process.

Another phenomenon pointing in the direction of these claims is the overworking of the HPA axis in the elderly. Aging is associated with changes in the function and regulation of the HPA axis, including higher cortisol levels and slower neuroendocrine recovery from stress (2). These patterns are highly similar to those we observed in the low SES group in the Brydon experiment. This should make sense, because IL-6 stimulates the HPA axis. And as we saw in the first two papers, the overworking of the stress and neuro-endocrine responses causes immunomodulation (a suppression of the immune system). HPA hyperactivity also corresponds to other negative health outcomes, such as central obesity, hypertension, insulin resistance, and dyslipidaemia (all risk factors for coronary artery disease) (1). These parallels in IL-6 levels and HPA hyperactivity between chronically stressed individuals and the elderly suggest a real relationship between stress and the aging process. It is highly possible that chronic exposure to stressful environments not only increases susceptibility to disease, but also speeds up the physiological aging mechanisms. Thus far, we have seen some of the effects which environmental stress has on the onset of disease; we will now turn to look at its role in the presence of malady.

Of the possible causal pathways by which stress influences health, we have so far been mainly concerned with the "direct effects". In this pathway, we showed how stress leads to disease via physiological responses which may severely disturb homeostasis, such as high blood pressure, HPA hyperactivity, and high cortisol and IL-6 levels. There are also the "indirect effects" of stress, which work by producing unhealthy behavior changes, such as sleep deprivation and substance abuse. However, it is the "interactive effects" which will draw our attention in this paper.

In the presence of disease, the "direct effects" largely shift to the "interactive effects" as the negative physiological responses to stress interact with those being caused by the disease. This intensifies the progression of the disease, and may even lead to its exacerbation. In the case of AIDS, for example, there is strong evidence linking hyperactivation of the physiological stress-response with the progression of the disease. In fact, two pathways have been investigated as potential mediators of the effects of psychosocial factors on HIV progression: the HPA axis and the sympathetic nervous system (SNS). In vivo and in vitro studies have helped develop a detailed pathway linking activation of the SNS to HIV progression. "Individuals who demonstrated higher levels of sympathetic nervous system (SNS) activity to a variety of challenging laboratory based tasks show poorer suppression of viral replication following initiation of highly active anti-retroviral therapy" (3). The malfunctioning of the neuroendocrine and immune systems, which worsens the state of the disease, is largely influenced by psychosocial factors, as we have demonstrated before. What Kemeny also found was that the psychological (or mental) states of HIV-positive individuals predicted the progression of the virus. Her studies found that two cognitive appraisals were highly correlated with AIDS onset and mortality: negative expectancies about future health, and negative appraisal of self. These psychological factors (pessimism, negative affect, etc.) were associated with more active physiological stress-responses, and therefore with negative health outcomes as well.

In order to account for psychological states, we must consider them in the larger context of the psychosocial environment. Indeed, Kemeny asserts that the cognitions mentioned above were shown to be highly associated with a trait termed "rejection sensitivity" (3). While this may be understood as a personality trait that varies among individuals, rejection sensitivity (and personality in general) is strongly shaped by experiences and the environment. "One context that can enhance the likelihood of chronic negative views of the self is a family history of rejection" (3). An example Kemeny cites is a study of HIV-positive men which found that rejection sensitivity around one's homosexuality predicts a more rapid onset of AIDS and, eventually, accelerated mortality (4). In this case, it is certainly factors of the social environment (prejudice, attitudes, homophobia, lack of acceptance, etc.) which are the major influences responsible for inducing low self-appraisal and rejection sensitivity. Indeed, environmental and contextual factors largely affect health outcomes. As pointed out before, this is not restricted to humans. Animal studies using a primate model of HIV found that social stressors, such as separation and housing changes, predict accelerated disease progression in the infected animal (5). Thus, changes within the immune system have been demonstrated to correlate with social stress, support (or lack thereof), negative affect, and similar psychosocial factors. These experiments constitute a small sample of the accumulating evidence supporting the significant role which environmental circumstances (generally) and psychosocial factors (specifically) play in health.

Social isolation and the lack of social support predict morbidity and mortality from cancer, cardiovascular disease, and several other causes (6). Since stress is perceived not only personally but also through the prism of social interactions, social environments may lessen or exacerbate the physiological responses to stress. Social relationships, and societal structures on a larger level, may act as "buffers" against, or catalysts of, infection and the progression of disease. Studies have shown that people exposed to such chronic social stresses for more than two months have an increased susceptibility to the common cold (7). Loneliness, similarly, is a relevant and important factor in pre-disease pathways, and is a major factor in the mental health of cancer survivors (6). The diagnosis of cancer has indeed been associated with increased dysphoria (a profound state of unease and dissatisfaction), family problems, and feelings of loneliness and isolation. Social isolation is also found to correlate with increased risk of death from cancer as well as stroke (6). The physiological effects associated with chronic (social) stress unfold over long periods of time, increasing vulnerability to a host of different diseases, including cancer. The biopsychosocial model supports this argument. Within the "psychosocial processes", resources of social support influence (directly, indirectly, and through complex interactions) "health behaviors" and "life stress", and therefore impact the functioning of the neuroendocrine and immune mechanisms. This in turn may affect vulnerability, disease onset and progression, and finally survival and quality of life (2).

Considering the fact that heart disease and cancer constitute the two leading causes of death in the U.S., we should by now begin to question more seriously the environment that fosters such causes of mortality. In the search for cures and medications, which may only reduce a disease's symptoms, we should also be looking to prevent these pathologies by identifying and eliminating their sources. For example, the SES experiment discussed in the previous paper found significant associations between the environment to which people of low SES are exposed (usually characterized by chronic stress and low social support) and their negative (physiological) health outcomes (1). This implied link between socioeconomic inequalities and poor health consequences is extremely important. It forces us to reconsider our limited and rather biased perception of, and approach to, health. Psychoneuroimmunology invites us to do the same thing. Just as the word "neuro" lies at the center of the term, psychoneuroimmunology considers the nervous system the central link between the psychological state and the functioning of the immune system (the body's main defense system). Rather than focusing only on the physiological aspect of health and malady, psychoneuroimmunology proposes a more comprehensive and interdisciplinary approach to health. Such an integrative model may help to explain, for example, why a healthy economy does not necessarily mean a healthy population (which is especially the case in the U.S., where social and economic structures are based on competition and inequality rather than on cooperation and social support). Finally, the integration of these and other interacting variables into the way we define and approach well-being will be a necessary step towards "getting it less wrong".


Sources:


1) Socioeconomic status and stress-induced increases in interleukin-6, By Brydon, Edwards, Mohamed-Ali et al. Brain, Behavior and Immunity 18. 2004. p. 281-290.

2) Psychoneuroimmunology and health psychology: An integrative model, By Erin Castanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

3) An interdisciplinary research model to investigate psychosocial cofactors in disease: Application to HIV-1 pathogenesis, By Margaret Kemeny. Brain, Behavior, and Immunity 17. 2003. p. 62-72.

4) Social Identity and Physical Health: Accelerated HIV Progression in Rejection-Sensitive Gay M. By Cole, S., Kemeny M., and Taylor, S. Journal of Personality and Social Psychology 17. 1997. p. 320-335.

5) Social separation, housing relocation, and survival in simian AIDS: a retrospective analysis, By Capitanio, J. and Lerche, N. Psychosomatic Medicine 60. 1998. p. 235-244.

6) Loneliness and pathways to disease, By Louise Hawkley and John Cacioppo. Brain, Behavior, and Immunity 17. 2003. p. 98-105.

7) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.


Motherless Brooklyn: Living with Tourette's Syndro
Name: Chevon Dep
Date: 2004-05-07 10:10:05
Link to this Comment: 9808


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip

The label "freak" has taken on several meanings over the years. It is no longer limited to those in the circus, such as the bearded lady, the hairiest person, or the lobster-claw people. It has spread to include a variety of people who are often considered outcasts by society. For example, people with Tourette's Syndrome have been categorized as freaks because of their uncontrollable actions, such as cursing and twitching in public. The public's lack of knowledge about Tourette's Syndrome leads them to call people living with this disorder horrible names such as retard, idiot, and freak. In Motherless Brooklyn, Jonathan Lethem explores the abuse that someone living with Tourette's Syndrome encounters in life. The reader is able to observe how the main character, Lionel, portrays his disorder and how others in the novel portray him. Although Motherless Brooklyn is a novel, it does address what Tourette's Syndrome is and how one lives with it on a daily basis.

Tourette's Syndrome is a neurological disorder that causes uncontrollable physical or verbal outbursts. Nina Burleigh writes that the uncontrollable outbursts are due to the brain's receptors for dopamine not working properly. (1) Dopamine is a neurotransmitter that helps control inappropriate impulses to move or speak. One noticeable feature of Tourette's Syndrome is tics, which are involuntary motor and vocal movements and sounds. (2) Tics are commonly noticed in early elementary-age children. Although tics are common at an early age, there are several cases in which the diagnosis of Tourette's Syndrome does not occur until years later. For example, Nancy Dreher describes a Canadian surgeon who began having tics at age seven, but was not diagnosed until he was thirty-seven. (3) This shows that there is a lack of knowledge about the disorder and that more research needs to be conducted. Since many patients do not know how to classify the tics, they attempt to hide them from the public. Lionel explains, "Of course I was vibrating too, vibrating before Minna rounded us up, vibrating inside always and straining to keep it from showing." (4) Since Lionel did not know what was wrong with him, he felt it was best to conceal it for as long as he could.

According to Kelly Prestia, "As a child develops and matures, tics may become more complex and may appear in facial gestures or movements that imitate others, or completely different tics may occur."(2) This is the case for Lionel in Motherless Brooklyn, where his nervousness sets off his tics. Lionel says, "I felt myself knitting my brow exaggeratedly, a tic, and wanted to tell him to wipe the grin off his face: Everything he was seeing was not to his credit."(4) He could not control what the doorman was observing, because he could barely control his own actions. Common tics include eye blinking, shoulder shrugging, grimacing, head jerking, yelping, and sniffing. Tics can also pose a danger to the body. Prestia writes, "Motor and vocal tics can cause excess wear and tear on the individual's body, causing damage to organs, muscles, and joints." (2) In some instances, medication can be used to reduce the symptoms.

The more complex tics involve several muscle groups, which leads to more involuntary actions. (3) Dreher writes, "A small number of people with Tourette's Syndrome may also have a compulsion to shout obscenities, something called coprolalia, or to constantly repeat the words of other people, called echolalia." (3) Throughout the novel, Lionel has these same compulsions. For example, Lionel replies to Minna's question by saying, "Scott Out of the Canyon! I don't know why, I just –fuckitup—I just can't stop." (4) Instead of repeating Minna's exact words, Lionel mixes the letters up and also screams out obscenities. Due to the progression of the disorder, Lionel is unable to stop the outbursts. However, he does attempt to suppress them on several occasions. For example, Lionel recalls, "Language bubbled inside me now, the frozen sea melting, but it felt too dangerous to let out." (4) Like many Tourette's Syndrome patients, Lionel experiences embarrassment and anxiety due to the lack of understanding and acceptance of his tics by others, and therefore tries to suppress them. Prestia argues, "Although tics are involuntary due to neurological basis, some individuals can "hold in" their need to release tics until the tics can be released at an appropriate time or place. This is extremely difficult, however, and may cause the tics to intensify when they are released."(2) Since it is a neurological disorder involving the control of movements, there is not much a patient can do to contain the actions and the outbursts. In his attempt to suppress the tics, Lionel undergoes some changes, which eventually lead to a violent release of the tics. He says, "So I kept my tongue wound in my teeth, ignored the pulsing in my cheek, the throbbing in my gullet persistently swallowed language back like vomit."(4) Lionel even recognizes that it is a rare occasion when he can actually get through a moment without ticcing.

Tics are not the only characteristic of Tourette's Syndrome. Prestia points out, "Many individuals with Tourette's Syndrome have comorbid diagnoses, such as learning disabilities, obsessive-compulsive disorder, and attention deficit/hyperactivity disorder." (2) But what is the connection between Tourette's Syndrome and these other diagnoses? Since Tourette's Syndrome involves rapid involuntary movements, traces of the other diagnoses can develop. For example, a Tourette's Syndrome patient may have a habit of imitating the actions and repeating the words of others, which can also be categorized as a symptom of obsessive-compulsive disorder. In the novel, Lethem refers to the tics as compulsions, which suggests their connection to obsessive-compulsive disorder. Lionel says, "For me, counting and touching things and repeating words were all the same activity." (4) Since such activities are part of both Tourette's Syndrome and obsessive-compulsive disorder, it is difficult to make a distinction between the two. This lack of distinction can be one reason for the underdiagnosis of Tourette's Syndrome. Although there are several disorders associated with Tourette's Syndrome, it is believed that Tourette's Syndrome does not directly affect intelligence, and many students with Tourette's Syndrome have average or above-average IQs. (2) The combination of Tourette's Syndrome and these other disorders, however, can influence the overall intelligence of someone who has them.

On several occasions, Lionel attempts to separate his Tourette's self from his actual self. For instance, he says, "Bailey was a name embedded in my Tourette's brain, though I couldn't say why. I'd never known a Bailey." (4) Lionel is making it clear that his Tourette's brain has its own set of characteristics and should remain independent. This distinction also shows that Lionel cannot control his Tourette's brain. When Lionel talks about the compulsions, he states that his brain is trying to create new tics. (4) The creation of tics by the Tourette's brain and Lionel's suppression of them turns into a battle that Lionel cannot seem to win. Lionel comments, "The freak show was now the whole show, and my earlier, ticless self impossible anymore to recall clearly." (4) His life before Tourette's Syndrome slowly fades away as the disorder progresses. Later in the novel, he refers to Tourette's as his other name. (4) He begins to accept the fact that he has Tourette's Syndrome and that it is part of his life. Lionel decided to learn more about the disorder so that he could live a 'normal' life. He said, "I read books about the drugs that might help me, Haldol, Klonopin, and Orap, and laboriously insisted on the Home's once-weekly visiting nurse helping me achieve a diagnosis and prescription, only to discover an absolute intolerance: The chemicals slowed my brain to a morose crawl, were a boot on my wheel of self." (4) Although there are drugs available to reduce the frequency and severity of the symptoms, they also come with side effects that can make living with Tourette's Syndrome worse.

People with Tourette's Syndrome have to deal not only with the inability to control their own movements but also with constant misconceptions about their disorder. A few of these misconceptions are that they are lazy, weird, or badly behaved. These misconceptions are particularly detrimental to school-aged children, who have to deal with being teased and viewed negatively by their teachers and peers. They may begin to internalize the misconceptions and believe them to be true. According to Prestia, "Many students with Tourette's Syndrome are at risk for developing poor self-esteem and self-confidence, in some cases, leading to depression." (2) Since he is called names such as half fag and freak by Minna, Lionel begins to develop low self-esteem. Lionel describes himself as "undersold goods, a twitcher, and a regrettable, inferior offering." (4) The negative labels imposed on Lionel show that there is a lack of knowledge about the disorder. As long as there is a lack of knowledge about Tourette's Syndrome, the misconceptions will remain.

In order to increase awareness of Tourette's Syndrome, it is necessary to hold informational sessions in schools. This will allow both students and teachers to become more familiar with the disorder. Once teachers begin to understand that it is not a matter of the students being lazy or behaving badly, they can develop strategies that build the children's self-esteem and lead to better academic performance. Since Tourette's Syndrome is often accompanied by other disorders, breaking down assignments and giving students work in smaller sections can limit the number of incomplete assignments. It is equally important for peers to understand the characteristics of Tourette's Syndrome so that the cycle of misconceptions about the disorder can be broken. Living with Tourette's Syndrome is not a choice, and for members of society to attach a negative label does not make Tourette's Syndrome patients' lives better in any way; instead, it makes it even more difficult to combat the disorder.

References

1) Burleigh, Nina. "Why she couldn't stop cursing." Redbook. Nov 1998: 1-6, A Good Article
2) Prestia, Kelly. "Tourette's Syndrome: characteristics and interventions." Intervention in School & Clinic. Nov 2003: 1-10, A Good Article
3) Dreher, Nancy. "What is Tourette Syndrome." Current Health. Oct 1996: 1-5, A Good Article
4) Lethem, Jonathan. Motherless Brooklyn. New York: Vintage Books, 1999, A Good Book


The Credibility of Rational Emotional Behavior The
Name: Michelle S
Date: 2004-05-07 12:12:43
Link to this Comment: 9809


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Rational Emotive Behavior Therapy (REBT) is a cognitive-behavioral treatment developed by Dr. Albert Ellis, and falls under the behaviorist school of psychology. Cognitive-behavior therapy focuses upon the individual's ability to make significant changes in their life without necessarily understanding why the change is occurring. Ellis is a clinical psychologist who originally trained in psychoanalysis. The foundation of REBT was Ellis's desire to aid in the improvement of his patients. He became discouraged by his clients' lack of progress, and attempted to create a program that would alter the perceptions and restrictions individuals impose upon themselves, which prevent them from obtaining self-confidence and success in their lives (1). However, can REBT be considered an effective method of treatment, specifically when the therapy opposes any effort to find the roots and causes behind patient problems, and wholly emphasizes ethical egoism during the progression towards recovery?

REBT is based upon the principle that human emotions and behaviors are the direct product of what individuals believe, presume, and think. These beliefs influence how people perceive themselves, others, and the surrounding environment. Ellis also identifies an individual's biology as a partial influence upon their thoughts. He felt that many of his patients' beliefs were self-deceiving and incorrect, and consequently caused them to make false assumptions. Such beliefs are categorized as irrational thinking, and cause unfounded guilt, depression, and feelings of worthlessness. Ellis established two specific requirements that define an irrational belief. The first is that it obstructs individuals from pursuing their objectives and creates consistently negative emotions, which cause stress and confusion; this emotional strain leads individuals to damage themselves, others, and their lives. The second requirement is that it alters reality and gives a false impression of what actually occurred. Ellis provides examples of irrational beliefs, and specifically outlines three main ideas that appear to dominate patient thoughts: that individuals must be exceptionally competent, or they are worthless; that other individuals must treat them well, or they are awful; and that the world should provide people with contentment, or they will die (1).

Irrational thinking generally occurs in the subconscious and is therefore difficult to control. In order to combat these illogical beliefs, Ellis also created a Rational Self-Analysis diagram, which patients can use to think through their behavior. Yet many psychologists, psychiatrists, and academics criticize Ellis's theory. One of the most significant arguments against REBT is that the therapy requires individuals to alter their irrational beliefs, but pays no attention to how and why these irrational beliefs were first acquired. Therefore, the client has no point of reference when attempting to modify belief systems that are deeply embedded in their identity. Another considerable problem with REBT is that the framework relies upon self-interest. The patient analyzes their own thoughts, considerations, and actions, and is taught that the perception of others is of no consequence. Although this may help patients become more autonomous in decision-making, patients must understand that there is a social-interest component in their belief system. Patients must be aware of the sentiments and principles of others without allowing them to oppress their own choices. However, Ellis supports self-interest above social-interest, which becomes more of a secondary aspect of REBT (2).

There have been a number of case studies reporting the progress of clients using the techniques of REBT. One example could be a college student who is having trouble living with his roommate. The student could be fearful of confronting his roommate on issues because of irrational beliefs; he is fearful of not being well liked. This ultimately leads to avoidance behavior regarding his living conditions. According to the theories of REBT, the student must correct his irrational beliefs by realizing that individuals can, and are willing to, change their behavior. It is also acceptable for people to voice their opinions and share their feelings about situations. The student must realize that there is no need to be unhappy if he is disliked by his roommate. Everyone seeks admiration and esteem, but they are not necessary for contentment. Another example of a patient utilizing Ellis's teachings is an individual who is an overachiever. The patient only feels competent and worthy of love when he performs well in school, excels at sports, and is praised at his job. If the patient has a discouraging experience on an exam, he feels insignificant and worthless. The associated irrational belief in this situation would be that the patient must do well to deserve affection and love. The patient must ultimately realize that love is unconditional and is not dependent upon his abilities, or lack thereof. The patient may do poorly on an exam, and recognize his feelings of unhappiness over it, but not allow himself to become insecure about relationships.

The purpose of Rational Self-Analysis, and of REBT in general, is based upon Ellis's realization that patients can contribute to their own progress by examining the validity of their beliefs. Although the theory is criticized for not exploring the sources of discontent among patients, this appears to be the advantage of REBT. Ellis focuses upon an individual's actions and the associated belief causing the action. Discovering historical causes for behavior can be abstract and too conceptual; ultimately, understanding the reasons behind a behavior is independent of recovering from it. In addition, irrational beliefs are often created because individuals are apprehensive of how others will perceive them. Ellis's decision to make social interest a conditional aspect of recovery, and not the end goal, is sensible and insightful. Patients who suffer from excessive irrational beliefs must learn to put their own self-interests above others', and make social approval a desired, but not necessary, part of their own contentment. Therefore, the structure and models behind REBT can be considered both effective and perceptive methods of curing individuals of their self-destructive habits, and the most substantial arguments against Ellis can be identified as necessary conditions for the full recuperation of patients.

References

1). A Brief Introduction to Rational Emotive Behavior Therapy. New Zealand Center for Rational Emotional Behavior Therapy

2) REBT, Philosophy and Philosophical Counselling

3)American Psychology American

4) The Prince of Reason, interview with Ellis


Color Blindness and its Neural Implications
Name: Allison Ga
Date: 2004-05-07 13:09:57
Link to this Comment: 9810


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip

A person who is color blind does not see the world the same way as someone with regular color perception. The existence of color blindness illustrates the relationship between how we perceive the world through our eyes and how that information is then processed by the brain.

Color perception depends on how the information communicated from the eyes to the brain is interpreted. To illustrate the important factor of sight, the mechanics of seeing will be discussed. Outside images are projected through the cornea and the lens onto the back of the eye, known as the retina. Rods and cones, the photoreceptors in the retina, respond to different wavelengths of light, and these responses are interpreted as color. The photoreceptors send this information through the optic nerve to the brain. Although there are more rods than cones, cones are highly sensitive to particular wavelengths of light, which is why it is possible to discern various shades of the same hue. Rods, by contrast, are important for vision in low light. For example, to avoid bumping into furniture while walking through the house at night, looking slightly to the side makes it easier to discern objects in the dark. Peripheral vision enables vision at night because rods are more densely located toward the edges of the retina (1). This information is communicated to the brain through the optic nerve, but this pathway can malfunction, causing a misinterpretation of the information sent from the eye to the brain.

A person with regular color perception has three types of cones, which absorb and communicate the three major bases of color: red, green, and blue. There are two main types of color blindness: red-green color blindness (called protanopia or deuteranopia, depending on whether the red- or green-sensitive cones are affected) and the rarer blue-yellow color blindness (called tritanopia). People with a red-green color deficiency have trouble differentiating within the green-yellow-red spectrum, while those with blue-yellow color blindness cannot see blue or yellow and instead identify these colors as white or gray (2). People who have either of these color detection deficiencies are referred to as dichromats. This condition occurs due to different absorption of light in the retina and a malfunctioning of the cones. Thus, the information sent to the brain is different from that of people whose rods and cones function regularly. A very rare type of complete color blindness is called achromatopsia, in which no color is perceived at all and the world appears in shades of gray (3).

Color blindness can be genetic, acquired through defects of the eye, or a result of brain damage. One case study presents the circumstances of a man who had a virus, was left with neurological damage, and as a result was rendered completely color-blind (achromatopsia). He could identify some objects on the basis of their color boundaries but otherwise could not identify color (4). That brain damage can disrupt the data processing between the eye and the brain implies that the brain has to function in order to interpret what the eyes are observing. Without the help of the brain, outside objects would merely be images with no meaning. The brain is needed to process and understand what the body is coming into contact with, which is why it is integral for both the eyes and the brain to function properly.

Color blind people learn what those with regular color perception perceive as green, blue, red, etc. They are taught to identify the colors of objects in a way that is different from what they actually see. Often, when asked to identify a color, a color-blind person will respond, "I've been taught that it is ..." Their color education presents the fascinating reality that what they see would be identified as another color by those with regular color perception. A green-insensitive dichromat, who perceives red and green as the same color, will view a red flower and identify it as red, because they have been taught to do so. On a website that illustrates a red flower through the eyes of a dichromat, I would identify the flower seen by this person as yellow (5). Their definition of red is not the same as my definition, but how can we be certain that all of our perceptions of "red" are the same? Those who grow up without knowing they are color blind have an idea of the outside world that is different from others', which leads to an important question: how can we truly define what is in front of us? We are unable to experience the world through anyone else's perspective but our own, which promotes the idea that color blindness may be one obvious way that human perception differs.
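As an aside not drawn from any of the sources cited in this paper, the "red flower looks yellow" observation can be roughly simulated in software. Dichromacy simulators typically remap an image's color channels with a 3x3 mixing matrix; the sketch below uses one such commonly circulated approximation for red-green (protan-type) blindness. The coefficients are illustrative assumptions, and for simplicity the matrix is applied to 8-bit RGB values directly, whereas a rigorous simulation would first convert sRGB to linear light.

```python
# Rough sketch of red-green (protan-type) color blindness simulation.
# The matrix is an approximation of the kind used in dichromacy
# simulators; its coefficients are illustrative, not authoritative.
PROTAN_MATRIX = [
    [0.567, 0.433, 0.000],  # new red   = mix of old red and green
    [0.558, 0.442, 0.000],  # new green = nearly the same mix
    [0.000, 0.242, 0.758],  # blue channel is largely preserved
]

def simulate_protanopia(rgb):
    """Map an (r, g, b) tuple to its approximate protanope appearance."""
    r, g, b = rgb
    return tuple(
        min(255, round(row[0] * r + row[1] * g + row[2] * b))
        for row in PROTAN_MATRIX
    )

# A saturated red comes out with nearly equal red and green channels
# and no blue, i.e. a dark olive-yellow: one way to see why a red
# flower might be described as "yellow" by an outside observer.
print(simulate_protanopia((255, 0, 0)))
```

Running the same function on pure green (0, 255, 0) yields a similar olive color, illustrating how the red and green ends of the spectrum collapse together for this type of dichromat.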

Can the brain present images to us that are not real, or that do not adequately represent the outside world? To illustrate the brain's ability to fill in blanks, consider the blind spot. If one draws a cross on the left side of a page and a dot on the right side, closing the left eye and staring at the cross with the right eye while moving toward and away from the paper will make the dot disappear (6). The dot vanishes when its image falls on the blind spot, the region of the retina where the optic nerve exits and no photoreceptors exist, and the brain fills in the gap to look like the rest of the paper. Although we consciously know that the dot exists, the brain does not process it as being there.

It is interesting to note that we do not have control over how our brain processes the information it receives. For example, in the checkerboard optical illusion, one square appears to be a different color due to shading, and although we can consciously tell ourselves that the squares are the same color, we still perceive them differently. Color blindness is part of this argument because it is a documented and studied color detection deficiency in which people perceive the world differently. The inability to detect the full spectrum of colors makes it difficult for color-blind people to know what others see, just as it is not simple for those with regular color perception to envision how color-blind people view the world.

Color blindness is one way that we can observe the brain's role in humans' ability to decipher what is around them. The existence of color blindness emphasizes that visual observation is different for everyone. Consequently, it must follow that people cannot have the same visual experience because the brain can interpret and present visual objects differently. While a certain standard is retained, such that people will be able to agree on the shape and color of what they are looking at, we can never be sure of how others visualize the world.

References

1) Georgia State University's website, detailing the mechanics of the eye in fairly simple terms.

2) Color Blindness.

3) Science Daily, a detailed account of the different types of color blindness.

4) JSTOR, David Hilbert's article "Is Seeing Believing?" from PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1: Contributed Papers (1994), 446–453.

5) Firelily, helpful visuals of how color-blind people view the world.

6) Biology 202's Serendip website, a blind spot experiment you can try.


The Multidimensionality of Post Traumatic Stress Disorder
Name: Prachi Dav
Date: 2004-05-07 15:25:56
Link to this Comment: 9812


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Post Traumatic Stress Disorder (PTSD) has been given many names, and often it has been called a myth. Names such as "irritable heart" during the US Civil War, "shell shock" in World War I, "battle fatigue" in World War II and "Post Traumatic Stress Disorder" during the Vietnam War show that the symptoms believed to be integral to the disorder have metamorphosed as time went by, various battles were fought and many people were injured. Given the intense human suffering that underlies this disorder, it is surprising that what PTSD has been most identified with is the belief in its nonexistence. PTSD as a catchphrase has allowed the perpetuation of a simple view of trauma. This view tells trauma victims to "get over it" and get on with their lives. PTSD as a consequence of trauma, however, is far beyond such advice and has been shown to be severe enough to affect personality, memory and overall functioning in devastating ways. In fact, PTSD has been used to study the character of entire societies under immense stress (1). The following paragraphs will examine the nature of PTSD and the profound effect that it has on individuals and societies involved in trauma such as the Rwandan genocide and World War I.

PTSD impacts individual lives on various levels, and these levels are now being systematically investigated in the hope of eventually understanding the complexities of the experience that is PTSD. "Post-traumatic Stress Disorder (PTSD) is a persistent and sometimes crippling condition precipitated by psychologically overwhelming experience" (2). The condition develops in a significant number of those individuals who have been exposed to traumatic experience and, if left untreated, is capable of perpetuating itself for years. PTSD affects wide-ranging aspects of human functioning, including psychological, physiological, social and occupational areas. Furthermore, it is significant that PTSD is a condition whose existence has been recognized throughout the ages and hence cannot be described merely as a modern phenomenon, a product of an exceptionally violent era of human existence (2). Additionally, it is evident that of those individuals who experience trauma of some nature, only a fraction are diagnosed with PTSD. Some sources, for example, assert that approximately twenty percent of crime victims meet the diagnostic criteria for PTSD and that the likelihood of developing PTSD varies by the crime or trauma experienced (2).


PTSD is diagnosed according to the specific guidelines laid out in the fourth edition of the Diagnostic and Statistical Manual (DSM-IV). The DSM-IV stipulates that the symptoms outlined below must persist for a month or more and cause significant interference in social, occupational or other important areas of functioning. DSM-IV guidelines also indicate that PTSD may be diagnosed if the individual experienced an event in which their own or another's life was seriously threatened or in which serious injury was likely. Furthermore, the response to the event must have involved an extreme emotional reaction such as horror, fear or helplessness. Additionally, persistent re-experiencing symptoms are characteristic of PTSD and include intrusive thoughts, recurrent nightmares, flashbacks, and distress and physiological reactions elicited by trauma reminders. Trauma-related numbing of general responsiveness and/or avoidant symptoms must also be present for the diagnosis, in addition to two or more symptoms of arousal. One ought to pause and reflect not only upon the severity and multidimensionality of the traumatic event but upon the experience as it is represented in the individual. All too often it is forgotten that symptoms are not merely a list of criteria in a diagnostic manual but a composite that only partially represents the depth of human experience as it is viewed through the lens of trauma.

Interestingly, research shows that crime victims are more likely to receive a subsequent diagnosis of PTSD than are non-crime victims (2). Differences between the sexes as to the crime most likely to lead to PTSD also seem to exist. The former finding suggests that the more violent the insult against oneself, the harder it may be for human psychological mechanisms to adjust, and hence the disorder results, itself jarring the whole system. The latter finding seems linked to the social construction of violence, whereby the impact in terms of trauma-related disorder is larger for those crimes that are socially construed as highly negative and often severely sanctioned against, for instance, rape.

Continuing along the theme of readjustment to the trauma, researchers have developed models that attempt to explain PTSD and its various symptoms. Two salient features of PTSD, the disorganized and incomplete nature of deliberately retrievable trauma memories and the re-experiencing of the trauma through vivid flashbacks, have been analyzed in particular through the Dual Representation theory (3). Brewin proposed this theory in order to explain the peculiar nature of symptoms in PTSD. While flashbacks are highly vivid experiences that can literally place the individual back into the trauma (showing time distortion, a typical pattern in PTSD), the patient nevertheless has problems retrieving other information deliberately. In a theory that attempts to unite the cognitive aspects of psychological models with current neuroscientific knowledge about memory systems, Brewin maps two different kinds of memories of the traumatic event onto two known memory pathways.


One representational format for the storage of trauma-related memory is known as "verbally accessible memory" (VAM), and this supports the strategic, deliberate and automatic accessibility of ordinary autobiographical memories. The information contained here may be edited and, in general, it interacts with the remaining autobiographical memory base that forms one's personal context in terms of memory. This indicates that the trauma is, at some level, represented among those memories that form an overall picture of one's life. However, the limitation of this memory base is that it is "mediated by limited-capacity serial processes such as attention" (3). This limitation implies that during the traumatic events, the cognitive resources necessary to attend to the situation may simply be absent or severely constrained, thereby restricting the processing of the event as a whole. Importantly, however, emotions do accompany the memory of the trauma: those occurring both during and after the traumatic event are recorded and are available when the individual considers the event. The other type of memory representation of the traumatic event is proposed to be "situationally accessible memory" (SAM). The memories encompassed in this category include information obtained from a wider perceptual-level processing of the trauma, such as visuospatial information, and do not require the same extent of cognitive resources as VAM memories do when processing external stimuli. The information represented here is not believed to have received a great deal of conscious processing, and it is also thought to form the basis of trauma-associated nightmares and flashbacks, given that these too are characterized by their detail and affective nature in comparison to normal memories. 
It is important to establish that this form of memory is not verbally based and therefore cannot necessarily be communicated to others, nor does it interact with autobiographical memory, and thus it may not form any part of the autobiographical memory context of an individual's life. Given that SAM may house the above kinds of information, the triggering of trauma-related memories may be a function of the memories held in this store, since in day-to-day life people cannot control their exposure to various sights and sounds.

Brewin proposed neural correlates for the hypothesized memory representations, whereby SAM memories are mediated by rapid subcortical pathways from the sense organs to the amygdala, as well as by structures such as unimodal sensory cortex, association cortex and the hippocampus, which all project to the amygdala independently. The amygdala is believed to be responsible for various hard-wired threat responses. VAM memories, on the other hand, are hippocampally mediated, and given that the hippocampus is responsible for laying down declarative memory and that processing here involves more synapses and is slower, Brewin assumes a coherency and sophistication inherent in these memories. Some evidence does exist for this theoretical mapping: hippocampal function shows impairment under high stress levels, whereas amygdala functioning is generally enhanced during periods of high stress. It is interesting to see a relatively clean mapping of psychological theory onto neuroscientific knowledge, but even more interesting is what these mappings may tell us about ourselves. It has long been clear that we absorb and store in memory various features of our environment without our personal, conscious knowledge. The phenomena associated with PTSD and delineated by this theory indicate that we store experience without the real experience of what is stored. That the stored components resurface in the form of nightmares among PTSD sufferers indicates that a not dissimilar process may be constantly occurring among people in general. The model laid out by dual representation can be considered an extension of conceptually parallel pursuits in other areas, for example, the development of models for normal language through the study of aphasia and aphasics. 
However, the trauma component must always be kept in mind given that it may have such diverse and wide-ranging effects on individuals that a simple comparison with those inexperienced with such trauma is impossible particularly when examining complex processes such as those underlying dreaming.

An additional and very important component of the dual representation model is the concept that the flashbacks, reflecting a non-hippocampally dependent image-based memory form, may be a cognitive effort to transfer information from the SAM representation to the VAM system. Specifically,

"By deliberately focusing attention on the content of the flashbacks, individuals can effectively recode the additional sensory information associated with periods of intense emotion into verbally accessible memory. In so doing, providing the danger has ceased, the information will acquire a context which includes temporal location in the past, cessation of immediate threat, and restoration of safety. This in turn will assist the process whereby reminders of the trauma are inhibited by cortical influences from activating the person's panoply of fear responses." (3)

The significance of this hypothesis is the development of the belief in the malleable nature of memory and the forms of information contained in memory. This flexibility is, of course, an important aspect of human experience especially when one attempts to imagine a world where experiences are simply fixed in memory during their first occurrence and never open to change subsequent to this event. The plasticity of human thought rests perhaps on reinterpretations of "reality" which then changes to accommodate the current thought process. The dialectics between thought and memory could not exist without this emphasis on memory flexibility.

Furthermore, the stress this model places on the ability to create a memory piecemeal, building memories by integrating previous memories with one another, conflicts with opposing views that treat memory creation as a single event. The formation of coherent memories is probably a many-staged process that is further subject to an abundance of other influences. For example, individuals' memories of past events often do not correspond with documented facts of the reality in question (4). This shows that memory is not infallible. In the present context, it is fortuitous that PTSD sufferers may be naturally provided with a mechanism through which their trauma memories can be re-contextualized and placed on the correct temporal plane, while less affective and more factual information is integrated into the memory base that produces flashbacks.

Along the same lines, if the hypothesis is in fact correct, then it is possible to make a contribution to the literature supporting the processing of trauma memories as a means of recovering from the trauma itself (5), (6). Therapeutic exposure involves engagement with trauma memories in an attempt to build a consolidated and coherent narrative of the trauma. This often involves both in vivo exposure to trauma-related stimuli and imaginal exposure, and has been conceptualized in some detail by Foa and Rothbaum (7). The therapeutic advantage gained from this process is hypothesized to be the systematic desensitization of patients to trauma-related stimuli.

An analysis of PTSD conducted purely through theory is not sufficient. The qualitative and subjective aspect must be given the same importance as theory, for this literature must not lose sight of the individual suffering of which PTSD, as a syndrome of terrible symptoms, is comprised. Through accounts of their distress, sufferers of PTSD can show both the side of PTSD that researchers often label "cruel" or "devastating" and simply leave at that, and the immense impact that PTSD has on functioning from day to day and often from hour to hour. The following paragraphs trace the narratives provided by PTSD patients as they recount details of their own experiences. This is an attempt to provide a more meaningful and eloquent account of PTSD than DSM-IV criteria alone, although the symptoms specified by the manual are evident within these narratives.

There are those who propose PTSD to be reflective of the man-made nature of the trauma imposed upon its sufferers and, for many, of the belief in eventual death at the hands of another human being (8). Kelly (8) posits that the disturbances in PTSD may be explained by personality changes grounded in the "unending threat of death at human hands." Although this may be true in a general existential manner, it may not have validity in every specific case. This view has metamorphosed through time to arrive at a more quantitative and research-based evaluation of PTSD. However, it is proposed here that a return to a narrative-based approach may be very beneficial.

Narratives show, for example, the temporal and reality distortion associated with traumatic experience:
A Vietnam medic, for example, had "to permit...old reality to slide away...through a membrane" (8, p. 14). The only other reality for those with severe trauma is the constant wait for death's arrival, "to be a dead man on leave...who only by chance is not where he belongs" (8, p. 14).

The disordered nature of life after trauma was explained through the experience of a Vietnam veteran for whom the world feels "bereft of order; as though the whole reasonable and decent constitution of things, the sum of all he had experienced or learned to expect were...mislaid somewhere...; no outrageous circumstance...no new, mad thing...could add a jot to the all encompassing chaos that shrieked about his ears..." (8, p. 14).

The intrusive nature of trauma memories is exemplified by General Dallaire, the Force Commander of the United Nations Mission to Rwanda (UNAMIR) during the Rwandan genocide. It is said that he cannot bear the smell of fresh fruit, for it will throw him into a state of extreme depression; he says, "I can't sleep. I can't stand the loudness of silence" (9). Reminders of the trauma of genocide severely affect Corporal Cassavoy, who served with General Dallaire. He puts aside his memories as a movie, but says that is only effective "until you get the smells. The smells are the worst things that trigger a memory. It's like a film starting up in your head." He also states: "there are foods I can't eat any more. Grilled chicken. Can't eat it. It looks like a dead body. Rusted vehicles, can't go near them. Children, I have a hell of a time, looking at little kids, especially newborns because they were a plaything with the Hutus" (9). These accounts of the experiences that constitute trauma provide a concept of, and a context in which to think about, therapy for those with PTSD.

An interesting and more recent approach has been that of Bessel and Schumann, who combine the experiential approach to trauma with the study of German society in the post-World War II context (1). Their approach takes the psychological study of trauma and places it in a new multidimensional space where scientific and quantitative approaches to PTSD are applied toward understanding the social experience and character of an entire society shaped by trauma. A great many individuals in German society were subjected to one or another form of trauma, and tracing the progression of a traumatized society through time, using PTSD as a lens, seems a novel undertaking. The authors do stress the pitfalls of a study that attempts to unite so many different levels of information, yet the appeal of the pursuit lies in recognizing the importance of not becoming impervious to the individual and cultural aspects of this disorder while its scientific study is being conducted.

Returning to early conceptualizations of PTSD, in which the syndrome was regarded as a myth rather than a genuine reaction to traumatic events, one can safely say that PTSD sufferers ought not to have their experiences negated by such assertions. General Dallaire himself asserts, "you cannot put these things behind you...and the more people say that, the more you get mad because you know these things will not disappear. Time does not help" (9). It is clear from the above narratives that the uniformity of symptoms across those diagnosed with PTSD, and the certainty of their existence, has allowed for the construction of a category of psychological disorder caused by traumatic events. Where knowledge and speculation go from here is in the hands of those who are particularly committed to understanding PTSD as an incredibly human experience. Perhaps, then, there is some truth to the concept that PTSD, in many cases, is a disorder resulting from man-made catastrophes. This account leaves you with "Vietnam Iliad," as composed by a Vietnam War veteran (8).

VIETNAM ILIAD

Anger be now your song,
For the gentle ballads of youth
Were hushed in the roar of fire,
Soul-eating flames,
Leaving bitter ashes.

Your rage sings
Like a whistling sword,
Cleaving the tenuous hold
Others made on you.

You do not feel the gentle wind
Of home-this soil
Does not hold the print
Of your foot.

What war do you now fight?
Who is your enemy,
When you face you?

Stilling the guns
Only stops the killing:
The dying continues
As you destroy you.

A Vietnam War Veteran

References

1) Bessel, R. & Schumann, D. (2003). Life After Death: Approaches to a Cultural and Social History of Europe During the 1940s and 1950s. Cambridge, UK: Cambridge University Press.

2) http://www.state.ak.us/admin/vccb/pdf/stress.pdf Post-traumatic stress disorder: summary including diagnostic criteria and other aspects such as biological and populations at risk.

3) http://www.boerhaave-commissie.nl/bibliotheek/Neurobiologische_en_Klinische_/02_Brewin.pdf. Brewin, C. R.: Cognitive Neuroscience and Posttraumatic Stress Disorder

4) http://ess.ntu.ac.uk/miller/cognitive/ewt.htm Interesting source about eyewitness effects concerning memory.

5) Allen, J. G. (2003). Challenges in treating Post-Traumatic Stress Disorder and attachment trauma. Current Women's Health Reports, 3, 213-220.

6) Meichenbaum, D. (1994). Treating post-traumatic stress disorder: a handbook for practice & therapy. Chichester: John Wiley & Sons.

7) Foa, E. B., & Rothbaum, B. O. (1998). Treating the trauma of rape: cognitive behavioural therapy for PTSD. New York: Guilford.

8) Kelly, W. E. (1982). Post Traumatic Stress Disorder and the Vietnam War Veteran. New York: Brunner/ Mazel.

9) http://www.soulselfhelp.on.ca/ptsdtorstar.html Post Traumatic Stress Disorder


To Be or Not to Be
Name: Erica Grah
Date: 2004-05-07 16:08:44
Link to this Comment: 9813


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Everyone, at one point or another, has experienced extreme emotional or psychological pain. The duration of the pain does not always depend upon an easily recognizable origin, and for some individuals such pain is absolutely torturous and unbearable. The reality of the situation is that for some, the pain stems from various external factors; for some, it originates internally; and for others, it is a combination of both. Nevertheless, there are times when the final outcome is less than favorable (in the eyes of those on the outside) and simply tragic.

Suicide is a complicated and generally misunderstood topic, mainly because we try so hard to understand it. There are many forms of suicide, ranging from the suicide bombings that we hear about every day, to assisted suicide, to that which is believed to be the result of mental illness. It is this last form on which I will base my paper. More specifically, I will assume, for the purposes of the following discussion, that the suicide and suicidality to which I refer exist in conjunction with either unipolar or bipolar depression.

The purpose of this paper is not to provide an explanation as to why people commit suicide, but rather to approach the tragedy that is suicide from a nonstandard scientific and neurobiological standpoint. This will involve examining the internal processes of the nervous system and their contribution to the suicidal state of mind. Considerable emphasis will be placed on the relationship between the I-function and the rest of the nervous system, and on how these work, both independently and together, to produce a suicidal state of mind. Ultimately, this paper aims to create a different perspective on the existence of suicide.

The role of the I-function
The I-function is described as the part of us that does the experiencing. It differs from the rest of our nervous system in that it can report back to us a conscious account of what occurs in both our internal and external worlds. It is our perception and our understanding of that perception. People are individuals partly due to the existence of the I-function. It allows us to consciously separate ourselves from each other and to recognize our individuality. The I-function is also the only account of reality that we have. It is highly subjective and usually overpowers the reality that outsiders tell us to believe. Therefore, the I-function plays a significant role in how we perceive ourselves and others, and it defines a place for us in a world that often makes little sense.

To say that suicide does not make sense is a reflection of what our I-functions are telling us. We may believe it, but we know that we believe it because of our I-function. The same idea follows for suicidal individuals: their I-function is able to make sense out of what may be widely perceived as an incomprehensible act. Depression – depending on whom you ask – has a less fatal yet similar effect. Many people say that depressives have a skewed sense of reality, but fail to realize that in a depressive state, that which the I-function experiences is as close to reality as one can get. Trying to force the I-function to believe something other than what it encounters is a lesson in futility. However, the fact of the matter is that most depressives do not commit suicide (1). The theories as to why vary, but in the end, there is a very thin line between not wanting to live and wanting to die. As a result, we can assume the existence of a few extra experiences within the I-function's catalog that distinguish someone who is severely depressed from one who is suicidally depressed. These are simply two states that exist on the same continuum.

Here, I propose dividing the I-function into two parts, in order to explore the chaos that often exists within the suicidal mind. On the one hand, there are the hopeless thoughts that make the individual believe that nothing is worth living for, because all-encompassing pain will always be the experience of the I-function (2), (3). On the other, there are the feelings which desire that those in the other part of the I-function be removed. Suicide, in effect, comes down to one part of the I-function wanting to completely obliterate the part that seems to be causing (read: reporting) the pain, in essence waging a war of the mind. The ultimate goal, then, is to reach a bearable state of equilibrium, in which the individual's psychic pain perception can be assuaged by the part of the I-function that wishes to reduce it, without either side necessarily winning the war. Suicide is thus often about pain reduction in the midst of hopeless despair. In the reality of those who complete suicide, one side of the I-function has won, whereas in the general external reality, all is really lost.

The role of the rest of the nervous system with the I-function
Suicidality, particularly that which results in a completed suicide, is not solely a product of the I-function. The rest of the nervous system plays a significant role, in conjunction with the I-function, in a person's disposition toward suicide. Many suicides are impulsive acts (1), (4). Although the suffering individual may have planned an eventual attack on herself, it is more likely that the final moment resulted from impulsive behavior. Impulsivity is widely inherent in nature: there are many processes and states within us over which we have little or no control. The same is true of individuals who are predisposed to impulsive behavior. Combining a lack of impulse control with the chaos of the mind can be a recipe for disaster, as is the case in many suicides.

Additionally, a person's ability to cope with life's stressors, be they internal or otherwise, can have a large impact on their behavior in any painful, stressful, or generally unbearable set of circumstances. Coping mechanisms are relatively inherent, which is why individuals must sometimes be taught new ones. The intrinsic temperament with which we are all born can make us susceptible to high stress levels, thereby reducing our threshold for stressful situations (1). This in turn has an impact on the ways in which the I-function assesses such a taxing state. Again, compounded with variable levels of mental stability, this can lead to suicide.

Recognizing that we experience states that our I-functions can only report but not change is an important component in realizing the different aspects of suicidal behavior. It is important to examine the nature of suicidality – in light of that which the I-function has no control over – to determine the true role that the I-function plays in suicidality and suicide completion. First, recall that the I-function reports pain consciously. There are many times, with respect to depressive illness, that input is nonexistent. There is a sweeping feeling of despair that seems to develop from one's very core; the I-function acknowledges it, but its nature has no name. It is not difficult to assume, then, that suicidality works in much the same way. If there is an instinctive reaction that all is lost and death is infinitely better than earth's "amenities," then what prevents us from believing this is one more thing over which the I-function has no control?

There are often no words that can ever accurately describe what suicidality truly means, in both the internal and external realms. If the I-function experiences difficulty in trying to attach a thought to the "wrong-ness" it perceives, it must construct a reality – as it does in so many other instances – for the individual to discern and understand. However, because the feeling usually overpowers any attempt to describe it, the thoughts that enter into the mind of a suicidal person are often extremely inferior to what could be said if adequate words actually existed. This leads oftentimes to severe misinterpretation of the suicidal person's description of his feelings. The nature of suicide is therefore as inexplicable in the brain as it is incomprehensible in the outside world.

The possibility of prevention
The realization that the I-function perpetually fails to provide an accurate representation of what it perceives is important in effective suicide prevention, and reveals the flaws in some methods of prevention. It's widely known that most depressives exhibit lower levels of the neurotransmitter serotonin in their brains (1), (4). The same is true for many, but not all, suicides. However, despite the research that cites the successful use of some SSRI antidepressants as a method of suicide prevention, there are many reasons to believe that ultimately, prescribing antidepressants would simply delay the inevitable (5). There are many people who say they want to die but would never kill themselves. But, for those for whom this is not true, increasing serotonin levels in the brain may change (or at the very least, mask) the (true) state of the brain, but not necessarily remove what is inherent in some other part of the nervous system. Dealing with the condition existing in some other part of the nervous system in such a medical way is different from removing the state altogether. Therefore, so long as the state exists, there will always be that susceptibility to suicide and suicidal behavior (5).

This susceptibility, whether genetic or brought on by the stress of comorbid mental illness, is what makes suicidal individuals veritable ticking time-bombs. At this point, anything could be that final straw. Kay Jamison describes this best:
"A slight affront or loss may quickly create a flash point from a lethal mix of elements. It is as with fire: dry grass and high winds may remain, in themselves, only dangerous possibilities, elements of combustion. But if lightning falls across the grass, the chance of fire increases blindingly fast; it leaps from slim to given" (1) .
The blaze that is suicide can grow gradually from a lit cigarette in a garbage can or erupt from a match thrown onto a kerosene-drenched house. There is very little in the way of external inputs that can remove that ache; the nervous system itself must be transformed to reduce suicide risk. However, it is important to differentiate, even if minimally, between the mental illness alone and the mental illness accompanied by suicidality. Treatment of the mental illness may reduce suicide risk, but it will not remove it. Fully understanding this aspect of suicidality would make suicide prevention much more effective.

Another example of prevention that is rather ineffectual is the positive self-talk that suicidal individuals are encouraged to practice (6). It is highly ineffective to try to convince the I-function that a state other than the one it perceives exists. Unless it senses the removal of the anomalous condition that is suicidality, nothing can persuade it otherwise in a way it will accept as truth. Just as the I-function has no control over the suicidal state existing in another part of the nervous system, forcing the I-function to believe something different will not remove the core problem.

There are many reasons to believe that suicide is not preventable; yet if that were entirely true, the suicide rate in the population would be much higher. Still, we don't ask why people choose to live. Instead, we try to understand why they choose to die, which will never result in complete comprehension unless experience reveals the answer. We don't generally know what occurs in the mind of a person who commits suicide. But for those who do see the light of day again, it is an interesting question what brought them back from the proverbial ledge. Can we give credit to the I-function or to some other factor? Does this mean that they will never attempt suicide? In thinking about what I've read regarding this topic, the answer is probably not.


References

1) Jamison, Kay Redfield. Night Falls Fast: Understanding Suicide. New York: Vintage Books, 1999.

2) Understanding and Helping the Suicidal Person , on the American Association of Suicidology website, which provides research information and links to help resources on suicide

3) History of Suicide , from the Suicidology Web, which has a lot of information and many articles on suicide

4) Why? The Neuroscience of Suicide, article from the Scientific American website

5) Suicide and the Mind , from the Suicidology Web

6) How to stay safe if you're considering suicide , from the Mayo Clinic web site, which provides extensive health-related information


. . .And My Knees Turned to Jelly
Name: Erin Okaza
Date: 2004-05-07 17:08:40
Link to this Comment: 9814


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

I look at the recital program in my hands. The boy ahead of me has just begun playing. I nestle in my seat to listen – I like this piece; I remember learning it the summer before. I know the piece and can estimate that I have about 10 minutes before I play. As he comes upon his final run, I am interrupted from the serenity of my world and told that I need to get ready to play. Without warning, as if on cue, my heart starts racing. As I walk to the wings, I feel as if my legs can't support the weight of my body. I hold my violin tight for fear of it slipping out of my sweaty hands. Before I know it, I am on the stage. As I raise the instrument to my chin, my arms shake as if to visibly compete against the sound of my heart pounding in my ears. Just before I place my finger on the black fingerboard, my mind takes off. The thought of every note I had ever missed in practice returns to haunt me. Yet, despite every ounce of my body wishing to get out of this situation, I continue to play, determined to capture every note.

Every week at lessons, I fight stiff fingers, a pounding heart and racing thoughts. During jury examinations, weeks of practice seem to evade me and I spend most of my energy trying to control my shaky hands to keep from dropping my instrument. Ask me any day, and I will tell you how much I hate to play the violin in front of people. Yet it seems as if I continually come back for more. The phenomenon of performance anxiety is the basis of this paper's discussion. In it, we will investigate this element of "conflict," whereby a musician experiences opposing and concurrent tendencies of desire (to perform) and of fear (as manifested by performance anxiety). We will use the notion of setpoints to explain our homeostatic ability to cope with fear and the implications this has for the unique nature of performance fear. Next, we extend the discussion of performance fear by employing the notion that boxes are capable of signal generation, which provides a way to pinpoint the origin of our fear. Having identified where fear originates, we integrate the I-function as a way to account for our racing thoughts. We then consider the implications of this network of thinking by examining how the model explains the relationship between fear and anxiety. Finally, we consider questions raised by this way of thinking about performance anxiety.

Setpoint regulation allows our body to maintain a certain degree of equilibrium. Highly complex systems of inhibitory and facilitatory mechanisms located in areas such as the amygdala and hippocampus deal with the induction, processing and extinction of fear. Facilitatory mechanisms related to anxious states involve neurotransmitters such as the monoamines and neuropeptides, while gamma-aminobutyric acid (GABA), found throughout the brain, acts as a broadly inhibitory neurotransmitter (1), (2). These are some of the many processes continuously working so we can maintain a degree of emotional equilibrium during the course of our day. Setpoint mechanisms allow us to respond to perceived threats, exhibit the appropriate response and then restore our body to the equilibrium state (1). This homeostatic control was exhibited as I sat calmly waiting for my turn to play. The moment my name was called, I responded with fear. The natural behavioral response that followed was the urge to escape. Once I was done playing, I sat down and was able to listen to the concert again.

In most cases the emotional response is generated by a real, physically perceived threat (3). An external stimulus such as seeing a snake produces stress that elicits a response to the fear of being bitten. The response is a direct reaction to the external environment. In musicians, however, this type of external stimulus appears to be lacking. The standard concert hall seats listeners, not mass murderers trying to attack you. There is no immediate physical danger in what you are doing. However, the intense and very real sense of dread still exists. The woman who came to tell me it was time for me to play didn't hand me a death sentence, though it felt as if she had. Even before she spoke to me I saw her approaching and felt my heart begin to race. If there is no physically threatening stimulus triggering our homeostatic response to threats, where does the threat originate? Boxes within our nervous system with signal-generating ability provide a way to address this question.

The notion of boxes within our nervous system that can generate signals without any physically visible stimulus offers a way to explain where the threat originates. Boxes within the nervous system are able to create and generate signals on their own; in effect, nervous systems are affected by their own function – without involvement of the I-function. A box that is generating a signal on its own can affect the input signal of another box. This can continue until a network is responsible for generating a particular autonomic response such as increased heart rate, sweaty hands, weak knees, or that familiar knot in the stomach (4). Given this, what triggers a box to self-generate a signal? Perhaps the answer lies in the unconscious. A previously adverse experience of performing may have conditioned a particular response in our unconscious. Such responses might stem from a recollection of memories relating in some way to the current experience (3). As a child, I remember playing in little group recitals with other kids, but not liking to stand in front of the group, exposed to everyone's parents. It is quite possible that this fear has been embedded deep within my unconscious and that it triggers a box to generate signals when I am in a concert atmosphere before strangers. Regardless of whether the exact source of this particular response is known to the individual, the ability to unconsciously trigger a network of boxes within our nervous system to respond proportionally to the danger associated with the perceived threat does exist. Thus, the signal-generating boxes within our nervous system affect our behavior by responding to threats unconsciously, which results in the manifestation of a biological response.

While the rest of the nervous system has become conditioned to respond to fear via unconscious stimuli, the I-function is unaware of the process by which such stimuli come about. What the I-function is aware of, though, are the hours of practice put into preparing the piece, and the sense of readiness that accompanies this knowledge. In effect, there are now two different views of what is happening. On one end, we have the unconscious nervous system, conditioned to respond to certain stimuli and generate feelings associated with fear. The response is biologically evident as my stomach twists in knots and my heart pounds in my ears. On the other end, the I-function recognizes that I am totally prepared to play, knowing the hours of "drill.stop.repeat" training that went into preparation. Additionally, my I-function is aware that I love to play the violin because I love music. Then I see the woman walking toward me and all of a sudden my heart rate increases and my hands become like sponges. My I-function realizes that there is a mismatch between the intrinsic view of this experience generated by my unconscious nervous system and its own view. The I-function then tries to resolve this conflict in viewpoints by creating a story.

The I-function is aware of the arousal of biological mechanisms and the response fear creates. As a result, the I-function must account for this in some logical way, and does so by constructing a reality that takes into consideration all of the conscious, analytic and symbolic aspects of the experience. Efforts of our I-function to resolve conflict are at times logical but irrational. Cognitive distortions describe the types of stories the I-function creates to make sense of the biological responses and fit them into the context of the surrounding situation (5). For example, the moment I placed my bow on the string, with my fingers in position to play the first chord, I could only remember the notes that I had messed up during countless hours in the practice rooms. My I-function created a story, very logical, but nonetheless irrational. The cognitive distortion was one where I thought that this performance was all-or-nothing (6). The situation demanded perfection; I rested my fears on the assumption that I could not mess up without disaster unfolding. Any wrong note, any missed bowing was doom, and my thoughts began to race as my I-function pieced together all the times that I did miss a note or had intonation problems.

Despite the thoughts of escaping this torturous situation, I kept playing. The excitation of the nervous system never quite calms down to the "normal" emotional state that existed before the performance until the full threat of performing is gone. But during the performance, the homeostatic mechanisms regulating the setpoints kick in. The same mechanisms responsible for creating a biological response also aid in bringing it to a relative equilibrium while performing, though not at the same level of comfort as before (5). This occurs as the I-function is able to rationalize the situation further – the people are still in their seats, and since they are there, they do not intend to inflict doom upon you or your instrument. These mechanisms offer an explanation as to why, in my experience, playing becomes bearable.

The nervous system presents a very interesting model by which to investigate and better understand why we do what we do despite the conflict between pleasure and pain. However, a major implication of looking at performance anxiety in this fashion is the relative uniformity with which the above biological processes occur. If these mechanisms occur in everyone, why is it that some musicians thrive on performance while others are debilitated by the I-function's conflict resolution techniques?

We can use this model to observe that the key to performance does not rest solely on the amount of practicing, or the type of teachers you have, but upon your ability to control the extent to which the I-function is able to create stories. We are all human, and the biological processes that convey to us feelings of fear are natural (6), (7). We have no control over the signal generation of boxes in our nervous system or the homeostatic responses that keep our body in equilibrium. In effect, a certain amount of "stage fright" is inherent to all who perform, and can be translated into heightened physical and mental alertness with the potential to act as a performance enhancer (7), (8). It has been shown that music students who harness the unconscious biological response of the I-functionless realm while channeling the I-function's conflict resolution – moving away from cognitive distortion and toward conveying the feelings the composer intended – experience a sense of euphoria (9). This suggests the possibility that fear becomes anxiety when the connection that allows for a positive conflict resolution in the I-function can't be made. The "stage fright" inherent to any performance becomes disabling when the I-function is given free rein and creates alternative, irrational realities to account for the biological responses to fear.

The neurobiological model derived in this paper presents an interesting way to look at performance fears, and it raises questions about specific aspects of the model. The concepts of setpoints and homeostatic regulation prompt the question of what factors determine setpoint levels. For example, after 4 or 5 minutes of playing I was able to feel less anxious, yet I still felt nervous. Is there a specific range of setpoint elevation that corresponds to fear? If so, can these ranges be identified and isolated to help people with extreme performance anxiety? Another question concerns those people who are able to make the connection that allows for positive conflict resolution. Are they able to control their I-function better, or are the biological responses, though still present, simply less intense? If so, could a lesser intensity of biological responses to performance mean that the unconscious has less control over some individuals than others?

In this paper, performance anxiety is more than just a case of bad nerves. It is a complex neurobiological network of setpoints, the unconscious's response to environmental stimuli, and the I-function's attempts to piece together a story with missing information. This discussion proposes a model that maps the neurobiological processes that allow musicians to perform amidst conflicting emotions of fear and desire. We may not all turn out to be the next principal chair of the Philadelphia Orchestra, or even the next American Idol, but maybe now we can extend this template to better understand ourselves and improve our performance in some way. In my case, that means trying to keep my stomach from tying itself in knots and my fingers from turning into pestles as I scratch out a tune on the 'ole four string.


References

1) Progress in Neurobiology , An article in Progress in Neurobiology via Science Direct about the neurobiology of the hierarchical processes of mood.

2) Central Practice, a site about anxiety and different mood states

3) The Neurobiology of Stress and Emotions, an article from the International Foundation for Functional Gastrointestinal Disorders.

4) J.Kimball's Online Biology Textbook , site about nervous system


5) Coping with Music Performance Anxiety , from University of Wisconsin - Eau Claire

6) Clammy Hands and Inner Voices , from Eastern Michigan University


7) How Teachers Can Help – Performance Anxiety , an article published by American Music Teacher


8) Effects of Stress on Music Performance , from Ithaca College Music Department

9) Performance Anxiety, Motivation and Personality in Music Students, from Ueno Gakuen University


Keep Your Eye on the Ball? The Sweet Science of H
Name: Michael Fi
Date: 2004-05-07 22:37:47
Link to this Comment: 9815


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The Florida Marlins' Mike Lowell digs into the right-handed batter's box. He toes the ground and his hands grasp the bat loosely. The count is two balls and two strikes. He turns his eyes toward the pitcher, Philadelphia's Billy Wagner, sets himself and awaits the pitch. The catcher, Mike Lieberthal, crouches and signals a low fastball to Wagner. The pitcher enters his windup and hurls a 98-miles-per-hour fastball, backspinning hard. The pitch moves toward home plate, and Lowell readies his hands and body to swing. He holds back. The pitch smacks the catcher's mitt low and out of the strike zone. The umpire calls ball three; the count is full. The pitch took about four tenths of a second to reach home plate. The next pitch comes just as hard. By the time the ball is halfway through its flight, Lowell begins his swing. He leans in and his swing is smooth, his bat meeting the ball chest high over the plate. The ball hurtles high into the left-centerfield bleachers for Lowell's third home run of the night. In just a fraction of a second, Mike Lowell and his bat have tied the game.

This dance is performed thousands of times every summer day. The best batters make productive contact with the ball only 3 in 10 times they come to the plate, while major league hitters are able to make some form of contact with the ball roughly 80% of the time (1). Just as importantly, only the best hitters can decide not to swing at a pitch hurled at immense speed over a short distance. What enables a person to judge the characteristics of the flight of a baseball thrown by a skilled pitcher such that he can mobilize his body and execute a complex, concerted motion to hit the ball? What mental and visual skills enable a batter to hit a well-pitched ball? What are the most important neurological characteristics of a successful batter? Hitting, which some call the most difficult feat in sports, is a masterful coordination of eye, mind and body.

Strength in several visual skills is necessary to hit a baseball. Acute eyesight is extremely helpful. A batter can be aided by using his dominant eye most effectively through his choice of batting stance. Precise eye movements enable a batter to follow the ball effectively.

Visual skills alone cannot ready a batter to hit a baseball. Many ballplayers with raw physical talent and visual acuity have failed to become successful hitters. Michael Jordan, one of the greatest athletes of our time, could not hit above .220 in the minor leagues even with his undeniable physical prowess. What Jordan may have lacked is practice. With experience a batter can develop the complex muscle memory necessary to execute a swing. Furthermore, pitchers don't throw the same pitch every time, and a hitter must spend time learning how to recognize and hit pitches that curve, dip or knuckle at varying speeds.

Perhaps the most statistically significant characteristic which distinguishes professional baseball players from the general population is visual acuity. According to Dr. Daniel Laby, Dr. David Kirschen and Tony Abbatine of the Cutting Edge Baseball School, 81% of professional baseball players possess 20/15 vision or better, with 2% possessing almost perfect 20/8 eyesight. (2) 20/20 vision is the standard which ophthalmologists use to denote average visual capability. An individual with 20/15 vision can resolve a spatial pattern 20 feet away with the same clarity and detail as an individual with 20/20 vision resolves the same pattern 15 feet away. (3)

This visual superiority enables one to clearly determine a ball's flight more quickly than the average person. At 60 feet, where the ball leaves a pitcher's hand, a batter with 20/15 vision generates neurological input regarding the ball's spin or movement at an efficiency correlating to an average person's perception of a ball 45 feet away. Disparities in visual acuity become particularly important in determining a ballplayer's skill when subsystems are analyzed. For example, major league players tested higher on contrast sensitivity (discriminating against backgrounds) than did minor league players. (4)
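The 45-foot figure follows from simple proportionality: a 20/15 observer resolves at D feet what a 20/20 observer resolves at D × 15/20 feet. Here is a minimal sketch of that arithmetic; the function name and interface are mine, not from any optometry reference.

```python
def equivalent_distance(actual_distance_ft, acuity_denominator, baseline=20):
    """Distance at which a 20/20 observer resolves the same detail that a
    20/<acuity_denominator> observer resolves at actual_distance_ft."""
    return actual_distance_ft * acuity_denominator / baseline

# A 20/15 batter viewing the release point 60 feet away sees detail
# comparable to an average observer at:
print(equivalent_distance(60, 15))   # 45.0 feet, the figure cited above
# A rare 20/8 player effectively moves the release point to:
print(equivalent_distance(60, 8))    # 24.0 feet
```

The same formula explains why subsystem differences (such as contrast sensitivity) matter on top of raw acuity: the proportionality only describes resolving power, not tracking or pattern recognition.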

It is uncertain how heavily visual acuity really affects a hitter's capability. Sharp vision helps a batter acquire sight of the ball, but does not aid him in tracking the ball in flight. Furthermore, although a 20/15 individual may pick up the pattern of spin on the ball earlier than a 20/20 individual, this early acquisition of input does not help a batter integrate the appearance of the ball such that the batter can determine the type of pitch and its future movement. This skill requires many other visual and neural systems.

When a pitch is released, the batter tries to keep the image of the ball squarely projected onto his fovea, where visual acuity is greatest. When acquiring the ball as an image, the hitter would be best served to face the pitcher with his dominant eye closest to the mound. There is debate as to whether one's dominant eye actually transmits visual information faster than its non-dominant mate (some say that the dominant eye transmits information 10-13 times faster than its mate (4)). However, most ophthalmologists agree that the "cyclopean eye" is located closer to one eye than the other in eye dominant people. One eye (and its fovea) is preferred for sighting an object.

About 65% of the population is right eye dominant while 35% is left eye dominant if the population is divided into only left and right eye dominant persons. There is no correlation between handedness and ocular dominance. However, in a study of University of Florida ballplayers, not only were cross-dominant (right hand batting, left eye dominant and vice versa) players represented at a proportion twice that in the regular population, they also had higher batting averages than uncrossed dominant players. A further subgroup, central-eye dominant players, outperformed crossed and uncrossed hand-eye dominant players. These central-eye dominant players have the ability to sight equally well with either eye. (5)

As a pitch moves toward a right-handed batter, the ball is first picked up by the left eye. If this eye is dominant, the image of the ball is more easily caught on the fovea. Having one's dominant eye closest to the pitcher is helpful in getting an early read on the ball's flight characteristics. Both eyes are needed to effectively judge the distance of the ball, and the non-dominant eye (in this case) must also acquire and track the ball as it moves towards the batter. (6) Perhaps the high performance of central-eye dominant players can be explained by their ability to sight the ball equally well through their entire visual field.

However effective a player is at sighting the ball, the ball moves and must be sighted continuously throughout its flight. Robert Watts and Terry Bahill cite three types of eye movement as necessary to track a pitch. Saccadic eye movements scan laterally and are used when reading. Vestibulo-ocular eye movements maintain fixation on an object while the head is moving. Smooth-pursuit eye movements are used when tracking a moving object. All of these movements must be integrated and harmonized in order to effectively track a pitch. (1)

It is important to note here that it is physiologically impossible to track an oncoming moving object all the way to the bat and still make the series of recognitions necessary to swing at the ball. This statement holds true for Billy Wagner fastballs and slow pitch softballs. Most coaches tell players to "see the bat hitting the ball," but this is impossible. The angular velocity of an object passing as close to the batter as a pitched ball is such that it cannot be matched by head and eye movements combined. Anticipatory head and saccadic eye movements may allow a batter to see the ball cross the plate, but render a useful swing virtually impossible. The most skilled professional baseball players lose sight of a fastball five to six feet from home plate. In order to see the bat hit the ball, one must "take his eye off the ball." Even the great Ted Williams, whose 20/10 vision and mental baseball acuity were legendary, personally debunked his claims to being able to see ball-bat contact. (1)(4)
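The geometry behind this claim can be checked with a back-of-the-envelope calculation. Treat the pitch as traveling on a straight line that passes a couple of feet to the side of the batter's eyes; the ball's angular velocity is then v·b/(d² + b²), where v is pitch speed, b the lateral offset, and d the remaining distance. This stays small for most of the flight and explodes near the plate. The sketch below assumes a 98 mph pitch and a 2 ft offset (both my choices for illustration); the smooth-pursuit ceiling of very roughly 70°/s is a commonly cited rough figure, not a precise limit.

```python
import math

MPH_TO_FTPS = 5280 / 3600          # 1 mph ≈ 1.467 ft/s

def angular_velocity_deg_s(speed_mph, dist_to_eye_ft, offset_ft=2.0):
    """Angular velocity (deg/s) of a ball on a straight path passing
    offset_ft to the side of the observer, currently dist_to_eye_ft away."""
    v = speed_mph * MPH_TO_FTPS
    omega_rad = v * offset_ft / (dist_to_eye_ft ** 2 + offset_ft ** 2)
    return math.degrees(omega_rad)

# Total flight time of a 98 mph pitch over ~60 ft: about 0.42 s,
# consistent with the "four tenths of a second" figure above.
print(60 / (98 * MPH_TO_FTPS))

for d in (50, 20, 10, 5):
    print(d, round(angular_velocity_deg_s(98, d)))
# At 50 ft out the ball sweeps only a few degrees per second; by 5 ft
# it exceeds 500 deg/s, far beyond smooth-pursuit capability, which is
# why the last few feet of the flight cannot be tracked.
```

The numbers make Watts and Bahill's point concrete: no plausible combination of head and eye movement can match a several-hundred-degrees-per-second sweep, so losing the ball five to six feet from the plate is a physiological inevitability, not a lapse of discipline.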

So if a batter can quickly acquire a pitched ball and track it to the plate using all three requisite eye movements, what does he do if the pitch is a split-fingered fast ball and dives into the dirt just 4 feet from the plate?

''Sure, I think I had good eyesight, maybe exceptional eyesight, but not superhuman eyesight," Ted Williams once said, "A lot of people have 20/10 vision. The reason I saw things was that I was so intense . . . it was discipline, not super eyesight.'' (7)

Hitting a baseball wouldn't be the most difficult feat in sports if you only needed good eyesight to do it. One mustn't forget that what makes contact with the ball is the bat, not the eye. Being a pitcher wouldn't be much fun if any Joe out of the stands with 20/10 vision and finely tuned saccadic eye movements could hit a home run. Once in a while naturally gifted ballplayers come around who can hit any pitch, anywhere, any time. But what separates the pros from the hacks more than anything else is discipline, physical coordination and critical periods of learning. (8)

For every 99 miles per hour Billy Wagner fastball, there is a Barry Zito curveball, which falls from the batter's shoulders to his knees between 30 feet away and the catcher's mitt, twisting away from a left handed hitter's reach at a mild 70 miles per hour. The swing for Wagner's heater is much different than the one for Zito's hook, and a batter needs to know the difference to hit the ball as it is pitched.

Practicing a swing strengthens the synaptic pathways corresponding to the particular swinging motion. Practicing the visual acquisition of a pitch and the determination of its probable behavior strengthens other pathways. If these pathways are adequately strengthened, much of the work of acquiring a pitch, tracking it and swinging becomes possible independent of the self – it becomes habit. Subsequently, reaction times decrease and hitting effectiveness increases.

Ted Williams' three principal rules for hitting were (9):
1. Get a good ball to hit. Know which balls you are capable of hitting well and which balls are likely to get you out if you hit them
2. Do your homework. Know the pitcher, know his pitches. Think about what the pitcher is liable to throw you.
3. Be quick with the bat.

These three rules allude to the diverse functions your nervous system must carry out in order to hit successfully. Rule 2 emphasizes the importance of preparing to properly integrate the information conveyed by the pitcher's throws in advance of your at-bat. Rule 1 stresses quick acquisition of the ball and rapid integration of its future behavior. After determining the type of pitch by comparing its spin to that of pitched balls one has seen in the past and reasoning the location of the ball in space once it nears the plate, the batter may apply rule number 3.

Billy Wagner, pitching from the stretch, grips the ball tightly in his left hand. Mike Lowell has seen five consecutive fastballs. He knows that Wagner counts on his fastball to blow by hitters and hasn't thrown a slider since the inning's first pitch, when Miguel Cabrera struck out on three pitches. To hit Wagner's fastball, you have to guess that it's coming. Lowell steps into the box and settles into his low stance. His eyes rest squarely on the pitcher's point of release. Wagner exhales and comes to the belt, lifts his right leg high and catapults the ball in a slinging motion somewhere between overhand and sidearm. As soon as the ball leaves Wagner's outstretched fingers, Lowell knows it's a fastball. His left eye spots the ball directly on his sensitive fovea and the tight backspin renders the red-laced ball white. Synapses fire through his optic nerves and brain. Every Billy Wagner fastball he's ever seen tells Lowell that this pitch is indeed a four-seam fastball. Lowell lifts his lead leg slightly and moves his hands towards his back hip. His eyes scan from left to right and his head turns in concert. The ball will be outside if it goes straight, but Wagner's fastball tails, and Lowell's instincts tell him it will bend back into the top of his strike zone. By now there is no time for further integration, Lowell's motor neurons snap his body into action and send the bat towards the pitch while his head and eyes continue to swivel rightward. He makes contact.


References

1) Watts, Robert G. and A. Terry Bahill. Keep Your Eye on the Ball: Curve Balls, Knuckleballs and Fallacies of Baseball. Revised ed. New York: W.H. Freeman and Co., 2000.

2) Cutting Edge Baseball School, Dr. Daniel Laby, Dr. David Kirschen & Tony Abbatine discuss the visual mechanics of hitting.

3) York University, web-book entitled The Joy of Visual Perception.

4) Penn State University sports medicine executive newsletter, "Athletes should play from the eyes down."

5) Psyched athletic performance, Feature: "Eye and Hand Dominance - Baseball Performance" by Dr. Paul Schienberg.

6) University of North Carolina Psychology Dept., "Visual Pathways to the Brain."

7) USA Today/Baseball Weekly Special Report, 6/6/1996, "In every sense, Williams saw more than most."

8) University of Connecticut, "Activity-dependent tuning of synaptic circuits in development."

9) Tedwilliams.com, Ted's rules for hitting. Also features his "happy zone" annotated strike zone.


Music of the Neuron: The Effect of Rhythm on the Nervous System
Name: Kristen Co
Date: 2004-05-08 09:20:27
Link to this Comment: 9816


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

For years scientists have been doing observational studies on how music unconsciously affects the nervous system. They claim that certain melodies can alter our moods, enhance our intelligence, influence our behaviors, and even heal us when we're sick. Although many studies have examined the effects of music, little is known about how exactly those effects occur. In order to uncover the scientific truth of music's influence, it is necessary to get down to the most basic element of music: rhythm. Rhythm is the backbone of music, providing the structural support upon which other aspects such as melody, tone, and harmony are added to create a song. Observations and modern scientific experiments suggest that rhythm does indeed have an effect and could be the key to understanding the complicated biological response to music.

Rhythm is the primal element of music that has been used for centuries. Shamans, Native Americans, and African tribes are just a few groups who used or still use rhythms, primarily in the form of drumming, for rituals. These cultures frequently employ rhythm as a means to maintain health, both mentally and physically. The idea behind rhythm-healing is that the steady rhythms of the drums influence internal bodily rhythms which are causing distress (1). This technique of healing through rhythms evolved separately in cultures all over the world and still persists today in many forms.

The basis of this technique is the fact that rhythm is inherently biological. Rhythms can be found throughout the body in the cardiovascular, respiratory, and glandular systems. Most importantly, rhythms are also prevalent in the nervous system. One level of rhythm lies in the movement of action potential signals through neurons (2). These patterned signals can be seen when neurons work together, as in the internal corollary discharge signals and the central pattern generators that produce the motor symphonies performed by the body.

The nervous system is also crucial in integrating and coordinating the rhythms of the body's other systems so that they work together as one cohesive unit (2). The heart, for example, beats independently of the nervous system, but the nervous system can increase or decrease its rate of beating (3). The nervous system also has great control over the muscles and can therefore translate neural impulses into bodily movements. Indeed, the nervous system exerts some control over nearly every bodily system, keeping them in homeostatic balance.

The nervous system is also adept at interpreting the rhythmic aural or visual waves that enter the body through the sensory systems. The ear receives these rhythms as sound waves and transduces them into action potentials to be interpreted by the brain. The waves are funneled in by the outer ear and strike the tympanic membrane; the resulting vibrations are amplified by the ossicles of the middle ear and transmitted through the oval window to the cochlea. In the cochlea the waves travel through fluid and displace the basilar membrane of the organ of Corti. The hair cells of the basilar membrane carry tiny projections called stereocilia that are attached to a second membrane, the tectorial membrane (4). Movement of the stereocilia opens mechanically gated ion channels, admitting positive ions and ultimately triggering action potentials. This signal then travels along the cochlear nerve to the brain, which interprets the frequency, duration, and volume of the sound (5). The perception of these signals is temporal as well as spatial, making the auditory system ideal for interpreting rhythms.

The rhythms of the body and external rhythms found in music are connected through the process of entrainment. Entrainment occurs when environmental cues such as light or sound are interpreted by the brain and integrated into the body's rhythms. One example of entrainment is the effect of light on the body's circadian rhythm. Light is absorbed by the retina and transduced into neural signals which are sent down the optic nerve. The signal is taken to various regions of the brain as well as to the suprachiasmatic nuclei (SCN) in the hypothalamus. The SCN influences the circadian rhythm by releasing chemicals to the brain and other parts of the body (6). Although less studied, auditory signals can have a similar effect on the body's rhythms by prompting neural and chemical changes which affect one's behavior.

One theory is that rhythm is entrained to neural oscillators that work to coordinate the perception of sound waves with the action of the body through the beat of music. In this way, music is an example of an "action-perception cycle" (7). The perception of rhythm through the ears has a strong effect on the nervous system, which in turn causes action in other systems of the body. This effect can be achieved through certain pattern-sensitive neurons that, when activated, translate the incoming signals into neural codes that can distinguish rhythm (8). These rhythms are then reproduced internally by measuring intervals. This system would allow us to integrate rhythms into behaviors without even consciously hearing the beats (9).
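
The entrainment idea can be caricatured with a standard phase-oscillator (Adler) model. This is an illustrative stand-in, not a model taken from the cited studies: an internal oscillator with its own natural frequency gradually locks onto an external beat once the coupling is strong enough.

```python
import math

# Toy sketch of entrainment: a single phase oscillator whose natural
# frequency (1.9 Hz) differs slightly from an external beat (2.0 Hz).
# All numbers here are illustrative, not drawn from the references.

def simulate(natural_hz=1.9, beat_hz=2.0, coupling=1.0,
             dt=0.001, seconds=30.0):
    """Adler-style phase model: dphi/dt = d_omega - K*sin(phi),
    where phi is the oscillator's phase lag behind the beat."""
    phi = 1.0                                   # initial phase difference (rad)
    d_omega = 2 * math.pi * (natural_hz - beat_hz)
    for _ in range(int(seconds / dt)):
        phi += (d_omega - coupling * math.sin(phi)) * dt
    return phi

# When the coupling K exceeds |d_omega|, the phase difference settles
# to a constant value: the oscillator has entrained to the beat.
print(f"locked phase lag: {simulate():.3f} rad")
```

With weaker coupling (try `coupling=0.3`) the lag never settles and the oscillator drifts, which is the model's analogue of a rhythm too faint to entrain a listener.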

For example, rhythms in music have an unconscious effect on the corollary discharge signals that cause the lifting and lowering of the foot. This foot tapping is a frequent bodily interpretation of rhythms in music. Other muscular systems, such as those that allow us to dance or sing, may also be affected by these auditory signals. In this way, auditory functions are closely connected to physical movement.

After sound waves are transformed into neural impulses, the signal travels to various regions of the brain. No one part of the brain has been identified as the music region of the brain. Rhythm is interpreted by both right and left hemispheres of the brain. Rhythm also provokes activity in the motor cortex. The activity in the motor cortex can be observed even if one is merely thinking about rhythm, suggesting that rhythm makes us think of moving even if we don't (10).

Some scientists, as the result of neuroimaging studies, suggest that regions such as the supplementary motor area, premotor cortex, basal ganglia and parietal cortex are areas of the brain that enable the entrainment of sound waves into motor and perceptual tasks. There have been studies, however, which have identified neural circuits throughout the brain that are believed to play an important role in the integration of auditory as well as visual information (7). The overall conclusion that can be reached as the result of these studies is that sound has an effect on a wide range of areas; therefore it most likely has a wide range of effects on our bodies.

The very nature of sound is temporal, and the auditory system is specialized to process incoming signals in terms of rhythms (11). Sound waves are measured in cycles per second, or hertz (Hz). Each cycle of a wave is a single pulse or beat of sound. Using an electroencephalograph (EEG) to monitor activity in the brain, four different patterns or waves have been detected based on these rhythms, each linked to a different mental state. Beta waves, of fourteen or more hertz, are common during waking hours and are associated with concentration and alertness. Alpha waves, between seven and twelve hertz, are found when dreaming and are linked to states such as creativity that lie just below conscious awareness. Theta waves occur at four to seven hertz; they are present during sleep and are associated with memory and learning. Delta waves are prompted by rhythms between one-half and three hertz; they occur during deep sleep and are associated with healing (12).
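
The four ranges above amount to a simple frequency classifier. The sketch below is keyed exactly to the figures in the text (note that the text leaves small gaps between some bands, which the sketch preserves rather than papering over):

```python
# EEG bands as described in the text, as (name, low Hz, high Hz).
BANDS = [
    ("delta", 0.5, 3.0),            # deep sleep, healing
    ("theta", 4.0, 7.0),            # sleep, memory and learning
    ("alpha", 7.0, 12.0),           # dreaming, creativity
    ("beta", 14.0, float("inf")),   # waking concentration, alertness
]

def classify(hz):
    """Return the named band containing a frequency, or None if it
    falls in one of the gaps the text leaves unlabelled."""
    for name, lo, hi in BANDS:
        if lo <= hz <= hi:
            return name
    return None

print(classify(10))   # alpha
print(classify(2))    # delta
```

A boundary frequency such as 7 Hz matches the first band listed (theta here), which mirrors the text's own ambiguity about where theta ends and alpha begins.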

Centuries ago, shamans used different rhythms or beats to reach different levels of consciousness (12). Many groups persist today who still employ the powers of rhythm to influence mental and physical states. The idea that rhythm is interpreted through sound waves is supported by the fact that deaf people can still perceive and be affected by rhythm (13). The Rhythmic Entrainment Intervention Institute works to blend traditional shamanistic methods with modern western ideas to create CDs for general use as well as special rhythmic CDs to help people with disorders such as autism, schizophrenia, and ADD (1).

In the 1970s, biophysicist Gerald Oster investigated what is now known as brain wave technology. He claimed that the two hemispheres of the brain work together to create what he called a binaural beat. This technology uses special rhythms at specific frequencies to help a person enter one of the four brain wave states. Today, organizations such as Brain Sync continue to promote the use of different rhythms to achieve different mental states (12).
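
The arithmetic behind a binaural beat is simply the difference between the frequencies of the two tones, one presented to each ear. The carrier tones below are illustrative choices, not values reported by Oster or Brain Sync:

```python
# A binaural beat is perceived at the difference frequency between
# the tone in the left ear and the tone in the right ear.

def binaural_beat(left_hz, right_hz):
    """Perceived beat frequency for two pure tones, one per ear."""
    return abs(left_hz - right_hz)

# e.g. a 200 Hz tone in one ear and a 206 Hz tone in the other yield
# a 6 Hz beat, inside the theta range (4-7 Hz) described above.
print(binaural_beat(200.0, 206.0))   # 6.0
```

On this logic, a producer aiming at a given brain wave state simply chooses two carriers whose difference lands in the target range.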

Brain wave technology is just one way in which people have attempted to connect the natural rhythms of the body with external rhythms. Another example of entrainment is eurhythmics, a form of music education founded by the Swiss educator and musician Émile Jaques-Dalcroze in the early 1900s. Eurhythmics focuses on bringing about a bodily awareness of rhythms. Jaques-Dalcroze claimed that whenever we hear music, our brain interprets the rhythms and unconsciously sends signals to our muscles to move in accordance with the beat. Eurhythmics strives to bring this unconscious awareness of rhythm into consciousness through physical manifestations of music. The idea is that if students learn music in this physical way, they will gain a deeper understanding of the music itself (14).

Music therapy is also a growing field, further attesting to the impact of music. This type of therapy uses rhythm as an organizer as well as an energizer (15). Holistic music therapy uses rhythm to bring the body "back to nature." The technique operates on the theory that rhythm can affect the nervous system, which in turn can affect our emotional states. For example, playing music at less than eighty beats per minute can lower the heart rate, which can bring about a more relaxed emotional and mental state (16). This effect is most likely the result of auditory input influencing the vagus nerve and the limbic system, the emotional system of the brain (17).

The manifestation of music through the nervous system is a complex process that scientists are only beginning to understand. In order to uncover the way music influences our biological systems, it is necessary to look at the effects of rhythm, the most basic component of music. It is clear that there are internal rhythms, and observational experiments suggest that these internal rhythms are influenced by the external rhythms of music. As new scientific connections are uncovered, practices such as brain wave therapy and eurhythmics will be better understood and further employed. The interpretation of sound waves, however, is just one of many emergent properties that coordinate our bodily systems, and it helps us understand how brain equals behavior.


References

1)Rhythmic Entrainment Intervention Inc., "Blending ancient techniques with modern research findings"

2)Neuroscience at Macalester website, "Music Process and Perception"

3)World Federation of Societies of Anesthesiologists website, "Control of Heart Rate"

4)BBC Science and Nature website, "Nervous System: Hearing"

5)Howard Hughes Medical Institute, "Tip Links Pull Up the Gates of Ion Channels"

6)Endeavour Vol. 21(1) 1997, "The 'internal clocks' of circadian and interval timing"

7)Nature Neuroscience website, "Swinging in the brain: shared neural substrates for behaviors related to sequencing and music"

8)Nature Neuroscience website, "Transduction of temporal patterns by single neurons"

9)Nature Neuroscience website, "He's got rhythm: single neurons signal timing on a scale of seconds"

10)Harvard University Gazette website, "Music on the brain: Researchers explore the biology of music"

11)Nature Neuroscience website, "How do our brains analyze temporal structures in sound?"

12)Brain Sync website, "Research Findings"

13)Nature website, "Feel the Music"

14)Bethlehem Music Settlement website, "Scholarly Publications"

15)Mostar Music Center website, "Music Therapy Principles"

16)Allied Health Professionals Association website, "Holistic Music Therapy"

17)Online Ambulance website, "The Mechanism of Sound Therapy"


Tourette's Syndrome - Cure?
Name: Sonam Tama
Date: 2004-05-08 21:53:08
Link to this Comment: 9818


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Recently, I saw a show about Tourette's Syndrome (TS) on the Oprah Winfrey Show. What caught my eye was a man with TS who seemed to have the most violent tics I had ever seen. But after undergoing surgery he no longer has any tics. In fact, looking at him now, there is no way to tell that he ever had TS. That show aroused my curiosity about the condition, especially since it raises questions about voluntary and involuntary movements and the link between brain and behavior. I wanted to see what this new treatment and the widespread attention it is receiving may mean for those with TS.


Unlike many other disorders, TS cannot be hidden, as those affected suffer primarily from involuntary movements, commonly referred to as tics. These repeated motor tics usually involve the mouth, face, head, or neck muscles and can be as simple as forceful eye blinking or as complex as bending over and touching the ground. Additionally, although less common, people with TS have vocal tics: repetitive involuntary vocalizations of unintelligible sounds such as sniffing, grunting, or throat clearing, which can also be as complex as uttering whole phrases (1). Other interesting complex motor and vocal phenomena include coprolalia (involuntary and inappropriate swearing), copropraxia (involuntary and inappropriate use of obscene gestures), echolalia (involuntary repetition of the speech of others), echopraxia (involuntary imitation of the actions of others), and palilalia (involuntary repetition of parts of the individual's own speech) (1). Individuals suffering a mild form of Tourette's will not suffer from both motor and phonic tics but from one or the other (2).


These and other symptoms typically appear before the age of 18, most often between the ages of two and fifteen (2), and the condition occurs in all ethnic groups. Adult males are affected 3 to 4 times more often than females, and among children the disorder is nine times more common in boys. Although the symptoms of TS vary from person to person and range from very mild to severe, the majority of cases fall into the mild category (3). TS is often a lifelong disease that usually develops in childhood, with a median age of onset of 7 years; however, the severity of symptoms often declines after puberty.


Named after Gilles de la Tourette, the French neuropsychiatrist who first successfully assessed the disorder in the late 1800s, TS is a complex neurobehavioral disorder now associated with various other conditions such as obsessive-compulsive symptomatology, Attention Deficit Hyperactivity Disorder (ADHD) and other behavioral problems (1). So, although TS is a visible disorder, its underlying cause has not yet been pinpointed. However, an abnormality of one or more of the brain's chemical neurotransmitters has been implicated (2).


Although thought of as a psychiatric disorder for most of the 20th century, TS has been reclassified as a neurological movement disorder, owing to the identification over the past 20 years of many biological factors, including the effectiveness of pharmacologic therapy and findings of strong genetic links. And because TS has prominent behavioral as well as motor manifestations, it spans both psychiatry and neurology; thus, it is classified as a neuropsychiatric disorder (1).


Many studies conducted on families indicate that there is a spectrum of genetically related tic disorders, ranging from transient motor and vocal tic disorders to full-blown TS. Although these differ in duration and in the number or type of tics, the fascinating thing is that the character of the tics themselves is similar throughout the spectrum. These same studies also suggest a genetic relationship between tics and OCD, which is overrepresented in many TS families (1).


In addition to the genetic link, there are studies showing an environmental influence on the condition. Two separate studies on twins reveal that genetic-environmental interactions appear to determine the phenotypic expression of the TS genotype. Since monozygotic (MZ) twins have an identical genetic endowment and similar but not identical environmental experiences, twin studies are ideal for examining the relative etiologic contributions of genetic and environmental factors. Both studies reported concordance rates of about 50% for TS in MZ twins but less than 10% in dizygotic (DZ) twins. When all tic disorders are considered, the concordance rate rises to nearly 80% in MZ twins and about 25% in DZ twins. While these results suggest a primary genetic contribution to TS, they also indicate that nongenetic factors must play an important role, since only about half of MZ twins are concordant for full-blown TS. Another study found MZ twins with markedly different tic disorders, exhibiting considerable differences in the frequency, intensity, and character of their tics despite an identical genetic endowment. The study concludes that there is a relationship between the genetic predisposition toward a tic disorder and a worsening of this genetic effect by adverse environmental factors (1). These findings suggest that in addition to genetic predisposition, crucial environmental factors, prenatal as well as perinatal, affect the phenotypic expression of TS (1). Additionally, a number of pharmacologic agents, including stimulants and neuroleptics, can produce tics in persons with no known genetic predisposition, while stimulants in particular may also aggravate preexisting tic disorders. This shows that purely environmental factors can influence the disorder (1).
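
As a rough illustration of how such concordance rates are read, one can apply Falconer's classic approximation, h² ≈ 2·(c_MZ − c_DZ). The cited twin studies do not themselves report this calculation, so the numbers below are only a back-of-envelope sketch using the rates quoted above:

```python
# Falconer's approximation of broad heritability from twin
# concordance rates. A crude sketch, not a value from the sources.

def falconer_h2(c_mz, c_dz):
    """h2 estimate ~ 2*(MZ concordance - DZ concordance), capped at 1."""
    return min(2 * (c_mz - c_dz), 1.0)

# Full-blown TS: ~50% MZ vs ~10% DZ concordance -> about 0.8
print(falconer_h2(0.50, 0.10))

# And the environmental side of the same data: roughly half of MZ
# pairs are discordant despite identical genes.
print(1 - 0.50)
```

The same arithmetic shows both halves of the paragraph's argument: a large MZ/DZ gap implies a strong genetic component, while MZ concordance well below 100% implies nongenetic factors must also be at work.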


For the past 30 years, TS has been treated with medications, behavior therapy, or both. Researchers link Tourette's to brain pathways where dopamine, one of the brain's chemical messengers, acts. The most effective medications block dopamine receptors. The best known of these medications, haloperidol (Haldol), is given in very low doses and is effective in the majority of patients. Despite the low doses, side effects are common; they include sedation, weight gain, dry mouth and muscle stiffness. A similar medication, pimozide (Orap), can be less sedating, but it can affect the conduction of electrical signals that control the heart's pumping action. Another commonly used medication, clonidine (Catapres), is less effective, but it is often tried first since it has less troubling side effects. Clonidine has another advantage: it is also used to treat symptoms of ADHD, which accompanies Tourette's in some people. One type of behavior therapy is habit-reversal training, which teaches the person to use a specific muscle movement or behavior to compete with the tic. Other common behavioral techniques are positive reinforcement, relaxation training and self-monitoring, in which the person learns when tics are most likely to occur (4).


Moreover, it is suggested that the more disabling part of having TS may be not the tics themselves but concurrent conditions such as ADHD, impulse control disorders, Obsessive Compulsive Disorder (OCD) and depression. Thus, it is also important to treat other psychiatric disorders when they appear alongside Tourette's syndrome (4).


An article that I came across states that the "conventional medical community" has shown little interest in findings on environmental triggers over the past 25 years. It mentions a study in 2001 that "proved" that even modest increases in room temperature could trigger Tourette syndrome symptoms. It also reports that, given a significant increase in Tourette syndrome during the past two decades, there must be a strong environmental factor. Furthermore, there is a statement by Dr. Albert Robbins, an environmental allergist in Boca Raton, Florida, who says, "Given that drug intervention is often not successful for Tourette syndrome, it is important for families and doctors to be aware of all environmental factors that could be impacting symptoms." All of this suggests that environmental factors have not been weighed as heavily as neurobiological ones (5).


Jeff Matovic, who has shown TS symptoms since the age of six, was the man on the Oprah Winfrey show. Despite receiving therapy and medication, his body soon grew immune to the medication, and his condition, which had worsened severely with age (rare for Tourette's), was considered one of the worst cases of Tourette's that doctors at University Hospital Health System in Cleveland had ever seen. Doctors used a technique called "deep brain stimulation," which has been used on patients with Parkinson's disease to help reduce the shaking associated with that condition. It involves the implantation of tiny electrodes deep inside the brain beside the thalamus, which controls body movement. The electrodes are attached to platinum wires that run beneath the skin from the brain to two pulse generators implanted just under the patient's collarbone: a tiny wire runs from inside the brain, beneath the scalp, down the neck and into the upper chest, where the battery is located. The generators send out high-frequency electrical signals continuously in an attempt to redress the shortfall that is causing the tremors or, in Mr. Matovic's case, uncontrollable movement. Mr. Matovic has had electrodes implanted on both sides of his brain and tiny batteries implanted on each side of his chest because he suffers uncontrollable movement on both sides of his body. The pulse generators are powered by batteries that last three or four years. Their pulses interrupt the brain's poorly functioning motor firings and restore them to normal, giving the brain's "symphony" back its conductor. Within hours after the stimulator was turned on, Matovic became completely relaxed and was able to walk normally (6).


Following the success of this surgery, many questions remain that can only be answered in the future. Now that the tics have stopped, will the ADHD and concurrent disorders also cease or improve? What does this mean for those with milder cases of TS, or for people who have tics due to purely environmental factors? Will they have to go through such a risky operation as well, or will this procedure uncover new solutions for these milder cases? And what are the long-term effects on the brain? Long-term studies have, however, shown minimal to no brain damage in Parkinson's patients. This man, who now functions normally on batteries that can be turned on or off, will allow us, more than ever before, to see our bodies as complex mechanisms.


References


1)Tourette Syndrome Association, Inc., Tourette's Syndrome: A Model Neuropsychiatric Disorder (A Case Report)

2)Wellesley College Website, Tourette's Syndrome

3)Tourette Syndrome Association, Inc., What is Tourette Syndrome?

4)E Health Forum, a forum on Tourette's Syndrome Treatment

5)Looksmart Articles, New information for people with Tourette Syndrome

6)BBC News Article, Article on New Cure for Tourette's


Autism: Lessons in Neurobiology
Name: Jennifer S
Date: 2004-05-09 09:28:28
Link to this Comment: 9819


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


The progression of the treatment, theories and diagnosis of autism is representative of the way science is learned, relearned, and reconstructed. An idea is created, information is gathered, the idea becomes generally accepted, and then new observations force it to be revised. Autism is a disease characterized by severe delays in language development, an inability to form normal social relationships, uneven intellectual functioning and restricted activities and interests (1).

Treatment for autism revolved around psychoanalytical theories of poor parental connection, with little success, until new ideas were introduced (2). As research began to show other environmental and genetic causes for autism, the treatment shifted. However, the standard treatment for autism still relies heavily on psychotherapeutic techniques, which suggests that biology can possibly be changed through behavior modification, but also indicates that the implementation of new ideas takes much more time than the creation and exploration of those ideas.

Neurobiological research moves slowly, but scientists are working on autism in a manner similar to the paradigm we worked with in class: the idea of getting it "less wrong" (3). The idea that was commonly accepted fifty years ago blamed a cold, withdrawn mother for her child's disease. The physicians and researchers of the time dubbed these women "refrigerator mothers" and based their therapies for autistic children on the Freudian school of psychoanalysis (2).

Before scientists had imaging techniques and the ability to measure and understand the levels of neurochemicals present, they relied on the information available to them. In the mid twentieth century, it seemed to doctors that children with autism completely lacked social skills and the ability to form bonds with others. The prevailing view of this period, and a view that still carries some weight today, was that children learned how to socialize and form bonds through their interactions with their mother. When children appeared unable to form attachments, the mother's lack of attention and warmth toward the child was thought to be the culprit. As new information has been processed and understood, scientists have entirely abandoned the idea that mothers can cause their child's illness. In fact, in many cases children are thought to develop autism before birth.

The advent of brain imaging studies, such as MRIs and PET scans, has allowed scientists to observe what anatomical differences are present in individuals with autism and associated disorders. While these studies have shown significant differences between autistic individuals and the general population, there is no consistent anomaly present. This suggests that autism spectrum disorder may actually be a category of diseases with similar presentations that are caused by very different biological factors. (4)

Autism can cause a variety of symptoms, and in some cases specific symptoms are linked to certain areas of the brain. For example, autistic children who exhibit head banging frequently have an enlargement of the diploic space in the parietal and occipital bones. The link between self-injurious behavior and this space is being explored with hopes of creating treatment that is more specific. In other studies, the results seem almost contradictory: for example, vermal hypoplasia and vermal hyperplasia are seen in individuals with nearly identical symptoms. (4)

Other studies analyzing the levels of the neurochemicals serotonin and dopamine have shown significant differences between individuals with autism and non-affected individuals. As with the brain imaging studies, these studies are varied and cannot conclusively diagnose autism. Individuals with autism also may exhibit altered glucose metabolism in the brain, which is thought to alter the brain's level of function in the affected areas. (2)

While the differences in serotonin levels between individuals with autism and the general population are marked, the difference between non-affected and affected family members is often slight. These family members often suffer from other diseases affected by serotonin levels, such as depressive disorders and obsessive-compulsive disorder. The role serotonin plays in the development and progression of autism is unclear; however, individuals with autism who have elevated levels of serotonin are more likely to develop major depression or obsessive-compulsive disorder. (2) It is unclear whether the elevated serotonin level is responsible for the autism or simply an additional anomaly that contributes to a higher incidence of other disorders.

While there is no biological factor that can be used to indicate the presence or severity of autism, those with mild cases appear to have less dramatic alterations in brain anatomy than individuals with more severe autism. The main criterion for diagnosing autism continues to be psychological examination, together with a myriad of tests to rule out other diseases with known physiological symptoms. The psychological testing includes analysis of a person's imagination, social skills, and language skills. (5) A person's intelligence quotient, also measured by psychological testing, continues to be the most reliable factor for evaluating the prognosis of an afflicted individual. (2)

As the core understanding of autism has shifted from a purely psychoanalytical view to incorporate physiological causes, physicians have become less afraid to suggest the diagnosis of autism, and parents are less likely to deny the problem. Early diagnosis and intervention are key to successful treatment. It is therefore extremely important for parents and physicians to be aware of the signs of autism and to be comfortable discussing the possibility of a child having this disease. With the stigma removed, parents are able to take an active role in finding an appropriate special education program for the child and managing the child's health care. The family is able to work as a unit to help the sick child.

The growth in treatment options and the quality of life for those with autism have both benefited substantially from these scientific discoveries. (5) Scientists hope to develop treatments that act on the specific anatomical problems rather than addressing all autistic patients through a psychosocial developmental program. Understanding the specific differences in brain chemistry and size could help differentiate disorders that currently fall under the term autistic spectrum disorder. It is likely that many of these diseases will receive separate names and treatments as scientists uncover their specific physiological causes and progression.

Currently, only a small percentage of autistic children are able to tolerate medications, and the success with these medications has been very limited. Medications are generally aimed at treating the symptoms of the disease and the other disorders frequently seen in autistic patients. Ziprasidone is prescribed for patients with excessive aggression, SSRIs are used to treat those with serotonin disorders, naltrexone is being studied for efficacy among the subset of patients with autism who self-injure, and methylphenidate therapy has been useful in treating hyperactivity. (3)

These drug therapies all have potentially serious side effects when used in children without autism, but children with autism have a higher occurrence of seizures and are often more sensitive than most patients to psychopharmacology. Because the risks of medicinal treatment often outweigh the potential benefits, very few studies have been done on psychopharmacologic treatments for autistic children. Autistic adults, who have a higher occurrence of psychological co-morbid disorders, respond well to medicinal treatment for these disorders. (2)

Other theories about the development and progression of autism have led to more direct approaches. For example, autistic children with biotin-responsive encephalopathy have improved in all symptoms with the addition of biotin. Other children have improved with specialized diets to completely remove gluten and casein. (2)

The gluten/casein free diets are treatments based on the "leaky gut" theory. This theory suggests that autistic children have tiny holes in the lining of their intestines, thought to be caused by a number of factors, including viral infections and toxins. These holes allow peptide components of the proteins gluten and casein to seep into the bloodstream. These peptides act similarly to morphine in the body, passing through the blood-brain barrier and affecting brain development. Many parents opt to try a gluten/casein free diet without evidence that their child has a leaky gut, but other parents have tests done to determine their child's intestinal permeability before beginning a diet that requires such careful planning. These treatments are some of the first to specifically target the causes of autism and help curb its progression. (6)

Autism results in "impaired thinking, feeling and social functioning- our most uniquely human attributes." (7) Individuals with autism often lack a theory of mind, leading them to believe that others know everything they know. They may come off as self-centered because they do not understand that others have feelings and thoughts different from their own. In a sense, autistic individuals appear to be lacking the core components of the I-function. They do not have a sense of self as something separate from the world. (1)

The part of our brain that we feel is most human, the conscious self, remains very controversial. Some argue that the I-function cannot or should not be mapped out, but those who are eager to map the brain have difficulty identifying what section composes the I-function. By examining the areas of the brain that are affected in autistic individuals, scientists can begin to understand the role of these areas in healthy individuals, and begin to understand the placement of the I-function in the brain on an anatomical level.

The National Institute of Mental Health (NIMH) is developing a database of imaging studies on normal brain development in children and those with autism. Scans comparing the structural and functional development and maturation of autistic children and normal children will help lead to early diagnosis and differentiation of the many types of autism within the autistic spectrum disorder label. (7) Scans of the brains of siblings of autistic children could also provide important clues to the development and progression of autism.

As more knowledge is accumulated and the physiological causes of autism become better understood, it is likely that the treatments will shift from psychosocial treatments to more medicinally based treatments. The perception of individuals with autism, and the perception of their parents, is also likely to change as the diagnosis, treatment, and prognosis for individuals with autism shifts to a more pharmacological approach.

Autism is a unique disease because it teaches us so much about the mind and about the impact of research on the perceptions of a disease. As a disease, autism has shifted from something to be covered up and hidden to a disease with many unclear biological causes that remains a frustrating mystery. Removing the stigma has allowed research to advance, and allowed parents to become open to treatment options that incorporated them. NIMH research is currently exploring the benefits of treatment geared towards the specific characteristics of a child and his family. (7)

Intensive, sustained behavioral therapy and special education remain the most valuable and popular treatment options, but the information gained from brain studies and other physiological studies is likely to change both the treatment and diagnostic procedures for autism.

References


1)Autism Primer: Twenty Questions and Answers
2)"Pervasive Developmental Disorder." eMedicine.
3) Biology 202 Course Website.
4)"PET Scanning in Autism Spectrum Disorder." eMedicine.
5)Autism & Language: Description and Diagnosis
6)" 'Leaky Gut' and the Gluten/ Casein Free Diet." Center for the Study of Autism.
7)"Unraveling Autism." National Institute of Mental Health.


Types, Causes, and Treatments of Depression
Name: Shah Hossa
Date: 2004-05-10 05:28:15
Link to this Comment: 9821

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

<hl><b>Types, Causes, and Treatments of Depression</b></hl>

Basics of Situational vs. Clinical Depression

At least as it is portrayed in more mainstream literature, depression has two basic manifestations: situational and clinical. In situational depression, one or a few external sources of stress are usually the root cause; in clinical depression, one's biochemistry is at the root. Symptoms of both include the following: irregularity in sleeping and eating patterns, low energy, dislike of oneself, irritability, lack of hope, and suicidal ideation. In addition, both can involve a variety of stages, including mania or anxiety. This variation makes it difficult to pinpoint the exact nature of the problem and give a proper diagnosis.

Situational Depression: Causes and Treatments

Situational depression often occurs when one experiences some sort of loss, i.e. when something that was important to one's life and sense of self is no longer accessible, for whatever reason. Common examples include losing a home, a loved one, or a career-related position. Some mental health care professionals feel that this sort of depression is actually healthy; those who do not go through this grieving process often suffer in the long term. The extent to which it is experienced varies from individual to individual and from situation to situation. If it continues for a long period of time, it can turn into a chronic condition, one that requires treatment through counseling therapy and perhaps even medication.

Clinical Depression: Causes and Treatments

Sadness that lasts over a period of a few weeks and that interferes with one's daily rituals is usually classified as clinical depression. This type usually runs in families, and is normally treated through the consumption of medication, counseling therapy, or a mixture of both. It can have one or more types of root causes, which can usually originate from and be described as one of the following:

1. Biological: referring to a lack of proper functioning in neurotransmitters;

2. Cognitive: a predisposition to experiencing low self-esteem and thinking negatively;

3. Co-occurrence: likeliness of its occurrence along with conditions such as diabetes, Parkinson's disease, Alzheimer's disease, cancer, hormonal disorders, heart disease, and substance abuse – i.e. alcohol consumption, illicit drug use;

4. Medication-related: as a side effect;

5. Genetic: as something passed down through the family line;

6. Situational: as mentioned, if this situation perpetuates and becomes chronic, it can change into something biochemical.

Symptoms of Depression

For both situational and clinical depression, common symptoms, some of which have already been mentioned, occur. They are the following:

1. a low mood that endures, whether of sadness, anxiety, or emptiness;

2. excessive or inadequate amounts of sleep;

3. excessive or inadequate consumption of food;

4. a lack of pleasure in activities, hobbies, etc. previously enjoyed;

5. the tendency to be restless or easily irritated;

6. persistent physical conditions such as headaches, constipation, chronic pain, etc., that cannot be alleviated using conventional treatments;

7. problems concerning concentration, memory, and decision-making;

8. tiredness;

9. feelings of guilt, hopelessness, and/or worthlessness;

10. suicidal ideation and/or thoughts of other forms of death;

11. in children, a swift change in mood from feeling silly to serious and irritated, or even angry.

WWW Resources/Bibliography

"When Is It Depression?"

"Depression"

"Goodbye Depression"

"Bipolar Disorder"

"Mood Disorders"


Trauma: The Brain's Reactions and Coping
Name: Shah Hossa
Date: 2004-05-10 05:56:24
Link to this Comment: 9822

<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

<hl><b>Trauma: The Brain's Reactions and Coping Strategies</b></hl>

Darwin's Theory of Evolution and Survival Responses

In his theory of evolution, Charles Darwin put forth the hypothesis that several expressions and emotions experienced by animals and human beings are reactions triggered by the instinct to survive. Two basic divisions of these responses have become integrated into popular speech: "fight" and "flight." These responses are found in people experiencing stress and trauma, as part of their survival involves coping with such situations. Theories of evolution later developed to include more complicated survival responses, among them more complex emotional reactions to stress and trauma: feelings of abandonment, grief, stress, depressive symptoms, defeat, betrayal, and/or guilt. These may even have implications for one's moral sense of self, such as frustration from disbelief, shame, "survivor guilt," and changes in one's beliefs, values, sense of ethics, and what one feels one's meaning and/or purpose in life is. Of course, these reactions are echoed in one's neurobiological system, and thus a "mental pathway" of one's response can be outlined, too.

Neurobiological Structures Involved in the Experience of Stress and Trauma

The amygdala is located in the brain's temporal lobe and is composed of two basic structures – the corticomedial and basolateral nucleus groups. It acts as the intermediate part of the pathway between the part of the brain responsible for the functioning of the senses and the part responsible for constructing emotional meaning; in other words, it is where sound, imagery, tastes, and so on are brought together with memory. The hippocampus, also located in the temporal lobe, takes the physical form of an elongated protrusion and processes memory and other sorts of information; it is considered the "gateway" to the limbic system, which is indeed where memory and emotion are finally put together as one.

Neurological Reactions to Stress and Trauma

The amygdala acts as an alarm of sorts. Usually, a part of the brain known as the neocortex is responsible for reacting to stimuli: sensory signals travel from another neural structure, the thalamus, to the neocortex, which processes information through many levels before initiating a response. But in the case of a traumatic event, a sort of override occurs in which the signals first travel from the thalamus, through a single synapse, to the amygdala, allowing it to react before the neocortex and thus allowing the person a more rapid reaction. The signals are still sent to the neocortex, but only after they have been sent to the amygdala.

Then, adrenaline and a similar hormone known as norepinephrine – both responsible for reactions of fear and anxiety at the physiological level – are released into the body through the adrenal gland; these chemicals activate receptors of the vagus nerve, which relays heart-regulation information from the brain and communicates signals back to the brain in response to their release. Signals from neurons based in the amygdala thus travel to other parts of the brain to set in place the memory of the event. This sort of trigger impresses memory more strongly upon the brain than if the event had been a more casual occurrence, since intense emotional response is involved; and of course, the stronger the reaction, the more significant the impression left upon the memory.

The impression will not include a recording of all the details of the occurrence; only very particular details will be encoded, whereas other parts of the event will have little or no encoding in memory whatsoever, since this whole process does not happen within a normal neurochemical environment. This can adversely affect the way an individual learns, habituates, and responds to stimuli.

If the trauma persists, the individual experiencing it will most likely begin to develop post-traumatic stress disorder, which begins with the feeling of losing control and being helpless.

Post-traumatic Stress Disorder

The imprint upon the memory includes, as mentioned, a neuron-based network ready to respond to any future encounter with stimuli that were present during the traumatic event. Thus, if one were to experience any of those stimuli, or a combination of them, one's neurobiological system would trigger a response – the same one the person had at the time of the original occurrence. This gives the traumatized person fewer options and less flexibility than a non-traumatized person in responding to the set of stimuli encoded within the traumatic memory. Exposure to those specific stimuli will trigger a physiological response as well – victims of trauma have been known, upon such exposure, to show an accelerated heart rate, abnormal skin responses, a rise in blood pressure, and so on. This can continue to happen for years, and even decades, beyond the event that first triggered it all.

Survival Strategies

There are sets of coping mechanisms, called "survival strategies" by some, that are involved in reacting to stimuli of any sort and can also be involved in trauma therapy. As responses, they are an integrated part of a person's neurobiological system, and as such are part of neurobiological evolution. They operate first and foremost on an abstract functional level, between a person's encodings of reflex and instinct, which involve reactions in the primitive cortex, the midbrain, and the limbic system – appropriately, parts of the brain concerned with survival of the self and of the species. Of course, some of these strategies can malfunction if trauma is experienced; those that do not malfunction help one adapt to situations, traumatic or not. They include the reactions of rescue, attachment, assertion, adaptation, fighting, fleeing, competition, and cooperation. As such emotional and physical responses aid in one's survival, including survival of stress and trauma, they contribute, as implied, to the evolution of the species overall, too.

WWW Resources/Bibliography

"Speculations on the Neurobiology of EMDR."

"The Neurobiology of Trauma."

"Introduction to Survival Strategies."

"The Body Keeps The Score: Memory and the Evolving Psychobiology of Post Traumatic Stress."


Biological and Psychological Causes of Bipolar Dis
Name: Nicole Woo
Date: 2004-05-10 22:21:56
Link to this Comment: 9825


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Since bipolar disorder, also known as manic depressive illness, is an extremely debilitating disorder that affects about one percent of adults, it is not surprising that there is a continual search for a cure. However, the answers that scientists and researchers give in an attempt to explain bipolar disorder are somewhat surprising. Scientists and researchers make a clear distinction between physical causes (genes, etc.) and "other," psychological causes (stressful life events, environment). Contemporary thought surrounding bipolar disorder states that "most, but not all" causes are inherited, or rather, biological. However, I would argue that making such a clear distinction between an inherited predisposition towards bipolar disorder and an emotional response to a traumatic situation is less helpful when thinking about bipolar disorder, and creates a false dichotomy between the body and the (emotional) self. In addition, even the explanations that focus on purely biological causes do not fully answer the complex question of causal factors.


When discussing the causes of bipolar disorder, scientists and researchers can generate quite an impressive list of probable biological causes. Recent research has even been specific enough to locate susceptibility to bipolar disorder in "two overlapping genes found on the long arm of chromosome 13" (2). However, despite the fact that these two genes are now believed to be factors in bipolar disorder, they are believed to be only some of the multiple genes that contribute to the disorder. Another purely biological element thought to be responsible for bipolar disorder is the serotonin transporter. The serotonin transporter is essentially a protein that absorbs excess serotonin in the brain. The activities that serotonin is involved in, such as sleep, feeding, and moods, are all affected in those who suffer from manic depression. In a recent study in the United Kingdom, researchers found that, in individuals with bipolar affective disorder, there was a common variation in the serotonin transporter gene. This study implicated the variation in the serotonin transporter as causing instability in the amount of serotonin in the brain.

Though scientists have become more and more precise in their specifications of causal factors, whether bipolar disorder is attributed to a loss of gray matter in the prefrontal cortex or a low serotonin level in the brain stem, these explanations merely deal with the end result of a process that is still not understood (5). Why should the brain lose gray matter in the first place? Why should the genes on chromosome 13 overlap? Why and how did the serotonin transporter change in those who develop bipolar disorder? Although it is useful to identify the fluctuations in the brain that enable bipolar disorder to develop, the larger solution, preventing the causal factors from arising in the first place, has yet to be fully answered or explored.


In addition, discourse surrounding purely biological causes does not take into consideration the fact that these biological causes may have come about as a result of the environment the person has been in. If someone has a lower amount of gray matter in their brain stem, it does not necessarily follow that this was a condition they were born with. It is possible that their life experiences somehow reduced the amount of gray matter in their brain. If scientists began to see that not everything that happens to the body is purely "biological" (meaning genetically inherited) but can be affected by outside influences, the connection between the body and its experiences could prove to be a helpful idea.

However, while the field of preventing bipolar disorder rather than merely treating it after it has developed seems less widespread, studies have inadvertently begun such work. A study investigating the similarity between bipolar disorder and schizophrenia, conducted by researchers at the Johns Hopkins Children's Center, the University of Cambridge, and the Stanley Medical Research Institute, led researchers to trace the cause of these disorders to reduced expression of oligodendrocyte-specific genes (3). Oligodendrocytes are the cells responsible for producing myelin in the brain. After comparing the preserved brains of individuals with schizophrenia, individuals with bipolar disorder, and a group with neither disorder, researchers noticed that expression of most oligodendrocyte-specific and myelin-associated genes was greatly reduced. Robert Yolken, M.D., a neurovirologist at the Children's Center, strongly believes that "These results provide strong evidence for oligodendrocyte and myelin dysfunction in patients with schizophrenia and bipolar disorder" (3). As both schizophrenia and bipolar disorder tend to emerge in adolescents and young adults, observing myelin abnormalities could help scientists and doctors examine children whose family histories put them at risk for developing bipolar disorder, and enable them to receive treatment before they exhibit symptoms.

Scientists seem to make a very clear distinction between biological and emotional causes. This could be an acceptable divide if those who suffer from bipolar disorder were consciously aware of how their emotions, trauma, and stress affect their physical condition. However, many who suffer from bipolar disorder state that they have no control over what they are feeling. They cannot simply pull themselves out of their depression or mania. As those with bipolar disorder are not consciously allowing the situations in their lives to negatively impact their health, the lack of control that many claim to experience suggests that their conscious self (their I-function) is not aware or in control of what is happening. If the I-function is not conscious of the effects of the outside world on the body, what, then, is remembering the traumatic events (the death of a loved one, etc.)?

Former members of elite military forces are known to react violently when woken up or surprised, despite the fact that situations requiring such a reaction no longer arise and that they do not consciously tell their bodies to react this way. However, their nervous systems (or unconscious) retain what their response to similar situations had been. Is their reaction a result of their body? Their previous experiences? I would venture to say that both contribute to their behavior. Though this manifestation of the nervous system retaining a memory of prior events is quite different from bipolar disorder, it still demonstrates the nervous system's ability not only to retain data that the I-function or conscious self is unaware of, but to affect the reactions of the body.

As scientists believe that traumatic events (such as the death of a loved one or stress) have an impact because of a genetic predisposition towards bipolar disorder, traumatic events are not seen as triggers for or sole causal factors of this disorder in and of themselves. And, as most individuals have the same general response to the death of a parent, it seems fairly safe to claim that something more complicated is occurring in the body of someone with bipolar disorder, as events such as this can cause an onset of symptoms. However, this raises the question of why and when the symptoms of bipolar disorder occur. If it is simply the result of a genetic predisposition combined with the right traumatic event, why is there so much variation in the onset of this disorder?


The predisposition towards bipolar disorder does seem to be inherited (1), as the presence of the disorder in an earlier generation greatly increases the probability of developing bipolar disorder, as seen in case studies. However, genes cannot be the sole cause of bipolar disorder. If this were the case, the identical twin of someone diagnosed with bipolar disorder would always develop the disorder, something that has been disproved by research (4). While scientists feel that, in some patients, bipolar disorder is caused by a single gene, for most others it seems to be a mixture of certain genes, a bad cocktail one could say, that causes the disorder (4).
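The twin argument above can be made concrete with a toy simulation. The sketch below uses a simple liability-threshold idea: identical twins share the same genetic liability, but each gets an independent environmental "draw," so one twin can cross the disorder threshold while the other does not. All the numbers (threshold, liability, noise) are invented for illustration and are not clinical estimates.

```python
import random

def develops_disorder(genetic_liability, env_sd=1.0, threshold=2.0, rng=random):
    """Toy liability-threshold model: the disorder appears when shared
    genetic liability plus an individual's own environmental 'noise'
    crosses a threshold. All parameter values are illustrative."""
    return genetic_liability + rng.gauss(0.0, env_sd) > threshold

def twin_concordance(n_pairs=100_000, genetic_liability=1.5, seed=1):
    """Estimate P(twin B affected | twin A affected) for identical twins
    who share genes but not environmental noise."""
    rng = random.Random(seed)
    both = affected = 0
    for _ in range(n_pairs):
        # Same genetic liability for both twins, independent environment.
        a = develops_disorder(genetic_liability, rng=rng)
        b = develops_disorder(genetic_liability, rng=rng)
        if a:
            affected += 1
            if b:
                both += 1
    return both / affected

concordance = twin_concordance()
```

Because the environmental draws are independent, the simulated concordance comes out well below 100 percent even though the twins' genes are identical, which is the pattern the research cited in (4) reports.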

In a study through the Pittsburgh School of Medicine, researchers found that upsetting life events often did coincide with manic (not depressive) cycles (4). A possible explanation for why stressful events cause an onset of bipolar disorder is that stressful life events tend to disrupt the sleep/wake cycle. While loss of sleep due to anxiety has only a passing effect in a healthy individual, for those who suffer from bipolar disorder the consequences are far more serious. Though attributing the onset of manic episodes to a disturbance in normal biological cycles is helpful in discussing the condition of patients after diagnosis, the problem of how bipolar disorder initially develops still remains.

While doing this research, I found that I had raised more questions than I had answered. While scientists are able to explain the role that environmental factors such as steroids and seasonal changes play in the development of bipolar disorder (as these things directly affect the body), it was far more difficult to find answers about how stressful life events actually affect someone's body. How does losing a husband cause a drastic enough change in the body to cause an onset of bipolar disorder? Though scientists tend to create two different categories of causal factors, biological and psychological, it seems eminently clear that the psychological factors are causal only because they have biological repercussions. In the discourse that surrounds bipolar disorder and its causes, it would be more useful to examine the body and its emotions as a complete entity, instead of unnecessarily compartmentalizing psychological and biological causes.

References

1) About Bipolar Disorder, website that addresses basic questions surrounding bipolar disorder

2) The Doctor's Guide - Global Edition, discusses the link found between the serotonin transporter gene and BPAD

3) Eureka Alert, article from Johns Hopkins University that discusses the possible shared genetic origins of schizophrenia and bipolar disorder

4) Personal MD - Your Lifeline Online, discusses the role that stressful events are thought to play in triggering mania

5) Young and Bipolar, discusses the rate of children and young adults being diagnosed with bipolar disorder


The Lonliest Guy: Reality and Mind to Mind Connect
Name: Dana Bakal
Date: 2004-05-12 14:01:37
Link to this Comment: 9840


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Throughout the course of this year's Neurobiology and Behavior course, one theme has made us aware, made us uncomfortable, and given us a very new perspective on the world and our place in it. This theme is that of reality: what we perceive or experience often has little or no correspondence with anything that is outside. We see color when color is not a property of the world, hear sound when it does not exist as sound without us to hear it, and even deduce a united self when there is clearly much the "self" does not know about what the rest of the brain is doing. When a tree falls in the forest and there is nobody there to perceive it, it has no color or sound, and if someone were there to perceive it, that person would not experience everything they perceived, or perceive everything that was happening. By looking at the visual system, comparing the ways it is activated by sight and by hallucinations, I assert that visual reality, and by extension sensory reality in general, is not reflected inside the brain of one individual, but must instead come from the interactions of many brains.
When a person looks at the world, the images are processed in a very complex manner by the visual system. Light bounces off objects and hits the retina, where it passes through two layers of cells, the bipolar cells and ganglion cells (for an excellent diagram, visit http://psych.colorado.edu/~dhuber/p2145/small_retina.jpg), before being turned into electrical signals and sent to the brain (http://webvision.med.utah.edu/sretina.html). These signals then travel to many locations within the brain, the main one being the lateral geniculate body, and onwards to the primary visual cortex (http://psych.colorado.edu/~dhuber/p2145/pathways.jpg). Different parts of the brain are important for perceiving different aspects of the visual input.
The information going to the brain bears little resemblance to the information received by the retina. It is a collection of stories, an interpretation of differing information. As Ernst Mach wrote in 1897, "We do not see optical images in an optical space, but we perceive the bodies round about us in their many and sensuous qualities." Color and brightness are both illusions, stories the brain makes up to make sense of the input. Brightness seems straightforward (more light equals more brightness), but it is not: center-surround visual fields in our eyes mean that some light we see cancels out other light we see. Look at http://psych.colorado.edu/~dhuber/p2145/hermann.gif for a demo. Color is also unreal in that it is a combination of cone action and color mixing, which can create colors not in the rainbow (http://serendipstudio.org/bb/neuro/neuro04/notes.html). Even the placement of objects in the 3-D world is a story, made up by the brain in response to the differing information coming in from the retina of each eye.
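The cancellation behind the Hermann grid demo can be sketched numerically. Below is a toy one-dimensional center-surround filter in Python; the surround weight and the 0-to-1 intensity values are invented for illustration, not physiological measurements. Each "cell" reports its own input minus a weighted average of its two neighbors, which is a crude stand-in for lateral inhibition.

```python
import numpy as np

def center_surround(signal, surround_weight=0.5):
    """Toy 1-D center-surround response: each cell's output is its own
    input minus a weighted average of its two neighbours (lateral
    inhibition). Edge values are padded by repetition."""
    padded = np.pad(signal, 1, mode="edge")
    surround = (padded[:-2] + padded[2:]) / 2.0
    return signal - surround_weight * surround

# A step edge in light intensity: dark (0.0) region meeting a bright (1.0) one.
edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
response = center_surround(edge)
```

Running this on the step edge produces a dip just before the edge and an overshoot just after it: the filter exaggerates contrast at boundaries, the same lateral-inhibition story behind Mach bands and the dark spots in the Hermann grid.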
This, then, is vision, and it is what we have decided is an accurate reflection of the outside world, or as accurate as it can be given the ambiguities of the visual system. There are clear and serious problems with calling this reality, given the aforementioned oddities of the visual system, but it is the agreed-upon standard for reality. It is generally assumed that individuals see things very similarly, because they are looking at the same reality.
But what about hallucinations? Although these do not pass through the retina, they nevertheless seem to the brain as if they were "real" images. In visual hallucinations, the very same parts of the brain are activated as are activated by visual input. Discussing the association of complex visual hallucinations with the parts of the brain normally responsible for processing visual input, R. Joseph, Ph.D., writes, "Presumably, the anterior-inferior temporal lobes and associated limbic nuclei give rise to the most complex forms of imagery because cells in these areas are specialized for the perception and recognition of specific forms, including faces and people" (http://www.brain-mind.com/Hallucinations88.html). In other words, the parts of the brain associated with looking at faces and people are also activated by hallucinating faces and people.
A neurobiology web paper by Jennifer Cohen clarifies further, stating that "An fMRI study of persons with Charles Bonnet syndrome found a very high correlation between the types of hallucinations experienced by these patients and increased activity in the corresponding visual area of their brains. For example, patients hallucinating in color showed activity in an area known to be the color center in the fusiform gyrus while a patient hallucinating in black and white showed activity outside of this region... So it seems that whatever is responsible for hallucinations of this sort stimulates them through the same means we use to interpret our visual reality under normal circumstances" (http://serendipstudio.org/bb/neuro/neuro01/web2/Cohen.html).
Hallucinations are not considered real vision, but they nevertheless activate the exact brain regions activated by what we consider "real" sights.
Given the problems and ambiguities of sight, and the brain activation patterns common to both vision and hallucination, it is possible to assert that there is no difference. If the brain=behavior model that we have been developing in Biology 202 holds true, then whatever is in the brain, the nervous system, is what matters. It does not matter whether the "vision" came from outside or inside, and it is not truly possible to determine which it was by looking at brain scans. Nevertheless, we have decided that real vision is separate from hallucination, and moreover that real vision is the only valid view of the world. How, if there is no in-brain difference, do we make the distinction? It is a social distinction. What we call vision is "real" because more people can come to a basic consensus about what they see. Hallucinations are unique to the individual, and without this social referencing of reality, they are not considered valid.
Even within the visual system, social referencing between boxes is evident. The differing information from the retinas, the perception of color and shape, everything about vision must be confirmed and checked against the information from the rest of the visual system. Multiple cues are used for things like 3-D vision (http://www.science.mcmaster.ca/Psychology/psych2e03/lecture6/psych.2e03.lecture6.html), just as multiple individuals are used for reality construction.
Given the connections and influences within and between one set of boxes, that of the individual human brain, we have to look at the implications of another set of connected boxes: the connected brains of individuals, forming a network of influence just like the one inside the individual brain. Interesting questions raised by this idea include: What is the impact of the internet on social networks of brains, and on reality? How can we determine whether there is an outside? Others remain for you to think of and discuss.


References

1)University of Colorado, Diagram of Retinal Cells.
2)Webvision, Retinal Anatomy
3), Diagram of Visual Pathways.
4)University of Colorado, Center-Surround vision.
5)Neurobiology and Behavior Notes, Biology 202.

6)Hallucinations, Hallucinations.
7)Bryn Mawr Webpaper, Visual Hallucinations: Another Argument for Brain Equals Behavior.
8)Mcmaster, 3D Vision


The Game of Attraction
Name: La Toiya
Date: 2004-05-12 14:31:10
Link to this Comment: 9841


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

What is attraction, and who or what are the players? If there are players, then who or what is being played? For many, attraction is believed to be purely physical, but there is much more going on than meets the eye when attraction takes place. What is involved is both physical and mental, deeply rooted in biology, history, and society. Physical appearance is one of the most obvious players in the game of attraction. Research on the value placed on physical appearance began in the 1930s, when sociologists constructed mate selection questionnaires. Physical appearance and the role it plays in attraction date back to Charles Darwin's general theory of mate preferences. We as humans will go far in order to attract others to us. When we have become attracted to another person, we often change the way we act in an effort to attract that person. We take an interest in others and then behave in ways designed to get them to notice us. These behaviors involve trying to conceptualize and incorporate what we think attracts the other person, often physical characteristics such as hairstyle, wardrobe, and other visual attributes tied to ideals of beauty. There is a great deal involved in attraction even from just the physical point of view.

In the past decade, there has been an intriguing amount of interest among sociologists in Darwin's theory of evolution concerning sexual selection and mate preferences. Darwin's theory of sexual selection suggested that individuals compete with members of their own sex for reproductively relevant resources held by members of the opposite sex, and that, from the other sex's point of view, the process of mate preference comes into effect. These core issues of competition are what lead me to think attraction is a game, not in the sense that it is funny or artificial, but in the sense that it is real and that there is something to win in mate preference, whether it be sex, children, or a future. Able to choose which members of the opposite sex are more attractive, males "naturally" seek out opportunities to copulate with as many different females as possible, especially ones that display the fertility markers of youth. On the other hand, the female is in most cases more sexually cautious. This, to me, can be connected to the key biological difference between the sexes: women bear a limited number of offspring, whereas men can sire many children.

The rules and strategies in the game of attraction exist on all levels. Babies seek out and enjoy contact with many individuals, and furthermore they seek out and accept comfort from them. Babies' attachments have four features which Hazan and Diamond state are evident in their overt behaviors directed toward an attachment figure: seeking and maintaining physical proximity (proximity maintenance), seeking comfort or aid when needed (safe haven), experiencing distress on unexpected or prolonged separations (separation distress), and relying on the attachment figure as a base of security from which to engage in exploratory and other nonattachment activities (secure base). (1) It is an interesting point that these same behaviors will be revisited and redirected to a mate later in life. Just like infants, adults tend to seek out contact and comfort from different people. There is even evidence that the chemical basis for the effects of close physical contact may be the same for lovers and mother-infant pairs.
Oxytocin is a hormone that triggers labor in pregnant women and milk letdown in nursing mothers. It is believed to promote infant attachment and maternal caregiving by encouraging continuous close bodily contact. The same goes for lovers: oxytocin builds sexual stimulation and excitement and is released during sex. Oxytocin also plays a role in triggering daydreaming, that funny feeling in the pit of our stomachs, and the racing heartbeat we feel when attracted to a person. (Sort of like the stuff loaded in Cupid's arrows.) The psychological and neurochemical process of releasing oxytocin may lie at the center of our attractions and passions.

As humans we are attracted to others by their physiques, personalities, and social and financial status, but many scientists argue that the effect of sensory input on hormones is essential to any explanation of mammalian behavior, including aspects of physical attraction. One such class of chemical signals is pheromones, body scents that we are unable to consciously smell. We pick up these scents through two tiny pits in the nose known as the vomeronasal organs (VNO). They have a direct connection to an older, more primal part of the brain than the area that processes "normal" everyday smells.
According to Darwin, females of various species are more selective in choosing mating partners than their male counterparts. Consider the role females play in the survival of their offspring; this is why it makes sense that women tend to be attracted to a man with high social or financial standing, which helps ensure that he will be better able to provide for her offspring. Although sexual strategies theorists acknowledge that the mating behavior of men and women can be similar in some respects under certain ecological conditions, above all they emphasize sex differences. Men are thought to place more value on physical attractiveness in a romantic partner, but that does not mean women lack their own perceptions of men's attractiveness.

Studies show that women value physical attraction in a romantic partner less than men do. Physical appearance may please and excite a female, but appearance does not provide her with any information about the male's ability to defend and support her (2). Women prefer the potential for future resources in the context of seeking a long-term mating partner, while those looking for a short-term relationship do not have to look so far into those realms. Women who prefer a romantic partner rather than just a sexual partner often want the relationship to blossom from friendship, which might give women an incentive to make friends with attractive men. That makes sense in my life; I mean, who doesn't have that friend who rarely dates but is the pickiest and most critical one? Well, I have a lot of friends like that, and I think it is for the same reason: their concept of attraction is driven by different (long-term) considerations. On the other hand, physical attractiveness is the primary draw for most (phew, almost said "all") men. Society is run by whom? Men. And what is attractive to men is what fuels media, religion, education, culture... you name it... yes, that too! If you analyze attraction in this way, you have to ask yourself how superficially structured attraction has come to be, and how it is woven into what we know as "Life."

Just as this paper may suggest, my thoughts regarding sexual attraction are intricate and complex. While hard to follow, my strategic approach to analyzing attraction reveals the layered nature of attraction, including its causes and effects. People try too hard for attention, which translates into false attraction. False attraction has a snowball effect, leaving in its trail false social constructions and notions of attraction. This overwhelming need to attract is at the root of loving money, which is the root of evil. This evil (a.k.a. greed) has touched and been planted into all institutions of life. Wow, this is completely different from the point that I initially wanted to make... attraction really isn't a game. You know what they say: "Players get Played!"

References

1) Hazan, Cindy; Diamond, Lisa M. June 2000. "The Place of Attachment in Human Mating." Review of General Psychology Vol.4(2): 186-204


2) Singh, Devendra. Aug 1993. "Adaptive Significance of Female Physical Attractiveness" Journal of Personality and Social Psychology Vol.65(2): 293-307

Kohl, J.V. et al. 2001. "Human Pheromones: Integrating Neuroendocrinology and Ethology." Neuroendocrinology Letters Vol. 22(5): 309-321

Insel, T.R. 2000. "Toward a Neurobiology of Attachment" Review of General Psychology Vol.4 :176-185

Berscheid, E.; Walster, E. 1974. "Physical attractiveness." Advances in experimental Social Psychology Vol.7: 157-215

http://www.units.muohio.edu/psybersite/attraction/: Living in a Social World

http://www.hhmi.org/senses/d230.html: Pheromones and Mammals; A Secret Sense

http://anthro.palomar.edu/evolve/evolve_2.htm: Darwin and Natural Selection


I Cannot Take Five Cookies
Name: Ariel Sing
Date: 2004-05-12 16:38:59
Link to this Comment: 9842


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

I cannot take five cookies. I cannot fly on the eighth day of any month. I cannot make a CD with eighteen songs. For me all of these numbers are wrong. There is no rational reason that they are wrong; they simply are. This phobia is a milder example of obsessive-compulsive behavior. Many people have disabling obsessions or compulsions (or both) that they are unable to control completely. People with Obsessive-Compulsive Disorder (OCD) are dramatically affected, and in extreme cases it can control their entire lives. Often this impulse is detrimental to daily functioning; however, there are situations in which obsessive-compulsive tendencies can be valuable. One of the places where OCD is beginning to be acknowledged and even lauded is within the art community. Especially in contemporary art, obsessive-compulsive patterns can often be recognized.

Two recent exhibits explore the nature of obsessive-compulsive habits in modern art. OCD, which has recently opened at the Boston Center for the Arts: Mills Gallery, intentionally uses many of the symptoms of this disorder to shape the art displayed. Dia:Beacon, a one-time box factory that the Dia Art Foundation renovated for its permanent collection of large installation pieces, (1) simply happens to display artists whose work demonstrates OCD tendencies.

Common obsessions are:

"Fear of dirt or contamination by germs; fear of causing harm to another; fear of making a mistake; fear of being embarrassed or behaving in a socially unacceptable manner; fear of thinking evil or sinful thoughts; need for order, symmetry or exactness; excessive doubt and the need for constant reassurance." (2)

Common compulsions are:

"Repeatedly bathing, showering or washing hands; refusing to shake hands or touch doorknobs; repeatedly checking things, such as locks or stoves; constant counting, mentally or aloud, while performing routine tasks; constantly arranging things in a certain way; eating foods in a specific order; being stuck on words, images or thoughts, usually disturbing, that won't go away and can interfere with sleep; repeating specific words, phrases or prayers, needing to perform tasks a certain number of times; collecting or hoarding items with no apparent value." (3)

While many people exhibit these tendencies to some degree, the characteristic that distinguishes the severe form of the disorder is the overwhelming and unwanted nature of the obsessions and compulsions. The affected individual becomes a slave to the need. Surrendering to this fixation will often relieve a feeling of panic or fear - but it is only a temporary balm. (4)
Obsessions are defined as: "an unwanted, intrusive, recurrent, and persistent thought, image, or impulse." For people suffering from OCD these obsessions are not voluntary, although they do recognize that the thoughts come from within them and are their own. (5)

Compulsions are defined as: "a repetitive and seemingly purposeful behaviour [sic] that is performed according to certain rules or in a stereotyped fashion." These compulsions are not voluntary. While perhaps "seemingly purposeful" the behavior does not have an actual physical purpose. Its real purpose is to alleviate a fear or try to prevent a negative action from occurring. If the action is interrupted the person will often have to return to the beginning and start all over again. It bears mentioning that not all of these compulsions are manifested overtly. There is another less recognized form of this aspect of the disorder called "covert compulsion." A clinical example of this is: "a man had the compulsion to say silently a string of words whenever he heard or read of any disaster or accident." Thus the compulsion did not cause a physical action, but a mental one. (6)

OCD has often gone unidentified and untreated. Many people will not seek treatment if they find their obsessions or compulsions to be too embarrassing to reveal, or if they do not feel handicapped by their symptoms. (7) A mild form of OCD can even be useful as a motivational tool, driving people to explore areas of their personality they might otherwise ignore. Beyond a certain point the disorder becomes devastating and crippling. One of the realms where it has become acceptable, and even encouraged, to allow obsessive-compulsive traits to be seen is within the art world.

The modern art gallery Dia:Beacon recently opened on the Hudson River in New York. Many installation pieces at Dia:Beacon demonstrate obsessive-compulsive patterns, ranging from the thousands of minute rectangles, resembling large sheets of graph paper, drawn by Agnes Martin (8) to the multiple walls covered in mathematical and musical writing and collage by Hanne Darboven. (9) One artist stands out among the others: On Kawara. This Japanese-born artist has traveled extensively during his life and has produced a large body of work. The sequence displayed at Dia:Beacon is called the Today Series. It consists of many paintings, each on a separate monochromatic surface. Printed on each surface is "the date of the day on which the individual painting is executed, in the language and according to the calendrical conventions of the country in which Kawara is present when he begins it." While to many people this alone might constitute obsessive-compulsive behavior, Kawara does not stop there. He works with only eight set sizes of canvas, all horizontal. An original mixture of paint covers each surface. Although the shades are unremittingly similar, Kawara shifts the tones slightly throughout the creation of the series. There is even a ritual to the painting of the base: "four or five coats of acrylic are evenly applied to the surface of the canvas, on the sides as well as the frontal plane, and each layer is sanded down before the next is added, creating a dense matte surface." On top of this Kawara adds his hand-formed dates, maintaining a consistent sans serif type style, although his carefully chosen fonts vary slightly from the beginning of the work to the end. The ritual and repetition of this body of artwork would be enough to place it in the obsessive-compulsive category.
Yet Kawara performs one more ritual that transforms this into a true demonstration of OCD: If his painting is not finished by midnight of the day that he started it, he destroys it. (10) This destruction is a classic symptom of people suffering from a compulsion. (11)

Although it is easy to recognize the overt symptoms of OCD, they cannot be justly comprehended without a basic knowledge of the neurobiology that causes them. Unfortunately the mechanisms of OCD themselves are not yet fully understood. While current diagnosis is primarily based on the expression of symptoms, there is a great deal of research being done concerning the causes of OCD.

Much of what has been learned about OCD was first recognized because doctors noticed that medications often used to treat depression, such as clomipramine and the SSRIs (Selective Serotonin Reuptake Inhibitors), helped to alleviate some of the symptoms of OCD. SSRIs act by preventing neurons (nerve cells) from reabsorbing the serotonin they have released, so the serotonin remains active for a longer period of time. Since serotonin is one of the chemicals that allows the brain to feel varying degrees of euphoria, this extended activity often elevates a person's mood. (12) But the fact that SSRIs have alleviated the symptoms of some OCD patients does not necessarily mean that faulty reuptake of serotonin is what causes OCD. (13) Just as "although an analgesic might alleviate pain, one cannot conclude that pain is attributable to the absence of analgesic compounds," (14) so it is with serotonin. Very little is understood about the role of serotonin in OCD. This is underscored by a study in which patients suffering from OCD were treated with a drug that stimulates specific areas of serotonin production and saw their symptoms become significantly more severe. (15)

Recently, new tests using positron emission tomography (PET) scanning have been completed. These tests compare "normal" people (i.e., people without OCD or any other diagnosed neurological disorder) to people with OCD (16) and compare OCD sufferers who are receiving treatment to those who are not. (17) The use of PET scans in this fashion, although at an introductory stage, reveals that OCD patients have reduced amounts of white matter ("the portions of the brain and spinal cord which are white and composed of the long, thin extensions... of neurons.... In the brain, the white matter carries the nerve impulses...") (18) compared to their "normal" counterparts. This implies that OCD is "a widely distributed brain abnormality." (19) PET scans have also shown that when OCD is treated, the caudate nucleus (a region of the basal ganglia associated with the formation and destruction of habits) changes its activity relative to the amount of treatment received. All of these findings are preliminary and not fully understood; much work remains to be done. (20)

One of the newer areas that has shown interesting results is in the field of genetic factors. A recent study found that a possible cause of OCD is a mutation on the hSERT gene, which permits neurons to collect serotonin. It was also found that there is the possibility of a second mutation on this same gene leading to even more severe symptoms of OCD. It seems that the people who have this mutant gene are the same people for whom SSRIs do not work. Currently more research is being done in this area. (21)

Until the time when OCD is fully understood and fully treatable, people who experience it often search for ways to channel their symptoms. The new contemporary art exhibit in Boston, OCD, demonstrates an example of successful channeling. Instead of the obsessive-compulsive traits of the eight presented artists being an unacknowledged contributor to the art process, the artists explore their traits. Each artist has a very different goal and medium. Jason Dean popped bubble wrap measuring forty-eight inches by one hundred and ten feet for four hours and twenty-three minutes. He videotaped the process and both the bubble wrap and the tape are on display. Chris Francione is exhibiting a "grid of his studies, spanning a 30 ft. wall.... Each study is [an exacting] 2x2 inches." (22) Nancy Havlick created many colorful eggs made out of sugar and Armenian spices and then precisely placed them into a pattern mimicking a traditional rug. (23) Matthew Nash (the curator of the exhibit) took nine photographs of war images and meticulously recreated them using 2500 M&Ms and Reeses Pieces. Morgan Phalen used surgical tools to make thousands of intricate paper cuttings representing household implements, such as scissors and knives, all grouped into distinct piles. Jennifer Schmidt carefully filled in the dots of standardized testing answer sheets to create detailed patterns. Luke Walker displays a video and collection of photographs that explore the contemporary cultural obsession "Junk in the Trunk." Joseph Trupia utilizes repetitive motion in his work: one huge canvas contains thousands of tiny enameled circles; another work is colored-in entirely with ballpoint pens. (24)

These artists are helping to bring the disorder into the public mind as a creative and intellectual factor. This also helps those who have no conception of how it must feel to have OCD understand the omnipresence of the problem. The acts and rituals performed by the artists seem overwhelming, and often meaningless, to the observer. The sheer amount of time and energy lavished on the projects mimics the frustration that people with OCD feel when compelled to repeat their patterns, even to the point of injury.

By presenting this exhibit Nash helps to demonstrate that there is a place for OCD in contemporary art, it no longer needs to be a hidden illness. It seems likely that once art historians and OCD specialists begin to review art, both modern and ancient, with an eye for OCD patterns, they will find a whole new world influenced by obsessions and compulsions. Perhaps eventually this disorder will be fully understood and treatable, but until then the least that we can do is to accept and support people who suffer from Obsessive-Compulsive Disorder and help them to experiment with their inherent traits.


References

1) Dia:Beacon Riggio Galleries , Dia Art Foundation.

2) Obsessive-Compulsive Disorder , WebMD Health

3) Obsessive-Compulsive Disorder , WebMD Health

4) Padmal de Silva and Stanley Rachman, Obsessive-Compulsive Disorder: the Facts, (New York: Oxford University Press, 1998), 3.

5) Padmal de Silva and Stanley Rachman, Obsessive-Compulsive Disorder: the Facts, (New York: Oxford University Press, 1998), 3.

6) Padmal de Silva and Stanley Rachman, Obsessive-Compulsive Disorder: the Facts, (New York: Oxford University Press, 1998), 5.

7) Obsessive-Compulsive Disorder , HealthLink: Medical College of Wisconsin

8) Agnes Martin , by Michael Govan

9) Hanne Darboven , by Lynne Cooke

10) On Kawara , by Lynne Cooke

11) Padmal de Silva and Stanley Rachman, Obsessive-Compulsive Disorder: the Facts, (New York: Oxford University Press, 1998), 5.

12) The Nervous System: Depression , Natural Health School

13) The Disease Ritual: Obsessive Compulsive Disorder as an Outgrowth of Normal Behavior , by Diana Smay

14) Frank Tallis, Obsessive Compulsive Disorder: a Cognitive and Neuropsychological Perspective, (New York: John Wiley & Sons, 1995), 41.

15) The Many Different Faces of Obsessive-Compulsive Disorder , by James Broatch

16) Obsessive-Compulsive Disorder , HealthLink: Medical College of Wisconsin

17) The Many Different Faces of Obsessive-Compulsive Disorder , by James Broatch

18) White Matter: Dictionary Entry and Meaning , Hyperdictionary

19) Obsessive-Compulsive Disorder , HealthLink: Medical College of Wisconsin

20) The Many Different Faces of Obsessive-Compulsive Disorder , by James Broatch

21) Mutant Gene Linked to Obsessive Compulsive Disorder , The National Institute of Mental Health

22) OCD , The Boston Center for the Arts Mills Gallery

23) OCD , The Boston Center for the Arts Mills Gallery

24) OCD , The Boston Center for the Arts Mills Gallery


The Damaged Central Nervous System: Should We Fix
Name: Bradley Co
Date: 2004-05-13 10:37:29
Link to this Comment: 9845


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the event of an injury or debilitating disease, a person's instinct is to go to the doctor to get it fixed. Much like taking your car to the mechanic, the body simply needs existing parts reconnected or broken parts replaced. Break an arm and the doctor realigns it and lets it heal, good as new. Shatter a hip and it can be replaced with a top-of-the-line titanium version. This mechanistic fix-it approach is taken because much of the human body has a unique homeostatic healing capacity. Injuries to the central nervous system (CNS) are different, however: the CNS lacks the ability to regenerate cell bodies and survive (1). Yet we still maintain the fix-it mentality toward CNS injuries and illnesses, specifically spinal cord injury (SCI). Due to the complexity of the CNS, progress is slow and cures are not in the near future. So why do we continue with this fix-it mentality? Millions of dollars are spent each year on research and procedures related to CNS injury. Could this energy and money be spent in another manner?

Part of the reason CNS injuries are not easily fixable is that the CNS cannot regenerate itself. But that is not the only reason a cure for something like an SCI is years beyond reach. The problem is that multiple obstacles must be overcome, such as the inability of neurons to grow through scar tissue and the process of demyelination. Researchers are applying the mechanistic approach to the larger problem by dealing with each issue separately, taking small steps toward the ultimate goal of understanding, fixing, and regenerating the CNS.

The starting point for fixing an injured CNS is understanding the damage done to it. When a spinal cord is injured by a trauma, bone fragments, spinal discs, or ligaments can tear into the spinal cord tissue (2). Contusions and/or compression can cause similar damage to the spinal tissue (3). However, it is not simply the axons cut and neural cells broken in the trauma event that are problematic: further damage continues hours, weeks, and months after injury (2) (3) (4) (5). After the initial trauma, heavy bleeding in the grey matter causes swelling and further compression, as well as spinal shock, a lowered blood pressure that interferes with healthy neural electrical activity (2). Over the next few weeks damage continues through several processes. Nerve cells are killed by the excess release of neurotransmitters such as glutamate. The immune system, previously excluded from the spinal cord by the blood-spinal cord barrier, can now mount an inflammatory response as well as a cell-mediated response, killing neural cells. Free radicals produced by normal cell metabolism, usually held at low levels in the spinal cord, can now accumulate and destroy healthy neural cells by disabling necessary cell molecules. Apoptosis, programmed cell death, is also accelerated in spinal cord injuries for reasons currently unknown (2). Healthy neural cells surrounding damaged tissue also tend to lose myelin, inhibiting action potential transmission along the axon (4). Scar tissue formed after injury is known to prevent neural growth and transmission as well (5). Fixing a CNS injury is thus more complex because damage continues long after the injury itself, and each new difficulty adds another step to fix.

One of the first steps in the approach to mechanistically fixing a CNS injury is overcoming the inability to grow neuronal cells through scar tissue. Research has shown that glial cells accumulate at the injury site and produce scar tissue consisting of chondroitin sulfate proteoglycans, which hinder neuronal growth (5). To overcome this effect, injection of the enzyme chondroitinase ABC has been shown to dissolve the inhibitors and enable some damaged neurons to regrow across scar tissue (5) (6).

Another step in the course of repairing the CNS is reversing the demyelination process. Myelin is a white fatty substance produced by oligodendrocyte glial cells in the CNS and Schwann glial cells in the peripheral nervous system (PNS) (7). Myelin is wrapped around the axons of neurons and increases the speed of an action potential (7) (8). Axons damaged in trauma can demyelinate due to an autoimmune response, but demyelination can also result from CNS diseases like multiple sclerosis (MS). MS is not fully understood, but it is believed to initiate an autoimmune response in which macrophages, lymphocytes, and plasma cells attack myelin as well as oligodendrocytes. Normal remyelination is prevented by the resulting reduction in oligodendrocytes (7) (8). Demyelinated axons transmit nerve impulses ten times more slowly than myelinated axons (8). Demyelination is currently a large roadblock on the way to curing CNS injury. Current research on this issue involves stopping the immune system from killing glial cells and demyelinating, using drugs such as beta-interferon (7). However, incomplete efficacy, side effects, and newly arising problems have so far prevented a solution.

It seems that along this process new steps arise with each step forward. The fix-it solution continues to concentrate on taking one step at a time. Whether this means removing scar tissue inhibition or reversing demyelination, the view is that eventually each step will be overcome. The benefit of conquering each step separately toward a big-picture goal is the knowledge gained about each step. In the example of MS, a great deal is being learned about the immune response and its viable suppression. Research on this subject may lead to a breakthrough elsewhere, such as in the treatment of AIDS, a separate disease of the immune system. While each step in the CNS injury problem can lead to diversions, they are often beneficial diversions creating more knowledge. More knowledge gives us more opportunities, and other possible solutions.

If a mechanic realizes that fixing the existing parts is not possible, the alternative is to replace the old parts with new ones. This is another potential solution in line with the fix-it mentality toward CNS injuries and illnesses. The idea of neuronal transplantation originates from the ability of the PNS to regenerate axons: if a person cuts their skin, the PNS can regenerate axons and restore feeling, whereas if a CNS axon is severed or damaged, the result is often some form of incurable paralysis. Dr. Mary Bunge, of the University of Miami in Florida, successfully transplanted peripheral nerve cells into the CNS of rats, demonstrating survival and proper function (9). However, such grafting and transplantation techniques are not completely viable for humans due to the inability to obtain large quantities of PNS nerve cells. Instead, research is currently pointed in the direction of neuronal stem cell transplantation. Neuronal stem cells are early developmental, slowly dividing cells that have the ability to differentiate into any type of neuron (10). Such implantations have been successful in rats (9), but because stem cell research is a very young field, and because of the moral issues involved with human stem cell research, forward progress is currently slow and steady. Although harvesting neuronal stem cells from embryos is the subject of intense moral debate, promising information emerged when scientists discovered that the hippocampus as well as the ependymal layer of the adult brain contain neural stem cells, disproving the theory that a human is born with a finite number of CNS neurons (11). This information could lead to procedures that trick the CNS into replacing its own damaged parts.

The replacement of damaged neurons with new ones is promising because it requires complete knowledge of how a subsystem, like stem cell differentiation, works within a system, like the CNS. A mechanic rarely replaces a part whose inner workings he doesn't know. It is the knowledge of these inner workings that benefits society as well as the treatment of CNS injury. This knowledge leads to discoveries, and a discovery in one research topic can lead to ten more in a completely different subject. As the neuronal stem cell research field advances, it is apparent that each discovery will also raise many new problems or steps. However, science should not be discouraged when progress on one problem creates more; this is the basis of scientific inquiry, in which every finding should raise more questions. The creation of more problems can lead to the assumption that the goal is unreachable. That is not an unreasonable conclusion, but it is an unproductive one.

Due to the complexity of the CNS and the appearance that a cure for CNS injury and illness is not in the near, or even the far, future, another solution would be to veer away from the mechanistic fix-it approach. This new approach entails treatment that focuses on adapting to symptoms in everyday life. The idea is to teach the individual how to live with the problem rather than fix it. In essence, time and energy are spent coping with the injury or illness by improving daily life. This technique is quite common in disorders like autism.

Autism is a spectrum disorder with great variation among affected individuals (12). Symptoms include, but are not limited to, insistence on sameness, difficulty with expression, repetition, a preference for being alone, tantrums, little or no eye contact, spinning, and unresponsiveness to verbal cues. Affected individuals can vary in both the type and degree of symptoms (12). While the causes of the disorder are largely unknown, several brain abnormalities are common. These include underdevelopment of the amygdala and hippocampus, regions often associated with emotion, aggression, sensory input, and learning. Further physical abnormalities include either smaller or larger vermal lobules VI and VII of the cerebellum, which are associated with attention (13). A biochemical abnormality is elevated levels of serotonin in the blood and cerebrospinal fluid (13). These abnormalities have been found to be fairly consistent among autistic individuals. However, treatment has taken the direction of coping with and altering behavioral abnormalities rather than fixing physical ones.

The complexity of the brain has led autism treatment toward the goals of reducing autistic behaviors and increasing appropriate behaviors (13). Behavioral tools, like positive reinforcement and time-outs, encourage communication and better social behavior. Meanwhile, chemical treatments like diet alteration, Ritalin, and Secretin also try to improve behaviors, but mainly aim to improve general well-being (13), (14). Modern popular treatment of autism represents the idea of assimilation: trying to make the lives of affected individuals less difficult and more accepted by society. This is done by classifying their behavior as "abnormal" and trying to alter it toward more "normal" behavior. This is itself disturbing, because it seems that the label of abnormal is more harmful to the individual than the effort to assimilate is helpful. The effort to keep autistic individuals from being societal outcasts implies that they are indeed second-class citizens. A utopian world would remove the classifications of abnormal and normal. This is the major fault of this method of treatment: it involves a dream of changing society. The attempt to cope with an injury or disease avoids the issue of fixing the problem by wishing for a better situation. Trying to aid an individual's daily life is beneficial for the individual but harmful to society: it affirms societal distinctions of good and bad with no real benefit to society or to curing the disease. The benefit falls solely on the individual. One might argue that this is good because the individual receives all the benefit. However, it is only a stand-in for the truly complete benefit, a cure.

Helping an individual cope with daily life problems is beneficial, and should not be ignored. However, the mechanistic fix-it approach has the ultimate goal of a cure. It is the cure itself, and the process of attaining it, that benefits not only the individual but society as well. Within this goal it is understood that progress comes in baby steps, but each step is still progress. Advancement in science, be it large or small, is rarely a bad thing. It is this type of development that truly benefits both the individual and society. Through the attempt to reach one goal, many others may be achieved. There is little doubt that a cure for CNS injuries and illnesses will eventually be found. It may even be several hundred years from now, but in the struggle toward that goal many other achievements will also be reached.


References

1)The Relationship Between Neuronal Survival and Regeneration, a comparison of PNS and CNS regeneration

2)National Institute of Neurological Disorders and Stroke: Spinal Cord Injury Hope Through Research, a detailed history, anatomy, and information on SCI

3)National Institute of Neurological Disorders and Stroke: Spinal Cord Injury Information Page, an overview of SCI treatment, prognosis, and research

4)What Happens in Human Spinal Cord Injuries?, an overview of human response to SCI

5)International Campaign for Cures of Spinal Cord Injury Paralysis, an overview of current neuronal regenerative research hypothesis

6)Spinal Research: Inhibition by Scar Tissue, a detailed explanation of scar tissue inhibition

7)Disorders of Myelination, an overview of Myelin and its disorders

8)Demyelination, explanation of the demyelination process

9)Promoting New Growth in damaged nerves, examples of regeneration research

10)Neuronal Regeneration in the Retinas of Fish: Insights into Neuronal Transplantation in Humans?, a study of neuronal regeneration in fish and humans

11)Stem Cell Research, an optimistic overview of stem cell research

12)Autism Society of America, an overview of common characteristics of Autism

13)Center for the Study of Autism, an overview of autism

14)National Institute of Child Health and Human Development, an overview of the use of secretin for autism


Do We "See" Hallucinations?
Name: Lindsey Do
Date: 2004-05-13 11:51:37
Link to this Comment: 9848


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Have you ever experienced a psychedelic vision: vivid splashes of color, flares, sparks or cloud-like forms? Perhaps this dynamic state includes geometric shapes, intricate patterns or even nonsensical, fabricated images that dance before your mind's eye? What does it mean when these surreal images intrude on our everyday visual awareness? Hallucinations can be prompted by a slew of environmental, emotional or physical factors such as stress, hallucinogenic drugs/medication, sleep deprivation or mental illness (1). But if our brains "fill in" a large portion of what we interpret as reality (2), then a hallucination, as a perceived visual representation, could also be spontaneously generated by the visual pathway itself. By distinguishing between hallucination in a conscious, waking state and in a semi-conscious (between sleeping and waking) awareness, perhaps we can better understand the relationship between visual hallucination and sight as a continuum of all forms of visualization and mental imagery.

Hallucinations are generally thought to be distorted or false sensory experiences generated by the mind rather than by external stimuli (1). Although these experiences may target the auditory, tactile, olfactory and gustatory senses, my focus will be primarily on the visual modality. Hallucinations can often be confused with objective reality; experiences range from individuals who take them as complete and entirely "false" pictures of the external world to those who are aware that these percepts correspond to non-existent objects. To simplify, we will primarily examine hallucination as a conscious experience in self-aware individuals, in particular those affected by Charles Bonnet syndrome.

Charles Bonnet syndrome is characterized by complex visual hallucinations that occur in individuals with ocular degeneration whose sight is either severely or completely impaired (3). Unlike most who suffer hallucinations from pathological conditions, including delirium tremens, drug-induced hallucinations, narcolepsy-cataplexy syndrome, migraine coma, Parkinson's disease, schizophrenia, epilepsy and Lewy body dementia, Charles Bonnet patients retain full awareness and understanding of reality (3). They are the least likely to be distressed by their hallucinations, perhaps because of their higher cognitive function and their preserved CNS function and adaptability. In general, after an initial experience with hallucination, most non-psychiatric individuals find the hallucinations non-threatening and are able to recognize their nature (3).

In experiments with Charles Bonnet patients, researchers used fMRI scanning technology to differentiate between visual perception and sensory input in order to attempt to localize hallucinations in the brain. Those with spontaneous hallucinations and those who responded to visual stimulation (who had never hallucinated before) exhibited similar activity in the ventral occipital lobe (4). Hallucinations of faces, textures and objects, in black and white and color, were found to roughly correspond to the functional, topographic organization in the ventral extrastriate visual cortex (4). For example, one patient who hallucinated objects displayed activity in the middle fusiform gyrus, an area that responds to visually presented objects (4). The implications of this experiment might suggest that the location of the activity within the visual cortex relates to the contents of the hallucination.

More importantly, although these patients were given external visual stimuli (which should theoretically be useless to those who lack the ability to see objectively), their hallucinations paradoxically mirrored the activity of sight, in the absence of visual sensory input! If we follow the "picture inside the head" story, our brain contains the ability to generate a subjective view of reality within the context of received sensory input. Perhaps in the absence of this inhibitory input, our visual processing becomes untrammeled, randomly releasing images that manifest as hallucinations and may have no relation to the world outside us. In the experiments, the patients described the appearance and disappearance of their hallucinations as all-or-nothing events (4). Likewise, if the firing of neurons releases corollary discharge signals, which hinge on an all-or-nothing threshold, then perhaps these internally generated images are products of our nervous system in isolation.

What we "see" begins with photoreceptors located at the back of the retina. These neurons rely on passive current flow and changes in membrane permeability to generate integrated signals (which are largely inhibitory). These signals are processed by a lateral inhibition network in the retina before traversing the optic nerve to the brain (2). The horizontal arrangement of ganglion cells acts as an "edge detector," distinguishing light within a center-surround receptive field. If the signals sent by the lateral inhibition network depend on the relative amount of light, then a large amount of what we see is "filled in" (2). Because our eyes make informed guesses about the world outside us, our unconscious ability to make mental pictures inside our heads becomes inextricably linked to the sensory input interpreted by our eyes.
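The edge-detection role of lateral inhibition described above can be sketched computationally. What follows is a minimal, illustrative model, not taken from the paper's sources: each unit's response is its own input minus a fraction of its neighbors' average input, and the inhibition strength k is an arbitrary assumption.

```python
# Minimal sketch of 1D lateral inhibition (illustrative only):
# each unit's response is its own input minus a fraction k of its
# neighbors' average input. The coefficient k is an arbitrary choice.

def lateral_inhibition(signal, k=0.4):
    out = []
    for i, s in enumerate(signal):
        left = signal[i - 1] if i > 0 else s
        right = signal[i + 1] if i < len(signal) - 1 else s
        out.append(s - k * (left + right) / 2)
    return out

# A step from dark (1) to light (5). The response dips below the dark
# level just before the edge and overshoots the light level just after
# it, exaggerating the contrast at the boundary (Mach bands).
step = [1, 1, 1, 1, 5, 5, 5, 5]
response = lateral_inhibition(step)
```

On a uniform field the same network simply scales every response down by the same amount; only at a boundary does the output change shape. That is the sense in which the retina reports relative rather than absolute light levels, leaving the rest to be "filled in."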

Interestingly, many patients reported that they could stop their hallucinations by opening or closing their eyes, or by altering their field of vision in relation to the hallucination (3). This mechanism of control complicates the relationship between visual percepts (visual hallucinations) and vision. The finding seems to corroborate that hallucination, or even seeing, may be more of a conscious process than we are led to believe: although the "filling in" is largely unconscious, we still have the ability to physically distort what we see in front of us (poking our eyeball, squinting, crossing our eyes, closing one eye, etc.). By altering their direction or focus, Charles Bonnet individuals are not only receiving new sensory input; they are also disrupting what may have been a static, overloaded visual field. If hallucinations are seen in greater detail than real stimuli for these individuals, and are localized in external space (4), perhaps hallucination is the agitation or hypersensitivity of this innate "filling-in" mechanism, freed from the inhibitory suppression of external input. Furthermore, the perception of hallucinations as outside the self is strikingly similar to the act of seeing the world as outside the self.

If Charles Bonnet hallucinations share many of the same features as hypnagogic hallucinations (vivid imagery of animals and figures) (3), what does this suggest about visual percepts in fully conscious and drowsy states? The hypnagogic state is the brief transition between waking and sleep, often called the "half-dream state" or "pre-dream condition" (5). Reciprocally, the hypnopompic state describes the phenomena occurring upon waking from sleep. Unlike Charles Bonnet hallucinations, visual percepts in hypnagogia are experienced in semi-conscious states. However, because these visions occur with the eyes closed, those experiencing hypnagogia are similarly autonomous from visual input. Clearly, we do not have to have our eyes open to experience a hallucination, since we have the capacity to generate a mental picture, sometimes even more clearly without external inhibitory stimuli.

Some have observed the ability to control these states, captured by Ouspensky's remark that "we have dreams continuously, both in sleep and in a waking state" (5). In fact, neurological evidence supports his statement, suggesting that the oscillation occurring in REM sleep is associated with consciousness; in other words, our "waking state" differs from REM dreaming only in that it is modulated by incoming stimuli (5). Indeed, the two neurotransmitters most important in visual hallucination are also significant in regulating sleep patterns. Serotonin and acetylcholine, concentrated in the visual thalamic nuclei and visual cortex, also gate alterations in sleep and arousal (3). An important aspect of sleep is the switching of the thalamic relay nuclei out of the waking mode (transmitting sensory input to the cortex) into the sleeping mode (in which sensory input is not transmitted). Serotonin can have the effect of blocking afferent information to the thalamus, which is also a junction for the modulation of inputs to the visual cortex (3).

The occurrence of both hypnagogic and Charles Bonnet hallucinations in a context where visual input is deemphasized as a result of sleep or blindness (or a reduced response to visual stimulation) cannot be mere coincidence. The evidence that sleep disturbances may encourage hallucinatory experience as a result of the incursion of REM sleep states (3) may suggest that hallucination arises when sensitivity to visual stimuli is suppressed.

But how do we reconcile the discrepancy between these mental images being projected onto the external world or turned inwards, especially in altered states of waking and sleep? Researchers have suggested that the eye itself constructs patterns, or three-dimensional forms (5). Moskvitin states that the hypnagogic patterns he observed are "the actual material out of which the conscious mind 'builds' its representation of the external world" (5). Indeed, if the difference between objective seeing and hallucination/dream-like visions is the sequence in which these pictures are assembled, then the sequences depicting our objective reality represent a limitation on otherwise unlimited combinations of forms spontaneously released from within (5). First, we generate imagery inside our head (corollary discharges) and then integrate it with the outside world; however, in the absence or weakness of the visual stimulus, we substitute what comes first (hallucination) as our reality.

However, this raises another interesting question: how do we distinguish between this kind of hallucination as "real" or "imagined"? Charles Bonnet individuals may be said to experience "phantom vision" rather than hallucination, because they "see" realistic objects even though this information is not reaching the brain (7). Pseudohallucination describes this phenomenon; it is differentiated from hallucination in that those who experience it are aware that their perceptions are unreal. Likewise, a retrospective hallucination mistakes a non-existent memory for an actual occurrence (7). If cortical cell assemblies are repositories of complete, active images, or of unconscious memory (forgotten material) that can be accessed and stimulated in hallucinations (8), it seems logical that "real" and "unreal" images may be confused. Perhaps a person's ability to distinguish between hallucinations as real or contrived hinges on the ability to consciously access their visual memory.

New research has come to the forefront in which scientists claim that the internal circuitry of the visual brain can be mathematically replicated in geometric models that resemble hallucinations (6). Cowan, a professor of mathematics and neurology, reports: "Because we know how the eyes are wired to the visual cortex, we can calculate what the patterns actually look like there" (6). The patterns correspond to geometric hallucinations, which take the form of checkerboards, honeycombs, tunnels, spirals and cobwebs (6). The brain typically generates a stable pattern of activity across neurons, but this becomes "unstable" in hallucinogenic states; random discharges are organized and amplified into a pattern that is said to reflect the architecture of the visual cortex. If the brain is designed to build on simple processes to construct more elaborate experiences, then simple hallucinations like photopsia (light flashes), phosphenes (blue lights), scintillations (zigzags), geometric forms, checkerboard patterns, etc. translate into more complex formed hallucinations of people, objects, landscapes, animals, etc. (7).
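The idea that unstable cortical activity organizes itself into patterns can be illustrated with a toy simulation. This is a hedged sketch, not the actual model in reference (6): a ring of units with short-range excitation and longer-range inhibition, in which tiny random noise grows into a spatially periodic pattern of activity bands. Every parameter here is an arbitrary illustrative choice.

```python
import math
import random

# Toy 1D "neural field" on a ring: short-range excitation minus
# longer-range inhibition (difference of Gaussians). The uniform state
# is unstable, so small random noise self-organizes into alternating
# bands of high and low activity, loosely analogous to stripe-like
# geometric hallucinations. All parameters are arbitrary choices.

N = 60
random.seed(0)
u = [0.01 * random.uniform(-1, 1) for _ in range(N)]  # near-uniform start

def kernel(d):
    # Narrow excitatory Gaussian minus a broader inhibitory one.
    return 1.5 * math.exp(-(d / 2.0) ** 2) - math.exp(-(d / 6.0) ** 2)

# Coupling matrix using wrap-around (ring) distances.
W = [[kernel(min(abs(i - j), N - abs(i - j))) for j in range(N)]
     for i in range(N)]

for _ in range(200):
    u = [max(-1.0, min(1.0,  # clip activity to a bounded range
             0.9 * u[i] + 0.1 * sum(W[i][j] * u[j] for j in range(N))))
         for i in range(N)]

# After iteration, u holds well-separated bands of high and low
# activity instead of remaining near-uniform.
```

The point is only qualitative: the spatial wavelength of the bands is set by the coupling, that is, by the "wiring", not by any input, which matches the claim that hallucinated patterns reflect the architecture of the visual cortex.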

Entoptic phenomena are images produced by the structure of the eye, namely the eyeball, optic nerve and visual processing cortex in the brain (7). Entoptic images are distinguished from visual hallucinations in that they are direct products of the visual system, whereas visual percepts are mental images. However, form constants (geometric images) constitute an important subset of entoptic imagery, resembling hallucinations in that they draw on visual processing beyond the structure of the eye itself (7).

Ultimately, if visual hallucinations have a distinct pattern that makes use of properties like color, texture, and depth that our eyes perceive, then they must be related in some way to actual sight. Since hallucination is often experienced in greater detail, described as more vivid (or more real?) than typical images, perhaps this characteristic reflects the brain's capacity to trump sensory perception and release a pure, uninhibited image from the "picture inside the head." Hallucinogens, for example, are drugs that induce hallucinations, pharmacologically affecting the inhibitory mechanisms of our visual process. In this altered state, our brain's internal generation of images becomes our reality. Hallucinations are arguably inextricably linked to what we see; maybe reality (as what we "see") is simply one big hallucination.

References

1)Hallucinations, a definition

2)Tricks of the Eye, Wisdom of the Brain , Bio 202 lecture notes on Serendip web site

3)Complex Visual Hallucinations: Clinical and Neurobiological Insights

4)The Anatomy of a Conscious Vision: an fMRI Study of Visual Hallucination

5)Waking Sleep, by Gary Lachman, on hypnagogia

6)Mathematicians view unstable activity in brain to better understand circuitry of visual cortex

7)Visual Hallucinations, an in-depth PowerPoint slideshow

8)Brain Modules of Hallucination: an Analysis of Multiple Patients with Brain Lesions

9)Background Information about Psychedelic Drugs, an interesting reference about Hallucinogens


Repressed Memories
Name: Emma Berda
Date: 2004-05-13 16:13:27
Link to this Comment: 9852


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In 1990, in Redwood City, California, a trial was held for a murder that had occurred more than 20 years before. After a jury deliberation of a single day, the defendant, George Franklin Sr., was found guilty of the murder of Susan Kay Nason. The damning evidence was testimony from Franklin's own daughter, who had witnessed the murder (1). So why did this case not come to trial for more than 20 years? Because Eileen Franklin's memory of the murder was repressed for all that time, and without it there was insufficient evidence. Repressed memories are one of the cornerstones of psychoanalysis, but how valid are they? Eileen Franklin is an extreme example, as most people do not witness murders. However, repressed memories of abuse have been common enough to spark a debate: adults are accusing, and sometimes bringing to trial, individuals who they claim abused them as children. Because of these cases, the question of repression is a hot topic. What exactly are repressed memories, and what causes them to be repressed? How valid are recovered memories?

Sigmund Freud coined the term repression, describing it as the process of blocking out emotionally painful events so that their effects would not have to be experienced (2). It is important to remember that repression is a subconscious process; the conscious blocking out of memories is called suppression (2). The theory of repression is based solely on case studies, because any real scientific experiment on the subject would be difficult and most likely unethical. This leads to the question: does repression actually happen? Recent findings published in Science suggest that there is indeed a neurobiological basis for the repression of memories (3). Professor John Gabrieli and his colleagues showed that controlling unwanted memories is associated with increased activity of the right and left frontal cortex, which leads to reduced activity in the hippocampus (a region associated with memory) (3). These findings suggest not only that repression has a neurobiological basis, but also that memories are repressed by reducing activity in certain areas of the brain while stimulating activity in others.

But are repressed memories accurate? Case studies have shown that recovered memories may be quite vivid and detailed, or they may be very vague (1). In many cases, recovered memories have been found to be correct: the remembered abuse really did happen (1). What follows is the abstract of a groundbreaking article by Linda Meyer Williams:

"This study provides evidence that some adults who claim to have recovered memories of sexual abuse recall actual events that occurred in childhood. One hundred twenty-nine women with documented histories of sexual victimization in childhood were interviewed and asked about abuse history. Seventeen years following the initial report of the abuse, 80 of the women recalled the victimization. One in 10 women (16% of those who recalled the abuse) reported that at some time in the past they had forgotten about the abuse. Those with a prior period of forgetting – the women with 'recovered memories' – were younger at the time of abuse and were less likely to have received support from their mothers than the women who reported that they had always remembered their victimization. The women who had recovered memories and those who had always remembered had the same number of discrepancies when their accounts of the abuse were compared to the reports from the early 1970's." (4)

However, real memories, both repressed and ordinary, are often riddled with errors. So even if a memory is real, it may still contain discrepancies. Let us turn again to the case of Eileen Franklin. Eileen's story changed slightly over several tellings. For example, in her original report to the police she stated that her sister Janice was in the van with her and her father when they first saw Susan Nason. Months later, at the initial hearing, she said that Janice was not in the van (1). Besides changing, her story also included many details that had been widely reported in newspapers and on TV. Other details could only be taken at face value because they were unfalsifiable or uncheckable (1). So, did Eileen really witness the murder of her best friend? A jury found her father guilty, but we will truly never know. Eileen's memory could be a genuine recovered memory, or it could be something called a false memory.

In general, humans are easily influenced. With a good technique and some time, a professional can easily get a person to believe something. What is even easier is convincing yourself of something: surely everyone has at some point in their life told themselves "everything will be okay." Just as our outlook on the present is easily changed, so is our memory. Raymond Lloyd Richmond says in his guide to psychology: "An event that you cannot remember can be psychologically equivalent to an event that never happened" and "An event that you falsely remember can be psychologically equivalent to an event that really did happen" (2). When you take a highly suggestible population and add the widely publicized cases of Roseanne Barr and former Miss America Marilyn Van Derbur, you invariably get people who come forward claiming to have recovered memories of abuse. And that is what has happened. Many of the people who came forward actually were abused, but others had simply read about such cases and then become convinced that the same thing had happened to them (1). Elizabeth Loftus states:

"There are at least two ways that false memories could come about. Honestly believed, but false, memories could come about, according to Ganaway (1989), because of internal or external sources. The internal drive to manufacture an abuse memory may come about as a way to provide a screen for perhaps more prosaic but, ironically, less tolerable, painful experiences of childhood. Creating a fantasy of abuse with its relatively clear-cut distinction between good and evil may provide the needed logical explanation for confusing experiences and feelings. The core material for the false memories can be borrowed from the accounts of others who are either known personally or encountered in literature, movies, and television." (1)

Outside sources that could lead to the construction of false memories include magazine articles, as mentioned above, therapists' suggestions, or popular books such as The Courage to Heal, a guide for female survivors of child abuse. On page 22 of The Courage to Heal it says, "If you think you were abused and your life shows the symptoms, then you were." (5) These symptoms include common phenomena such as depression, low self-esteem, and suicidal thoughts (5).

So far we have seen that repression is a real phenomenon with a neurobiological basis and should be taken seriously. Genuinely recovered memories are valid and are important both in psychotherapy and in the prosecution of child abusers. However, we have also seen that false memories of abuse are easily created by internal or external factors. Real recovered memories are valid, but false memories are not. In a therapist's office this distinction matters less, because anything that helps a patient, whether it stems from a real or a false memory, is important. But in a court of law, the validity of a memory is extremely important.

It is in the courts where the real trouble begins. A person with a false memory is not a liar: they truly believe that their memory is valid, and they did not purposely implant it in their own brain. If a person comes forward claiming to have recovered a memory of child abuse, this very dilemma arises. There is a very good chance that their memory is valid, but there is also a chance that it is not. Not prosecuting the alleged perpetrator means that, if they did commit the crime, they go unpunished. But prosecuting means there is a chance of sending an innocent person to jail. In a case without physical evidence, memory is all there is to go on. This is why there is such a huge debate surrounding recovered memories these days: not because true recovered memories may not be valid, but because a recovered memory may be a false memory instead.

What every abused child wants is for someone to recognize the abuse and put a stop to it (2). If the abuse is not recognized, then regardless of whether the child represses the memory, when the memory comes back (or if it never left) the person still harbors the need to have someone believe them (2). Raymond Lloyd Richmond notes: "If such a person enters psychotherapy for the treatment of trauma, the issue of 'Do you believe me?' can quickly emerge as a therapeutic problem." (2) In therapy, the therapist can quickly assure the patient that they are in fact believed. But if the person goes beyond therapy and contacts the police or a lawyer, wishing to prosecute the perpetrator, problems can arise. As noted above, there is a huge debate about whether recovered memories are valid, and the police and/or a jury may not believe the person. This could lead to further psychological damage, as once again the child, now an adult, fails to get the authorities to recognize their abuse. It is this, coupled with the dilemma described in the previous paragraph, that leads me to say that in general repressed memories should be dealt with in therapy and not in court. Of course there are exceptions to this rule, but therapy is usually the best place to deal with recovered memories, whether they are false or not. In therapy there is no danger of either the "perpetrator" or the "victim" suffering needlessly should the memory prove false or be dismissed by the authorities.

References

1)The Reality of Repressed Memory, a long thorough article by Elizabeth Loftus

2)Entry on Repression in the Guide to Psychology

3)The About.com repression page, a good page with lots of links

4)Jim Hopper's page on Repressed Memories, a good resource with material from many sources

5) Bass, E. & Davis, L. (1988). The Courage to Heal. (New York: Harper & Row)


Prescription for an A
Name: Elissa Set
Date: 2004-05-13 22:43:22
Link to this Comment: 9853


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"I have SOOOO much work to do!"

"I have three papers to write by Friday!"

"There's no way I'm going to be ready for this exam tomorrow."

These are common phrases heard on a college campus. Students today are under immense pressure to succeed. The number of applicants to graduate schools continues to grow while acceptance rates decline, and the job market is fiercely competitive. It's all about who has the highest grade point average and who can get the best GRE score. For some college students, it is difficult to maintain such a high standard of success without extra help. Beyond office hours, tutors, and flashcards, some students have found that their best study aids come in the form of pills: Adderall and Ritalin. These prescription drugs have become widely available on college campuses across the country. However, many of the people who use Adderall or Ritalin are not prescribed the drugs; they get them from friends or siblings who have a prescription for attention-deficit hyperactivity disorder (ADHD). Students often think that taking a couple of pills of Adderall or Ritalin has no harmful side effects, but there are many consequences to taking these drugs: they are made of substances comparable to speed and cocaine. Adderall and Ritalin have entered the college scene as the new "non-drug": a stimulant viewed as harmless, but one that in fact has very dangerous side effects.

Adderall is made of stereoisomers of amphetamine (C9H13N): 75% of the D-isomer and 25% of the L-isomer (1). It is a stimulant that affects the levels of two main neurotransmitters in the brain: dopamine and norepinephrine. It increases the amount of dopamine produced and inhibits the reuptake of norepinephrine (1). Increased dopamine levels in the frontal lobe of the brain lead to greater motivation, concentration, and focus (11). Other effects of Adderall include increased blood pressure, dilated bronchioles, and respiratory stimulation (2).

Ritalin is very similar to Adderall. Its main ingredient is methylphenidate (C14H19NO2). Like Adderall, it is a central nervous system stimulant, and it increases the levels of dopamine in the frontal lobe of the brain (8). Ritalin is not as strong as Adderall, but it is also effective in increasing focus and attention (8).

What makes Ritalin and Adderall so easily accessible is that they are most commonly prescribed for ADHD (historically described as a minimal brain dysfunction) and for narcolepsy, a rare disorder that causes someone to fall asleep suddenly and uncontrollably (7). Those diagnosed with ADHD are most commonly children, but the disorder is also found in adults. Children with ADHD have problems with concentration, attentiveness, and social interaction. People with narcolepsy can fall asleep at any time, including while driving or eating (5). Ritalin and Adderall are often prescribed for these disorders because the increased levels of dopamine improve patients' ability to function normally. In the past, amphetamines were also prescribed for weight control, because they make people more active.

What many people do not know is that both Ritalin and Adderall are classified by the Drug Enforcement Administration as Schedule II substances, a list that also includes opium, cocaine, and speed (2). Ritalin is considered similar to cocaine, only weaker and in pill form (6), and Adderall is a form of speed. Speed, also known as crystal meth, ice, and crank, is usually made of either amphetamine or methamphetamine (9). The only chemical difference between amphetamine and methamphetamine is the additional methyl group on the amine of methamphetamine (4). The use of amphetamines began in the early 20th century after cocaine was declared illegal in 1914 (3). In the 1950s, methamphetamine pills were legally produced and widely used by truck drivers, college students, and athletes who wanted to increase their focus, alertness, and energy (3). Although methamphetamine and amphetamine are chemically almost identical, methamphetamine has been culturally demonized, while the amphetamine-based drugs Adderall and Ritalin are prescribed to children as young as three (4).
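The methyl-group difference described above can be checked arithmetically. Below is a minimal sketch (not from the original paper) that computes the mass difference between the two formulas, assuming standard rounded atomic masses; methamphetamine's formula, C10H15N, is amphetamine's C9H13N plus one CH2 unit.

```python
# Approximate atomic masses in g/mol (standard rounded values).
MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

def molar_mass(formula):
    """Molar mass of a compound given as a dict of element -> atom count."""
    return sum(MASS[el] * n for el, n in formula.items())

amphetamine = {"C": 9, "H": 13, "N": 1}      # C9H13N
methamphetamine = {"C": 10, "H": 15, "N": 1}  # C10H15N: one extra CH2 unit

delta = molar_mass(methamphetamine) - molar_mass(amphetamine)
print(round(delta, 3))  # 14.027: the mass of one CH2 unit (one C + two H)
```

The tiny difference in formula mass underlines the paper's point: the two molecules are nearly identical chemically, despite their very different reputations.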

Even though they are closely related to some of the most condemned drugs in the country, this does not stop people from abusing them. In November 2002, a University of Wisconsin-Madison study found that twenty percent of college students had illegally used Ritalin or Adderall (11). Years ago, college students drank pots of coffee or popped diet pills and caffeine pills to stay awake. Now Ritalin and Adderall are the more attractive stimulants, as they help people stay awake and maintain a high level of focus and concentration. Most of the college students who use Ritalin and Adderall to help them study are not prescribed the drugs. According to the DEA, Ritalin is on the list of the most frequently stolen medications (6). These drugs are highly available on college campuses: one student at the University of Virginia said he could name twenty people who use Ritalin or Adderall without a prescription (11). Here at Bryn Mawr and Haverford Colleges, people will often pay two to five dollars per pill. It is also well known that students are easily prescribed one of the two drugs at the health center, so some students will set up an appointment with the health center psychiatrist in hopes of eventually getting a prescription.

What people do not often realize is that there are serious consequences to Ritalin and Adderall abuse. Some of the less serious side effects are dry mouth, bad breath, and irritability. Short-term side effects that affect one's health include diarrhea, loss of appetite, and irregular heartbeat. If one's heartbeat and blood pressure continue to change rapidly, the result can be sudden death from stroke or heart failure (7).

One of the major consequences of taking Adderall is psychosis, which is very similar to paranoid schizophrenia and is caused by an excess of dopamine in the central nervous system (4). Symptoms of psychosis include visual hallucinations, slurred speech, loss of short-term memory, and poor hygiene. In one case, a 12-year-old girl reported visual hallucinations of bugs crawling over the walls and auditory hallucinations telling her to stab her little brother (1). In a more extreme case, a woman driving her car in a psychotic state began hearing voices telling her to trust in God and let go of the steering wheel and the gas pedal. She did, and the result was a head-on collision with another car that killed her son, who was in the backseat (10). In both cases, the person had been prescribed Adderall for ADHD.

While most college students claim that they only use Adderall or Ritalin on occasion and do not think they have a problem with it, it is still important to be informed of all the harmful side effects of these drugs. Unfortunately, since most people do not see the consequences that befall others, they continue to take them as study aids. Whenever they need that extra boost, they pop a pill. When they have a big project due the next day, they find their friend with a prescription. However, students need to realize that popping one of those pills is similar to taking speed or cocaine, and can have side effects just as negative. College is only going to get more difficult and competitive as enrollment increases every year, and Ritalin and Adderall usage may grow with the mounting pressure and stress of school. Students are going to have to find some other way to do well in school without putting their lives and health in jeopardy.

References


1)Adderall-Induced Psychosis in an Adolescent, a detailed account of a young teenager who suffered from psychosis due to Adderall


2)Adderall Side Effects


3)Amphetamines, Amphetamines Frequently Asked Questions


4)Erowid Vault, Any question you could possibly have about drugs or stimulants (legal or otherwise) can be answered here


5)Dextroamphetamine and Amphetamine, More info on dextroamphetamines and amphetamines


6)College students abuse Ritalin as a study aid, a college newspaper article about Ritalin abuse


7)Florida Alcohol and Drug Abuse Association, More facts about amphetamines


8)InfoFacts: Methylphenidate, Methylphenidate information


9)Methamphetamine


10)Out of Control: A Controversial Drug, A mother's personal tragedy with Adderall


11)Scope of Adderall-abuse uncertain, A look at Adderall abuse on college campuses


Neurobiology of Aggression
Name: Shirley Ra
Date: 2004-05-13 23:02:58
Link to this Comment: 9854


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Five million children are directly exposed to violence each year in the United States. Unfortunately, the most frequent aggression children are exposed to takes place in the home, usually in the form of physical abuse or domestic violence (1). According to the American Association for the Advancement of Science, the number of murders committed by children between the ages of fourteen and seventeen has increased by 165 percent since 1985. To date, 28,000 children and adolescents are known murderers, a number expected to double by the year 2015 (1).

The definition of aggression is usually broken down into several categories, since researchers hold differing views on its exact nature. Aggression is generally described as "an emotional state that many humans describe as consisting of feelings of hate and a desire to inflict harm" (2). Aggressive behaviors are diverse and arise from a number of sources, such as territoriality, defense of a food source, defense of offspring, and predatory behavior.

The question at hand, then, is: are individuals in our society aggressive due to the neurobiology of the brain, does society create aggressive individuals, or is it both? On one hand, it seems as though children are learning that aggression is an acceptable way to resolve disputes. For this reason, researchers are aiming to interrupt the learning process to prevent violence from being inscribed in an individual's brain (1). Socially, many factors, such as the media, poor parenting skills, and family stresses, create individuals with shorter fuses who are hence more likely to engage in aggressive behavior.

Evidently, various factors can lead an individual to engage in aggressive behavior. However, researchers are now bringing biology into the debate over whether aggression is due to nature or nurture. Since aggression is a complex behavior, it is expected that the underlying biological mechanisms contributing to elevated aggression are complex as well.

The serotonin (5-HT) theory of aggression is by far the most widely accepted in the field. A variety of methods have shown that disturbance of the serotonin system is linked to aggressive behavior, and it has been suggested that serotonin exerts inhibitory control over impulsive aggression (3). Cerebrospinal fluid (CSF) levels of the serotonin metabolite 5-HIAA are thought to reflect presynaptic serotonergic activity in the brain. Reduced CSF 5-HIAA levels have been found in aggressive, violent men (4) and in victims of violent suicides (2). Moreover, the CSF concentration of 5-HIAA has been found to predict aggressive behavior in conduct-disordered boys and vulnerable adults two to three years into the future (5). Lower CSF levels of 5-HIAA have been reported in impulsive violent offenders than in nonimpulsive violent offenders (5). In essence, studies of 5-HIAA in the cerebrospinal fluid demonstrate an association between low serotonergic function and aggressive behavior.

Pharmacological studies provide another window on serotonin function and its impact on aggression. First, a series of studies has explored the relationship between prolactin and aggression. Prolactin is a hormone released from the pituitary gland under serotonergic control, and the elevation of prolactin in response to a dose of a serotonin agonist has been used to index central serotonin activity. Blunted prolactin responses to a serotonin agonist have been associated with aggressiveness and antisocial personality disorder (6). In addition, pharmacological studies of tryptophan have given researchers further insight into how aggression can be induced. The synthesis of serotonin depends on the availability of tryptophan, an amino acid. By limiting dietary tryptophan, Dougherty (7) showed that brain levels of serotonin were reduced. Tryptophan depletion, induced by consuming a beverage containing a particular mixture of amino acids, increased laboratory aggression in normal men (7) and in monkeys; when the beverage was loaded with tryptophan, aggression remained unchanged (7).

According to recent research, the prefrontal cortex appears to have a strong link to aggression, further implying that aggression may have neurobiological roots. The prefrontal cortex is often referred to as the "executive" region, since it is here that humans imagine, think, and make informed decisions. It is also an important component of a circuit critical to emotion regulation that has been implicated in aggressive behavior. Furthermore, serotonergic neurons, which reside in the brainstem, project their axons into many functionally diverse regions of the brain, including the hypothalamus, hippocampus, amygdala, and prefrontal cerebral cortex. The prefrontal cortex in particular has a high density of serotonin type 2 receptors. Given this arrangement, it would not be surprising if abnormalities within the serotonin system or the PFC led to aggression.

It is also known that the prefrontal cortex normally acts as a brake that can suppress urges or impulses. It does so by communicating with other regions of the brain that mediate aggression, such as the amygdala. For example, research in cats shows that activity of the prefrontal cortex can suppress aggressive acts (8): stimulation of the prefrontal cortex prevented the felines from attacking rats (8). This suggests that environmental or inborn damage to the prefrontal cortex impairs this braking function and contributes to the tendency to engage in aggressive behavior.

Moreover, there have been cases in which humans with lesions in the orbitofrontal cortex (OFC), a region within the prefrontal cortex, have displayed aggressive behavior (8). Anderson et al. (13) reported on two individuals, both in their twenties, who had suffered early damage to the orbital and lateral sections of the prefrontal cortex. Both participants, according to Anderson et al., displayed a marked deficit in moral reasoning, a history of verbal and physical abuse, and explosive bursts of anger. In a case similar to that of Phineas Gage, reported by Cipolotti, a 56-year-old man who received a lesion to the OFC and the left amygdala showed an increase in violence and impulsive aggression. This man had once been known as caring and respectful, but after the incident he became impulsive, rude, and aggressive.

The role of the amygdala in aggression also appears to be significant. Behaviors that connote threat, such as staring or verbal threats, are immediately communicated to the lateral nucleus of the amygdala, which relays them to the basal nuclei. There, social-context information is integrated with perceptual information (8). Physiological responses are produced when the basal nuclei project to the central nucleus; behavioral responses are created when the basal nuclei project to various cortical zones. Too little or too much activation of the amygdala may therefore decrease sensitivity to the social cues that regulate emotion, or may give rise to excessive negative affect. Either way, the amygdala's role in aggression is clearly important.

In related work, studies of glucose metabolism have revealed prefrontal abnormalities in subjects prone to aggression. For example, a study of forty-one murderers showed hypoactivation in prefrontal regions, including the lateral and medial zones of the prefrontal cortex, along with hyperactivation in the right amygdala compared to control subjects. Moreover, patients with antisocial personality disorder who are prone to impulsive aggression have been shown to have an 11% reduction in overall prefrontal gray matter volume, as measured by MRI (9).

So what can aggression be the product of? We know something of how aggression works, but what triggers all of these mechanisms? Emotion regulation seems to be an important theme, since individuals who are aggressive may lack it. Normal individuals have the capacity to regulate negative affect and profit from reading cues in their environment, such as facial expressions or vocal signals. Could it be that individuals prone to aggression have some abnormality of the emotion-regulation circuitry (the OFC, dorsolateral prefrontal cortex, amygdala, and anterior cingulate cortex)? The scientific evidence reviewed in this paper specifically suggests that abnormalities in serotonin function or in regions of the PFC may be especially important.

It is important to note that serotonin levels are not inflexible quantities indifferent to social surroundings or stimuli. Quite the opposite: Raleigh et al. (10) showed that low-ranking male primates have low levels of serotonin, but that serotonin levels increase with social ascent. Additionally, when a dominant male loses high status, his serotonin levels decline.

Furthermore, serotonin is not the sole antiaggression transmitter, nor is the PFC the only relevant region. Other neurotransmitters, neuromodulators, and hormones, notably testosterone (8) and corticotropin-releasing hormone (8), have also been implicated in various forms of aggression in both humans and animal models. It is a shame that these topics have not received as much attention as they deserve.

Possible treatments for aggression have been proposed, and the selective serotonin reuptake inhibitors (SSRIs) are widely accepted for their effects. A noteworthy body of literature, as reviewed earlier, has shown that decreasing serotonin in the brain results in aggressive behavior in both animals and humans, while increasing serotonin decreases it. An accepted animal model of aggression is muricidality (mouse-killing behavior) in rats and mice (8); such behavior is reduced by administering drugs that increase serotonin levels in the brain, the SSRIs (6).

It is therefore possible to treat aggression by increasing serotonergic activity with antidepressant drugs such as fluoxetine (Prozac), sertraline (Zoloft), and citalopram (Celexa), all serotonin reuptake inhibitors (6). Risperidone (Risperdal), an antipsychotic drug, has also recently proven effective for the treatment of aggression; it works by antagonizing serotonin receptors. These drugs usually take two to three weeks to reach full effect.

Salzman (11) conducted a controlled trial contrasting the anti-aggression effects of fluoxetine with placebo. Twenty-one patients with mild to moderate borderline personality disorder who were prone to aggression were randomized to 12 weeks of treatment with either fluoxetine or placebo. Measures of anger decreased in the fluoxetine-treated subjects, a finding independent of any change in the patients' mood.

On the other hand, there is the case of Donald Schell, a man prone to aggression according to his physician. Schell had been taking Paxil for only two days when he shot and killed his entire family and then took his own life. There was no apparent motive for this murder-suicide (12). So why did Schell commit such a crime? Many researchers are skeptical that antidepressants actually treat aggression, despite the numerous studies showing positive effects. Perhaps these drugs affect individuals differently depending on the source of the aggression. It is clear that more research is needed to understand both the main effects and the side effects of these antipsychotic and antidepressant drugs in people suffering from aggressive behavior.

Recent research has brought forth other possible treatments for individuals suffering from aggressive conduct. Lithium, anticonvulsants, stimulants, and alpha agonists have all been proposed as treatments for aggressive behavior. Research is still being conducted to test the efficacy of such drugs compared with SSRIs.

Besides pharmacological treatments, there are therapeutic outlets as well. Many individuals prefer seeing a psychologist or therapist; in this way they can communicate their frustrations and find a way to defuse the aggression. Ideally, a good rehabilitation would combine a helpful therapist with medication prescribed by a psychiatrist.

Prior to this research paper, I never thought of aggression as a "disorder" or as a condition that might be caused by alterations to the brain. I am still unclear as to whether it is the aggression that causes the abnormalities in the brain or the abnormalities in the brain that cause the aggression. Research suggests the latter, but is this 100% certain? More research needs to be conducted to gain a greater perspective on what aggression really is. Post-mortem studies would be effective, as would MRI studies comparing aggressive with non-aggressive individuals. Another helpful step would be a clear and precise definition of aggression, instead of various contradicting ones; this would facilitate the research process.

Despite the fact that aggression carries a negative connotation in our society, it is an important element of social behavior (9): it enables individuals to obtain a high dominance ranking, defend their offspring, and obtain resources. At the same time, excess aggression can lead to negative consequences. If individuals are predisposed to aggression because of our ancestors, does that mean that our negative experiences alter brain function and thus lead to aggressive behavior? If so, we need to find a way to keep our experiences from eliciting such changes in the brain. For that reason, it is extremely important that children as well as adults have a nurturing and loving environment in which the brain can be "regulated." In my opinion, it is developmental neglect and other traumatic stresses experienced in childhood that sensitize brain systems such as the serotonergic system.

It can be concluded that aggression can be elicited by neurobiological factors, environmental factors, or both; most likely it is caused by a profound merging of the two rather than either alone. Research needs to establish concretely what causes aggression before further work can build on it; it is hard to research a subject when the fundamentals are missing. Hopefully, the future research suggested here will demonstrate how to better control the negative effects of aggression and harness its benefits.

References

1)National Institutes of Health, Violence as a Biomedical Problem: Natural born killers?

2)Neurobiology of Suicide and Aggression

3) Volavka, J. Psychobiology of the Violent Offender, 1999.

4) Linnoila. M. Criminal and psychiatric histories of Finnish arsonists. Psychiatry Res, 1997.

5) Rawlings, R.; Tokola, R.; Poland R.; Guidotti A.; Nemeroff, C.; Linnoila, M. CSF biochemistries, glucose metabolism, and diurnal activity rhythms in alcoholic, violent offenders, fire setters, and healthy volunteers. Arch Gen Psychiatry, 1994.

6) Coccaro, E, Serotonergic studies in patients with affective and personality disorders. Correlates with suicidal and impulsive aggressive behavior. Service, Bronx Veterans Administration Medical Center, NY, 1989.

7) Dougherty, D. The Effects of Tryptophan Depletion and Supplementation on Serotonergic Functioning and Aggression in High and Low Aggressive Subjects. Psychiatry, University of Texas, 1998.

8) Rosenzweig, M.; Breedlove, M.; Leiman, A. Biological Psychology. Sinauer Associates, Inc. Publishers, 2003.

9) Raine, A et. al. Reduced Prefrontal Gray Matter Volume and Reduced Autonomic Activity in Antisocial Personality Disorder. Arch Gen Psychiatry, 2000.

10)Raleigh MJ. Social dominance in adult male vervet monkeys: Behavior- biochemical relationships. Social Science Information, 1992.

11) Salzman C, Wolfson AN, Schatzberg A, Looper J, Henke R, Albanese M. Effect of fluoxetine on anger in symptomatic volunteers with borderline personality disorder. Journal of Clinical Psychopharmacology, 1995.

12) Thompson, A. Paxil Maker Held Liable in Murder/Suicide: Will $6.4 Million Verdict Open a New Mass Tort? Lawyers Weekly USA, 2001.

13) Anderson, S. et al. Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nature Neuroscience, 1999.


Colorblindness in a World of Color
Name: Debbie Han
Date: 2004-05-13 23:18:22
Link to this Comment: 9855


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The human eye perceives color when matter selectively interferes with the wavelengths that compose white light (1). For example, an object that appears blue is selectively absorbing certain wavelengths of light while reflecting others, giving the object its blue appearance. The human eye can distinguish light from about 400 nanometers (violet) to approximately 700 nanometers (red); this range is called the visual spectrum.

It may seem that the eye must have hundreds of color receptors, since it can perceive hundreds or even thousands of colors; however, the trichromatic or "normal" human eye has only three types of color receptor, called cones. Traditionally, the three cones have been differentiated by the wavelengths of light they absorb: short-wavelength sensitive (S), middle-wavelength sensitive (M), and long-wavelength sensitive (L), with sensitivity ranges that overlap to some extent. The peak sensitivities of these cones are 440 nm, 545 nm, and 565 nm, respectively (2). The three cones are also classified as blue (short), green (medium), and red (long).
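How three overlapping receptors can encode thousands of colors can be illustrated with a toy calculation. The sketch below (my own illustration, not from the paper) assumes Gaussian sensitivity curves centered on the peak wavelengths quoted above; real cone spectra are broader and asymmetric, and the 40 nm width is an arbitrary assumption chosen for illustration.

```python
import math

# Assumed peak sensitivities from the text: S = 440 nm, M = 545 nm, L = 565 nm.
PEAKS = {"S": 440.0, "M": 545.0, "L": 565.0}
WIDTH = 40.0  # assumed standard deviation in nm; illustrative only

def cone_responses(wavelength_nm):
    """Relative response of each cone type to monochromatic light,
    modeled as a Gaussian around each cone's peak wavelength."""
    return {
        cone: math.exp(-((wavelength_nm - peak) ** 2) / (2 * WIDTH ** 2))
        for cone, peak in PEAKS.items()
    }

# Light at 500 nm excites M cones more strongly than S or L cones.
# It is the *ratio* of the three responses, not any single value,
# that the visual system decodes as a particular hue.
resp = cone_responses(500)
print({cone: round(value, 3) for cone, value in resp.items()})
```

In this model, a dichromat (one curve missing) collapses many distinct ratios into one, which is why, as discussed below, dichromats confuse colors that trichromats see as quite different.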

The cone photoreceptors are part of the retina, which lines the back of the eye. The retina contains the optic nerve; the ganglion cells, which are the output neurons of the retina; and the rod and cone photoreceptors. The cones are concentrated in a specific area of the retina called the fovea, while rods dominate the periphery. In order for the rods and cones to be activated, light must travel through the retina, which is approximately 0.5 mm thick. The visual pigment of the photoreceptors absorbs light photons, and the photoreceptors then translate this absorption into a biochemical and subsequently an electrical message. These messages stimulate the neurons of the retina and cause the brain to perceive the visual image (3).

Colorblindness affects about 8% of the male population and about 1% of the female population (4). It is usually an X-linked recessive trait: it is inherited and passed down on the X chromosome. Because of this pattern of Mendelian inheritance, men are more likely to be colorblind. For example, a woman carrying one colorblind X chromosome is a carrier and will not be colorblind herself. If she has a husband with normal color vision, each son has a 50% chance of being colorblind and each daughter has a 50% chance of being a carrier (but not colorblind). Likewise, a woman with a colorblind father is an obligate carrier (5). On the other hand, complete achromatopsia with decreased visual acuity is believed to be a recessive, autosomal condition (6).
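The Mendelian cross described above can be made concrete by enumerating the possible offspring. This small sketch (my own illustration) crosses a carrier mother with a normal-vision father; "Xc" is a hypothetical label for an X chromosome carrying the colorblindness allele.

```python
from itertools import product

# Carrier mother contributes either her affected X ("Xc") or normal X;
# normal-vision father contributes either his X or his Y.
mother = ["Xc", "X"]
father = ["X", "Y"]

# The four equally likely offspring genotypes, sorted for readability:
#   ('X', 'Xc') carrier daughter, ('X', 'X') non-carrier daughter,
#   ('Xc', 'Y') colorblind son,   ('X', 'Y') unaffected son.
offspring = [tuple(sorted(pair)) for pair in product(mother, father)]

sons = [g for g in offspring if "Y" in g]
colorblind_sons = [g for g in sons if "Xc" in g]
print(len(colorblind_sons) / len(sons))  # 0.5: half of sons are colorblind
```

Because a son's single X always comes from his mother, one recessive allele suffices to make him colorblind, while a daughter would need an affected allele from both parents; this asymmetry is why roughly 8% of men but only about 1% of women are affected.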

People who are colorblind can be classified as one of the three major forms: anomalous trichromats, dichromats, and monochromats. The differences in the forms of colorblindness arise from varying deficiencies or lack of specific cones. Different types of dyschromatopsia, or colorblindness, have different rates of prevalence, as well as different genetic causes.

The type of color vision I will call "normal" is trichromatic vision. Trichromats use the three photoreceptors, short, medium, and long, in order to form a color image. The mixing of the information from the three photoreceptors is the basis for the perception of color. Therefore trichromats see the full visual spectrum of colors.

Anomalous trichromats are considered moderately colorblind: they have all three color receptors, but one has its peak sensitivity shifted. For example, an anomalous trichromat may have reduced green sensitivity, called deuteranomaly, the most common type of anomaly. Greens seem more washed out and less prominent than the same green viewed by a normal trichromat. The other types of anomalous trichromacy are protanomaly, a reduced red sensitivity, and tritanomaly, a reduced blue sensitivity (4).

Dichromatic color vision means that an entire photopigment, blue, green, or red, is missing. Although dichromatic vision is normal for most mammals, including cats and dogs, humans who are dichromats must match colors by mixing only two photopigments. The three types of dichromacy are protanopia, the absence of the L-cone; deuteranopia, the absence of the M-cone; and tritanopia, the absence of the S-cone. The incidence rates of the three types in males are 1.3%, 1.2%, and 0.001%, respectively. Without the long-wavelength-sensitive photopigment, for example, protanopes confuse blue-green, gray, and pinkish colors that appear quite different to normal trichromats (7); the same applies to deuteranopes. Tritanopes confuse yellow, gray, and violet colors.

The most severe form of colorblindness is monochromacy, caused by a complete lack of all three cone types; those who suffer from this condition are called achromats. Because rods saturate at lower light intensities and are more sensitive to light than cones, achromats are better equipped to function in low-light environments. The incidence rate of complete achromatopsia is about 0.00001%.

Achromats see the world as though it were a crisp black-and-white photograph; to determine colors, an achromat must distinguish shades of gray, white, and black. In addition, achromatopsia is usually accompanied by photophobia, a severe sensitivity to light. Knut Nordby, a world-renowned scientist from Norway, is a complete achromat. Although he was not sensitive to light at birth, he soon acquired an inability to see clearly in sunlight or bright light. Nordby must also wear glasses and use a magnifying glass in order to read in lower-light environments.

Background knowledge of colorblindness can be applied to understanding specific aspects of the disability. Rather than focus solely on the science of color perception, I will introduce a live case study: I have interviewed and conducted color tests with a colleague of mine, Imrul Huda, who is colorblind but knows little about his condition.

Imrul grew up compensating for his disability; therefore, he is not completely sure which colors he can and cannot distinguish. In order to assess his level of dyschromatopsia and determine which category he falls under, I conducted a number of color tests. His responses, the time he took to respond, and the lighting conditions of each experiment are important for the analysis of his condition.

Imrul Huda was born in August of 1977 in Calcutta, India, and was raised in India until the age of 14. Neither of Imrul's parents is colorblind, nor is his sister. He believes that a maternal uncle may have been colorblind, although this was never confirmed. Imrul recalls that around the age of 8, his mother noticed that his ability to tell colors apart while reading books was inconsistent. Presently, Imrul can distinguish most basic colors, but he admits that he cannot distinguish shades of basic colors such as magenta and purple. When he sees a book with many colors, he can see that "there are 100 different colors," but he does not know what each individual color is.

As Imrul explained, "I have basic colors in my mind. The seawater is blue. Leaves are green. But there are so many different shades of green, that's why I get so confused about green." In order to register that he sees green, he must first relate the color that he sees to the color of leaves. As a result, Imrul has difficulty determining darker and lighter greens. It also takes longer for Imrul to register specific colors than the split-second recognition of most trichromatic individuals. To him, light green is not even a color he recognizes. In addition, he admits that he cannot tell the difference between red and brown, purple and blue, yellow and orange, and dark green and red. Black and white, however, are obvious to him.

Regarding sensitivity to light when outdoors, Imrul stated, "I'm always frowning." By this he means that it is difficult for him to differentiate colors and to have structured vision in bright light. He finds color easier to distinguish in semi-darkness. During the color test I performed, Imrul was not able to quickly identify any colors. Each color distinction required a comparison to a color he already knew from memory. When a white sheet of paper with a large green dot was shown to him, it took 6 seconds for Imrul to recognize that the color was green. In response, Imrul said, "I have a mental image of leaves. That's how I know that this color is green."

When shown a drawing with two red stripes surrounding a block of alternating purple and blue stripes (shown below), Imrul recognized the red stripes as brown after 10 and 18 seconds, respectively. After 36 seconds, he was able to determine that the blue and purple stripes were different but could not say what color the purple ones were; instead, he stated that all five stripes were bluish.

(Please insert Image #1.)

This test was performed in a dark room with little natural light. When a similar test was conducted with the same colors but a different pattern in a setting with more natural light, Imrul's response times increased: he recognized the red stripes as brown after 30 seconds and took over 50 seconds to identify the blue and purple (which he again described as "bluish").

Similar tests were conducted in settings of low, medium, and high light. In each case, Imrul gave his quickest responses in low light. This is consistent with dark adaptation, the increase in the eye's sensitivity after a few minutes in darkness following exposure to light. A reasonable explanation for Imrul's improved performance is that after a few minutes in the dark, his vision shifted from cone-based to rod-based (8). Since Imrul's condition seems to stem from defective cones, relying more heavily on rod-based vision may serve him better.
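The pattern in his response times can be tabulated. Below is a minimal Python sketch using only the recognition times reported in this case study; the grouping into two lighting conditions is my own simplification of the trials described above.

```python
# Tabulation of the recognition times reported in the case study,
# grouped by lighting condition. The trial grouping is an assumption;
# the individual times come from the essay's reported tests.
from statistics import mean

# seconds taken to identify each color, as reported
trials = {
    "low light":  [6, 10, 18, 36],  # green dot; two red stripes; blue/purple block
    "more light": [30, 50],         # red stripes; blue/purple block (50 s is a lower bound)
}

averages = {condition: mean(times) for condition, times in trials.items()}

for condition, avg in sorted(averages.items()):
    print(f"{condition}: mean recognition time {avg:.1f} s")

# Consistent with the essay's observation: responses were fastest in
# low light, as expected if rod-based vision takes over in the dark.
assert averages["low light"] < averages["more light"]
```

This is only a descriptive summary of a handful of trials, not a statistical test; with so few observations it can illustrate the reported trend but cannot establish it.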

When shown a pattern of thin stripes of alternating colors (shown below), he identified yellow after 12 seconds, saying, "it's too bright, that's how I tell," and recognized orange after 1 minute, stating, "stand alone, I may have gotten confused if it had not been nearby yellow." He recognizes orange as a less bright version of yellow. Again, the purples appeared to him as dark blues and the reds as browns.

(Please insert Image #2.)

Imrul's color deficiency was most apparent when I asked him to identify the colors of the drawing on the cover of the writing tablet shown below, which used colors different from the basic ones I had tested him with. The elephant's dark green hat appeared red to Imrul, the elephant itself dark blue, and the medium blue ball light blue.

(Please insert Image #3.)

Imrul's inability to distinguish shades of color has caused a variety of difficulties in day-to-day life. For example, driving is extremely challenging for him. Not only does he have to memorize the positions of green, yellow, and red on a traditional traffic signal, he also has difficulty determining whether the light from ordinary street lamps is different from the light from traffic signals. Imrul's inability to rapidly register variations in color makes driving dangerous; hence, he chooses not to drive.

Shopping also poses significant challenges. For example, Imrul recently went to a store to purchase a water bottle. He selected a bottle he thought was blue. Upon leaving the store to go and fill the bottle with water, Imrul encountered a friend of his who asked him why he had purchased a pink water bottle. To Imrul, the water bottle was blue, but to trichromats, the water bottle appeared to be pink.

Imrul has grown accustomed to his deficiency in distinguishing shades of most colors and has made his own accommodations. Choosing outfits, he told me, is easy because he has given up on trying to match colors. However, Imrul does wonder what trichromats see when viewing a colorful painting. "I don't really know what people see. I must see something very different."

What is interesting about Imrul's condition is his wide range of color insensitivity. Colorblindness is usually explained by a complete lack of, or deficiency in, one of the three cone types. It seems that Imrul is deficient in all three; his colorblindness cannot be effectively explained by a missing S-sensitive, M-sensitive, or L-sensitive cone alone.

Using the diagram shown below as a reference, Imrul exhibits color insensitivity within the following wavelengths of light: the blue-purple division (approximately 450 nm), the green-red division (approximately 580 nm), and the red-brown division (approximately 680 nm). His marked inability to distinguish dark greens from red suggests that he is a deuteranope (9), although he has compensated for his difficulty with green by equating the color he does see with something familiar: the color of leaves. This mechanism of compensation, however, does not work for very light and very dark greens. A complete deficiency in M-sensitive cones is a plausible explanation for Imrul's difficulty in determining most shades of blue, green, orange, and red.
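The three divisions above can be expressed as a small lookup. Only the approximate center wavelengths (~450, ~580, ~680 nm) come from the discussion; the plus-or-minus 20 nm band widths are an assumption I have chosen purely for illustration.

```python
# Sketch of the color-insensitivity bands inferred for Imrul.
# Band centers are from the case study; the +/- 20 nm widths are
# an illustrative assumption, not measured values.

INSENSITIVITY_BANDS = {
    "blue-purple division": (430, 470),  # ~450 nm
    "green-red division":   (560, 600),  # ~580 nm
    "red-brown division":   (660, 700),  # ~680 nm
}

def problem_band(wavelength_nm):
    """Return the name of the division a wavelength falls in, or None."""
    for name, (low, high) in INSENSITIVITY_BANDS.items():
        if low <= wavelength_nm <= high:
            return name
    return None

print(problem_band(455))  # prints "blue-purple division"
print(problem_band(520))  # prints "None": outside the inferred bands
```

A real diagnosis would of course use standardized plates (such as Ishihara tests) rather than a wavelength table, but the lookup makes explicit where along the spectrum the reported confusions cluster.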

An M-cone deficiency alone, however, does not fully explain his visual condition. I believe he also has deficiencies in the S-sensitive and L-sensitive cones. This conclusion stems from the fact that Imrul was strongly unable to differentiate purples and blues, primary territory for the S-sensitive cones, as well as reds and browns, which mainly involve the L-sensitive cones. Although there is little literature suggesting that people can exhibit deuteranopia (a missing M-sensitive cone) alongside deficiencies in both S-sensitive and L-sensitive cones, the results of the color tests I conducted indicate that Imrul's colorblindness is more profound than deuteranopia alone.

(Please insert Image #4.)
http://www.cox-internet.com/ast305/color.html

Color vision is amazing, albeit incredibly subjective. At the age of 27, Imrul has spent his life compensating for his visual shortcomings. Since he was born with this condition, he does not view it as a disability (he doesn't know what trichromats see), but rather as his unique perception of life. In reality, color is just that: a matter of perception.

References

1)WebExhibits webpage, introduction to color perception

2)Webvision website, a thorough description of the photoreceptors

3)Webvision website, a great description of the retina, rods, and cones

4)University of Kentucky website, a good guide to the different types of colorblindness

5)Medical College of Wisconsin website, an introduction to colorblindness

6)University of Arizona website, Knut Nordby's personal account - Great story

7)North Dakota State University Psychology Department website, a very insightful site on color vision

8)University of Toronto Department of Psychology website, a great explanation of dark adaptation

9)Medterms website, brief description of red-green colorblindness


The Special Sleeping Capabilities of Birds
Name: Eleni Kard
Date: 2004-05-13 23:33:20
Link to this Comment: 9856


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

You are a college student with less than four hours until your Physics exam and half a textbook to understand. Or you have the night shift at the local 7-Eleven. Or maybe you have a 9 o'clock deadline to get through the three-foot-high stack of paperwork on your desk. One bodily function can keep you from completing your task: your body's need for sleep. Without sleep you can't function, but during sleep you can't function at your job either. What if there were some way you could do both?

A recent study from Indiana State University reports that birds, in fact, have this kind of capability, and use it as a protection mechanism against danger. Birds can sleep with one eye open and one eye closed, with half of the brain awake and the other half asleep. This type of sleep is called unihemispheric slow-wave sleep (USWS). (1) Dolphins, seals, and manatees also use USWS because it allows them to swim up to the surface to breathe so they do not drown in their sleep. (1) These animals have a voluntary respiratory system (2), which means part of the brain must remain alert and retain control of the blowhole in order to get air. (3)

The interesting distinction between birds and the aquatic mammals is that birds seem to be able to control when they use USWS, rapid-eye-movement (REM) sleep, or a combination of both, whereas the aquatic mammals must use USWS to stay alive. When birds are in dangerous situations, they use USWS. (4) In this way they can keep one eye open on the lookout for predators. Because USWS is not the most restful type of sleep, birds use it only when necessary. Therefore, under less dangerous conditions, the proportion of USWS decreases relative to REM sleep.

When birds sleep in large groups, the ones on the edges of the group use USWS more than the ones protected in the middle. This is termed the "group edge effect," (5) in which animals at the edge of a group spend more time scanning the area for predators than those tucked in the middle. The birds on the edge thus take the less restful sleep to protect the birds in the middle, sleeping this way for the good of the group. In this way, not all the birds have to sacrifice sleep as they would if each were sleeping alone.

Niels C. Rattenborg and colleagues at Indiana State University conducted a study using mallard ducks to investigate this phenomenon. They positioned four rows of ducks and videotaped them while they slept, scoring a duck as in USWS if it had one eye closed and in REM (bihemispheric) sleep if it had both eyes closed. (5) EEG readings were also obtained to detect slow-wave sleep. They found that the mallards did indeed increase their use of USWS, by a factor of 2.5, when sleeping at the edge of the group. (6) The study also found that the birds in USWS positioned themselves so that the open eye faced away from the group, in other words, in the direction of potential danger, 86% of the time. (6) These ducks would also turn around every so often to switch which eye faced the direction of danger and to alternate sleep between hemispheres of the brain.
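The scoring rule in the study (one eye closed counted as USWS, both eyes closed as bihemispheric sleep) can be sketched in a few lines. The observation counts below are invented for illustration; only the roughly 2.5-fold edge-versus-center increase is taken from the study as cited.

```python
# Hypothetical re-creation of the mallard-study scoring: one eye open
# while the other is closed counts as USWS; both eyes closed counts as
# bihemispheric sleep. All sample counts below are invented.

def usws_fraction(observations):
    """observations: list of (left_eye_open, right_eye_open) samples.
    Returns the fraction of sleeping samples scored as USWS."""
    sleeping = [obs for obs in observations if not all(obs)]  # exclude fully awake
    if not sleeping:
        return 0.0
    usws = [obs for obs in sleeping if any(obs)]  # exactly one eye open
    return len(usws) / len(sleeping)

# invented video samples: (left_open, right_open)
edge_duck   = [(True, False)] * 50 + [(False, False)] * 50  # 50% of sleep is USWS
center_duck = [(False, True)] * 20 + [(False, False)] * 80  # 20% of sleep is USWS

ratio = usws_fraction(edge_duck) / usws_fraction(center_duck)
print(f"edge/center USWS ratio: {ratio:.1f}")  # 2.5 with these invented counts
```

The counts were chosen so the ratio matches the reported 2.5-fold increase; the actual study derived its figure from videotape and EEG scoring of real ducks, not from anything this simple.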

In addition, when the ducks using USWS were shown a video simulation of a predatory attack, they initiated escape behavior. (7) This affirmed that the awake half of the brain was indeed able to detect danger. One point left unclear in the literature was whether the sleeping half of the brain had to be awakened in order to begin the escape behavior; in other words, whether one half of the brain can control movement of the entire body, or whether both halves must be awake. This raises the question of motor inhibition during sleep. If humans could sleep one hemisphere at a time, it would certainly cause a problem. (8) If only half of the brain were awake, would only the corresponding half of the body be able to move?

This ability to half-sleep is also thought to allow migrating seabirds to fly for long periods of time on "auto pilot." (9) Although this additional use of unihemispheric sleep has not yet been scientifically demonstrated, it raises the motor-inhibition issue again. For the birds to continue flying with only one half of the brain awake would mean either that they don't experience motor inhibition when only one hemisphere is sleeping, or that only one hemisphere is needed to control motor coordination for the entire body.

Rattenborg's research on mallard ducks may have further implications for humans, especially in the area of sleep disorders. A duck's ability to keep half of its brain awake seems somewhat similar to sleepwalking in humans. Sleepwalking is thought to occur when parts of the brain are awake while other parts are asleep. (10) It is suggested that sleep disorders, including sleepwalking, can arise from stress. (7) Rattenborg comments, "This could be a vestige of the ability to sleep with one eye." (7) Under stressful conditions the body feels it is in some kind of danger and tries to remain alert for safety. There are accounts of war veterans claiming that under intense conditions they were able to sleep with one eye open. (1) This claim has not been validated, and further testing on the subject would be inhumane. Rattenborg also notes that humans who have been through severe trauma display brain waves similar to those of birds in unihemispheric sleep. (6) This implies some type of correlation between nervousness, or the threat of danger, and sleep.

Let's examine a simpler case: have you ever set your alarm clock for 6:00 am knowing you must get up at that time in order to catch your train, and then woken at 5:55 am the next morning, 5 minutes before your alarm goes off? (10) How does this happen? It seems as if your body knows what time it is while it is sleeping. Does this imply that some part of the brain is awake and conscious of the time while other parts are sleeping? Although this is a quite common occurrence, it seems to happen most frequently when one has an anticipated event to wake up for. This falls under the hypothesis that stress can alter sleep. And just as birds can control their sleep during stressful situations, this suggests that perhaps we can too, to some degree.

Additional evidence from animal observation further backs this hypothesis. Christian Mathews, who also worked with the sleep team at Indiana State University and under sleep specialist Charles Amlaner, found that lizards can sleep with one eye open, especially after they have seen a predator. (6) Although the brain waves of the lizards differed from those of birds and mammals, Mathews believes that the similar sleeping behavior could indicate that they once shared a common ancestor. Amlaner agrees that perhaps all early mammals had the ability for unihemispheric sleep but then lost most of it as they evolved. Birds and aquatic mammals retained the ability due to their extreme circumstances, flight migration and aquatic living, respectively.

It must be pointed out that in order for birds and some aquatic mammals to sleep one hemisphere at a time, their brains must be organized quite differently from those of humans. According to Rattenborg, birds can look up with one eye and down with the other, operating the two halves of the brain independently. (7) The neuroanatomical structures and physiological processes that allow each hemisphere of the brain to function independently during wakefulness could also be responsible for the independent sleep behavior of each hemisphere. (5) This organization could explain why birds have these special sleep capabilities.

What differences in a bird's brain allow it to decide what kind of sleep to get based on the level of danger? Obviously our own sleep is affected by various internal and external forces, including stress, but the effect seems unconscious rather than controlled. The situation of waking up before your alarm clock is a good example: your "I function" doesn't decide that you are going to wake up at that particular time; rather, your body just does it without your control. Birds, on the other hand, make behavioral decisions about the kind of sleep they want to get. (1)

Until further research is conducted on brain hemispheres and sleep patterns, the possibility that humans can somehow control their sleep as birds do remains open. What differences in bird brains allow their hemispheres to function independently? Is there some way to control what parts of the brain are awake and which are asleep? The answers to these questions could open a whole new door to discovery about the human body and our need for sleep. Regulating and learning to control our sleep could change the way we think about productivity and greatly impact our society.

References

1) CNN website, a complete news site.

2) Scientific American online, a very useful source for science-related articles.

3) Scientific American online, a very useful source for science-related articles.

4) BioOne website, a site providing biology-based research journals.

5) Nature online, contains articles from Nature magazine.

6) Scientific American online, a very useful source for science-related articles.

7) Canada's Discovery Channel website, provides daily science news.

8) Essay on sleep from CalTech, entitled "Sleep: Humanity's Addiction."

9) Birding-Australia website and forum, an interesting site about birds.

10) Flat Rock Forest Unitholder organization website, a reference site covering a lot of different issues.


Pain Is Weakness Leaving The Body. Or Is It?
Name: Millicent
Date: 2004-05-14 01:40:51
Link to this Comment: 9859


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

As anyone who has ever read a newspaper, watched television, or listened to the radio knows, pain plays a major role in our society. From aspirin ads to sneaker commercials, this nation has become consumed with fighting, relieving, and treating pain. As this obsession continues, one question remains: are we overreacting? I once saw a t-shirt that read "Pain Is Weakness Leaving The Body." While I don't completely subscribe to this theory, I do wonder how pain can be measured. The shirt seemed to suggest that pain is something we should strive to overcome and that only the weak allow it to consume their bodies. However, the same injury can evoke very different feelings of pain in two different people. Given recent research, this difference in pain perception appears to reflect real differences in brain activity between individuals experiencing pain. This suggests that medical professionals, and our society, need a new method to properly assess pain.

A study published by the American Physiological Society examined pain intensity and brain function (2). The researchers set out to examine how the brain processes pain. Unlike prior studies, this experiment focused on brain activity in multiple locations and incorporated both monitoring of brain activity and participants' evaluations of pain intensity. The test was intended to show that brain activity during pain is "a bilateral, distributed mechanism," which means that pain processing takes place in several places in the brain at once (2). The participants were sixteen right-handed individuals, each of whom experienced pain stimuli administered by the researchers. The brain activity of the subjects was recorded as the intensity of the stimuli increased.

The study's results showed that the participants' brain activity was not limited to a certain area. Instead, the pain felt from the stimuli was processed in several parts of the brain. This conclusion serves as a precursor to a second study, which evaluates pain perception more closely.

The second study set out to determine why pain is felt differently by different people (1). In this test, a heat stimulus was applied to six people's legs and their responses were recorded. In addition to the participants' verbal reports, MRIs recorded the brain's responses to the stimulus. As the heat reached what is considered a painful temperature, the participants were asked to rate the pain on a scale of 1 to 10. Although the temperatures were the same when the participants responded, their pain ratings were very different: at the highest temperature, one person complained that the heat was unbearable while another rated the pain a 1. The MRIs showed that these individuals had activation in different parts of the brain. Those who felt more pain had more locations in the brain activated, while those who experienced less pain activated fewer sections of their brains.
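The reported pattern, in which higher ratings of the same stimulus accompany activation in more brain regions, can be illustrated with a toy calculation. All numbers below are invented; only the direction of the relationship comes from the study described above.

```python
# Toy illustration of the second study's pattern: participants who
# rate the same heat stimulus as more painful show activation in more
# brain regions on MRI. Every value here is invented.

# (self-reported rating on a 1-10 scale, number of activated regions)
participants = [(1, 2), (3, 3), (5, 5), (7, 7), (9, 8), (10, 9)]

high = [regions for rating, regions in participants if rating >= 6]
low = [regions for rating, regions in participants if rating < 6]

avg_high = sum(high) / len(high)
avg_low = sum(low) / len(low)

print(f"mean regions, high raters: {avg_high:.1f}")
print(f"mean regions, low raters:  {avg_low:.1f}")
assert avg_high > avg_low  # more perceived pain, more widespread activation
```

With only six participants, the real study's claim rests on the imaging itself rather than on a summary like this; the sketch only makes the direction of the reported relationship concrete.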

The results of these two studies have extremely important implications for both neurobiology and medicine. Neurobiologically, the results suggest that pain is felt differently by different people, and that the brain has mechanisms to ensure that pain is felt. By including various locations as receptors, the body is protected several times over if one receptor fails (1). This allows the brain to prevent further injury to the body by signaling pain.

For the medical world, these studies are crucial in understanding how patients experience pain. If medical professionals treat pain as a constant and assign certain amounts of suffering to certain injuries, they leave out an important part of understanding the experience of patients. Perhaps pain is largely in the patient's mind, in the sense that a person's brain may determine how she feels pain. Now we have evidence that while pain may be in the mind, it is still valid. It is important to consider this when thinking about the experience of pain.

Regarding the experience of pain, we now understand that the nervous system is not simply a series of reactions with a single set of responses. As the first study showed, there are often different types of responses within an individual's body. This suggests that the experience of pain is not just a single chain of reactions; instead, there are signals simultaneously alerting the body to the injury it is experiencing. This might have implications for what types of medications are necessary to ease pain. Perhaps it is best to target a certain area of the brain for different people.

In addition to altering the ways we treat pain, the idea that pain is relative to a person's brain makeup might give us methods of predicting how a person will react to pain before an injury occurs. In both studies, important observations were made as to what parts of the brain react to pain. In sensitive people there were reactions in the anterior cingulate cortex (1), while less sensitive people had reactions in different areas. If doctors could determine whether patients are highly sensitive to pain before they are injured, they would have a better idea of how to treat them. Perhaps a point system or degree of sensitivity could be assigned to patients during physicals. This way doctors would have a better idea of what responses a patient will have when exposed to pain.

The results also help us understand illnesses like chronic pain. While there has been, to my knowledge, no research linking these studies to chronic pain, the results may be useful for researchers. Chronic pain affects many people and is not completely understood. Sufferers experience pain long after the normal healing time of an ailment (6). In addition to experiencing pain for an extended period, these individuals face unfair societal pressures: the biases of a society that views pain as something that can be beaten. These individuals, more than any others, understand that while pain may be controlled by the brain, this does not mean that willpower can conquer it.

Understanding that pain is controlled by the brain does not mean that sufferers have any more control over it than they would if it were a problem with any other part of the body. As we have learned, the brain is extremely powerful. By accepting the findings of both studies, those who suffer from chronic pain and other types of pain may be better understood by medical professionals and society at large. The idea that "Pain Is Weakness Leaving The Body," or that pain is a fightable battle, is invalid. Instead, these studies suggest that pain is in fact controlled by our minds, which seem to be preset to determine how we deal with pain.


References

1) This site is a posting of a paper by Robert C. Cahill, PhD, titled "Brain Mechanisms of Pain: Overview."

2) Coghill, Robert C., Sang, Christine N., Maisog, Jose Ma., and Iadarola, Michael J. "Pain Intensity Processing Within the Human Brain: A Bilateral, Distributed Mechanism." The American Physiological Society (1999).

3) Coghlan, Andy. "Pain Really Is 'All in the Mind.'" New Scientist website, www.NewScientist.com (June 23, 2003).

4) Frazin, Natalie. "Study Links Chronic Pain to Signals in the Brain."

5) BBC News website, http://news.bbc.co.uk/go/pr/fr/-/1/hi/health/3178242.stm

6) The American Chronic Pain Association, http://www.theacpa.org/pf_04.asp


Tourette's Syndrome: History and Possible Causes
Name: Geetanjali
Date: 2004-05-14 06:41:30
Link to this Comment: 9862


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the year 1825, a French neurologist named Jean-Marc Itard described a rather curious patient that he was treating. She was a French noblewoman named the Marquise de Dampierre, and the symptoms of her disorder had started in childhood. At the age of 7, the Marquise began to show odd convulsive movements in her hands and arms. These movements were involuntary, and soon spread to her shoulders and her neck, while she began to make odd facial grimaces. After a few more years she began to periodically emit horrible screams and say things that made no sense, sometimes startling people in the middle of a conversation. She would echo back things that people said to her. She also developed uncontrollable urges to spout obscene language and swear words. The Marquise's behaviour perplexed everyone around her. She came from an upper class background, and was in general a polite, well-mannered person. There was also no doubt about the fact that she was mentally sound. However, she had no control over either these strange outbursts or her odd repetitive spasms. She was eventually forced to live an isolated life because of her peculiar behaviour, and her symptoms never went away. She finally died at the age of 85. (5, 10)

The Marquise de Dampierre was the first recorded case of Tourette Syndrome, and also its most famous. Her case was taken up again 60 years after Itard's description, in 1885, by a French neuropsychiatrist named Georges Gilles de la Tourette. (8) Tourette included the Marquise as one of nine cases he described of people who exhibited involuntary motor and vocal tics, as well as a tendency to use obscene language without meaning to. (5)(11) As Tourette was the first person to study the disorder seriously and provide a detailed clinical description of it, it was named after him: Gilles de la Tourette Syndrome, usually shortened to Tourette Syndrome. Although he was not correct about several aspects of the disorder (for example, only a small minority of cases show uncontrollable swearing), the name has remained.

Tourette Syndrome is characterized by involuntary motor and vocal (also called phonic) tics that persist for at least a year, and sometimes (though not often) for the affected person's entire lifetime. Both the motor and phonic tics can be extremely simple. For example, a simple motor tic might be eye blinking or a simple muscle spasm, and a simple phonic tic might be repeated clearing of the throat or sniffing. Simple tics are the most common. However, tics can also be very complex, to the point where it might be difficult for others to believe they are involuntary. (8) For example, a person with Tourette Syndrome might develop irresistible urges to smell things, might mimic the actions of other people (called echopraxia), or might (as in the famous case of the Marquise de Dampierre) burst out with inappropriate comments and language (called coprolalia). (2)(5)(8)

Describing and diagnosing Tourette Syndrome is a relatively simple matter. However, it is only in the past 30 years that people have come anywhere close to understanding its cause or finding an effective treatment. There is still no cure. The cause is still not known for sure, but I will describe various theories that have been put forward for its cause, while describing the disorder itself. The evolution of the understanding of the cause of Tourette Syndrome has accompanied a growing understanding of the disorder.

It was thought for a long time that Tourette Syndrome was purely psychological. (2)(5) Its symptoms appeared psychological: motor and phonic tics could be controlled, with effort. They would have to be released in a severe bout later, but they were under partial voluntary control. (8) As one sufferer of Tourette Syndrome describes his ability to control tics:

I like the analogy of holding a heavy box at arm's length. You can drop it straight away, or you can hold it out there for a little while. If you hold it, eventually it's too heavy and you have to drop it, and when you do, your arms are sorer than if you'd just dropped it straight away. (6)

Another fact that supported a psychological basis for Tourette Syndrome was that the severity of tics was also somewhat controllable. Tic severity is affected by factors in the environment. They are made worse by such things as stress, fatigue, and anxiety, and are seen to improve when the person is relaxed or deeply focused on a task. (1)(2)(5)

Tourette Syndrome also has a tendency to almost magically disappear as the person grows older. Generally the onset of tics is around the age of 7, (5) and the tics grow more severe for a while until they peak between the ages 9 and 11. (2) However, the vast majority of sufferers of Tourette Syndrome show improvement after that, without any outside intervention. In around 85% of cases, tics have either shown a significant reduction or completely disappeared by adulthood. (2)(8) At first glance, the disorder can appear to be something that one merely outgrows. This is supported by the fact that for many, the symptoms of Tourette Syndrome make children seem just badly behaved or exhibitionistic.

Tourette himself believed that Tourette Syndrome had a neurological basis. However, he was alone in that belief until long after his death. His contemporaries, Freud in particular, were stubborn in attributing a psychological cause to the disorder; it was supposedly caused by bad parenting, among other things. (11) This viewpoint began to change in the 1970s, when neuroleptics were found to be effective in treating Tourette Syndrome. Neuroleptics such as haloperidol (Haldol) are still by far the most effective way of reducing tic severity in Tourette Syndrome. (2) They have been shown to reduce tic severity by as much as 90%. (1) They act by blocking dopamine receptors in the brain, which reduces dopamine activity. If reducing dopamine activity helps reduce tic severity, this implies that there is abnormal dopamine activity to begin with in the brains of people with Tourette Syndrome.

This was the first sign that the disorder was not in fact psychological. With the discovery of a genetic basis and other biological factors involved, the theory that the syndrome was psychological was gradually dropped. (5) Although the disorder can be exacerbated by environmental stresses, it is now known to have a wholly neurological basis.

Tourette Syndrome has a strong genetic basis. The specific genes involved are not known for sure, but they are thought to follow a dominant pattern of inheritance. (2)(5)(8) Tourette Syndrome lies at the more severe end of a spectrum of tic disorders, all of which have a common genetic basis. (5) If the child of someone with Tourette Syndrome inherited the genes for it, these genes could be expressed as any of the disorders along the spectrum. The child would have only about a 10% chance of developing full Tourette Syndrome. (8) Either a milder tic disorder or obsessive-compulsive behaviour would be more likely. Obsessive-compulsive symptoms occur often in the offspring of families affected with tic disorders, either in the absence of a tic disorder or co-occurring with one. As obsessive-compulsive symptoms and full obsessive-compulsive disorder are seen far more often in female offspring than male offspring, and vice versa for tic disorders, it has been suggested that the two are sex-linked alternate expressions of the same genotype. (1)(2) That is to say, a girl and a boy might inherit the same genes, but in the girl they would be expressed as obsessive-compulsive behaviour and in the boy they would be expressed as a tic disorder.

It is certain, then, that Tourette Syndrome is partly genetic. However, the main question is: what exactly is it that's being inherited? The severity of the disorder is probably not itself genetic, but caused by environmental factors. What is inherited is a tendency towards the disorder, but it is not exactly known what creates that tendency. Some discoveries have been made, though. There are several chemical imbalances that might be involved in Tourette Syndrome, and a few areas of the brain have been pinpointed as significant. I will talk first about the areas of the brain that might be involved with Tourette Syndrome, and then about the chemical imbalances.

One current popular belief is that a defect in the basal ganglia area of the brain is involved with Tourette Syndrome. The basal ganglia are associated with motor control: that is to say, they help inhibit movement. (3)(4)(5)(9)(11) They are also associated with inhibiting inappropriate responses in general. (3) Part of what makes people suspect that the basal ganglia are involved with Tourette Syndrome is that patients with Postencephalitic Parkinson's and Huntington's Disease, both of which cause motor tics, have abnormalities in their basal ganglia. (5)(9) There is some evidence from neuroimaging studies that there are structural abnormalities in the basal ganglia of people with Tourette Syndrome as well. (2)(5) It is also possible that there is decreased blood flow to, and glucose metabolism in, the basal ganglia. (9)

In studies of twins who both have Tourette Syndrome or symptoms of it, the right caudate nucleus tends to be smaller in the twin with more severe symptoms. (5)(9) This is significant, since the caudate nucleus is also involved in motor control. (11) Problems in the frontal cortex (7) and lack of inhibition of the frontal-subcortical motor circuits (2)(7) may also be involved.

The chemical imbalances that have been implicated in Tourette Syndrome include dopamine, serotonin, norepinephrine, androgens and cortisol. Dopamine, serotonin and norepinephrine are all neurotransmitters. Dopamine in particular has been implicated in Tourette Syndrome because of the effectiveness of neuroleptics as treatment, which I mentioned earlier. Neuroleptics act by blocking dopamine receptors in the brain. Their effectiveness could mean either that there is excess dopamine production in the brain, or that there are excess dopamine receptors, making the brain extremely sensitive to whatever amount of dopamine is there. There may also be imbalances in serotonin and norepinephrine activity.

Tourette Syndrome is 3-9 times more common in males than in females. (2) However, it is known that the genes for the syndrome do not reside on a sex chromosome but on an autosome. (2)(5)(6) For this reason, it is thought that androgens (male sex hormones) such as testosterone might make the symptoms of Tourette Syndrome worse. (5)(6)

Also, a hormone called cortisol, which is released in the body during times of stress, may have the effect of making tics more severe. This would explain why tic severity increases during times of anxiety and stress, and decreases during periods of relaxation. (6)

To conclude, there are many possible causes of Tourette Syndrome. Although it is not a psychological disorder, it is still influenced by psychological factors such as stress. It is genetic, but not entirely genetic, since environmental factors play a role in determining the severity of the disorder. Several areas of the brain may be involved, all of which play a role in motor control in general, and several neurotransmitter and hormone imbalances may be involved as well. Whatever the cause and the neurological basis of Tourette Syndrome, it is obviously complicated, involving many varying and interwoven factors. It is not surprising that the cause is still not well understood. Still, a great deal of progress has been made: if nothing else, specific brain regions and chemical systems have been identified on which future research can focus.

References

1) "A rational approach to Tourette disorder." (2002) Patient Care 7:59-75. Site name: Patient Care Archive. (date accessed: May 12, 2004)

2) Bagheri, Mohammed M., M.D., Kerbeshian, Jacob, M.D., and Burd, Larry, Ph.D. "Recognition and Management of Tourette Syndrome and Tic". (1999) American Family Physician, p. 2263. Site name: American Academy of Family Physicians. (date accessed: May 12, 2004)

3) Begany, Timothy. "From obsessions to attention deficits, 'Basal Ganglia Syndrome' covers a wide spectrum." (2000) Neurology Reviews, Vol.8, No.1. Site name: Neurology Reviews.com: Clinical trends and news in neurology. (date accessed: May 13, 2004)

4) Brain Function and Physiology: Basal Ganglia. Site name: Brain.com: Brain SPECT Information and Resources. (date accessed: May 13, 2004)

5) Hyde, Thomas M., M.D., Ph.D. and Weinberger, Daniel R., M.D. "Tourette Syndrome: A Model Neuropsychiatric Disorder. Grand Rounds at the Clinical Center of the National Institutes of Health." (1995) JAMA (date accessed: May 12, 2004)

6) Jones, C. (2001) Tourette Syndrome 101. (date accessed: May 12, 2004)

7) Lau, Edward. "Tourette Syndrome: Etiology." (2003) (date accessed: May 13, 2004)

8) Official web site of the national Tourette Syndrome Association, Inc. (date accessed: May 12, 2004)

9) Neuropathology of Tourette's Syndrome (date accessed: May 13, 2004)

10) "Tourette Syndrome History". Site name: Tourette's Disorder: Hope, Support, Information. (date accessed: May 12, 2004)

11) Ward, Rebecca Gillette, RN, BSN. "Education/CE: Self-Study Modules: Tourette Syndrome." Site name: The University of Chicago Hospitals, Nursing Spectrum- Career Fitness Online. (date accessed: May 13, 2004)


Is it Just That Women Aren't To Be Trusted? False
Name: Tegan Geor
Date: 2004-05-14 08:11:11
Link to this Comment: 9863


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Memory is a tricky thing.

The past is behind us, and in many cases is wholly irretrievable through any other means than through our memory of it. And yet, memory is not a snapshot of events taken by the brain and filed cleanly for future reference -- at least, not always. Though sometimes we remember things exactly as they happened, other times we remember things as they did not happen, or things that never happened at all.

Most of the time, it is not much of a problem that we remember things wrongly. If you remember being instrumental to winning a soccer game in seventh grade, and your brother claims you sat out the second half, then the two of you may bicker about it, but there's not much harm done.

Recently though, a lot of fuss has been made over what some are calling False Memory Syndrome: an alleged condition in which a person's entire identity and interpersonal relationships are centered around a memory of traumatic experience which is objectively false, but which the person strongly believes. (1) False Memory Syndrome (FMS) was first introduced as a concept in 1992 by the False Memory Syndrome Foundation, an organization founded by families (mostly families whose now adult children claim to have had recovered memories of past abuses), academics and professionals who believed "an organization was needed to document and study FMS, to disseminate the latest scientific information on memory," (2) and to help families dealing with individuals allegedly exhibiting this same False Memory Syndrome which the False Memory Syndrome Foundation helped introduce.

Memory is complicated, as anyone--and especially any scientist working in that area--can and will tell you. Memories are influenced by a variety of factors: "developmental stage, expectations and knowledge base prior to an event; stress and bodily sensations experienced during an event; post-event questioning; and the experience and context of the recounting of the event." (3) Narrative also has a lot to do with our memories: how a story is told and is received can change the content and structure of the memory, and influence how much we believe it ourselves.

The debate for the FMSF is not whether people have memories which are false: it is how therapists, families, individuals--and recently, courts--ought to view remembered experiences of trauma. According to the FMSF, the number of persons who claim to have recovered memories of incest or rape is growing at an alarming rate, due to what they see as therapist-induced pseudo-memories: false memories created through hypnosis or suggestion. And when individuals with recovered memories attempt to seek retribution or punishment for the injustices they recall experiencing--breaking off contact with their families, or in some cases bringing charges against them in court--their decision to do so has a devastating impact on their own and others' lives. According to the FMSF, we should take seriously the possibility--and on their account, probability--that the person suffering from the memory of recovered abuse may not be recounting true experiences.

And that is a legitimate worry. Still, many have--as far as I am concerned, very good--reasons to be worried about this skepticism regarding an individual's ability to really recall past abuses. It is only fairly recently that individuals suffering from abuse have been encouraged to step forward: before the early 1980s, both therapists and the lay public discounted claims of child sexual abuse and "concluded that it rarely, if ever occurred." (5) And a person suffering from or having suffered abuse is far less likely to report it if she or he thinks they will not be believed or supported. Sympathy for the abuser, wanting to forget the incident occurred, and fear about consequences--disruption of family, not being believed, etc.--are common reasons abuses are not reported. (5) In allowing false memory syndrome to be taken as endemic, as commonplace, we once again discredit and silence the voices of survivors of trauma and abuse.

An example of the kind of disempowering and assumptive thinking proposed by the FMSF:

We live in a strange and precarious time that resembles at its heart the hysteria and superstitious fervor of the witch trials of the sixteenth and seventeenth centuries. Men and women are being accused, tried, and convicted with no proof or evidence of guilt other than the word of the accuser. Even when the accusations involve numerous perpetrators, inflicting grievous wounds over many years, even decades, the accuser's pointing finger of blame is enough to make believers of judges and juries. Individuals are being imprisoned on the "evidence" provided by memories that come back in dreams and flashbacks - memories that did not exist until a person wandered into therapy and was asked point-blank, "Were you ever sexually abused as a child?" And then begins the process of excavating the "repressed" memories through invasive therapeutic techniques, such as age regression, guided visualization, trance writing, dream work, body work, and hypnosis. (4)

Conjuring Victorian-era hysteria when discussing women's experiences and women's (in)ability to recount their experiences as rational human beings seems out of line even for psychotherapy, with its roots in Freud. Claiming that therapy is a cohesive and singular methodology rather than a collective of differing and sometimes overlapping practices, and then debunking therapy as misleading and coercive, directly undermines women's ability to speak of and to their experiences, both in therapy and in their day-to-day lives.

Also--as one author puts it--it is curious that the demographic profile of a typical victim of false memory syndrome--"a single, white, middle-class, college educated, aged 25 to 45, economically independent, professionally employed female (McHugh 1993)" (5)--is also the poster woman not just for the False Memory Movement, but for feminism (at least, feminism in its white, North American, middle-class form). If women are specifically not to be trusted in recounting their experiences, particularly traumatic ones, that may be more a reflection of a conservative backlash against feminism and against women than of any allegedly objective scientific data.

We know a lot about memory: and yet, scientific knowledge is not yet precise enough to predict how a certain experience or factor will influence a memory in a given person. (3) Until that day comes--and it may never come--we should remain sensitive to the notion that our memories can deceive us. Even so, we should also take care not to allow some voices to be heard while others are silenced.


References


1) False Memory Syndrome Online.

2) False Memory Syndrome Foundation FAQ .

3) American Psychiatric Association Statement on Memories of Sexual Abuse, 1993, as reproduced in the FMSF Newsletter of Feb. 1994, Vol. 3 No. 2.

4) Remembering Dangerously, by Elizabeth Loftus, a very vocal proponent of the concept of False Memory Syndrome.

5)False Memory Syndrome from Hypatia, a feminist philosophy journal.

6)The National Institute of Mental Health has no listing for False Memory Syndrome as legitimate (hmm..). They have other interesting things, though, related to trauma, etc...


The Myth of the Vaginal Orgasm?
Name: Tegan Geor
Date: 2004-05-14 08:27:28
Link to this Comment: 9864


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In 1970, Anne Koedt published an essay titled The Myth of the Vaginal Orgasm. (1) It is a discussion of femininity as an allegedly biological imperative: since Freud, frigidity (defined as an inability to have vaginal orgasms from sexual intercourse with a man) had been the hallmark of a woman ill-adjusted to her role in society, to childrearing, and to being a woman within the natural order of things most generally. On Freud's account, women who insisted on stimulation of the clitoris during sex--or even, gasp, that doing such might be necessary for a woman's sexual pleasure--were immature. And on Koedt's--and later feminists'--accounts, that was wrong: women should embrace the clitoris as symbolic of their own freedom, and, shall we say, go with it.

It was remarkable how little legitimate research existed in the 1970s to back up claims--Freudian, Feminist, or otherwise--about female sexuality. Even more remarkable, in this day and age of instant information via the internet (and pop-up ads for pornography in any shade you fancy), is how little information is available now about female sexuality in general, and orgasm in particular.

At some points in antiquity, female pleasure was taken to be necessary for procreation--Hippocrates thought so. Aristotle, on the other hand, believed "that a woman had no role in the procreative process." (Hmm...) (3) In 1559, Renaldus Columbus of Padua claimed to have discovered the clitoris. (2) (Much to the surprise of any number of women, I am sure.) And though Pietro d'Abano later in the 16th century seemed to have a decent idea of what the clitoris was about--"Women are driven to desire... by having the upper orifice near their pubis rubbed... For the pleasure that can be obtained from this part of the body is comparable to that obtained from the tip of the penis." (4)--well into the early part of the 20th century, psychologists and physiologists argued about whether women could have orgasms at all.

To date, scientists are unsure of the functions of various anatomical parts of female genitalia. While Koedt in the 1970s rallied women behind the idea that the clitoris was solely responsible for women's sexual pleasure, the 1980s saw a lot of fuss being made about the Grafenberg or G-Spot: an area within the vagina, about two inches up the anterior wall, which, at least by some reports, induces orgasm when stimulated. (5) So, to coincide with the defeat of the ERA, a new theory of female sexuality: vaginal orgasms are possible--on some accounts, "better" and "more satisfying" than clitoral ones--and if you can't have one, perhaps you aren't trying (forgive the pun) hard enough...

The jury is still out on exactly what the G-Spot is: an extension of the clitoris back into the vagina? A gland corresponding to the prostate gland in men? Something else? But the important thing to note is that here again, science follows society, and not the other way around.

A more recent complication in the debate around female orgasms has, interestingly enough, come from research with women who have suffered severe spinal cord injury. Women with spinal cord injuries (paraplegics, etc.) have reported for many years that they could, indeed, have orgasms: and for years they have been told that what they were experiencing was a mental--but not physical--process. And yet, in lab testing, women with complete severing of the spinal cord were able to experience and respond to stimulation of the cervix and vagina. This would mean that some other nerve--the researchers indicated they thought perhaps it was the vagus nerve--"could carry genital sensory information to the brain even if the major spinal cord pathways are interrupted." (6) If that is the case, then perhaps we know even less about female sexual anatomy than we thought.


References

1)The Myth of the Vaginal Orgasm, Anne Koedt

2)The Clitoris In History. Be careful: this link may pop up ads for porn, although the page is just text, and not pornographic in the slightest.


3)History of the Female Orgasm .


4)The Strange History of the Clitoris.


5)The Ultimate Guide to the G Spot, (a picture illustration, but nothing horribly graphic).


6) His and Her Health: Beyond the G Spot


The Giggles Behind Laughter
Name: Maja Hadzi
Date: 2004-05-14 09:46:16
Link to this Comment: 9865


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Bill Gates and the president of General Motors have met for lunch, and Bill is going on and on about computer technology. "If automotive technology had kept pace with computer technology over the past few decades, you would now be driving a V-32 instead of a V-8 and it would have a top speed of 10,000 miles per hour," says Gates. "Or, you could have an economy car that weighs 30 pounds and gets a thousand miles to a gallon of gas. In either case, the sticker price of a new car would be less than $50. Why haven't you guys kept up?"
The president of GM smiles and says, "Because the federal government won't let us build cars that crash four times a day" (1).

Most people will find the above joke funny; however, few will stop to question the mechanics behind this humor, and the laughter it triggers. What is it that makes us laugh? How can it make us healthier? What part of the brain is responsible for laughter and humor? Although the research that has been conducted in the fields of humor and laughter is limited, significant advancements have been made, and additional intriguing questions have been posed.

Laughter is a part of the universal human language. Humans are born with the capacity to laugh, and unlike language, it is not something that has to be learned. This enjoyable and mysterious form of communication is understood cross-culturally and throughout all age groups. More than just a frivolous emotional outburst, laughter has many important functions in human communication, playing roles in social situations ranging from dates to diplomatic negotiations. Studies have confirmed that people are thirty times more likely to laugh in social settings than when they are alone; in fact, we laugh to ourselves even less than we talk to ourselves. Laughter is a message that we send to other people; it is social and contagious (2). Behavioral neurobiologist and pioneering laughter researcher Robert Provine suggests that humans have a "detector" that responds to laughter by triggering other neural circuits in the brain, which, in turn, generate more laughter. In other words, we laugh at the sound of laughter itself (1).

Furthermore, humans love to laugh so much that there are actually industries built around laughter. Jokes, sitcoms and comedians are all designed to get people to laugh. It is interesting to note that, although it can be consciously inhibited, laughter is produced unconsciously. The physiological study of laughter is known as gelotology. Researchers have learned that it involves various regions of the brain. For example, humor researcher Peter Derks traced the pattern of brainwave activity in individuals exposed to humorous material. He found that the left side of the cortex analyzed the words and structure of the material. The frontal lobe of the brain, which is involved in social emotional responses, became increasingly active. The right hemisphere of the cortex carried out the intellectual analysis required to understand the idea. Brainwave activity then spread to the sensory processing area of the occipital lobe, responsible for processing visual signals. Lastly, stimulation of the motor sections evoked physical responses to the joke. Unlike laughter, emotional responses appear to be confined to a specific region of the brain. One would think, however, that emotions would be a much more all-encompassing feature of the brain than laughter. The discrepancy between the way the brain deals with emotion and laughter is a very intriguing one (1).

Although the specific knowledge about the brain mechanisms of laughter is still being worked out, we do know that laughter is triggered by many sensations and thoughts, and that it activates many parts of the body (2). When we laugh, we alter our facial expressions and make rhythmic sounds. The main lifting mechanism of the upper lip, the zygomatic major muscle, is activated simultaneously as fifteen different facial muscles contract. Air intake occurs irregularly due to the half-closing of the larynx by the epiglottis in the respiratory system. In extreme outbursts of laughter, the tear ducts are activated, and in conjunction with the struggle for oxygen intake, the face becomes moist and red. The noises that are associated with laughter range from sedated giggles to boisterous outbursts (3).

Curiously, laughter rarely interrupts the sentence structure of speech. It punctuates speech, and so we only laugh during pauses where we would cough or breathe. Experts say that this suggests the presence of a neurologically based process that governs the placement of laughter in speech, and gives speech priority access to the single vocalization channel (1).

Laughter itself is more than just a physiological response to humor. There are many situations where humans break out in laughter in response to a situation that isn't anything like a joke. Contrary to popular belief, laughter is not about humor; it is about relationships between people (3). Different theories have been developed in an attempt to address the universal and still mysterious question of why we laugh. The three traditional theories are the Incongruity Theory, the Superiority Theory, and the Relief Theory. The incongruity theory suggests that humor arises when the seemly and logical disappear and things that do not normally go together appear; incongruity is the prevalent trigger. There is an insight into something that is wrong, along with the prevailing perception that the situation is, in fact, normal and okay. The superiority theory suggests that we laugh because a particular person or character has a defect or is at a disadvantage. Due to our feeling of superiority in comparison to this person, we feel detachment from the situation and thus have the ability to make fun of it. The relief theory addresses laughter through a system of built-up tension and incongruity: people often store emotions rather than express them, and laughter helps us release the built-up tensions and emotions (1). There are many more theories on why we laugh; these are but a few. As with any theories in psychology, any one theory often gives us part of the explanation, rarely the whole picture.

Most people will agree that we laugh when we find something humorous, yet different reasons exist for what we find to be humorous. Additionally, different things are humorous to us at different stages in life. "Getting" a joke is sometimes easy, sometimes a challenge, and sometimes it never happens. Gender differences are responsible for sexist jokes, which some find funny while others consider offensive. Age also plays a crucial role in differences in humor: there is a certain amount of intelligence involved in understanding a humorous situation, and thus, as we mature, our sense of humor develops as well. The things we find funny as a result of our age or developmental stage seem to correspond with the stressors we experience during that time of our life. More subjectively, our emotions also play a role in determining what we find funny. Some emotions act as deterrents to our sense of humor, while others enhance the mood for laughter. Culture and community have a great influence on one's sense of humor. There are economic, political, and social issues that are easily laughed about within the community itself, but would not be understood anywhere else (3).

When we laugh, we are often communicating playful intent. Shared laughter therefore promotes bonding and unity within the group. People feel more welcome and free to offer suggestions and think out loud. Laughter also opens the door to more real and risky communication by making a humorous exaggeration of a concern at hand (2). Most researchers agree that one of the main purposes of laughter is making and strengthening human connections. As cultural anthropologist Mahadev Apte pointed out, this feedback loop of bonding-laughter-more bonding, combined with the common desire not to be isolated, may be another reason why laughter is often contagious (1).

Recent evidence suggests that laughter evolved from the panting behavior of our ancient primate ancestors. A study of a young bonobo in a German zoo found that when it was tickled, it combined vocalizations and facial gestures much like those made by human infants. The finding suggests that the rules of how emotion is encoded behaviorally were laid down in the common ancestor humans shared with other great apes. However, chimpanzee laughter usually occurs in a different social context than it does in humans. According to Robert Provine, most adult human laughter occurs during conversation without touching, while chimps laugh almost exclusively during physical contact. "A pre-human evolutionary origin for laughter could also explain why it is still present in deaf and blind infants, and why it fulfils the same role, and sounds the same, in people from different cultures," suggested the researchers (8).

For hundreds of years, it has been acknowledged that "laughter is the best medicine." Dr. Lee Berk and fellow researcher Dr. Stanley Tan of Loma Linda University in California have been studying the effects of laughter on the immune system. Their published studies have shown that laughing lowers blood pressure, reduces stress hormones, increases muscle flexion, and boosts immune function by raising levels of infection-fighting T-cells, disease-fighting proteins called gamma-interferon, and B-cells, which produce disease-destroying antibodies. Laughter also triggers the release of endorphins, the body's natural painkillers, and produces a general sense of well-being (5).

Berk and Tan would probably argue that "the biggest benefit of laughter is that it is free and has no known negative side effects." Contrary to this generalization, and though it is usually positive, laughter can be negative too. There is a difference between "laughing with" and "laughing at" someone. When individuals are the "butt" of a joke, they often feel embarrassed, humiliated, and resentful toward the person who told the joke and all of those laughing at it. When humor is used in such a way that feelings of hostility, distress, and general negativity are aroused, there is nothing positive or holistic about it. People who are laughing at others may be trying to force them to conform or to cast them out of the group. In contrast to this negativity, compassionate humor helps bridge gaps between people, break tension, provide hope, and increase positivity in a situation. It is this type of humor that is accepting, mature, healing, and beneficial to our health.

There is no doubt that an honest laugh from the pit of your stomach is therapeutically beneficial to both the mind and the body. However, as with anything else, the benefits of laughter should be reaped in moderation. Otherwise, the line will soon be blurred between an honest laugh and a forced one. Furthermore, 'laughing a problem away' does not confront the issue. Laughter can be usefully applied in an effort to make a problem more approachable; however, it should not be mistaken for a successful resolution to the issue at hand.

References

1)How Laughter Works

2)A big mystery: Why do we laugh?

3)What is Laughter

4)Therapeutic benefits of laughter

5)Humor and Health

6)Laughter really is the best therapy

7)Why do we Laugh?

8)Where did laughter come from?


Altruism
Name: Maja Hadzi
Date: 2004-05-14 10:34:50
Link to this Comment: 9866


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

There is a light in this world: a healing spirit more powerful than any darkness we may encounter. We sometimes lose sight of this force when there is suffering, too much pain. Then suddenly, the spirit will emerge through the lives of ordinary people, who hear a call and answer in extraordinary ways. - Richard Attenborough

The idea of altruism is a perplexing one. Can animals exhibit altruism, or is this a characteristic solely reserved for the human species? Can selfishness be found at the core of even the most selfless actions? Why are some people so selfish that they refuse to share even when they have excess, while others are selfless to the point of risking their own lives for the sake of others?

Altruistic behavior is doing something at one's own expense with the intention of helping another. Altruism is a great mystery of social behavior in animals because it appears to go against basic survival instincts and contradicts our understanding of natural selection. In the biological sense of altruism, it is not the intention of an action that determines whether or not it is altruistic, but its consequence in terms of reproductive fitness. If altruism reduces one's own chances for survival and reproduction, then how did altruistic behavior evolve in the first place, and why has it not been eliminated via natural selection (5)?

If natural selection acts at the level of the individual, then altruism would indeed be a disadvantageous characteristic. However, if we think of natural selection in terms of animals that live in groups, then altruism would be a characteristic beneficial for the survival of the pack. Within each group, altruists are at a selective disadvantage compared to the selfish members; however, the fitness of the group as a whole is enhanced by their presence. Furthermore, it is interesting to observe how sometimes all it takes is a single selfish individual to bring down an entire group of altruists. Since selfish individuals benefit from the altruism of others but do not incur any of the costs, they are in an advantageous position compared to everyone around them who is ready and willing to make a sacrifice (7).
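
The tension described above, with altruists losing within a group while the group as a whole gains, can be illustrated with a small payoff sketch. The payoff values (benefit of 2, cost of 1) are illustrative assumptions, not figures from the sources cited here:

```python
# Within-group vs. between-group effects of altruism.
# Each altruist pays cost C and produces benefit B shared by the group;
# selfish members receive the shared benefit without paying anything.

def payoffs(n_altruists, n_selfish, benefit=2.0, cost=1.0):
    group_size = n_altruists + n_selfish
    shared = benefit * n_altruists / group_size  # benefit each member receives
    altruist_payoff = shared - cost              # altruists also pay the cost
    selfish_payoff = shared                      # free riders pay nothing
    mean = (n_altruists * altruist_payoff
            + n_selfish * selfish_payoff) / group_size
    return altruist_payoff, selfish_payoff, mean

# Within a mixed group, selfish members always out-earn altruists...
a, s, mixed_mean = payoffs(n_altruists=5, n_selfish=5)
print(s > a)                 # True: the free rider does better individually

# ...yet a group of pure altruists out-performs a mixed group on average,
# which is why group-level selection can still favor altruism.
_, _, pure_mean = payoffs(n_altruists=10, n_selfish=0)
print(pure_mean > mixed_mean)  # True: the altruistic group does better
```

The sketch captures both halves of the argument: selfishness wins inside any one group, but groups with more altruists have higher average fitness.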

Hamilton's hypothesis of kin selection can also be used in an attempt to explain altruistic behavior in animals. Kin selection predicts that animals are more likely to behave altruistically towards members of their family than they are to unrelated members of the same species. Moreover, it predicts that the degree of altruism will be greater the closer the relationship. This theory proposes the idea of an altruistic gene that causes an organism to behave in a way which boosts the fitness of its relatives. The goal of a gene is to maximize copies of itself in the next generation, and one useful technique is to increase the probability of the animal behaving altruistically towards other bearers of the gene. Furthermore, increasing the number of copies of the altruistic gene in the next generation increases the incidence of the altruistic behavior itself (7).
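
Hamilton's hypothesis is often condensed into the rule rB > C: an altruistic gene can spread when the benefit B to the recipient, weighted by the coefficient of relatedness r, exceeds the cost C to the actor. A minimal sketch of that rule follows; the benefit and cost values are illustrative assumptions:

```python
# Hamilton's rule: an allele for altruism is favored when r * B > C,
# where r is the coefficient of relatedness between actor and recipient,
# B is the fitness benefit to the recipient, and C is the cost to the actor.

def altruism_favored(r, benefit, cost):
    """Return True if Hamilton's rule predicts the altruistic act is favored."""
    return r * benefit > cost

# Standard relatedness coefficients under diploid inheritance.
RELATEDNESS = {
    "identical twin": 1.0,
    "full sibling": 0.5,
    "half sibling": 0.25,
    "first cousin": 0.125,
    "unrelated": 0.0,
}

# Same cost and benefit, different relatives: the closer the kin,
# the more readily altruism is predicted to evolve.
for kin, r in RELATEDNESS.items():
    print(kin, altruism_favored(r, benefit=3.0, cost=1.0))
```

With these numbers, sacrificing for a sibling is favored (0.5 × 3 > 1) but sacrificing for a cousin is not (0.125 × 3 < 1), which matches the prediction that the degree of altruism tracks the closeness of the relationship.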

Neither group-level selection nor kin selection explains altruistic behavior among non-kin. In an attempt to explain observations of non-kin selflessness, Trivers proposed the theory of reciprocal altruism. It states that it can be beneficial for an organism to act altruistically because doing so increases the chances of the favor being returned in the future. The cost of the risk taken in behaving altruistically is therefore offset by the likelihood of return benefits (7).

Some people, however, believe that these theories, in attempting to explain altruistic behavior, in fact devalue altruism altogether. Truly altruistic actions are done solely for the benefit of the recipient, without one's own interests in mind. The logic of kin selection, however, is to selfishly increase the representation of a gene in the gene pool. Reciprocal altruism also seems to "take the altruism out of altruism": prioritizing someone else's interests in order to ensure return benefits later is just a delayed form of self-interest (5). Therefore, if by 'real' altruism we mean altruism done with the conscious and sole intention of helping someone else, then the vast majority of living organisms are not capable of 'real' altruism.

Also, one must not confuse the action of altruism with the emotion of compassion, for the two are not synonymous. Not all altruistic acts are performed out of compassion, and not all compassion leads to altruism (6). Reciprocal altruism and kin selection are helpful in explaining some human behaviors; however, they cannot be applied to all human altruistic behavior. It is clear that humans behave more altruistically toward their own family members, and that they are more likely to help someone who has helped them in the past. However, many human behaviors seem inconsistent with the biological explanations. Take adoption, for example: parents reduce their own biological fitness for non-kin in exchange for, biologically speaking, insignificant reciprocal benefits compared to the sacrifice (5). For most people, the sight of a drowning child will propel them to jump to the child's rescue. This altruistic action is not the result of some long-term calculation, but simply the effect of an internal emotion that precedes reason and immediately concludes that it is the right thing to do (6). Furthermore, there are those few giving souls who accept altruism as the core of their existence, the epitomes of which are Gandhi and Mother Teresa (2).

Compared to all other animals, however, human behavior is influenced by culture to a much greater extent, and is shaped by conscious beliefs and desires. Furthermore, altruism is at the core of most religions and even of secular moral philosophies. Every major religion and most philosophies have independently come to the conclusion that the best way to live life is to "do unto others as you would have them do unto you." It is possible that, over time, human altruism has progressed into a rewarding behavior almost completely independent of its original biological motive. For example, pleasurable sex was, at its biological core, maintained because individuals who enjoyed engaging in such behavior had more offspring. Today, however, human pleasurable sex has come to represent many concepts that go beyond the biological basis of procreation, such as the sharing of intimacy, the expression of love, mutual reassurance, and an antidote against loneliness (6). So is it possible that humans are born 'selfish' and that altruism is a characteristic they learn through society and culture? From the range of altruistic behavior humans display, it is clear that society plays a critical role in shaping one's altruistic inclinations. On the other hand, since even insects such as ants, which are said to be incapable of conscious thought, exhibit altruistic behavior, I am led to believe that there is some bare biological basis for such actions, which is refined through human nurture.

That people do not consider the interests of others as readily as their own interests is perfectly understandable; the value of our own interests is more readily apparent to us than the value of others' interests (1). Furthermore, in certain circumstances, it is difficult to make a judgment call on what is helpful to others. Suppose that someone is trying to kill himself. Is it helpful to help him kill himself, or is it helpful to stop him from killing himself?

Marx defines a human being as an "ensemble of social relations." As human beings, our effect on the environment lasts well beyond our lifetime (3). This includes everything from the memories we have created, through the books we have written, to the smallest chemical redistribution our bodies may have caused. Since it is our impact on the world that lasts, we should make that impact as beneficial as possible. We honor altruistic and brave actions even if we are unable to explain them or emulate them ourselves. The man who puts his life at extreme risk by jumping off a bridge to rescue a stranger struggling in the freezing water below is a hero, not a fool, and his actions are completely admirable. The person who doesn't stop to think and just rushes off to the rescue is the person all of us would like to be. We mutely sense that if we could all live like this, the world would be a better place. Altruism is beautiful: a beautiful idea that inspires us when other people act upon it and makes us feel very good about ourselves when we act upon it. That is all we know for certain. Despite all of the theories that attempt to explain it, altruism remains a mysterious phenomenon that stems from some combination of nature and nurture.


References

1) An Argument for Altruism

2) Altruism

3) Why Altruism?

4) Why not Altruism?

5) Biological Altruism

6) The Problem of Altruism

7) The Evolution of Altruism

8) Altruism

9) Altruism: Selfless or Selfish?


Ignorance is Bliss - Placebo's and Their Effect on
Name: Katina Kra
Date: 2004-05-14 16:47:44
Link to this Comment: 9876


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Throughout time, people have placed faith in herbs, draughts, foods, and chemicals as medications for the treatment of the diseases and disorders that affect them. The Western tradition of medicine has been systematic in its exploration of the body and of the ways natural and synthetic compounds interact with and heal the body and mind. In many cases, however, there is reason to believe that it is not the medication of the body, but rather the expectations and perceptions of the mind, that alleviates a condition's symptoms. Inert treatments – known as placebos – have now gained ground as both a psychological and a physical tool. From the puzzling logic behind the psychology of their effectiveness and the connection they forge between body and mind, to their role in developing new drugs, testing efficacy, and changing brain activity and function, placebos continue to grow in importance within the medical community for both research and use.

From ancient Chinese acupuncture therapy to modern "sham" surgeries, such procedures have proven effective in their ability to improve health. (7) In understanding what placebos are and what they accomplish, a clear idea of their effects must be established. The placebo effect is a phenomenon in which a patient's symptoms are relieved by a fake treatment or medication because the individual expects or believes it will work. (6) If patients believe that what they are given will truly help and heal them, their symptoms and/or their outlook and mood about their illness frequently improve. Even when the individuals in a study are given the placebo, improvements often occur if they are told it is a functional medication, despite the absence of the active ingredients found in true medications. (8) In modern medicine the placebo phenomenon still mystifies doctors, psychologists, and everyday people with its unique ability to heal through nothing more than the belief that something will help.

This wonderment comes from the idea that something as simple as a sugar pill can heal and help someone. It is this self-induced healing from placebos that draws the scientific community's interest to how the mind functions and creates reactions to a placebo. In trials of different medications, rates of improvement on the placebo alone run between twenty-five and sixty percent – a significantly higher rate than one might expect from random natural recovery. (4) With the continuing push toward new and better medications, recently developed medicines must go through trials in which their efficacy is compared to that of a placebo. If the active medication shows a significant increase in efficacy over the inert placebo, the drug will often move on to further testing, and may eventually be released for public consumption. Only drugs that are demonstrably more effective than a placebo are released onto the public market. (8)
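The comparison such trials make is, in essence, a test of whether the drug's response rate exceeds the placebo's by more than chance would explain. A minimal sketch of one standard approach, a two-proportion z-test, with entirely hypothetical trial numbers:

```python
import math

# Two-proportion z-test: does the drug arm's response rate exceed the
# placebo arm's by more than sampling noise? The trial counts below are
# hypothetical, chosen only to illustrate the calculation.

def two_proportion_z(responders_a, n_a, responders_b, n_b):
    """z-statistic for the difference between two response rates."""
    p_a, p_b = responders_a / n_a, responders_b / n_b
    pooled = (responders_a + responders_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 50 of 100 respond to the drug, 30 of 100 to placebo.
z = two_proportion_z(50, 100, 30, 100)
print(round(z, 2))  # |z| > 1.96 suggests a real difference at the 5% level
```

Here z is about 2.89, above the conventional 1.96 cutoff, so a trial with these (made-up) numbers would count the drug as significantly more effective than the placebo.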

Voltaire, the noted writer and philosopher of the eighteenth century, once said that "the art of the physician is to amuse the patient while nature cures the illness." (7) Does this imply that the placebo effect is actually just a natural cure, or the natural course of an illness? Do placebos only help psychosomatic problems – conditions that our own minds make up? Studies often cannot separate the natural occurrence of recovery from disease from the improvement seen in a placebo group. Such non-medically induced recoveries show that the body will often heal itself without the use of medications; alternatively, the false hope instilled by a placebo may replicate this natural route to recovery. Studies done on physical disorders have shown that it is not only depressive or mental disorders that benefit from placebo use. A study of Parkinson's patients showed that placebos and medications generated similar dopamine responses, dopamine being a key chemical that is lost through neurodegeneration. (4) One must still question, though, how and why the mind can make this "cure."

Placebos are an elaborate deception, as they trick the mind into releasing needed chemicals through the promise of relief or recovery. This "lie" may be a promise of better mood, improved health, or freedom from pain, and it is sent to the brain. It is interpreted and acted upon differently than true medication, yet it still benefits the patient. The recovery a placebo produces only further proves what a remarkable organ the human brain is. The mind not only comprehends the outside world, its own actions, and how it must react to thrive in its environment, but also every minute detail of the inner workings of the body. With placebos, the mind interprets a message of expectation and improvement from the compounds believed to be in them. Even without an anticipatory message in the brain, complex interactions of chemicals found within our bodies can create a new way of dulling pain or changing moods. What is produced is distinctly different from a chemical medication, but often produces similar results. Science still cannot explain precisely what the brain does to allow the placebo effect to occur. It has been shown, though, through extensive studies and research, that the mind can heal the body, reasserting the use for placebos within modern society. (7)

Positive thinking and faith have always been crucial aspects of healing and recovering from illness. Holistic treatments and medications are popular in the modern world, as they were in ancient times, and are often used instead of active drugs with proven efficacy against diseases such as cancer. These medications are developed from simple, natural sources such as herbs, plants, or berries, and are purported to aid recovery. The supposed "quackery" of these methods, demonstrated in scientific studies showing such natural medications to be ineffective, often goes ignored, because people choose to believe in their ability to help. (7) Many patients thus go against what modern medicine does and believes: they use these natural methods and actively hope for recovery.

What happens, then, when someone is unknowingly given a placebo and told of side effects that may result from taking the supposed medication, or of other negative outcomes? Often, people will show many of these symptoms, despite the placebo containing no active ingredient that could cause them. The mind is able not only to heal itself but also to sicken itself; there is an immense desire to believe what has been said. This is known as the "nocebo" effect. (6) Medical students in training often develop this problem: as they read about new diseases and their symptoms, they begin to analyze themselves for any possible signs of trouble, or actually come to believe they have the disease. Cultural factors can also influence how far the nocebo effect can go, as in cases of voodoo witchcraft, where a cursed person may actually die from the fear resulting from a curse of death placed upon them. (7) The idea that mental stimulation, such as talking with a doctor or researcher, can produce either positive or negative effects even though the treatment is inactive is a curious one, raising further questions about the extent to which the mind can control the reactions of the body.

The psychological explanation for the effectiveness of placebos goes beyond want, need, and hope. Several basic psychological mechanisms could contribute. In testing situations, patients often come into the treatment with high expectations of success and recovery, and so do not wish to believe that the treatment may not work. With a positive outlook, changes in behavior and symptoms can occur regardless of whether the patient is taking real medication or a placebo. Often, people with depression, asthma, or other ongoing medical problems learn over time what treatment options they have. By going through other successful or unsuccessful treatments, a patient develops a conditioned response, much as in Ivan Pavlov's experiment with bells and salivation, and the same reaction occurs for any kind of treatment. If a treatment or pill worked before, one is unknowingly conditioned to believe in its effectiveness. This is supported by the observation that infants, who have no prior experience or conditioning, are unresponsive to placebo treatments. Anxiety is a problem for many people, and studies have shown that simply knowing there will be something to help them reduces anxiety levels, improving immune function and increasing endorphin release. These changes can lead to significant shifts in patients' moods and behaviors. (8) The exact neurological and functional reasons for placebo efficacy, however, are still unknown, and remain a highly studied and controversial area of medicine and psychology.

Comparative studies are the most common method of gathering information about placebos. Volunteers participate in a trial in which they receive either an active medication or the placebo. A common measure, such as pain level, is then identified, and the results are compiled and compared to determine whether the placebo shows efficacy comparable to the active medication. In recent brain scan studies comparing depressed patients taking either a commonly prescribed antidepressant or a sugar pill, the results were remarkable. Quantitative electroencephalography showed that patients who received the active medication had a decrease in prefrontal cortex activity, where receptors for information processing and memory are located. In those who received the placebo, by contrast, activity in the frontal area increased. Another important difference was the time span over which these changes occurred: for those taking the placebo, the effects did not show on the brain scans until approximately two weeks later, while the active medication produced immediate changes. Over 30% of depressed patients taking the placebo reported a reduction in depression and improvement in mood, while over 50% of those taking the active medication did. (1, 3, 5)

Many other studies show similar findings. In one placebo pain study, patients were given an electric shock and then a placebo pain-relief cream, and their pain responses were measured. Brain activity appeared in the region housing the brain's own painkilling mechanisms, the same area the actual medication activates. (2) Placebos can also stimulate release of the same chemicals as drugs, as with dopamine in the Parkinson's placebo research, where the placebo and the true medication proved equally effective. (4) Some placebo studies, like those of "sham" surgeries, are so successful that their effectiveness far outstrips that of true surgery; this is especially true of heart surgical procedures. (7)

The duration of a placebo's effectiveness, though, is what creates apprehension and disbelief in many people. Even with proven short-term positive effects, it has been shown that the efficacy of placebos drops significantly with long-term use. Is the mind incapable of holding the expectation of health and recovery for long periods of time, or are there other conditions limiting how long placebos can be effective? (4, 5) It is these disenchanting results that push research into the placebo effect forward.

Many questions about placebos and their effectiveness remain unanswered despite years of study and research. It is the psychology of the mind that makes the short-term placebo effect so positive. Yet, alongside modern technology and medicines that have been shown to treat disease effectively, placebos continue to work in a mysterious way that cannot be easily understood or explained. Even with this lack of understanding of the physical basis of their efficacy and of the changes in mind and body they produce, placebos have been, and will continue to be, influential in modern medicine, drug research, and testing.

References

1) New Scientist, an article about the results of placebo brain scans.

2) CNN News, an article about pain-related placebo studies.

3) UCLA's site, an article concerning the brain scan study results.

4) Nature, an article about Parkinson's placebo studies.

5) The American Journal of Psychiatry, an article regarding the brain scan placebo study.

6) Wikipedia, a definition of and information regarding the placebo effect.

7) A comprehensive paper concerning the history of placebos and the effects of beliefs.

8) A site covering the definitions, ethics, and psychology of placebos, and the results of placebo studies.


"Neural Darwinism": A Revolution in Intelligence
Name: Ghazal Zek
Date: 2004-05-14 16:49:10
Link to this Comment: 9877


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Charles Darwin, the father of evolution, wrote in his Autobiography (1887), "If I had to live my life again I would have made a rule to read some poetry and listen to some music at least once a week; for perhaps the parts of my brain now atrophied could thus have been kept active through use." (1) As sons and daughters of a great theorist, it may behoove us to follow Darwin's advice. While the "use it or lose it" concept has been around for some time, scientific evidence is now surfacing that suggests one has control over one's intellectual fate. Questions thus arise: how much of the brain is hardwired by genetics, and how much is under our control? New evidence suggests that until the brain has fully matured, its fate is at least partly under its owner's control. The implications of this are enormous. Not only does this tell us that Darwin was right in his presumptions about the brain, but it could also mean that we need to rethink past notions of how to measure intelligence, and therefore of what intelligence is. Just as Darwin questioned the beliefs of his forefathers, it may help us, too, to question modern theories of intelligence.

Neural Proliferation and Pruning

It will first help us to examine the data of Dr. Jay Giedd of the National Institute of Mental Health. For the past 13 years, Giedd has examined the brains of 1,800 children and teenagers using high-powered magnetic resonance imaging (MRI). (2) Giedd's data is revolutionary in that it shows that the brain undergoes two major developmental spurts, one in the womb and one from childhood until the teen years. The second developmental spurt occurs in two steps: nerve proliferation and "pruning." Between the ages of 6 and 12, neurons grow bushier, creating new pathways for nerve signals by making dozens of connections to other neurons. The thickening of gray matter peaks at about 11 in girls and 12 ½ in boys. After this proliferation period, a significant amount of pruning occurs, in a process Gerald Edelman, a Nobel-prizewinning neuroscientist, describes as "neural Darwinism," that is, survival of the fittest (most used) synapses. (2) While gray matter is thinned out at a rate of about 0.7% per year, fading into the early 20s, the brain's white matter thickens, making nerve-signal transmission faster and more efficient. (2) Perhaps here we can amend "use it or lose it" to "use it or lose it; use it often, and strengthen it."
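The "survival of the most used synapses" idea can be caricatured in a few lines (a toy illustration only, not a biological model): connections accumulate use over time, and pruning keeps only those above some threshold.

```python
# Toy caricature of "neural Darwinism": the most-used connections
# survive pruning; rarely used ones are eliminated. The pathway names
# and use counts are hypothetical.

def prune(use_counts, threshold):
    """Keep only the connections whose use count exceeds the threshold."""
    return {name: uses for name, uses in use_counts.items()
            if uses > threshold}

# Hypothetical tallies of how often each pathway was exercised:
synapses = {"music": 42, "math": 17, "rarely_used": 1, "sport": 9}
print(prune(synapses, threshold=5))  # the rarely used pathway is pruned away
```

The point of the caricature is only that pruning is selective with respect to use, which is why "use it often, and strengthen it" is a reasonable gloss on the data.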

It was previously believed by most scientists that the brain is a finished product by age 12. Indeed, at 12 the brain is fully grown in size, and according to Swiss psychologist Jean Piaget, whose work is widely cited in the psychological literature (2), the final stage of cognitive development, "formal operations," is reached during early adolescence. (3) Fortunately, there has recently been an onslaught of new data offering much more conclusive information about the development of the brain.

The brain matures in a sequence from back to front

Collective scientific data suggests that the brain matures in a sequence that moves from the back of the brain to the front. (2) The cerebellum, believed to support activities of higher learning such as mathematics and social skills as well as physical coordination, is also believed to play a role in regulating certain thought processes. It is the only part of the nervous system that grows well into the early 20s, changing quite noticeably throughout adolescence as the number of neurons and the intricacy of the connections between them increase. It is interesting to note that the cerebellum is more sensitive to environment than to heredity. The corpus callosum, a bundle of nerve fibers that connects the left and right hemispheres of the brain and is believed to be involved in problem solving and creativity, thickens during adolescence and processes information more efficiently as it matures.

The amygdala is a structure in the temporal lobes associated with emotional and instinctive reactions. Using functional magnetic resonance imaging (fMRI), a tool that shows brain activity while subjects perform a specific task, Deborah Yurgelun-Todd of Harvard's McLean Hospital was able to show that adolescents rely heavily on the amygdala, whereas adults rely more on the frontal lobe, a region associated with planning and judgment, in making decisions. Yurgelun-Todd conducted her experiment by asking subjects to identify the emotions shown in photographs of faces. She discovered that while the adults made few errors, children under 14 tended to make many more mistakes. As the children grew older, however, brain activity shifted toward the frontal lobe, leading to more reasoned responses and thus fewer mistakes. (4)

The prefrontal cortex controls "executive functions" such as planning, thought organization, weighing consequences, and setting priorities. Incidentally, it is the last part of the brain to mature. The basal ganglia are tightly connected to the prefrontal cortex and help it prioritize information. They are also active in small and large motor movements, and thus Giedd suggests that exposing preteens to music and sports may be helpful while connections are still being made. (2) One common criticism of this idea is that, for the time being, it really applies only to middle- and upper-class children. Children of lower-income households most often do not have the same opportunities and thus may not be able to be immersed in the same kinds of activities.

Knowing how and when parts of the brain mature is an extremely useful and empowering notion. While there are many ongoing debates on the topic of "nature vs. nurture," Giedd and colleagues' results show that no healthy individual has to be bound by their genetic makeup. Additionally, since research in this field is just beginning to emerge, there is still much to be uncovered. It is possible that in the future, information garnered in this field will place a lot more importance on the preteen and adolescent years. (4)

Questioning our modern beliefs

So how does this new data change previous theories of intelligence and of how it should be measured? First let us define the modern understanding of intelligence, and then let us criticize it. The dictionary defines intelligence as 1) the capacity to acquire and apply knowledge; 2) the faculty of thought and reason; 3) superior powers of mind. (5) Applying our current understanding of brain development, we see that the dictionary definition is no longer completely accurate. The first definition assumes that the heart of intelligence is a capacity for learning and applying knowledge. Our new concepts of pruning and proliferation suggest that one has active control over that "capacity" for learning, keeping in mind that the level of control may be limited by circumstance. Nonetheless, if we keep in mind that the average human has 100 billion neurons (6), with 1,000 to 10,000 synapses per "typical" neuron (7), we can only imagine how great the potential for knowledge must be. The definition of intelligence therefore becomes very cloudy here. Should there be different definitions for individuals with mature vs. immature brains? Many proponents of evolution would consider Charles Darwin to be highly intelligent; however, even he was aware that in his old age his capacity to acquire and apply knowledge had diminished through the "atrophying" of his brain.
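The scale behind that "potential" is easy to make concrete: taking the cited figures of roughly 100 billion neurons and 1,000 to 10,000 synapses apiece at face value gives on the order of 10^14 to 10^15 synapses.

```python
# Back-of-the-envelope multiplication of the figures cited above.
neurons = 100e9                      # ~100 billion neurons
low = neurons * 1_000                # 1,000 synapses per neuron
high = neurons * 10_000              # 10,000 synapses per neuron
print(f"{low:.0e} to {high:.0e} synapses")  # 1e+14 to 1e+15
```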

Perhaps assessing the ways intelligence is tested will shed light on the situation. Although there are many different methods of testing intelligence, testing the "intelligence quotient," or IQ, is by far the most common. Intelligence testing has become so mainstream that websites offer "The Brain Test" and even "PhD certified" IQ tests (alongside "What breed of dog are you?") (8) for entertainment. Referring back to our dictionary, IQ is defined as the ratio of tested mental age to biological age. (5) It is significant that the IQ test takes age into consideration. The validity of IQ testing has been questioned for some time, for many reasons, and new information about the brain's development spawns even more questions about how to measure intelligence. Since we are only beginning to uncover the details of how our brains work, it is perhaps best that we hold as little stock as possible in any of the previous constructs for testing intelligence.
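The ratio definition just quoted is simple to state concretely. A sketch of the classical "ratio IQ" (modern tests actually use deviation scoring, but the ratio of mental age to chronological, i.e. biological, age captures the original idea):

```python
# Classical "ratio IQ": tested mental age over chronological age,
# scaled by 100. The ages in the examples are hypothetical.

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Return 100 * mental_age / chronological_age."""
    return 100 * mental_age / chronological_age

print(ratio_iq(12, 10))  # a 10-year-old testing at a 12-year-old level -> 120.0
print(ratio_iq(10, 10))  # testing exactly at one's age -> 100.0
```

This makes visible why the definition takes age into account: the same raw test performance yields a different quotient at different ages, which is one of the reasons the construct has been questioned.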

In essence, trying to define intelligence has become a daunting task. It is important not to view intelligence as a trait subject to Mendelian genetics. Certainly the debate concerning nature vs. nurture need not be put to rest; however, it may be more rewarding to construct a view of intelligence akin to a view of "the self." Whereas societal norms exist for "average" intelligence, there are no norms for one's self. Clearly, a society will hold its own ethics and morals which individuals must abide by, but at the same time it is acknowledged that everyone is different and has their own "self." Perhaps intelligence is just as indefinable as the concept of the self. Perhaps we would fare better if we treated intelligence as a concept rather than a concrete number. If we believe the statement "I am, and I can think; therefore I can change who I am" to be valid (as opposed to Descartes' ditty), and we acknowledge that intelligence is a part of the self, then we can support our claim that we have a level of control over our intelligence. What an empowering thought!


References

1) Brain Quotes

2) "What Makes Teens Tick," Claudia Wallis. Time Magazine, May 10, 2004.

3) Mind-Brain.com: Piaget, background on early ideas about cognitive development

4) NIMH: Teen Brain, NIMH article on the development of the teen brain

5) The American Heritage College Dictionary, Fourth Edition. Boston: Houghton Mifflin, 2002.

6) Neuron Facts, Washington.edu site containing brain facts

7) Serendip: Fun Facts, Bryn Mawr College site containing brain facts

8) Tickle.com: Online quizzes


What about the Placebo effect?
Name: Amar Patel
Date: 2004-05-14 17:51:19
Link to this Comment: 9879


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Since long before the introduction of medicine in any recognizable form, humans have believed in the existence of spirits or external forces that take control of the body during a disease. The advent of modern science has further developed this notion of external forces. These "external forces" have now transformed into chemical imbalances, viral or bacterial infections, and even unhealthy diets. One begins to notice that these are not as much external forces as they are internal, as their existence arises out of human psychology rather than physiology.

Belief in the internal forces allows one to realize that psychological factors control one's health; it is not a stretch of reason to conclude that these factors can be collectively termed the mind. The mind, interpreted through a scientific lens, is responsible for all human thought and action. In psychology, we see the evidence supporting the claim that our subconscious mind controls all of our thought, action, and sensory perception, as seen in the phenomenon of phantom sight. With control over all these fundamental aspects of a being, would it be much of a leap to suggest that the mind controls health? According to modern science there is still some room for doubt, even though recent evidence is beginning to favor the mind.

An examination of the mind's control over one's health can be narrowed to a more specific field. The dominant subject of this "mind over matter" issue is the medical phenomenon known as the placebo effect. Although the placebo effect can carry a variety of meanings, we can examine it from its biological aspects, with some regard to the overarching philosophies it implies. Taking the biological approach, one notices two basic views: for and against a "valid" scientific basis for the effect. Arguing for a valid scientific basis must take into consideration the idea that the placebo effect can operate in different areas of science, from the treatment of behavioral conditions to drug tests.

Looking specifically at its impact on drug research, one finds some support for the claims of validity. Such studies must, by order of the FDA, treat the effect as a control in the laboratory sense. This status as a "control" demonstrates how consistent an occurrence it is. As Harvard Medical School's Ted Kaptchuk claims, "...the drug industry attempts to minimize the placebo effect to demonstrate the usefulness of medications." (1) The drug industry understands that the placebo effect can play as crucial a role in the healing process as other medical treatments.

Many have similar doubts about pharmaceuticals as they have about placebos. According to a study by the U.S. Office of Technology Assessment, "...only about 20 percent of modern medical remedies in common use have been scientifically proved to be effective; the rest have not been subjected to empirical trials of whether or not they work and, if so, how." (3) This adds another twist to the arguments over the validity of placebos. If placebos are just as functional as modern drugs, then the drug companies need to answer the question of whether their remedies are necessary.

In studies conducted for depression, as well as for other ailments, placebos have been seen to work in 50% of cases. (2) Although these results sound promising, there is also a large amount of criticism concerning their validity. Some skepticism stems from experiments claiming that the placebo effect has worked on patients with cancer or other physically life-threatening illnesses. These cases weaken the argument for effectiveness because of their lack of detailed empirical evidence. (2)

The primary reason behind this skepticism is a lack of biological proof, a position that has recently been challenged in some prominent research studies. Recent advances in biological imaging have finally led to some understanding of the effect's neurological foundation. The long-standing theory behind the biological basis of the placebo effect is that it releases endorphins in the brain. Endorphins are defined as "Naturally occurring molecules made up of amino acids. Endorphins attach to special receptors in the brain and spinal cord to stop pain messages. These are the same receptors that respond to morphine." (4) Because of the similarities to reactions created by morphine, researchers have coined this theory the "opioid model". (2)

Through the use of modern technology and other research techniques, a few studies have attempted to isolate the portions of the brain associated with the release of these endorphins, or with the placebo effect generally. Using positron emission tomography (PET) imaging, researchers were able to determine that activation of the prefrontal cortex is associated with diminished pain and the beginning of a healing process. (5)

Other studies have shown how the placebo effect can mimic responses in the brain similar to those produced by chemicals. A landmark study conducted with an antidepressant (fluoxetine) showed that brain activity under placebo was identical to that under the drug. (5) Further PET imaging has shown how the placebo effect can be as effective as drug treatment. In an opioid treatment study, PET imaging showed that both the drug-responding and placebo-responding patients "...show brain activity [that] was most pronounced in the rostral anterior cingulate cortex..." (1) With such clear evidence of effects in the same areas of the brain, it is difficult to understand the skepticism. One study goes as far as stating that drugs and placebos affect the same neurological sites, the placebo doing so through "induced thought." (5)

The other leading theory for a biological basis is the behaviorist approach. This model, called the "conditioning model," centers on the idea that the environment, or even the act of taking some physical medication, is what helps ease the patient's ailment. (2) The notion of "induced thought" returns us to the idea that thoughts bear on the involuntary healing processes of the body. From a neuroscientific viewpoint we know that the brain extends its control over various systems in the body. If one examines the relation between stress and immune response, there is ample evidence that the two are inversely related. In biology, such relations cross the divide between voluntary and involuntary systems, and this division in the nervous system suggests the notion of a split brain.

The concepts of the conscious and subconscious developed by the psychologist Sigmund Freud relate to this notion of a split brain. The constant battle between the ego (conscious thought) and the id (subconscious thought) is what he believed composed all human behavior. (6) Battles between the ego and id are seen in the placebo effect through the conditioning model: in accordance with the theory, it is the environment and physical actions that "trick" the ego into believing it has control over the healing process.

Freud's division between the ego and id relates to another theory, developed by Paul Grobstein, Ph.D. When one looks at the brain and its general function as an input/output mechanism, one can interpret it through a "box" theory. (7) In this theory, the mind is a box in which a stimulus (input) travels through a complex pathway and appears as some output. Many other intricacies, such as inputs that produce no output, or outputs that arise without inputs, are explained through self-initiating boxes within the "mind" box. Additionally, there is an I-box, the section of the nervous system that corresponds to consciousness, where an individual holds his or her sense of "self." (7) The I-box is what Freud would see as the home of the ego, and the mind box as the home of the id.

Our I-box holds what we call "rational thought." In the conditioning model's explanation of the placebo effect, rational thought is exactly what is being targeted. This "rational" mind is what needs to be fooled by a placebo in order to enable the mind's healing process. One possible reason the placebo effect works only in some cases and not in others is the difficulty of tricking the I-box.

The conditioning model suggests that the environment plays a crucial role in producing promising results from the use of placebos. In a summation of research studies on simple placebo-based techniques, one can note that certain "symbols and rituals of healing - the doctor's office, the stethoscope, the physical examination - offer reassurance." (3) The real answer lies in the core of the mind, where, through different nurturing stimuli, one can see natural healing take control.

Analysis of the placebo effect reveals ethical dilemmas from the beginning. Most of these dilemmas have stemmed from biological uncertainty as to how certain chemical responses are triggered. Modern research techniques have now been able to narrow down exactly where the effect occurs, and perhaps this can lead to a biochemical analysis of its pathways. The interesting idea is that approaching this effect from the classical psychological view provides far greater support. Ignoring the molecular aspects and concerning ourselves with the "mind over matter" ideal, we can easily see the power that our own subconscious has over the body's repair systems. As stated previously, given such strong agreement about the mind's role in thought, memory, and most behavior, it is not a stretch to see its role in healing.

References

1) The Scientist.

2) Modern Drug Discovery, July/August 1999.

3) "The Placebo Effect." Scientific American, January 1998.

4) Definition of endorphins.

5) Lieberman, Matthew D., et al. "The Neural Correlates of Placebo Effects: A Disruption Account." UCLA, January 2004.

6) Interpretation of Freud's work: Domhoff, G. W. (2000). "Moving Dream Theory Beyond Freud and Jung." Paper presented to the symposium "Beyond Freud and Jung?", Graduate Theological Union, Berkeley, CA, September 23, 2000.

7) Grobstein, Paul. "Getting It Less Wrong, the Brain's Way: Science, Pragmatism, and Multiplism."


Pinpointing one's "Self"; Revealing Identity via t
Name: Shadia
Date: 2004-05-14 20:53:37
Link to this Comment: 9883

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The Internet is a multifaceted thing, used simultaneously as an educational tool, a business market, and a recreational device. Most would agree that its benefits (instant communication, a wide array of resources at one's fingertips, and anonymity) far outweigh any pitfalls. Unfortunately, though, the relative newness of the Internet brings a variety of relatively new problems.

The Web has been instrumental to our exploration and understanding of neurobiology and behavior over this past semester. The online forum has facilitated an exchange of ideas, and the shared experiences have fostered an atmosphere of trust, comfort, and freedom. But how would you feel if you suddenly learned that all of a student's posts had been fabricated? How has virtual communication contributed to the identities you attribute to those around you? In my previous paper (9), I discussed the intricacies of "trust" and "the self" as they relate to the factitious disorder known as Munchausen Syndrome. Here I explore the Internet's contributions to this disease and to perceptions and projections of the "self".

1997 saw the emergence of a new variant of the Munchausen disorder, termed "Cybermunch" or "Munchausen by Internet" (5), (2). Sufferers join chat groups or forums, often assuming entirely new identities. After they gain the other members' trust, they begin to divulge the unusually catastrophic series of events that constitutes their lives. Dr. Marc Feldman, a premier expert on factitious disorders, claims that "the Internet was made for such fakers" (5). In order to credibly portray themselves as ill, Munchausen sufferers often need to research the medical literature. The Internet conveniently allows them to become experts on obscure diseases with the click of a mouse while avoiding the necessity of reproducing any symptoms. It is difficult to correctly assess the frequency of such cases, since a single sufferer can join a multitude of groups. If and when suspicion is raised, the perpetrator can rely on anonymity and seek instant escape into the world of cyberspace.

Perhaps the most widely publicized case of Munchausen by Internet is that of the Kaycee Nicole "hoax" (1), (3), (4). To those who "met" her on Metafilter.com, Kaycee was a 17-year-old Kansas high school student who chronicled her painful battle with leukemia in an online diary. Soon, her "mother" started a companion diary and began to share the emotional experiences of caring for a terminally ill child. Over the course of two years, Kaycee befriended a sympathetic group of well-wishers who frequently sent cards, money, and flowers. Intimate friendships developed as some members held regular phone conversations with the teenager. Her story was even featured in a New York Times article. When her death was announced on May 15, 2001, the online community grieved. The only problem: Kaycee never existed. Kaycee's creator, Debbie Swenson, was in reality a 40-year-old mother who had used pictures of a local basketball star and the voice of her own teenage daughter to fabricate the entire story.

The Kaycee Nicole hoax illustrates several diagnostic features of a Cybermunch case (6). As with most Munchausen sufferers, Debbie's claims include elements of pseudologia fantastica, or gross exaggeration, in which one "lies floridly about one's personal history in a manner that is compelling" (2). In addition, the public's interest is held with repeated claims of a worsening illness, rapidly followed by a miraculous recovery. Serious medical problems are discussed with a casual attitude, and "supporting players" (family members and close friends) are periodically introduced.

The consequences of such "deliberate deceit" can be devastating to the other group members, many of whom have invested a considerable amount of time and energy sympathizing with the sufferer. Pam Cohen, a victim, describes the experience as an "emotional rape" (5). "Imagine if a person you loved had a double life and everything about them was a lie," Cohen said. "I found it hard to get real-life support. It's a disenfranchised grief when you're a victim of these people. People say 'How could you be so stupid?' or they dismiss your feelings" (1). Ironically, the treatment for Cybermunch victims is...another support group. Cohen founded "Victims of Factitious Liars" in 2002, and the forum already has 42 regular members (5).

While those who have been duped are justifiably angry, blame and responsibility should be assigned with caution. What could cause someone to deliberately assume a fictitious personality? Munchausen sufferers are not necessarily pathological liars. I had previously asked of Munchausen sufferers: "do they knowingly deceive, or are they themselves deceived?" (9). The account of a 40-year-old former Munchausen patient affords a rare insight into the "other" perspective. "I called them 'scenarios,'" she explains. "When I'd do something to attract the paramedics and police, I got an adrenaline rush. I believe I got addicted to it. At the time, it didn't occur to me I was hurting anyone but myself" (5).

Miss Scott offers another perspective. In an interview, the 50-year-old Londoner acknowledges that she knew what she was doing was wrong, but she could not make herself stop (7). Hers is perhaps the most extreme case of Munchausen (she underwent surgery 42 times and was treated in over 600 hospitals), but she is one of the few who recovered. Although she has not faked her way into a hospital in over two decades, Miss Scott paid the ultimate price. A year and a half ago, she began to suffer from abdominal pains, yet doctors refused to examine her or order tests. By the time she contacted Dr. Feldman, a psychiatrist at the University of Alabama, it was too late. The surgeons found a malignant tumor, too large to remove. Miss Scott acknowledges that the responsibility for having cried wolf lies entirely on her shoulders. Yet she still laments that "once you've been branded, it's like you've got it written across your forehead: 'Not to be trusted. Munchausen'" (7).

Scott's experience illustrates the important point that it is our attitudes, suspicions, and conclusions about others that determine how we relate to, and interact with, people. However, the self we choose to momentarily portray is not always the one we want to be remembered by. Between 700 and 1500, the concept of the "self" was deeply rooted in religion. It referred only to the weak, sinful, "selfish" nature of humans, in contrast to the divinely perfect nature of a Christian soul. Only about 800 years ago did the concept of an independent, self-directed "self" begin to develop (8). In medieval times, one was expected to assimilate the values and meaning dictated by the community. Today, this concept has changed. Modern "self" theory posits that each person is expected to decide what is right and to know him/herself well enough to determine what courses of action "feel right." Therefore, it is not surprising that each person's self-concept differs from all others. There exists no single psychological definition of the "self". In fact, it is now commonly believed that we have many potential selves, such as "hoped for" selves, "ideal" selves, "successful" selves, "rich" selves, and also "feared" selves (8). It is important to remember that what you perceive as your "true self" is often your ideal, or preferred, identity.

In conclusion, the recent phenomenon of Cybermunch is an extreme example of the desire, and the ability, to assume an alternative persona. But to a certain degree, the Net's anonymity is enticing to us all. One of the Internet's greatest assets is that it provides a more fluid means of expressing one's self. After all, how often does the average person embellish the facts when describing himself or herself while chatting? Yet few expect to be held accountable, or remembered, for these actions. Perhaps the most useful theory to remember when judging ourselves and others is that we all seem to have a self with many parts, some we like and some we don't.

References

1) "They Think They Feel Your Pain." Wired. Article explaining Munchausen by Internet.

2) Feldman, Marc D. Southern Journal of Medicine, 93, 669-672 (2000).

3) "Kaycee Nicole FAQs." Rootnode. Detailed account of the Kaycee Nicole hoax.

4) "A Beautiful Life, an Early Death, a Fraud Exposed." NY Times, May 31, 2001. Article on the Kaycee Nicole hoax.

5) "Cybersickness." Village Voice, July 2001. Article on Cybermunch.

6) "Sympathy Seekers Invade Internet Support Groups." Healthyplace. Article on Munchausen by Internet.

7) "A Great Pretender Now Faces the Truth of Illness." NY Times, July 1999.

8) "Definitions of the 'Self'." Mentalhelp.

9) "Fact or Fantasy? The Truth Behind Munchausen Syndrome." Serendip, student web paper.

10) "Munchausen By Proxy." Serendip, student web paper.

11) "Munchausen by Internet: Faking Illness Online." Selfhelp Magazine.


Understanding the Basis for Aggression
Name: Mridula Sh
Date: 2004-05-15 00:53:27
Link to this Comment: 9885


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Aggression, defined as "a hostile or destructive tendency or conduct" (2), is a complex social behavior that is highly conserved in vertebrates. While organisms as simple as sea anemones have been shown to exhibit acts of aggression when faced with challenges to their territory (3), this primitive behavioral tactic has developed into an evolutionary mechanism exploited by higher-order species in their effort to survive and procreate. Yet the assumption that acts of aggression are solely motivated by an innate desire for "survival" (a term to be understood here in its evolutionary context) would oversimplify the causal factors behind this complex behavior. Personal experience and current world events indicate that it is not uncommon these days to be a victim of, or witness to, mindless acts of aggression that appear to have no logical motive behind them.

Aggression can thus be understood as a behavioral mechanism adopted by organisms (and in a broader context by species) that is greatly influenced by and often a result of the intricate, yet fragile interactions of biological, cultural and environmental stresses placed upon them. This paper will attempt to analyze and understand the biological causes of aggression with a view towards appreciating such behavior within an evolutionary context of survival.

The word "aggression" often carries a negative connotation when used in reference to behavior. More often than not, aggression is viewed as gender-specific behavior, especially in situations where the aggressor is typically of a particular sex. (4) One is more inclined to associate it exclusively with the masculine personality than with behavior exhibited by both sexes. (4) While the basis for such an assumption lies in trends seen in statistical and empirical evidence, it is not entirely true to say that the gender of the aggressor is its sole causal attribution.

Aggression is seen in both sexes, yet it is primarily a male-associated behavior. What causes this distinction to be made? Why is there a tendency to find aggression more acceptable when the perpetrator is female as opposed to male? We tend to create mental schemas based on personal experience and perceptions of the world, which in turn lead to the creation of gender-based stereotypes. (4) These mental images not only influence one's behavior to a large extent but also play a role in the assessment of such behavior by an observer. Male aggression tends to be viewed as a more biologically induced trait, brought about by situational forces as a power tactic to obtain and maintain dominance. (5) Hence, while men in general tend to be the more aggressive sex, gender stereotypes enhance this perception and attribute much greater hostility to the act when committed by a man. Women, on the other hand, are seen as the more passive sex. Female aggression, rather than being a predisposed biological tactic, seems to be induced as a result of social and environmental circumstances and is most often attributed to a neurological or hormonal dysfunction. (4) Cultural interpretations of female aggression have viewed such behavior as evidence of irrationality and an aberration of her otherwise submissive personality. (11)

It is a well-known fact that young male rhesus monkeys display more rough-and-tumble play than their female counterparts. (5) An explanation of this observation calls upon an understanding of the genetic, molecular, evolutionary, and behavioral mechanisms responsible for such behaviors. (1)

Testosterone, the elusive primary androgen, has been closely linked with aggressive behavior. The theory that elevated levels of testosterone cause aggressive behavior in males has been refuted by numerous scientific studies, while other experiments have established a causal relationship between the hormone and behavior. (3) While aggressive behavior and testosterone levels are known to be positively correlated, the absence of conclusive evidence showing a definite causal association between the hormone and observed behavior makes their relationship a complex one. (2)

The association between aggression and levels of testosterone is closely related to dominance, a mechanism of great evolutionary significance. Aggression, viewed in an evolutionary perspective, is used to establish fitness, which is often directly related to the intent of achieving and maintaining a dominant status in the hierarchical order. (2) Human and primate social behavior is to a large extent characterized by subtle dominant and subordinate positions in a hierarchy, which for the most part are achieved without open acts of aggression. (5) Yet the majority of aggressive acts are performed when these positions of dominance are challenged or misused. It is hypothesized that testosterone induces behavior that favors dominance, though dominance doesn't always induce aggression. (5) Levels of testosterone rise in these situations of dominance and competition. In support of this assumption, a study showed that testosterone levels in socially dominant but unaggressive prisoners were high, and not significantly different from those of aggressive (possibly dominant) prisoners. (5)

This could explain why men tend to be the more aggressive sex. At a very basic level, it is directly related to their evolutionary need to be the dominant sex in order to ensure mating and viable offspring. Male dominance and power is an evolutionarily favorable mechanism that has developed to such a large extent that one tends to attribute such behaviors solely to biological mechanisms while failing to consider their socio-cultural origins. (5)

Testosterone isn't the only hormone implicated in this complex behavior. Studies of the underlying molecular mechanisms responsible for aggression show that signaling molecules, other hormones, and excitatory and inhibitory neurotransmitters all play an essential role in the determination of aggression. (1) The workings of these neurochemical systems are closely linked to brain mechanisms, and together they control the careful balancing and integration of the excitatory and inhibitory components of the input-output boxes of the nervous system. (6) Behavior such as aggression may result from changes in the levels of neurotransmitters circulating in the brain, which directly affect the delicate balance maintained between the excitatory and inhibitory components. (6) One such inhibitory neurotransmitter implicated in aggressive behavior is serotonin. (8) The brain is always receiving numerous input stimuli from its neurons and is constantly sorting, integrating, turning off, and responding to these stimuli within its independent boxes. (7) Reduced activity of inhibitory neurotransmitters such as serotonin leaves the brain in a state of over-activation because of the incessant, uncontrolled firing of neurons. (6) Such uninhibited brain activity could result in behavior that is violent and impulsive, characteristic of the state of the brain at that point in time. Hence serotonin levels in the brain share a reciprocal relationship with impulsivity and aggressiveness. Experiments have shown that low cerebrospinal fluid concentrations of 5-HIAA (a serotonin metabolite) are correlated with high levels of aggressiveness. (8) Furthermore, interactions between androgens and 5-HIAA molecules have been shown to affect aggression in different ways, indicating the close interaction of neurochemicals in the brain to produce a state of normality. (1)

The best-replicated biological correlate of aggressive/antisocial behavior is low autonomic arousal. (8) In the Raine study, it was found that conduct-disordered children and petty criminals had significantly lower pulse rates than their well-behaved counterparts. (8) A lower pulse reflects fearlessness and less anticipatory anxiety when one faces the dilemma of committing a violent act. The function of anticipatory anxiety is to prevent one from committing such an act. Hence, in the absence of this negative emotion, refraining from violent acts will not be reinforcing, which might lead to a predisposition to violence. (8), (9)

The power that the physical environment has over aggression is enormous. This is especially true of adolescent aggressive behavior. While biological factors create a predisposition for aggression, environmental stresses act upon these predispositions to cause deviant behavior. The effects of social class, ethnicity, peer groups, family, and academic environment are stresses that can either make or break an individual. Studies have shown that girls' involvement in crime has greatly increased in the last 20 years. (10) This increase in delinquency has been attributed to their vulnerability to increasing environmental stresses such as those mentioned above. Adolescents from homes of low socio-economic status have a greater tendency to join gangs in an effort to develop skills that will enable them to survive in their harsh communities and temporarily escape rejection from other peers. (10) One can apply the psychodynamic stress/diathesis model to analyze adolescent aggressive behavior. The model posits a reciprocal relationship between diathesis and stress in causing such behavior: the greater the predisposition (diathesis) for aggression, the less stress is needed to produce such behavior, and vice versa. (9) Thus, in the last two decades, with increasing environmental stresses, even a low predisposition towards such deviant behavior has resulted in adolescent aggression.

Aggression evolved as a behavioral mechanism for the survival and proliferation of a species. Today, however, aggression rarely refers to an evolutionary mechanism; instead it has come to denote a deviant behavior that is often a precursor to other serious disorders. By realizing that it is a social behavior that needs to be understood in terms of multiple causal influences, one can begin to unravel the mystery of this complex behavior.


References

1) Nelson, Randy J., and Silvana Chiavegatto. "Molecular Basis of Aggression." Trends in Neurosciences, Vol. 24, No. 12, December 2001.
2) Oxford English Dictionary, Online Edition, 1989.
3) Bland, J. About Gender: Testosterone and Aggression (1998-2004).
4) Stewart-Williams, Steve. "Gender, the Perception of Aggression, and the Overestimation of Gender Bias." Sex Roles: A Journal of Research, March 2002.
5) Mazur, Allan, and Alan Booth. "Testosterone and Dominance in Men."
6) Aggression: an informational site about aggression and serial killers.
7) Grobstein, Paul. Lecture/discussion notes for Neurobiology and Behavior, Spring 2004.
8) Gibbs, W. Wayt. "Seeking the Criminal Element." Scientific American, March 1995.
9) Rescorla, Leslie. Lecture notes, Psychology 101, Spring 2004.
10) Weiler, Jeanne. "Girls and Violence."
11) Kenyon, Paul. SALMON, University of Plymouth, Dept. of Psychology.


Sex as a Weapon: Exploring the Gender and Diagnosi
Name: Ginger Kel
Date: 2004-05-15 01:07:29
Link to this Comment: 9886


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip

"What do you call a woman obsessed with sex? Nympho. What do you call a man obsessed with sex? Normal" (1).

Nymphomania, or "uterine furor," is an unusual type of madness. An exclusively female condition, nymphomania is denoted by an excessive sexual appetite (1). The disorder, in its defined form, has existed since 1771 (2). Yet absolutely no woman has suffered from it. I am not claiming that a woman has never been diagnosed as a nymphomaniac, simply that nymphomania, as a mental disease, does not exist: instead, it is evidence of the medicinal ratification of archaic sexual politics. This statement may seem somewhat premature based on the information I have provided thus far. However, it is the intent of this paper to fully explore the gender bias present in nymphomania. What was symptomatic of nymphomania at its inception? Why, at present, is nymphomania considered a defunct scientific term? Is there such a thing as an excess sexuality disorder?

As the yarn at the top of the page suggests, traditional thought saw sexuality as inherent in men (3). Women, in contrast, were depicted as having low sex drives (3). It was believed that this sexual prudence gave room for patience and other nurturing characteristics to evolve in females. Women were the mothers; men were the breeders. In their male-dominated society, men found power in their sexual freedoms: the opportunity to achieve pleasure on one's own terms reaffirmed the sex's superiority. Thus, for women to have a similar sexual appetite was a dangerous threat to the patriarchal structure. Equivalent sexual drives could be argued as proof of gender equality. To eliminate that concern, men aggressively defended this social stigma. Any woman with significant sexual urges was deemed unnatural.

By what standards was nymphomania diagnosed? Bienville, the physician who birthed the concept of nymphomania, cited fixating on impure thoughts, reading novels, masturbation, and ingesting great quantities of chocolate as telltale signs (2). Apparently, he was unaware of the cravings associated with the menstrual cycle. Also suspect were females who aroused easily or needed frequent sexual activity (1). In scrutinizing these nymphomaniac indicators, one finds quite a bit of variety. This spectrum of symptoms reflects the precariousness of the nymphomania definition: it is far too ambiguous. How does one measure excessive sexuality? Each human being has a unique mind and body, and since sexual desires arise from those two entities, sexual particularities cannot be compared across a population. Within its definition, nymphomania provides no specific means by which the ailment could be confirmed (4). How excessive is excessive sexuality? The uncertainty of the definition is deliberate, but also problematic. Nymphomania did provide a façade under which unknown diseases could be lumped and treated; physicians were given a means through which they could save face when their knowledge was insufficient. However, it is also this lack of specificity that makes the term too general to be of any real use. You cannot cure strep throat by treating only the sore throat.

In 1987, the term nymphomania was done away with by the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (2). After decades during which the scope of the term was gradually reduced, pushed by feminist and sexual liberation movements, nymphomania was finally declared obsolete. It was recognized as a loaded term with no real medicinal implications.

A popular opinion, circa the Victorian era, rooted all women's diseases in the female reproductive system (3). Why is that? Ultimately, it comes down to popular preconceptions concerning gender. The female was seen as biologically inferior to the male. Therefore, since the woman's main purpose was to carry children, it was only logical that she would be identified by her reproductive organs. Male nature, however, "was never primarily defined by their genitalia" (3). As creatures of the utmost sophistication, males would suffer illnesses that target the mind. Congruent with all these principles, the treatment of nymphomania focused on the female genitalia. Depending on the "severity" of the condition, anything from a cold bath to an enema to a clitoridectomy could be prescribed (2). In these procedures, the clear intent was to distress the female organs without removing reproductive capabilities. Referring back to the discussion on the development of nymphomania, perhaps it was not the sexual appetites of nymphomaniacs that frightened men but the pleasure capabilities of these ladies. If women were allowed to experience the throes of ecstasy, they might be less content with their matronly place. Thus, the treatments for nymphomania conveyed a message of discipline.

Nymphomania was a falsehood placed upon women: a way to maintain patriarchal establishments at the expense of feminine sexuality. Sex is just another behavior. In order to perpetuate the survival of an organism, sexual behaviors produce euphoric feelings in the participants. The pleasure functions as a reward that encourages a habitualizing of the act. However, in human societies, sex is largely performed for its recreative rather than procreative benefits (7). Reproductively, mankind has proven that it is fit for survival. Still, in all aspects of life the pleasure derived from sex is unmatched, such that it has become the standard against which all other pleasures are measured (7). Humans toiled greatly to filter the satisfaction of sex from its breeding function. They succeeded, but in secularizing sex, they exposed new aspects of human identity.

Ambition is innate in human nature. It provides human beings with the ability to dream, but at the same time it causes us to take gifts for granted. The delightful feelings that result from copulation are a biological gift. However, instead of respecting them as such, the greed of man drives the abuse of these gifts. What is the obsession with a pleasure called? An addiction. Like alcoholics and drug users, sexual addicts walk the streets. It is estimated, in fact, that roughly one in twenty people suffers from compulsive sexual behaviors (5). Sexual compulsives do differ from the nymphomaniacs, however: nymphomaniacs were said to desire extreme amounts of sex, whereas sexual addicts view sex as their entire world. To them, sex is an obsession, one for which they are willing to sacrifice every other aspect of and relationship in their lives (6). Sex addicts, unlike drug addicts for example, are aware of the danger in their fixation. Yet, like drug addicts, they are powerless against their addiction (6). Something that can begin innocently, simple masturbation for instance, can single-handedly consume a life if left unchecked: "addiction is a relationship—a pathological relationship—in which sexual obsession replaces people" (7). Do not sensationalize these arguments, though: sexual addiction is not a condition brought about by a little passion alone.

Sexual compulsive behavior is an equal-opportunity disorder. Nymphomania was a condition said to affect only women; sexual addiction is on the opposite side, with most sexual addicts being men (5). There are two categories of sexual addiction: paraphilic and nonparaphilic. Paraphilic compulsive sexual behavior is an obsession with an unconventional aspect of human sexuality (8); paraphilic interests range from pedophilia to masochism. Nonparaphilic compulsive sexual behavior, on the other hand, involves a conventional sexual behavior heightened to an extreme level. A model of this category would be a sexually promiscuous person (8): he or she seeks multiple partners to quench an insatiable thirst for the act.

How does one develop a sexual addiction? Most sexual addicts come from a background of abuse (6). At some point in their lives, the addicts underwent an experience that mutated their perception of sexuality. Thus, in trying to escape the trauma, the addict replaces it with the obsessive behavior. Ironically, the nature of the sexual addiction is usually indicative of the abuse. People who live in stressful situations are also prime candidates for sexual addictions (5). The sexual behavior develops out of a desire to relieve stress or depression. As the person remains in these situations, however, sexual behaviors become ingrained in daily life: the short-term relief found in the sexual addiction proves to be habit-forming. In the spirit of addiction, addicts in both categories face a ballooning of the compulsion (9). As the compulsion progresses, the person develops an increasing tolerance toward sexual satisfaction. Thus, the addict will oftentimes pursue riskier means of stimulation in order to achieve satisfaction (9). The more intensely sexuality is pursued, the less fulfilled the person will feel. It is a chronic disorder that cannot be resolved by the addict alone.

Sex: a method of reproduction. It is an essential, natural act congruent with evolutionary values. Yet humanity's divorce of sexual pleasure from reproductive necessity has left sex on an uneven psychological playing field. From a method of domination to a means of coping with a bad day, sex has become a way to manipulate emotions. Placing such an important behavior in the hands of mankind is like handing over a box of chocolates: it is wonderful that we are able to attain the gift, but "too much of a good thing" can still be bad. The World Health Organization does not recognize sexual addiction as a mental disorder (9). Yet a significant number of sexual offenders have been found to have compulsive sexual disorders. Human beings need to explore sexuality, but not with prejudice. If we see it with biased eyes, we invent devices such as nymphomania to subjugate the less informed. Nor can we ignore sexual eccentricities; they have the potential to lead to unhealthiness and even violence. Science understands sex as a physiological act. Now it is time to comprehend it as a behavior, reflective of human emotions.


References

1)Improving Sex! Advice & Information

2)Straight Dope: What is nymphomania?

3)The Guidon: Demystifying nymphomania

4)Discovery Health Channel

5)MayoClinic.com: Compulsive Sexual Behavior

6)Compulsive Sexual Behavior and Sex Addiction: too much of a good thing?

7)Bad Subjects: Nymphomania

8)Some Facts Psychologists Know About... Compulsive Sexual Behavior

9)Sexual Health.com: Overcoming Compulsive or Addictive Sexual Behavior



Dissociative Identity Disorder and the I-Function:
Name: Emily Haye
Date: 2004-05-15 18:06:09
Link to this Comment: 9891


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

I likely will not say anything "new" in this paper. But I want to consider things in a new light, within new contexts. I am now equipped with a new investigative tool, a new context within which to place aspects of the mind: the I-function. I want to use this new tool to investigate something that has fascinated me for years: the phenomenon of multiple selves within a single person, or Dissociative Identity Disorder (DID, formerly Multiple Personality Disorder). How can the I-function help us to understand DID and other dissociative coping mechanisms, specifically in survivors of severe childhood abuse? Can DID itself inform us about the I-function?

In order to survive a chronic trauma, in which one's physical wellbeing and survival is threatened over a long period of time and from which there is no escape ((7)), one must adapt. The self is threatened and must be preserved. Survivors of trauma report a myriad of dissociative coping mechanisms that do just that, from amnesia to "out of body" experiences, to dissociation from pain. ((1)) In the most extreme cases, the self is spared the horrific experiences and the reactions it has to them by walling them off completely, relegating them to entirely separate consciousnesses which take on their own ways of seeing the self, the world, and the relationship of the two. This extreme coping mechanism, the splitting of the self, is known as Dissociative Identity Disorder.

Dissociative Identity Disorder is defined by the presence of two or more alternate identities or personality states that recurrently take control of a person's behavior, accompanied by an inability to recall important personal information, such as the events of childhood in general or of the trauma in specific, that cannot be explained by ordinary forgetfulness. ((2)) The alternate identities may or may not identify themselves by name and can differ in age, sex, physical characteristics (e.g. pallor), and degree of awareness of the instigating trauma or of more current aspects of the individual's life. Each has its own perceptions of self, the world, and others.

The major origin of DID has been identified as severe and prolonged childhood abuse, especially that which is sexual in nature. In the adult survivor, DID impairs normal functioning, because the individual is free of the abusive environment and instead living in a safe situation in which the dissociative coping mechanism that is DID is not necessary, but continues to persist in response to relatively harmless stressful stimuli. DID first develops, however, as an amazing adaptive mechanism, allowing the self to survive prolonged trauma intact. By fragmenting herself—her experiences of the abuse, her emotional responses to the abuse, even the knowledge that the abuse occurs at all—the victim is able to preserve her self, unaware of all of the trauma at any given time. Certain portions of the self split off to carry the burden of the abusive events, or of the overwhelming emotional responses to the abuse, be they anger, sorrow, guilt, etc.

Due to their modes of manifestation, these adaptive dissociated portions of the self were once thought to be entirely separate selves existing within one mind. As a result, the condition was known as "Multiple Personality Disorder." Since then, we have gained greater understanding of the processes and adaptive nature of dissociation, and now know that these sets of behaviors and self-representations, once thought to be autonomous personalities, are actually parts of a single self. ((1), (3)) An individual with DID does not house in her mind many separate people, but rather many parts of her self that have failed to integrate due to trauma. ((3))

The integration of ordinary events occurs seamlessly. An individual incorporates them into her senses of self and the world, creating a narrative memory of them. In the case of traumatic experience, however, integration fails to happen. Many traumatic memories are stored not as narrative memories but rather as bits of the traumatic events: images, sounds, smells, feelings, all of them sensory, pre-verbal signals within the brain. ((4)) These bits fail to be incorporated into the victim's sense of self. This sometimes manifests as the feeling that the trauma, remembered as a static, emotionless event, didn't happen "to me" but rather to someone else. In the extreme case of DID, these unintegrated experiences are partitioned behind dissociative barriers, incorporated into only a part of the self rather than the self as a whole. One of the "identities" may hold the memory of the trauma in whole, or it may be distributed among them as fragmented sights, sounds, smells, sensations, and emotions. ((5))

In the treatment of DID, the ultimate goal is integration, or the reunification of the parts of the self that were separated due to trauma. ((3)) This reunification is the bringing together of all aspects of the trauma—the reality of the event itself, what it means about the world, the emotions it conjured—in one awareness. This awareness is the I-function.

Integration brings up the crux of the DID/I-function conundrum: If integration is the unification of the various parts of self in one central awareness, the I-function, where do the separate parts reside prior to integration, namely in the DID mind? Were there many little I-functions running around in the head, getting in each other's way, and vying for control over behavior? Or was there just one I-function, cracked but not broken, unaware of aspects of itself?

I am initially inclined to say that it is the latter of the two. It seems that one self should equal one I-function. However, it could be the former, as each fragment has its own pattern of integration and its own senses of self and the world. This could mean that each piece is a separate I-function, processing information separately. This idea is supported by Oppenheimer, who proposes that each fragment of the whole self is a unique "self-system," with its own "theories" of self, reality, others, and the world—in essence, its own ways of processing information. ((6))

It is interesting to consider the role of the dominant I-function (note: see below) in both of these scenarios. If the DID mind is one fractured I-function, then the lack of integration of all parts of the self means that the I-function is unaware of parts of itself. It is easier to accept that the I-function is unaware of the many neural processes peripheral to the self, as it is the location of those things important for functioning in the immediate environment. ((6)) The various signals involved in the vision pathway, for example, are not known to the I-function; they are not experienced. Rather, it is the final image of the world that reaches the I-function. It seems acceptable that this happens, as the I-function, the conscious self, would be overwhelmed if it had to experience every neural process. But the idea that its own functioning may be too much, that the I-function cannot be aware of all aspects of itself in order to avoid overstimulation, is harder to accept. That there are aspects of the world that we are not conscious of is okay. That there are aspects of our selves that are unknown to our conscious selves is unsettling.

From this perspective it is perhaps easier to accept the model of many I-functions in the DID mind. In this case, the dominant I-function, in being unaware of the others, is unaware of something separate from itself. This is more like being unaware of every function in the visual pathway. But it is still unavoidable that no matter the number of I-functions, the DID self is not aware of its whole self.

In this light, I want to return to the semantics of several paragraphs ago. If something does not reach the I-function, it is not experienced, as with the many functions along the visual pathway. So, in both the single- and multi-I-function models, not all (parts of the) I-function(s) are being experienced. This results in the experience of a fragmented self, because the dominant I-function does not have access to all aspects of one's life and experiences. There is no integration, of self or of experiences.

In terms of the I-function, one of two things happens upon full clinical integration, the reunification of the self and the abolition of the DID mind. Either the many little I-functions join together or the cracks in the single I-function are repaired in order to reunify its pieces. This gives us insight into the state of the DID I-function. As described by an integrated (formerly DID) individual, the dominant I-function remains dominant. ((6)) If this is the case, it seems that it is more likely for there to exist pieces of a single I-function, rather than many I-functions, in the DID mind.

The role of the I-function in DID seems to be integral to understanding the neuronal processes of the condition—both in its onset and final integration. There has been recent investigation into the neuronal nature of the self-system, and the complexity thereof. It is thought that the more complex the self-system, or the more disunified it is, the more individual and only loosely connected circuits are involved. A more integrated self-system is the result of a larger, more integrated neuronal system. ((6)) We do not, however, know the location of the I-function. It likely lies in the cerebral cortex, as those animals with I-functions have cerebral cortexes while those animals without do not. But it has not been pinpointed. It may not have an exact location. Rather, it may exist all over the cerebral cortex; it may be the cerebral cortex itself. Not until we know the neuronal nature of the I-function will we know the neuronal nature of DID. Or, perhaps, insight into the neuronal nature of DID will lend understanding of where the I-function lies.

Note: I am using "dominant I-function" to refer to the self that suffers DID—the self that the DID patient experiences most often, that she feels is her "true" self. This may be the dominant part of the one, fragmented I-function or the dominant of many I-functions, depending upon which model you accept.

References

1) Sidran: Dissociative Disorders.

2) Diagnostic and Statistical Manual IV. American Psychiatric Association. 1994.

3)"Understanding Integration As A Natural Part of Trauma Recovery.", Rachel Downing, L.C.S.W-C.

4)The Myth of Sanity. Martha Stout, Ph.D. Viking Press: New York, 2001.

5)"Dissociation: Nature's Tincture of Numbing and Forgetting.", David L. Calof.

6)"Self or Selves: Dissociative Identity Disorder and Complexity of the Self-System.", Louis Oppenheimer.

7)"What is Psychological Trauma?", Esther Giller, President and Director of the Sidran Foundation.


Schizophrenia and How It Differs from Multiple Personality Disorder
Name: Jean Yanol
Date: 2004-05-17 02:33:15
Link to this Comment: 9898

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Oftentimes, two psychiatric disorders can be misinterpreted by the layperson who does not fully understand the meaning of the symptoms that characterize each disorder. This is the case with schizophrenia and Multiple Personality Disorder, which the general public often confuses. Schizophrenia is the more complicated of the two disorders and in most cases the more severe, so it is worth clarifying it first and then comparing it to Multiple Personality Disorder.

Schizophrenia is a devastating neurological disease characterized by disorganized thoughts, abnormal emotional responses, and an inability to clearly perceive reality. Approximately 1% of the world's population is afflicted with this illness (1).

There are generally two categories of symptoms associated with schizophrenia that are recognized by western psychiatry. The first category is called positive symptoms. These include:

1) Hallucinations, which usually involve hearing voices but can involve all the senses: seeing, tasting, touching, hearing, or smelling something that is not there

2) Delusions, often involving beliefs such as of being an important person (for example, the President of the United States) or being persecuted (for example, being chased by the CIA or by demons)

3) Confused thinking and speech that does not make any sense. The person has difficulty distinguishing outside reality from internal thoughts and doesn't know if what he or she is thinking is really occurring.

4) Bizarre or disorganized behavior, such as being overly excited, angry, or unresponsive to other people. This may also involve the way the person moves his or her body (for example, rocking back and forth or grimacing repeatedly).

5) Self-neglect, such as becoming isolated from other people, wearing dirty clothes, or neglecting their untidy, cluttered homes

6) Inappropriate emotions, such as smiling when speaking of sad topics or laughing for no reason

The second category of symptoms is called negative symptoms. These include:

1) Problems with speech or disorganized speech, such as abruptly responding to questions or not being able to respond with enough information (for example, always giving a one-word reply to questions)

2) Inability to experience pleasure, which is very common in schizophrenia. The person can no longer enjoy activities that once brought them pleasure, such as playing golf or visiting with friends. This is also a symptom of depression.

3) Lack of emotion, which can lead to few friends or social contacts. The person has little facial expression, poor eye contact, and slowed speech.

4) Problems with job or school performance due to the inability to complete tasks or goals. The person loses motivation to succeed or accomplish goals.

5) Problems focusing or paying attention. The person has problems processing information, which leads to confusion or fragmented thoughts. (2)

There are also different classifications of schizophrenia itself. People diagnosed with schizophrenia are usually placed in one of five groups: paranoid, catatonic, disorganized, residual, and undifferentiated. Paranoid schizophrenics display severe paranoia, as the name implies, which can be directed at people closely connected to them, such as family or friends, or even, in some cases, at people or organizations with no connection to them, such as the government. This paranoia is usually caused by delusions experienced by the individual, which lead to other symptoms such as anxiety, anger, and ultimately violence. Catatonic schizophrenics often display inappropriate reactions to whatever may be occurring around them. For example, while they may not react to pain, they may be provoked to move by nonexistent stimuli. Disorganized schizophrenics seem to have "jumbled thoughts," and because of this their speech is often incoherent and their mannerisms seem odd and inappropriate due to their bizarre train of thought. Residual schizophrenics are people who have suffered acute episodes of schizophrenia in the past but have slowly recovered and retain only a few symptoms. Undifferentiated schizophrenics display symptoms from multiple groups and therefore cannot be classified.

To this day, scientists are unsure of exactly what is responsible for the disease. Research supports the belief that schizophrenia is caused by a combination of genetic, environmental, and possibly social factors. Factors that seem to raise a person's chances of developing schizophrenia include pre- and post-natal complications and a family history of the disease or of other psychological disorders. The disease usually first affects a person at the onset of adulthood, around ages 18 to 25 for males and the late twenties for females, although it can also appear later in life, around age 45. Childhood schizophrenia, which develops around age 5, is much rarer. Schizophrenia can usually be diagnosed, after initial symptoms and family and medical history have been assessed, by performing one or more of the following tests: a mental health assessment, a computed tomography scan (which shows abnormal areas of brain size and shape), magnetic resonance imaging (which shows abnormal shapes and sizes of different parts of the brain), or a positron emission tomography scan (which shows blood flow and metabolic function of the brain).

Within the brain, schizophrenics display morphological differences such as enlarged ventricles and decreased cortical volume, as well as reduced neuronal counts in certain regions. One of the major theories of how schizophrenia arises biologically revolves around glutamate. Glutamate is a neurotransmitter with a suspected role in schizophrenia, because drugs such as PCP, which produce schizophrenia-like symptoms in their users, act on the glutamate system. Studies support the idea that PCP and schizophrenia produce dramatic changes in the levels and function of a glutamate-sensitive ligand-gated ion channel known as the N-methyl-D-aspartate (NMDA) receptor, one of three known receptor types that respond to glutamate. It has been suggested that one subunit of the NMDA receptor is present at lower levels while another is present at a much higher concentration, leaving many non-functioning receptors and only a few that function properly (3). This hypothesis about the biological mechanism of schizophrenia has not, however, been proven.

Schizophrenia is usually treated with anti-psychotic drugs like risperidone and olanzapine. In the past, drugs acting on the dopamine system, such as Haldol, were used to treat patients, but these often had severe side effects. In rare cases, if a patient does not respond to conventional treatment, a lobotomy, surgery on the frontal lobe of the brain, may be performed. As stated by one source, "Treatment goals are to reduce or eliminate symptoms, reduce the number of relapses, and reduce the length of the illness. Improving the person's level of social function and relationships is also important. Currently, the best treatment for schizophrenia is usually a combination of medication (such as anti-psychotics) and professional counseling, such as behavioral therapy" (2).

Now that we have described the basics of schizophrenia, it is time to compare it to Multiple Personality Disorder. First of all, we should dispel the myth that schizophrenics have multiple personalities. While they do have auditory and visual hallucinations, which may make it seem as if another person is "inhabiting their mind," that is not the case. Multiple Personality Disorder develops in people who have typically experienced an abusive childhood. In order to cope with their lives, they develop different personalities so that they can hide, in a sense, from what is really occurring in their lives. While Multiple Personality Disorder is a psychological problem created by the patient that can be treated with therapy sessions alone (4), schizophrenia is a more deeply rooted disease, meaning that something is more likely to be physically wrong with the schizophrenic's brain, and because of this, pharmaceuticals are usually prescribed to treat the illness. In other words, Multiple Personality Disorder seems to be a response to traumatic events that can be dealt with through counseling, whereas schizophrenia is a problem probably caused by a biological malfunction whose origin is as yet unknown.

It is evident that there is no real correlation between the two disorders described, and indeed the perceived similarities most likely stem from misunderstanding. Schizophrenia is also not yet properly understood in matters such as its formation, which need to be studied further.

References

1) Schizophrenia.com

2) (WebMD Health) Schizophrenia

3) Freeman, Scott. Biological Science. Upper Saddle River, NJ: Prentice-Hall, Inc., 2002.

4) (WebMD Health) Types of Mental Disorders


Seizures: Forms and Treatments
Name: Kimberley
Date: 2004-05-19 23:36:45
Link to this Comment: 9905


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Seizures have been seen throughout history and in many different lights. To early Christians, seizures were thought to be caused by demon possession (1):

"And one of the multitude answered and said, Master, I have brought unto thee my son, which hath a dumb spirit; And wheresoever he taketh him, he teareth him: and he foameth, and gnasheth with his teeth, and pineth away... straightway the spirit tare him; and he fell on the ground, and wallowed foaming. And he asked his father, How long is it ago since this came unto him? And he said, Of a child. And ofttimes it hath cast him into the fire, and into the waters, to destroy him... And [the spirit] cried, and rent him sore, and came out of him: and he was as one dead."

This quote from the gospel of Mark (1) demonstrates many of the classic symptoms of a tonic-clonic seizure. A person suffering from such a seizure generally falls to the ground and the entire body stiffens, which is referred to as the tonic phase. Following the tonic phase, the muscles contract and relax in rapid succession causing the body to jerk violently, which is known as the clonic phase. It is at this time when the person may bite his or her tongue and foam at the mouth. The entire seizure may last a few seconds to several minutes but the person may not come to full consciousness for several more minutes following the attack. (2)

The early Christians were not the only people who recorded such episodes fitting the description of what we call seizures. However, not all of them considered the cause of these fits to be demonic in nature. For example, in ancient Greek texts those who associated seizures with demon possession were ridiculed. (3) Regardless of whether seizures were viewed as the work of an evil spirit or of an organic cause there were recorded methods of treating the disorder. Early Christians looked to God for help in casting out the demon, while the ancient Greeks relied on medicinal treatments. (3)

Herbal remedies were also concocted throughout Anglo-Saxon England, mostly involving a flower called the lupine. (3) Modern researchers have found that the lupine contains a very high concentration of manganese. People with seizure disorders often have low levels of this ion in their bodies compared to the general population. Studies have shown that ingesting manganese can be beneficial in reducing the number or severity of seizures experienced by some people with epilepsy. Though not as effective as the antiepileptic medications on the market today, herbal remedies rich in manganese would have greatly improved the quality of life for some epileptics during the dark ages, when little else was available. (3)

Knowledge about seizures has increased dramatically since the dark ages, however, especially in identifying causes and classifying the types of seizures experienced by those with epilepsy. Generally speaking, a seizure results when neurons in the brain receive an abundance of excitatory signals, causing these cells to depolarize and send action potentials down their axons to synapses on other neurons. (4) Eventually these signals are received by motor neurons that innervate muscle cells. This process generally does not lead to uncontrolled movement because the neurons repolarize, which stops further action potentials from being sent. In a seizure, however, the neurons continue to generate action potentials, which means the motor neurons maintain their contraction of the muscle cells.
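The runaway-excitation idea above can be caricatured with a toy "integrate-and-fire" model, a standard simplification from computational neuroscience rather than anything from this paper; every number here is invented purely for illustration:

```python
# Toy leaky integrate-and-fire neuron.  The "leak" is the fraction of
# membrane potential retained each step (the rest decays toward rest),
# and a spike with reset stands in for an action potential followed by
# repolarization.  Not a physiologically calibrated model.

def count_spikes(drive, leak=0.9, threshold=1.0, steps=100):
    """Integrate a constant excitatory drive; each time the membrane
    potential crosses threshold, record a spike and reset (repolarize)."""
    v = 0.0          # membrane potential, 0.0 = resting level
    spikes = 0
    for _ in range(steps):
        v = v * leak + drive   # decay toward rest, then add excitation
        if v >= threshold:
            spikes += 1
            v = 0.0            # repolarization: ready for the next signal
    return spikes

# Moderate excitation: repolarization keeps firing sparse and controlled.
print(count_spikes(0.15))
# Excessive excitation: the neuron fires on nearly every other step,
# the sustained, runaway pattern loosely analogous to a seizure.
print(count_spikes(0.9))
```

Raising the excitatory drive is the only change between the two calls, yet it flips the neuron from occasional firing to near-continuous firing, which is the qualitative point of the paragraph above.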

The number of neurons that are depolarized and their location determine the type of seizure experienced as well as its severity. In a tonic-clonic seizure, as described above, the initial neurons involved are found in both hemispheres of the brain. (5) Seizures that are initiated in both hemispheres are referred to as generalized seizures. These types of seizures are the most severe and best known, yet there are many forms of seizure that are much milder in outward appearance and less detrimental to the sufferer. These seizures are called partial seizures, and the depolarized neurons that propagate the action potentials initiating the seizure are located in one focused part of the brain. (6) If these neurons depolarize only a small region of other neurons, a person may just experience tingling sensations or numbness, or become temporarily distracted without losing consciousness. It is the maintained consciousness and fast recovery period that define a simple partial seizure. (6) Complex partial seizures have outward symptoms between those of simple partial seizures and generalized seizures. These include uncontrolled motor movements such as rolling one's head from side to side and cycling the legs as though riding a bicycle. The person loses consciousness during this time or may go in and out of consciousness. (7)

It is important for those suffering from epilepsy to know what kind of seizures they have in order to facilitate a treatment plan to reduce or eliminate them. Diagnoses are made by a variety of mechanisms. The first is self-reports from the person with the seizures, as well as from witnesses if the person goes unconscious. Descriptions of the seizures can help determine their severity as well as the frequency of attacks. Doctors also gather general information about electrical signals in the brain. The most helpful new developments in diagnosing seizure types are brain-imaging devices. Positron emission tomography (PET) has enabled doctors to view the brain and identify asymmetries as well as tumor growth, which may be responsible for the seizures. (8) Magnetic resonance imaging (MRI) has also provided a means for doctors to view the inside of a patient's brain and identify abnormalities. However, for some types of seizures whose origins are deep within the brain, such as those in the mesial temporal lobe, irregularities cannot always be seen by such noninvasive techniques. Some people with epilepsy must therefore use different monitoring equipment to determine the exact location at which their seizures initiate. This equipment generally involves depth probes that contain sensors at various locations along the probe. The probes are inserted into the brain and remain there for 24 hours, gathering information about electrical activity throughout the brain. This information is then analyzed for irregular neuron depolarization to see where in the brain it is occurring. (8)

The more invasive techniques for locating the focal points of seizures are reserved for severe cases in which surgery will be attempted as a cure. (9) Once the point of origin has been detected, surgery is performed to remove the overly excitable neurons that propagate the seizures. These could be cells growing in a brain tumor, cells damaged by head trauma, or cells malfunctioning for genetic reasons. As better imaging machines become available and removal of the affected area of the brain becomes more precise, the success rate of these surgeries increases, with reports of as many as 70% to 80% of patients improving dramatically after their operations. (9)

Surgery is not the most common treatment for seizures, nor is it the first tried in attempts to reduce or eliminate them; antiepileptic drugs hold that position. These too have greatly improved over the years, primarily through reduced side effects and fewer adverse drug interactions. (10) Before 1993 there were very few antiepileptic drugs on the market, the choices being limited to carbamazepine, phenobarbital, phenytoin, primidone, and valproate. These drugs were effective for most sufferers of seizures but usually had harsh side effects. Since that time many new drugs have become available. Though some of their mechanisms are unknown, many work in similar ways. Some antiepileptic drugs act as sodium channel blockers, which decreases the likelihood that an action potential will be produced, since the neuron cannot depolarize as readily. Others increase the neuron's response to inhibitory neurotransmitters such as gamma-aminobutyric acid (GABA), which opens chloride ion channels. Decreasing a neuron's response to excitatory neurotransmitters is yet another way of reducing the likelihood that the neuron will produce an action potential. It is through these mechanisms that seizures are reduced or eliminated by chemical therapies. The new drugs, such as gabapentin, lamotrigine, levetiracetam, and zonisamide, have rates of success in reducing or eliminating patients' seizures similar to those of the older medications. In contrast to the older medications, however, more people can safely remain on the new antiepileptic drugs because they have fewer side effects.
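The logic of sodium channel blockade can be illustrated with a toy "leaky integrate-and-fire" neuron model. This is a standard textbook abstraction, not a model of any actual drug; the `na_block` parameter is an invented knob that simply scales down depolarizing input, mimicking the reduced excitability described above. All parameter values are arbitrary.

```python
def spikes(input_current, threshold=1.0, leak=0.9, na_block=0.0, steps=200):
    """Toy leaky integrate-and-fire neuron.

    na_block in [0, 1] scales down the depolarizing input, loosely mimicking
    a sodium-channel-blocking drug: the membrane potential climbs more slowly,
    so the neuron reaches threshold (fires) less often.
    """
    v, count = 0.0, 0
    for _ in range(steps):
        v = leak * v + (1.0 - na_block) * input_current  # leaky integration
        if v >= threshold:
            count += 1   # an "action potential" is produced
            v = 0.0      # reset after firing
    return count
```

In this sketch, partially "blocking" the channels (`na_block=0.3`) yields fewer spikes over the same interval than the unblocked case, and a strong enough block silences the toy neuron entirely, which is the qualitative point the paragraph above makes about reducing the likelihood of action potentials.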

Not all people respond to antiepileptic medication, and some are not good candidates for surgery. Many young children with epilepsy fit into this category: if they do not respond to drugs, or the side effects are too detrimental to their functioning, they are left only with the option of surgery, which doctors are reluctant to use because of concerns about brain development at that early age. Yet even this population is not without a form of treatment. The ketogenic diet, which consists of four parts fat to one part carbohydrate and one part protein, has been shown to control seizures effectively when other options are not available to the patient. (11) The mechanism of this treatment is still unknown, but there is clearly a link between the control of seizures and the way energy is consumed by the body. When the supply of carbohydrates is low, the body is forced to get most of its energy from fat, which is metabolized differently than carbohydrate. (11)
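The 4:1:1 ratio described above can be turned into simple arithmetic. The sketch below splits a hypothetical daily food weight by that ratio and estimates calories using the standard 9 kcal per gram of fat and 4 kcal per gram of carbohydrate or protein; the function and the 300 g figure are illustrative, not a prescribed meal plan.

```python
def ketogenic_breakdown(total_grams):
    """Split a daily food weight into the 4:1:1 (fat:carbohydrate:protein)
    ratio described above, and estimate calories at 9/4/4 kcal per gram."""
    part = total_grams / 6                      # 4 + 1 + 1 = 6 parts total
    fat, carb, protein = 4 * part, part, part
    kcal = 9 * fat + 4 * carb + 4 * protein
    return fat, carb, protein, kcal

# e.g. 300 g of food -> 200 g fat, 50 g carbohydrate, 50 g protein, 2200 kcal
```

Note that even though fat is only two-thirds of the food by weight here, it supplies roughly 80% of the calories (1800 of 2200 kcal), which is why the body on this diet is forced to draw most of its energy from fat.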

The view of seizures has altered greatly over time: some once blamed evil spirits for possessing the body, while later generations recognized epilepsy as a medical condition. The forms seizures take vary widely, both in their outward appearance to observers and in the events taking place within the brain. Treatments have been devised and refined as better research and detection strategies have been implemented. Today researchers are attempting to create treatments that would operate inside the brain, detecting when a seizure is imminent and releasing antiepileptic medication to specific regions of the brain to prevent action potentials from being produced there. (12) New, unforeseen problems may arise from these forms of treatment, but as new information is gathered there is continuing hope that more and more people will go through life without experiencing seizures.

References

1) The Bible, Mark 9.17-27.

2) Epilepsy Action: Tonic-Clonic Seizures

3) Dendle, P. (2001). Lupines, manganese, and devil-sickness: an Anglo-Saxon medical response to epilepsy. Bulletin of the History of Medicine, 75, 91-101.

4) Mechanisms of action of drugs for status epilepticus, describes the appearance of seizures

5) Tonic-Clonic Seizures

6) Simple Partial Seizures

7) Complex Partial Seizures

8) Wilson, C.L. (2004). Intracranial electrophysiological investigation of the human brain in patients with epilepsy: contributions to basic and clinical research. Experimental Neurology. Article in press.

9) Diaz-Arrastia, R., Agostini, M.A., & Van Ness, P.C. (2002). Evolving treatment strategies for epilepsy. Journal of the American Medical Association, 287(22), 2917-2920.

10) LaRoche, S.M. & Helmers, S.L. (2004). The new antiepileptic drugs. Journal of the American Medical Association, 291(5), 605-614.

11) Kwiterovich, P.O., Vining, E.P., Pyzik, P., Skolasky, R., & Freeman, J.M. (2003). Effect of a high-fat ketogenic diet on plasma levels of lipids, lipoproteins, and apolipoproteins in children. Journal of the American Medical Association, 290(7), 912-920.

12) Drury, I., Smith, B., Li, D., & Savit, R. (2003). Seizure prediction using scalp electroencephalogram. Experimental Neurology, 184, 9-18.




