

Biology 202 Spring 2004 Web Paper Forum



Does Testosterone Really Lead to Aggression?
Name: Cham Sante
Date: 2004-02-20 00:19:29
Link to this Comment: 8300



Biology 202
2004 First Web Paper
On Serendip

It is a common myth that testosterone causes aggression, but is there a biological reason to back up this assertion? Some say there is, emphatically rattling off statistics and experimental evidence, while others are armed with ambiguous or even contradictory information with which to contest the claim. The bottom line is that we do not know for sure whether or not testosterone causes aggression (how problematic the idea of cause and effect can be in biology!), and so at this point we must turn away from the enticing idea that there exists a clear and definitive answer to this question. We must instead turn our attention to evaluating the available information, in order to better understand the role of testosterone in guiding behavior.

According to theory from evolutionary biology, aggression serves an important function for both individual survival and procreation potential. What it comes down to is this: competition arises when resources are limited, and animals must therefore actively compete in order to increase their own fitness. It does not take a biologist to infer that aggression is advantageous at both the individual and genetic levels (1).

Hormones are inextricably linked to behavior, as seen by the impact that their presence or absence has on an organism. In terms of aggression, there exists intriguing evidence of a definite connection between the hormonal effects of testosterone and the outward expression of aggressive behavior (1). For example, castration experiments on various species show that castration leads to a marked decrease in aggression. Furthermore, when testosterone is replaced through hormone therapy in these castrated animals, aggression increases and is restored to its original pre-castration level (1). Taken together, this seems to present a strong argument for the role of testosterone in aggression. However, the story does not end here: if we are to suppose that testosterone does in fact lead to aggressive behavior, we must then ask how and why it does. In doing so, we might just find that the original supposition falls through.

Testosterone exerts its hormonal and behavioral effects upon interaction with androgen receptors (i.e., when converted into 5-alpha-dihydrotestosterone) or with estrogen receptors (i.e., when converted into estradiol by aromatase) (2). According to some, there exists a "critical time period" (i.e., during development) when testosterone serves to "sensitize" particular neural circuits in the brain. Presumably, this sensitization allows for the effects of testosterone that manifest in adulthood. A recent theory builds upon this story, adding the idea that almost immediately after birth, testosterone leads to the establishment of an "androgen-responsive system" in males. And what about females? It is presumed that a similar androgen system is set up in females, "although a greater exposure to androgens is required to induce male-like fighting" (2).

Although it is not the primary function of most hormones, they can modulate neural activity. For example, it has been shown that some hormones can modify cell permeability and therefore have a crucial impact on ion concentration, membrane potential, synaptic transmission, and thus neural communication and behavioral outcomes (2). More specifically, when a hormone such as testosterone acts on a target neuron, the amount of neurotransmitter that is released is significantly affected. For example, experimental data suggest that testosterone acts on serotonergic synapses and lowers the amount of 5-HT available for synaptic transmission. This is important when coupled with the fairly well accepted idea that 5-HT serves to inhibit aggression, as shown convincingly in studies of male rhesus monkeys: serotonin reuptake inhibitors such as fluoxetine, along with several other antidepressants, lead to a significant decrease in aggression in both monkeys and humans (2).

Although convincing relationships have been found between testosterone and aggression, hormones in general cannot cause a particular behavioral outcome; they can only facilitate or inhibit the likelihood that such an outcome will occur. For example, the mere presence or level of testosterone is not sufficient to invoke aggressive behavior, as seen in the significant population of males who are not aggressive. There must therefore be other factors involved: at the hormonal level, what about the effects of noradrenaline, acetylcholine, or glutamate? It is important to remember here that the endocrine system consists of a complex array of communication pathways, none of which act independently (2).

Furthermore, we know that biological factors do not act in a vacuum, and we must therefore concede significant impact from environmental and social factors as well. For example, some studies have found that testosterone level is not the best predictor of aggression; instead, obesity and lower levels of "good" cholesterol tend to be the best predictors of aggressive behavior in human males (3). Additionally, it has been shown that social status greatly influences the presence and degree of aggressive behavior in both animals and humans. Higher levels of social status correspond to higher levels of testosterone, although the quandary remains: is this elevated status a result of elevated testosterone levels and the evolutionarily advantageous aggressive behavior they might influence, or is the testosterone level a result of the heightened social status (i.e., building upon the well-supported idea that "winning" social competition leads to an increase in testosterone levels) (4)? It is the age-old nature versus nurture debate, or perhaps more appropriately, nature and nurture discussion.

To come full circle and reiterate this discussion's opening declaration: we do not know for sure whether or not testosterone leads to aggression. Therefore, any assertion of a causal relationship between the two is instantly problematic. Instead, we must continue to learn and to discuss the various possibilities with an open mind, in order to come to a better understanding of the role that testosterone and other hormones play in aggressive behavior.

Resources

1)Gender Website, a comprehensive cross-disciplinary approach to gender difference, touching upon areas such as Psychology, Genetics, Neurobiology, and Development, to name a few.

2) Simpson, Katherine. The Role of Testosterone in Aggression. McGill Journal of Medicine, 2001. A thorough biological examination of aggression and the role that hormones play in facilitating/inhibiting aggressive behaviors. Many studies cited, comprehensible graphs presented. Available at: http://www.med.mcgill.ca/mjm/v06n01/v06p032/v06p032.pdf

3)DeNoon, Daniel. Don't Blame Testosterone for Aggression: Angry, Hostile Men Don't Have Extra Sex Hormone. WebMD Medical News, November 11, 2003. A newspaper article reporting on recent findings that Testosterone might not be the most important factor in aggression.

4)Steroids Website, a website dedicated to education regarding anabolic-androgenic steroids. Informative articles available such as: "Psychological and Behavioral Effects of Endogenous Testosterone Levels and Anabolic-Androgenic Steroids Among Males: A Review".



Schizophrenia
Name: Natalie Me
Date: 2004-02-22 12:54:44
Link to this Comment: 8351



I have always been interested in mental disorders, particularly ones with dramatic, debilitating effects on an individual's behavior and brain. Illnesses such as severe depression, bipolar disorder, and the more severe schizophrenia have intrigued me my whole life. Diseases of the mind seem to be uniquely connected to each other, and connected to humanity in a very intimate yet dissociative way. It seems to me that the brain is the great mystery of the universe, and an even greater mystery is that of the mind. With all our modern scientific advancements, what still prevents us from overcoming these obstacles as a society? Why can we not yet see into the mind enough to heal it from a disease like schizophrenia?

This course has prompted me to look at these issues more intensely, and as I go through other courses and encounter certain situations throughout the semester, I have been constantly reminded of this mind/body connection. A few weeks ago I was reading the January edition of Scientific American, which had an interesting article on schizophrenia. It spurred my interest, and I began to look online for more information about schizophrenia's symptoms, effects, treatment, and research. What do we know about this disease, and how are we beginning to answer the question of mind/body connectedness through the search for the cause and cure of schizophrenia?

Schizophrenia typically makes its appearance during a person's late teens and early twenties. This may differ between men and women, with women developing symptoms up until their thirties (1). I found this particularly interesting and wonder why this would be the time period for developing such a destructive disease. I wonder if it has anything to do with stress and the challenges of adolescence and early adulthood. Those times of particular pressure may force the brain to self-destruct in a way. Homelessness, poverty, and unemployment are often associated with schizophrenia; however, these are usually secondary to the illness's devastating effects (3).

Symptoms of schizophrenia vary widely. They range from observable behaviors such as apathy, decreased speech and movement, sleeping problems, poor health or appetite problems, and money management problems, to symptoms that are harder to detect, such as delusions and hallucinations, obsessive thoughts and compulsivity, sad or depressed mood, poor concentration, distrust, and anxiety (2). Unfortunately, many of these symptoms can be misdiagnosed because they mirror other mental illnesses. Early on, schizophrenia may be mistaken for depression or bipolar disorder and treated inappropriately. The most disturbing symptoms of schizophrenia are the distorted perceptions of reality caused by hallucinations and delusions, which can alter one's personality and turn a person into someone very different.


Treatment of schizophrenia also varies, with inconsistent results. Most frequently, medications are prescribed to inhibit the intense symptoms patients suffer. More often than not, treatment is a life-long management problem that depends on compliance and dedication from an individual who is not always competent enough to maintain such a regimen. Psychotherapy is sometimes used, though more frequently some form of counseling is offered to family members rather than the patients, due to the emotional burden that supporting someone with schizophrenia can cause (3). Combinations of different types of medications have been used to treat the varying symptoms of schizophrenia, including antipsychotic, antidepressant, and antianxiety medications. These medications, especially when taken in high doses and for long periods of time, may have seriously detrimental physical and mental side effects that further discourage patients from sticking with their harsh regimen of treatment (4).

Treatment also depends on the presumed causes of schizophrenia and is therefore always changing as new evidence supports one cause or another. This was my original question and interest in writing this paper: discovering the connection between cause and treatment, and the links between the mind and body. For a long time, medications were serotonin-dopamine antagonists, supposedly treating the deficit in the brain causing schizophrenia. More recently, though, it has been discovered that dopamine is not the primary agent causing the illness. Schizophrenia appears instead to arise from a multi-faceted system of breakdowns in the function of the brain.

This brings me to my main concern. According to the January Scientific American, "scientists have long viewed schizophrenia as arising out of a disturbance in a particular brain system – one in which brain cells communicate using a signaling chemical, or neurotransmitter, called dopamine" ((5), p. 50). It has recently been found, however, that in keeping with schizophrenia's attack on multiple systems, the disease may be caused by glutamate, a neurotransmitter that plays a role in many different functions of the brain. Scientists discovered that the NMDA glutamate receptor is blocked or inhibited in schizophrenia patients. Glutamate is a more "pervasive neurotransmitter," affecting dopamine receptors as well. This abnormality would also explain why dopamine was originally thought to be the agent responsible for schizophrenia. As I am not a biologist, some of this is a bit confusing to me.

What I have found interesting is the giant step forward in knowledge this discovery has provided. It answered a tremendous number of questions, including the question of how one neurotransmitter caused such a wide range of problems. Really, it did not; it was part of a larger process and malfunction. This indicates to me that perhaps the mind and body are connected at a more scientific level. We currently lack the explanatory skills and evidence to prove exactly how it works, but perhaps someday we may better comprehend the intricate ways in which our nervous system and intelligence operate.

I did have some additional questions after doing my research. From a social perspective, I wonder about the resources available both to the patients suffering from the disease and to those more peripherally affected, such as family members and friends. Why do many people suffering from schizophrenia end up on the street, without jobs and cut off from resources? I stumbled onto a web page listing the schizophrenia diagnosis criteria available to physicians online (6). The behaviors and symptoms listed were quite specific but also very familiar to me. Schizophrenia exists within our collective memory. I was reminded of the social-theoretical field of symbolic interactionism, which posits that people occupy the roles expected of them. The sociology of deviance and medical sociology are particularly applicable here. It is interesting to wonder, though, what effect a shared knowledge of the symptoms and behavior expected within the sick role has upon someone who finds themselves labeled 'mentally ill' or 'schizophrenic.'

There is a lot of information out there about schizophrenia, and I feel as though I have only scratched the surface. The social and psychological effects of the disease are far-reaching, not only for those who suffer from it but for family members, friends, and the wider society as well. It is good to know that progress is being made in the search for a cause and treatment, though I wonder if we will ever really know exactly how the mind interacts with the brain, why things go wrong, and how to fix them.



References

1) SCHIZOPHRENIA.COM

2) MENTALHEALTH.COM

3)PSYCHCENTRAL.COM

4)PSYCHOLOGYINFO.COM

5) Scientific American. January 2004. Volume 290, Number 1. By Daniel C. Javitt and Joseph T. Coyle. "Decoding Schizophrenia."

6)FPNOTEBOOK.COM


HOW DOES MARIJUANA AFFECT THE BRAIN?
Name: Akudo Ejel
Date: 2004-02-22 21:59:44
Link to this Comment: 8374

AKUDO EJELONU
NEUROBIOLOGY FIRST PAPER
SPRING 2004

HOW DOES MARIJUANA AFFECT THE BRAIN?

Pot, weed, grass, ganja, and skunk are some of the common words used to describe the dried-leaf drug known as marijuana. Marijuana comes from the cannabis plant and is "usually smoked or eaten to entice euphoria" (1). Throughout the years, there has been research on the negative and positive effects of marijuana on the human body and brain. Marijuana is frequently beneficial in the treatment of AIDS, cancer, glaucoma, multiple sclerosis, and chronic pain. However, researchers such as Jacques-Joseph Moreau have worked to explain how marijuana harms the functions of the central nervous system and hinders the user's memory and movement. The focus of my web paper is how the chemicals in marijuana, specifically cannabinoids and THC, affect the memory and emotions governed by a person's central nervous system.

Marijuana impinges on the central nervous system by attaching to the brain's neurons and interfering with normal communication between them. These nerves respond by altering their initial behavior. For example, if a nerve is supposed to assist in retrieving short-term memory, activated cannabinoid receptors make it do the opposite; so if one has to remember what he did five minutes ago after smoking a high dose of marijuana, he has trouble. The marijuana plant contains 400 chemicals, and 60 of them are cannabinoids: psychoactive compounds that are produced inside the body after cannabis is metabolized or are extracted from the cannabis plant. Cannabinoids are the active ingredients of marijuana. The most psychoactive cannabinoid in marijuana, and the one with the biggest impact on the brain, is tetrahydrocannabinol, or THC. THC is the main active ingredient in marijuana because it affects the brain by binding to and activating specific receptors, known as cannabinoid receptors. "These receptors control memory, thought, concentration, time and depth, and coordinated movement. THC also affects the production, release or re-uptake (a regulating mechanism) of various neurotransmitters." (2) Neurotransmitters are chemical messenger molecules that carry signals between neurons. Some of these effects are personality disturbances, depression, and chronic anxiety. Psychiatrists who treat schizophrenic patients advise them not to use this drug because marijuana can trigger severe mental disturbances and cause a relapse.

When one's memory is affected by a high dose of marijuana, short-term memory is the first to be impaired. Marijuana's damage to short-term memory occurs because THC alters the way information is processed by the hippocampus, a brain area responsible for memory formation. "One region of the brain that contains a lot of THC receptors is the hippocampus, which processes memory." (3) The hippocampus is the part of the brain that is important for memory, learning, and the integration of sensory experiences with emotions and motivation; it also converts information into short-term memory. "Because it is a steroid, THC acts on the hippocampus and inhibits memory retrieval." (4) THC also alters the way in which sensory information is interpreted. "When THC attaches to receptors in the hippocampus, it weakens the short-term memory" (5) and damages the nerve cells by creating structural changes in the hippocampus region of the brain. When a user takes a high dose of marijuana, new information does not register in the brain, may be lost from memory, and cannot be retrieved for more than a few minutes. There is also a decrease in the activity of nerve cells.

There are two types of memory behavior affected by marijuana: recognition memory and free recall. Recognition memory is the ability to recognize correct words. Users can usually recognize words that they saw before smoking, but they also claim to recognize words that they did not previously see. This mistake is known as a memory intrusion. Memory intrusions are also a consequence of THC's effect on free recall. "Marijuana disrupts the ability to freely recall words from a list that has been presented to an intoxicated subject." (6) For example, if a list of vocabulary words is presented to the intoxicated subject and a few minutes later they have to recall the words on the list, the only words they remember are the last group of words, not the words at the beginning of the list. This is an indication that their memory storage has been affected. "The absence of an effect at short term delay times indicates that cannabinoids did not impair the ability to perform the basic task, but instead produce a selective learning and/or memory deficit." (7)

I did an informal study with two college students (Student A and Student B), who both smoke marijuana every other week. The study was done an hour before, during, and after they were under the influence of the drug. Student A was watching television before she smoked marijuana; when asked which advertisements had played before the show started, she got four out of five answers correct. After this first section, she smoked a small dose of marijuana twice within an hour. Fifteen minutes after she smoked her last blunt, she continued her regular activity of watching sitcoms. When a commercial came on, I would ask her simple questions, like what had happened before the show went to a commercial break. Her responses were macro-answers about what was going on, but when I asked her what the main character was wearing, she did not remember. This was striking because the protagonist wore a bright yellow suit that my friend had commented on when the show began ten minutes earlier. Her short-term memory was weakened: she was only able to remember big-picture information, not small details. Though the results are interesting, I know that I might have had a different response from someone else, because it depends on how often the user smokes and whether they had a good memory prior to smoking weed.

Marijuana also impairs emotions. When smoking marijuana, the user may have uncontrollable laughter one minute and paranoia the next. This instant change in emotions has to do with the way that THC affects the brain's limbic system. The limbic system is another region of the brain that governs one's behavior and emotions; it also "coordinates activities between the visceral base-brain and the rest of the nervous system." (8) I will now use Student B to describe how emotions are affected by marijuana. Student B is an articulate young woman who has a troublesome relationship with her best friend, which gets her upset and tense. After she smoked one high dose of marijuana, her body was relaxed, but she had trouble formulating her thoughts clearly, talked in pieces, and was jubilant. It has been reported that a person needs a high dose of marijuana to be in a state of euphoria, and a high dose, measured as "15mg of THC[,] can cause increased heart rate, gross motor disturbances, and can lead to panic attacks." (9) Thankfully, Student B did not experience any of these extreme effects.

College students usually smoke marijuana because they are stressed over schoolwork and feel that marijuana can help them unwind. I have encountered marijuana smokers who are chilled and have no worries in the world, but after the effect of the drug wears off, they are sometimes back at tackling their problems, or in the same state they were in before the drug. The happiness that marijuana causes the user is not a lasting effect, because even though a user smokes weed to get away from the troubles of his or her own life, those problems must still be faced after the effects of the drug wear off. In a survey of college students, an organization called Parents: The Anti-Drug found that "compared to the light users, heavy marijuana users made more errors and had more difficulty sustaining attention." (10) This was evident in my second experiment with Student B, but not everyone who smokes high doses of marijuana experiences the same effect.

The chemicals in marijuana bring cognitive impairment and trouble with learning for the user. "Smoking [marijuana] causes some changes in the brain that are like those caused by cocaine, heroin, and alcohol. Some researchers believe that these changes may put a person more at risk of becoming addicted to other drugs such as cocaine and heroin." (11) To prevent such harm, one must be cautious of one's actions; those who do not do drugs do not risk harm. So please, the next time you light up, remember that your central nervous system and brain will be at risk.

1)Online Dictionary

2)Marijuana: The Brain's Response to Drugs, A Good Web Source

3)Mind Over Matter: Marijuana Series, A Good Web Source

4)Alcohol Addiction & The Limbic System, A Good Web Source

5)Marijuana: The Brain's Response to Drugs, A Good Web Source

6)Cellular and Molecular Mechanisms Underlying Learning and Memory Impairments Produced by Cannabinoids, A Good Web Source

7)Cellular and Molecular Mechanisms Underlying Learning and Memory Impairments Produced by Cannabinoids, A Good Web Source

8)Marijuana and the Brain by John Gettman. High Times, March, 1995, A Good Web Source

9)Alcohol Addiction & The Limbic System, A Good Web Source

10)Parents. The Anti-Drug. -- Drug Information, A Good Web Source

11)Marijuana: Marijuana Brain Effects, A Good Web Source


Chocolate on the Brain
Name: Kristen Co
Date: 2004-02-23 10:19:05
Link to this Comment: 8389



While thinking of things to put in a gift basket for a friend who was in the hospital, my roommate turned to me with some of her German chocolates and inquired if indeed it was true that chocolate makes a person happy. "It has something to do with endorphins in the brain, right?" she asked me. I decided to do some research. Does chocolate make you happy by affecting the brain? Intrigued, I turned to the Internet and searched for "chocolate on the brain." Lo and behold, I discovered that the over 300 chemicals that compose chocolate have numerous and varied effects on our bodies through the nervous system (1).

Chocolate can affect the brain by causing the release of certain neurotransmitters, the molecules that transmit signals between neurons. The amounts of particular neurotransmitters we have at any given time can have a great impact on our mood. "Happy" neurotransmitters such as endorphins and other opiates can help to reduce stress and lead to feelings of euphoria. At connections between neurons, neurotransmitters are released from the pre-synaptic membrane and travel across the synaptic cleft to react with receptors in the post-synaptic membrane. Receptors are specialized to react with particular molecules, which can trigger different responses in the connected neurons. The proper neurotransmitter can trigger certain emotions.

It turns out that my roommate was correct in her assertion that chocolate affects the levels of endorphins in the brain. Eating chocolate increases the levels of endorphins released into the brain, giving credence to the claim that chocolate is a comfort food. The endorphins work to lessen pain and decrease stress (2). Another common neurotransmitter affected by chocolate is serotonin. Serotonin is known as an anti-depressant. One of the chemicals which causes the release of serotonin is tryptophan found in, among other things, chocolate (1).

One of the more unique neurotransmitters released by chocolate is phenylethylamine. This so-called "chocolate amphetamine" causes changes in blood pressure and blood-sugar levels, leading to feelings of excitement and alertness (1). It works like amphetamines to elevate mood and decrease depression, but it does not result in the same tolerance or addiction (3). Phenylethylamine is also called the "love drug" because it quickens your pulse rate, producing a feeling similar to being in love (4).

Another interesting compound found in chocolate is the lipid anandamide. Anandamide is unique due to its resemblance to THC (tetrahydrocannabinol), a chemical found in marijuana. Both activate the same receptor, which causes the production of dopamine, a neurotransmitter that leads to the feelings of well-being that people associate with a high. Anandamide, found naturally in the brain, breaks down very rapidly. Besides adding to the levels of anandamide, chocolate also contains two other chemicals that work to slow the breakdown of anandamide, thus extending the feelings of well-being (4). Even though the anandamide in chocolate helps to create feelings of elation, the effect is not the same as that of the THC in marijuana. THC reacts with receptors more widely dispersed in the brain and is present in much larger amounts. It would take twenty-five pounds of chocolate to achieve a high similar to that of marijuana (1).

Theobromine is another chemical found in chocolate that can affect the nervous system. Besides having properties that can lead to mental and physical relaxation, it also acts as a stimulant similar to caffeine. It can increase alertness as well as cause headaches. There is much debate as to whether or not caffeine even exists in chocolate. Some scientists believe that it is the less potent theobromine which is solely responsible for the caffeine-like effects (5).

When examining the effects of chocolate on the nervous system, it is also important to point out that chocolate does not treat all nervous systems the same. Many animals, for example, can be killed by the chemicals in chocolate. Theobromine in particular does not metabolize as quickly in other animals such as dogs and horses (1).

Chocolate has a long history of association with feelings of well-being. It has been favored by people ranging from the ancient Aztecs to high-society Victorians to Popes. Chocolate also has a history as a known aphrodisiac (6). This makes sense when you combine phenylethylamine's ability to quicken the heart, the feelings of euphoria from anandamide, theobromine's power to cause relaxation, and the other neurotransmitters sending pleasurable feelings throughout the brain. Even the names associated with chocolate imply its power: anandamide is derived from the word "ananda," which is Sanskrit for bliss, and theobromine can be traced back to the Greek "theobroma," meaning "food of the gods" (6).

It seems to be true that eating chocolate can increase feelings of euphoria as well as decrease stress and pain, but is it possible that chocolate can be addictive? There are many people out there who consider themselves addicted to chocolate, partly because of its mood-enhancing qualities. Many questions, however, still remain regarding whether chocolate can, like the drugs with similar chemicals and effects, be an addictive substance. The majority of scientists seem to agree that chocolate is not addictive. Some go so far as to say that chocolate is merely a kind of placebo that only causes these effects because people believe that it will. Chemicals such as phenylethylamine and anandamide can be found in other edibles in much greater amounts, but those foods don't seem to have the same effect (1). There are plenty of self-professed chocoholics out there who would, however, refute this claim and who continue to proclaim the wonders of chocolate.

It is also important to remember that not all chocolate is created equal. The strength of chocolate depends greatly on how it is manufactured. The cacao bean, from which chocolate is derived, has a naturally bitter taste and is greatly diluted by sugars and other ingredients. In the United States, something needs only to have 10% cacao in it to be considered chocolate (5). When examining my roommate's collection, most of which is from Germany, I found that cacao levels were around 30%, the dark chocolate being slightly higher. It seems that in diluted chocolate, the effects would be minimal.

I think it is quite fascinating that a food such as chocolate can have such an effect on the operations of our brain and thus our perceptions of the world. Since I met my roommate over a year ago, I have significantly increased my chocolate intake. I also think I'm a happier person than I was before we met. Could it be that the chocolate I consume now almost on a daily basis has something to do with my subtle transformation in mood? I would like to think not, but it is an interesting thought. I do, however, instinctively find myself reaching over to the chocolate stash whenever I start feeling a little depressed or overwhelmed and it always seems to make me feel better.


References

1) BBC News
2) "Endorphins: The Body's Stress Fighters"
3) http://www.chocolate.org/refs/index.html
4) "All About Chocolate: Chocolate and your Health"
5) http://www.mrkland.com/fun/xocoatl/index.htm#SEL
6) "Chocolate: Melting the Myths"
7) Neuroscience for Kids: Chocolate and the Nervous System


Alzheimer's Disease: The Loss of One's Self
Name: Sarah Cald
Date: 2004-02-23 14:25:05
Link to this Comment: 8394


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Our class discussions of late have related behavioral characteristics to the anatomy of the brain. We have questioned what it is that defines a person's "self." What is it that processes various sensory inputs in an individual and formulates that individual's personal outputs, feelings, and attitudes in response to these inputs? For the time being, we have given the responsibility of input processing to the I-box. There are several mental illnesses that may accompany dementia, and a person suffering from one or more of them can be characterized as having "lost one's self" (1). In this paper, I hope to understand how Alzheimer's disease causes loss of memory and, eventually, the loss of one's "self." What factors of the disease determine how much of one's original "self" is lost from day to day?


Four and one-half million Americans are estimated to have Alzheimer's Disease (AD) to a greater or lesser degree (2). Alzheimer's disease is a complex condition that affects the brain and is considered a major public health problem for the United States. AD has a huge impact on individuals, families, the health care system, and society. While scientific research has enabled scientists to develop a better understanding of Alzheimer's, and consequently more effective diagnosis, effective treatments have been elusive. Overall, the disease remains enigmatic.

Alzheimer's disease was first observed and described in 1906 by the German physician Dr. Alois Alzheimer during the autopsy of a woman with dementia (2). Alzheimer's is an irreversible, progressive brain disease that slowly destroys memory and thinking skills; as it progresses, it eventually prevents those suffering from it from performing even simple tasks (4). Although once viewed as rare, AD has been shown by research to be the leading cause of dementia. Dementia is an umbrella term for several symptoms, all of which involve a decline in thinking and cognitive capabilities. Such symptoms include gradual memory loss, reasoning problems, judgment problems, learning difficulties, loss of language skills, and a decline in the ability to perform normal, routine tasks. People with dementia also experience personality and behavioral changes such as agitation, anxiety, delusions, and hallucinations (4). It is important to note that dementia is not a disease itself but a group of symptoms that usually accompanies a disease. Accordingly, dementia is not solely a result of Alzheimer's; it is also experienced in many related disorders of the brain.

The progression of AD varies widely and can last anywhere from 3 to 20 years. Alzheimer's first affects the areas of the brain that control memory and thinking skills; as the disease progresses, cells in other regions of the brain die as well (2). Researchers aren't certain of the causes of AD, and theories of its cause have ranged from the intake of excess aluminum from modern cookware to exposure to pesticides. At present, the causes remain open to scientific debate. What is known is that people with the disease have an abundance of two abnormal structures in the brain: plaques and tangles. Plaques are dense accumulations of a protein called beta-amyloid. Tangles are twisted fibers caused by changes in a protein called tau. The beta-amyloid plaques reside in the spaces between neurons in the brain, and the neurofibrillary tangles clump together inside the neurons. Plaques and tangles block the normal transport of electrical messages between the neurons that enable us to think, talk, remember, and move. As AD progresses, nerve cells die, the brain shrinks, and the ability to function deteriorates (5). The destruction and death of nerve cells causes the memory failure, personality changes, and other features of AD (5). To be sure, plaques and tangles develop in the brains of many older people; however, the brains of AD patients have them to a much greater extent. While there is strong evidence that these protein accumulations are involved in AD, their exact role in the disease continues to elude scientists.

The two biggest risk factors for getting AD seem to be genetic predisposition (about 30 percent of people who have AD have a family history of dementia) and age (6). As many as 10 percent of people 65 years of age and older have AD, and nearly 50 percent of people 85 years and older have the disease (6). Sporadic AD refers to cases of AD where no other blood relatives are affected by the disease; this type of AD occurs in about 75 percent of cases (4). In these cases, the risk of developing AD increases as a person gets older. The remaining 25 percent of AD cases are hereditary, which means they are caused by mutated genes and tend to cluster within families. These cases can be divided into early-onset disease (symptoms begin before 65 years of age) and late-onset disease (symptoms begin after age 65) (4). Scientists have identified several genes that play a role in early-onset AD, the rarer form of the disease, which strikes people as young as their 30s (7). Research has also identified a gene that produces a protein which may play a role in late-onset AD, although this is far from certain.

There is no cure for AD. While there are a number of treatment regimens, none are capable of reversing the effects of the disease, and their overall effectiveness is far from clear. Several drugs have been FDA-approved to treat some of the symptoms of AD in an attempt to improve the quality of life of those afflicted with the disease (7). Interestingly, some studies have shown that participating in mentally stimulating activities such as reading books, doing crossword puzzles, or going to museums may be associated with a reduced risk of AD (7). This "use-it-or-lose-it" theory postulates that repeated mental exercise may improve certain cognitive skills and make them less susceptible to brain damage (7).

While scientific research has furthered the understanding of AD, it has yet to address the possibility that the I-function in Alzheimer's patients is impaired. The protein build-up of plaques and tangles, as well as genetic mutations, play a role in the etiology of AD. However, no investigations have questioned how these factors affect the I-box. Alzheimer's is viewed as a disease that causes patients to lose their "self." If this is the case, and the I-function is the part of the human brain responsible for defining one's "self," then it would seem logical that AD directly affects the I-function. The possible connection between AD and the I-function is one worth investigating further. Perhaps insight into the I-box is the missing link in completely understanding the mechanism of Alzheimer's.

References

1) Alzheimer's Disease: A Family Affair and a Growing Social Problem

2) What is Alzheimer's

3) Alzheimer's Association: About Alzheimer's

4) The National Women's Health Information Center: Alzheimer's Disease

5) Alzheimer's Disease: Unraveling the Mystery

6) National Institute of Neurological Disorders and Stroke: Alzheimer's Disease Information Page

7) FDA: Alzheimer's Searching for a Cure


Health: Mind and Society I
Name: Aiham Korb
Date: 2004-02-23 20:25:30
Link to this Comment: 8403


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In the ethnographic study of disability, the subject shifts from THEM to US, from what is wrong with them to what is wrong with the culture that history has made for all of us, from what is wrong with them to what is wrong with the history that has made a THEM separate from an US, from what is wrong with them to what is right with them that they can tell us so well about the world we have all inherited. (1)



This study, the first of three papers, is intended to shed light on the effects of psychosocial factors on the human body and their influence on health. It will explain the physiological basis upon which the environment and society can promote poor health. Disability and pathology are symptoms of deeper problems; disease is the end product of malfunctioning systems. In the interest of better understanding the etiology of disease in human beings, we must recognize the many complex interacting systems that contribute to health and disease. For this, we must take a step back and consider basic questions such as "What is health?". As defined by the World Health Organization, "health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity" (2). In this definition, accepted by most countries in the United Nations, physical well-being is clearly only one of several factors that constitute good health. So let this be our point of departure, and let us ask next what problems face global health.

Ironically, poverty is still considered the number one problem linked with poor public health around the world. There seems to be a wide gap between the WHO's definition of health and how health is actually being approached. As society becomes more technologically advanced, the focus is shifting to a Bio-medical Model. With this specificity, the problem of health today may be one of limited perception. Our society seems to have forgotten the principles from which it departed on the quest for "Healthy People". For example, the World Health Report 2000 found that despite the fact that the U.S. health system spends a higher percentage of its GDP on health than any other nation, it ranked 37th out of 191 countries in overall performance (3). In 2003, the Census Bureau recorded more than 43.6 million Americans without health insurance (4). These absurd paradoxes of our society are grave symptoms of malfunctioning political and economic systems. Yet these figures are often forgotten because of excessive specialization on the physiological aspects of health. Therefore, we need to reconsider the larger point of view and the many variables that affect the health of individuals and populations. Just as the WHO's definition suggests, psychological and socio-economic well-being are essential to the overall formula of health.

It is the awareness and integration of these "global" factors that we will attempt to introduce with PsychoNeuroImmunology. This field studies the interactions between the mind, the Nervous System, and the Immune System. PsychoNeuroImmunology will help us establish a bridge between the material (biological and physiological) factors and "non-material" (societal, economic, political) factors that affect health and disease. The Nervous System, the brain in particular, is at the center of those interactions. It is the principal link between the mind (or the mental state) and the body's immune system. There are several existing models that try to map these complex interactions. For example, Kemeny's X-Y-Z model investigates the linkages between psychological processes, physiological mediators, and disease progression (5). Kemeny indicates that "the brain is the most proximal physiological substrate through which psychological factors act on peripheral neural systems [...] to affect pathophysiological mechanisms and clinical disease" (5). Another model where the Nervous System is at the heart of the interacting factors is Costanzo's Biopsychosocial Model (6). This is a more complete model, integrating the psychosocial, biological, and behavioral catalysts of health. According to the Biopsychosocial Model, these factors affect, via stress, the Neuroendocrine and Immune Systems, which in turn determine disease vulnerability and progression. Thus, by mapping those interactions, it provides us with the mechanisms of mind-body relations in disease. Costanzo asserts: "Interactions between psychosocial and immunologic factors are relevant to a variety of diseases including inflammatory diseases, cardiovascular disease, infectious diseases, cancer, diabetes, osteoporosis, muscle wasting, and multiple sclerosis, and processes such as wound healing, surgical recovery, and efficacy of vaccination" (6).
It is true that various longitudinal studies have shown stress to be strongly related to heart disease, especially among people of low socio-economic status. Also, loneliness and social isolation have been linked to increased morbidity and mortality. In order to approach these issues, we will start by examining the mechanisms of interactions between the NeuroEndocrine and Immune Systems.

The "non-material" factors on health (psychological, social, economic and political) can affect the human body by inducing change in the physiological systems. This change is brought through by the working of the NeuroEndocrine and Immune Systems, and their effects on the rest of the body. Environmental events that are challenging, uncontrollable or unpredictable activate the body's stress or "fight-or-flight" response. This response triggers physiological and behavioral changes in taxing or threatening situations (7). The Sympathetic Nervous System promotes the release of hormones that affect both the Nervous and Immune Systems. "A key hormone shared by the central nervous and immune systems is corticotropin-releasing hormone (CRH); produced in the hypothalamus and several other brain regions, it unites the stress and immune responses" (7). CRH causes the pituitary gland to release adrenocorticotropin hormone (ACTH), which triggers the adrenal glands to make Cortisol. The HPA axis is composed of the Hypothalamus and the Pituitary gland, located in the brain, and the Adrenal glands, which lie above the kidneys. The HPA axis and its key hormone Cortisol, are major components of the NeuroEndocrine stress response. "Cortisol is a steroid hormone that increases the rate and strength of heart contractions" (7). Cortisol is also an immunosuppressor, a potent immunoregulator and anti-inflammatory agent. This is a key point, because this arousal is thought to be a mechanism by which the stress response affects health. It causes an increasing "wear and tear on bodily systems, and damage to arteries, neural systems, and organ systems, and reducing resistance to pathogenesis" (8). This emphasizes the inter-dependence of the nervous and immune systems, and indicates that the malfunction of their regulating mechanisms can have serious consequences on health. "The adoptive responses may themselves turn into stressors capable of producing disease" (7). 
Therefore, stress can have negative outcomes on health by dampening the functioning of the immune system and increasing the body's susceptibility to infections and diseases. "The regulation of the immune system by the neurohormonal stress system provides a biological basis for understanding how stress might affect these diseases" (7). It is upon this basis that we will develop the understanding of how psychosocial stress promotes pathology. For example, the feeling of loneliness in humans is associated with an adrenaline-like pattern of activation of the stress response and high blood pressure (7). Our attention will turn to such psychosocial catalysts of disease.

The disparity between the Bio-medical Model and public health is evidence that the integration of all the variables affecting health is lacking and needed. The Biopsychosocial Model is more comprehensive, and will thus help us in our approach to the problem of mind, society, and wellness. PsychoNeuroImmunology gives us a physiological basis upon which we can build the mechanisms of how social interactions, or the lack thereof, can affect health, for instance. In the next paper, we will explore stress and its correlation with socio-economic status. As we move from the biological basis to the "non-material" influences on health, we will begin to attain a wider picture of what well-being means. It will eventually make it possible and meaningful to raise such questions as "Do certain economic systems promote disease? Does a healthy economy necessarily mean a healthy population?"


Sources:

1)Culture as Disability, By Ray McDermott and Hervé Varenne. Serendip website.

2)World Health Organization

3)World Health Report 2000,WHO archives.

4)AMA decries rise in number of uninsured Americans, American Medical Association. Sept. 30, 2003.

5)An interdisciplinary research model to investigate psychosocial cofactors in disease: Application to HIV-1 pathogenesis, By Margaret Kemeny. Brain, Behavior, and Immunity 17. 2003. p. S62-S72.

6)Psychoneuroimmunology and health psychology: An integrative model, By Erin Costanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

7) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.

8) HEALTH PSYCHOLOGY: Mapping Biobehavioral Contributions to Health and Illness. By Andrew Baum and Donna Posluszny. Annual Review of Psychology. 1999. 50:137-163.


Stockholm Syndrome: Unequal Power Relationships
Name: Katina Kra
Date: 2004-02-23 20:40:40
Link to this Comment: 8404

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

On August 23rd, 1973, Jan-Erik Olsson began a bank robbery that would add a new interpretation to the world's view of hostage situations and the psychological effects of unequal power relations. It started with the storming of a local Kreditbanken branch in downtown Stockholm, Sweden, and the shooting of a police officer who had gone in after Olsson. With this action began a six-day ordeal and hostage situation known as the Norrmalmstorg Robbery. Four hostages were taken into the bank's vault. Dynamite was strapped to them, and they were rigged to snare traps so that in case of a gas attack by the police, the hostages would be killed regardless of any rescue attempts. Three women and one man were confined to this small room, fighting to survive. (7) Yet when these captives were released, they had more sympathy for their captors than for the police who had rescued them, and went so far as to publicly decry their own rescue. Two of the hostages became friends with the captors, set up a fund to help pay the defense fees accrued through their trial, and continued to support their captors against the police even afterwards. (8) The psychiatrist Nils Bejerot named the captives' attachment to their abusers "Stockholm Syndrome," and from this case, a new behavioral attachment disorder began. (7)

In the hands of some psychologists, Stockholm Syndrome has proved an extensible term. It has been invoked to describe the effects of slavery upon the African-American psyche, abusive relationships between men and women, and other situations where the division of power within a relationship of any kind is severely unequal. (7) Though the situations are intuitively connected, there are important differences in the way the term is applied. Consequently, it is necessary to examine the different interpretations of how Stockholm Syndrome occurs within these power situations, and the reactions and strategies of the subjects who are confined to them.

The "misplaced" attachment of subjects to their abusers is not uncommon, and has been documented in many different contexts. It happens in abused children and women, cults, controlling relationships, prisoner of war camps, and other people or institutions that enforce unreasonable control on those who have no recourse. Stockholm Syndrome itself is most commonly perceived to occur with hostage situations, with the logic behind developing this relationship with an abuser or captor is in the interest of self-protection. (9) This development occurs when there are perceived threats of violence, disempowerment of the subject, high levels of stress or trauma upon subject, and ultimate dependence upon the person in control for base survival. (2)

In an act of self-delusion, the victim of Stockholm Syndrome develops conditions in order to reassure themselves that they will be protected or cared for. By creating a false emotional attachment and seeking the praise and approval of their captor, they attempt to make a false reality for themselves in which no harm can come to them. And by defending and/or protecting their captors from the police or anyone who "comes to the rescue," they allow themselves to appear as if they have some control in a relationship in which they really have no power. The value of their lives, which the captor grants, is seen as a sign of affection or love, and the captive wishes to reciprocate in order to maintain their own position at that time. By accepting a level of objectification that one should reject as a matter of basic human dignity, hostages or captives weaken their ability to control their emotions. This makes them malleable, and thus easily susceptible to the whims of their captors, and creates the unbalanced relationship of attachment between the captor and the captive. (2, 5, 6, 8)

Many associate the image of hostage and captor with Patricia Hearst and Elizabeth Smart. Both cases involve the kidnapping of a woman for the further pursuit of ideals by her captors. However, these cases can be distinguished by the varying ways that Stockholm Syndrome manipulates the emotions, behavior, and actions of its subject. Patricia Hearst was kidnapped from her home and locked in a closet, where she underwent severe psychological, physical, and sexual abuse before she became a member of the Symbionese Liberation Army. At the point where the members of the SLA began to give her more freedom and liberty to speak, she was given the opportunity to leave the SLA or join and help in their fight. (4, 10) However, Hearst, under the influence of Stockholm Syndrome, chose to remain with the group as a survival tactic.

"I knew that the real choice was the one which Cin had mentioned earlier: to join them or to be executed. They would never release me. They could not. I knew too much about them. He was testing me and I must pass the test or die." (4)
- Patricia Hearst

The effects of the trauma and abuse are clear here in what one might identify as Hearst's 'compromised survival instincts.' She would rather have stayed with those who had tortured her for nearly two months than risk affronting the SLA. After initiation, Hearst, dubbed "Tania" by the group, helped in a robbery; after the core of the SLA lost their power in a firefight with the Los Angeles Police Department, she was eventually apprehended and returned to her family. (4) Unlike the Norrmalmstorg hostages, she distanced herself from the group and her captors when she returned to her regular life, and insisted that her reasons for joining were purely self-protective. Perhaps Patricia Hearst, despite the abuse endured during her kidnapping, was not necessarily protecting the others by joining the SLA, but attempting to save herself by the actions she believed would help. (10)

In the case of Elizabeth Smart, a very different dynamic between captor and captive emerged. At the young age of fourteen, Smart's instincts of survival or protection were not as developed as Hearst's, and this lack of maturity resulted in the development of a strong bond between her and Brian Mitchell, resulting in intense Stockholm Syndrome. (3) This is exemplified by her failure to seek help. Only three days after her kidnapping, Smart heard her uncle searching and calling for her not far from her hidden location, but did not call out or draw attention to herself. (11) This lack of motivation to be rescued persisted throughout the nine months during which she was held hostage. Many people questioned her and her captors about who she was during this period, but she denied anything but what she had been told by Mitchell.

The evidence so far shows no physical abuse of Smart, but there was a constant subjection to threats, the trauma of the kidnapping itself, and propaganda forced upon her, all of which resulted in the breakdown of Smart's personal will, allowing a relationship of affection to develop toward her captor. (2) Even during her rescue, Smart was still reluctant, perhaps still believing the myths Mitchell had told her, or convinced that those helping her were hurting her by taking her away from this man to whom she had become so attached. (11) Unlike Hearst, Smart did not speak out against her captor once she had returned to regular life, despite an angry and vocal family. She remained silent about what occurred during the nine months she was under his control, and did not defend her choice to avoid seeking rescue. A predominant sign of Stockholm Syndrome is this sympathy and compassion toward one's captor, and even though Smart did not outwardly explain this relationship as the hostages in Sweden had, she showed remnants of it even after she was returned home. (2, 5, 8)

Both of these cases exhibit Stockholm Syndrome through the hostage scenario, but there are many other situations in which its dynamics can be identified. In the mid-19th century, many African-Americans reportedly felt betrayed by Lincoln when his government emancipated them, and some adamantly refused to leave their masters even when granted freedom. Though slaves were confined to the area over which their master presided and lived with the lingering fear of violence, they could still claim certain areas of their lives as their own, and were not generally as directly threatened as hostages are. Even so, the legacy of domination and abuse manifested itself in these "one-sided relationships" in which African-American slaves remained devoted to their masters despite the cruelty they had endured. (1) "Indeed, the regulation of behavior and the resultant adjustment that was made had a direct influence on the consequent formation of the slave's personality." (Huddleston-Mattai, 347) Consequently, this pattern of domination by those with money and power, typically European, over African-Americans is still prevalent today, as parts of society still hold that they are inferior, as in a master-and-slave complex.

Not all potential subjects placed in these situations react in a way that engenders Stockholm Syndrome. Many in similarly unequal power relationships seek revenge or escape as soon as it is offered. Bank hostages have held their captor to the window to be shot (8), and slaves have killed their masters in rage, so one cannot assume that there exists a 'hard and fast rule' that captives will come to inappropriately identify with their captors when placed in such survival scenarios. Strong morals and beliefs are personality traits that may attenuate Stockholm Syndrome in some people. (2) The rapid change in Elizabeth Smart may perhaps be attributed to her age, her inexperience and lack of clearly formed values, and her desire for acceptance and obedience.

As a basic concept, Stockholm Syndrome arises from the duality of a power relationship over someone. A captured person becomes deeply involved with the captor because of the typical confinement of the circumstances, and because, even through the abuse and threats, they must still accept the captor as the only source of contact and nurturing focused on them. The need, under duress, for approval and reassurance, combined with a fear of severe punishment, creates the precondition for the type of aberrant attachment described as Stockholm Syndrome. Nevertheless, its specific consequences, whether for Elizabeth Smart, Patricia Hearst, or a generalized category of victims such as African-Americans, are highly variable, and so more careful clinical examination would be merited in order to define the ways in which Stockholm Syndrome affects those who experience it.

References

1. Huddleston-Mattai, Barbara. "The Sambo Mentality and the Stockholm Syndrome Revisited: Another Dimension to an Examination of the Plight of the African American." Journal of Black Studies. Vol. 23, No. 3, pg. 344-357
2. A site about Elizabeth Smart and Stockholm Syndrome.
3. An article written about the expert opinion involving Elizabeth Smart.
4. A site about the Patricia Hearst kidnapping.
5. The dictionary of Peace and their ideas about Stockholm Syndrome.
6. A site describing the symptoms of Stockholm Syndrome.
7. An encyclopedia entry that discusses the Norrmalmstorg robbery.
8. An article written about the mental health issues of Stockholm Syndrome.
9. A site describing Stockholm Syndrome as applied to abusive relationships.
10. An interview with Patricia Hearst and the effects of her kidnapping.
11. A site describing Elizabeth Smart after her rescue.


Fear and Anxiety: Post-Traumatic Stress Disorder
Name: Amy Gao
Date: 2004-02-23 22:30:39
Link to this Comment: 8408

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Almost all of us, at some point in our lives, will experience emotionally perturbing events such as the loss of a loved one, violence, sudden disaster, or other events that seem to spin our lives out of control. Even though time may eventually help to dim the memories of such tragic events, and many people will come to terms with and accept these losses, many individuals may remain emotionally scarred by their experiences.

Post-traumatic stress disorder, or PTSD, is an anxiety disorder associated with the reactions that an individual has in response to a traumatic emotional event. The incident can be one that has directly affected the individual or one that the individual has witnessed. In adults, symptoms of the disorder include flashbacks and dreams associated with the event, feelings of detachment or estrangement from others, and a marked loss of interest in activities that the individual once avidly participated in. (1)

It is estimated that PTSD may affect 3 to 6 percent of adults in the United States (1), which accounts for around 5.2 million Americans. (4) Women are twice as likely to be afflicted with this syndrome as men, and reports indicate that substance abuse and other anxiety-related disorders may occur concurrently with PTSD. (4) Studies have also indicated that individuals with histories of emotional disorder, substance abuse, anxiety, or membership in a dysfunctional family may be predisposed to PTSD more than people without such histories. (1)
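The prevalence range above can be sanity-checked against the 5.2 million figure with some back-of-envelope arithmetic. In the Python sketch below, the adult-population denominator of roughly 145 million is my own assumption for illustration, not a figure from the cited sources:

```python
# Back-of-envelope check of the prevalence figures in the text.
# The adult-population denominator is an illustrative assumption.
ADULT_POPULATION = 145_000_000  # assumed U.S. adults in the surveyed range

def affected(prevalence_pct, population=ADULT_POPULATION):
    """People affected at a given prevalence percentage."""
    return population * prevalence_pct / 100.0

low = affected(3.0)   # lower bound of the 3-6% range
high = affected(6.0)  # upper bound of the 3-6% range
mid = affected(3.6)   # prevalence consistent with ~5.2 million

print(round(low / 1e6, 1), round(high / 1e6, 1))  # 4.3 8.7 (millions)
print(round(mid / 1e6, 1))                        # 5.2 (millions)
```

On that assumed denominator, the 3-6 percent range corresponds to roughly 4 to 9 million people, and a prevalence near 3.6 percent reproduces the 5.2 million headline figure.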

One example of what can trigger PTSD is the tragic events of September 11, 2001. Indisputably, all of us experienced some state of shock and disbelief at the horrendous acts committed, some of us more than others. Take, for instance, a study that found high levels of PTSD among New York residents who lived in the vicinity of the World Trade Center; it also found that the farther away from the disaster epicenter people lived, the lower the incidence of PTSD. (3) This appears to suggest that the closer an individual is to the disaster scene, or the more closely related to it, the higher the chance of the individual being afflicted with PTSD.

In addition to the aforementioned symptoms, some studies have found an association between poor physical health and PTSD: individuals afflicted with PTSD are more likely to have physical health problems than those without the disorder. (2) The research so far seems to suggest that those who are not in the prime of physical health are either more vulnerable to PTSD or more likely to be diagnosed with it. Further exploration of the cause-and-effect link between the two is necessary, since the data supporting this theory have come only from veteran populations.

Research attempting to link PTSD to the brain has focused on the areas believed to be involved in anxiety and fear, the emotional response triggered when an individual faces danger. Studies have found that the amygdala, a complex structure deep inside the brain, is responsible for the fear response that activates many of the body's protective mechanisms. If this holds true, it stands to reason that a malfunctioning amygdala could contribute to anxiety disorders, including PTSD. (4)

PTSD patients have also been found to secrete uncharacteristic levels of hormones when responding to stress. Opiates are pain-relieving substances produced when people are in danger, and PTSD patients have been found to maintain high opiate levels even after the danger has passed, which may be associated with the dissociative symptoms also observed in individuals with PTSD. (4) Moreover, cortisol, a steroid hormone released from the adrenal cortex during stress that prepares the individual to deal with stressors and ensures that the brain receives adequate energy, appears to be lower than normal in these patients. (5) Epinephrine, the "fight-or-flight" hormone secreted by the adrenal medulla that increases metabolism, and norepinephrine, a neurotransmitter released during stress that activates the hippocampus, the section of the brain responsible for long-term memory, are both found at higher than normal levels in PTSD patients. (4) Since norepinephrine remains elevated even after the triggering event has passed, it may exert an unusually strong and prolonged effect on the hippocampus, which may explain why individuals with PTSD often have recurring flashbacks.

There are many ways to rehabilitate a PTSD patient. Treatments include antidepressant medication that may help relieve some of the symptoms, behavioral therapy that focuses on changing PTSD-related behavior, and family therapy that works with the families of patients who may have been affected by the patient's behavior.

PTSD is one way that an individual responds to extreme stress from traumatic events. If diagnosed in time and treated properly, it is an illness that can often be treated successfully. Further research is needed to determine whether other parts of the brain contribute to the abnormal hormone levels seen in PTSD patients, and more concrete evidence is needed to establish which individuals are most predisposed to the disorder; even so, PTSD is no longer as shrouded in mystery as many other mental disorders.

References

(1)The Mayo Clinic, The Mayo Clinic on PTSD

(2)National Center for Post-Traumatic Stress Disorder

(3)National Institute on Drug Abuse, Depression, PTSD, and substance abuse increase in wake of September 11, 2001 attacks

(4) National Institute of Mental Health, The National Institute of Mental Health on PTSD

(5) Medline Plus Medical Encyclopedia, Medline Plus Medical Encyclopedia on definition of cortisol


Studying Functional Differences in the Adolescent
Name: Elizabeth
Date: 2004-02-23 22:53:14
Link to this Comment: 8413


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Adolescents between the ages of 13 and 19 tend to act impulsively and irrationally. Testing limits, experimenting, and acting without considering future consequences are all part of adolescent behavior according to Dr. Laurence Steinberg of Temple University (1). He states that teenage self-regulation of impulsive behavior does not appear to mature until later in adolescence (1). The perceived rebellious actions of teenagers that were once dismissed as changes in hormones corresponding with the beginning of puberty may actually be due to functional differences in teenage brains. The behavioral differences between adults and teenagers become recognizable due to the increased freedom and decision making that adolescents acquire. Studying the variations in the brains of adolescents and adults provides evidence for the argument that the actions of the nervous system are responsible for observed behaviors.

Two studies have identified differences between adolescent and adult brains. One, conducted by Dr. Arthur Toga of the Laboratory of Neuro Imaging at UCLA, demonstrates that children and adolescents from ages 12 to 16 have less myelination in the frontal lobes of the brain (2). The frontal lobes, located at the front of the cranium, have been identified as the area of the brain that dictates rational behavior and reasoned weighing of consequences (4). Myelin is an insulating lipid sheath, formed by glial cells, that wraps around nerve processes; myelinated processes conduct electrical signals from one neuron to another more effectively. The presence of more myelin in adult frontal lobes implies that more neural processes are connecting neurons together. Decreased myelination may mean that neurons in the frontal lobes of children and teenagers are not as interconnected and not as capable of communicating via passing signals as the neurons of adult frontal lobes, resulting in a decreased ability to make reasoned decisions. Dr. Jay Giedd of the National Institute of Mental Health also studied the adolescent brain, using magnetic resonance imaging. Dr. Giedd identified a growth period of the neuron bodies, or gray matter, in the prefrontal cortex, a specific section of the frontal lobes, at age 11 in girls and age 12 in boys (3). Though adolescents have more gray matter than adults, neurons continue to be connected throughout the teenage years, so development and use of the frontal lobes occurs gradually. Throughout adolescence, the brain prunes synapses and increases the myelination of certain processes in order to strengthen them (3). He concludes that the adolescent brain has not yet made adequate neural connections and can be shaped by activities throughout the maturation process (3).

If the connections in the frontal lobes of children and teens are not as developed as the brains of adults, another portion of the adolescent brain may be used in tasks where adults normally process inputs with their frontal lobes. In a study conducted by Dr. Deborah Yurelun-Todd of Harvard University, brain activity was scanned using functional magnetic resonance imaging (5). Both adults and adolescents from ages 11 to 17, who had no diagnosed psychological disorders or brain injuries, were asked to identify the emotion on pictures of faces on a computer screen (5). The expression of the picture shown to the participants was one of fear. The teens typically activated the amygdala while the adults activated the frontal lobes to perform the same task of identifying the expression (5). Because teens and adults are activating different portions of their brains to perform the same task, studying the function of the amygdala may provide an explanation for observed behavioral differences in adolescents and adults.

The amygdala is part of the limbic system and is responsible for emotional reactions. Dr. Jean-Marc Fellous states that the amygdala handles emotional processing and reactionary decision making, because lesions of this region interfere with emotional reactions (6). By using the area of the brain that associates situations with emotions, adolescents react in an impulsive manner more than a reasoned one. The increased activity of the amygdala in teens may occur because the frontal lobes have not yet developed a regulatory role in the nervous system. Dr. Richard Davidson of the University of Wisconsin-Madison found that 500 individuals who had decreased activity in their frontal lobes also had a decreased ability to regulate emotion (7). Davidson concludes that there may be some interaction between the amygdala and the frontal lobes (7). Like the individuals Davidson studied, adolescents may not be able to sufficiently regulate emotional processes because their frontal lobes have not matured. The impulsive behavior of adolescents may thus be due to increased reliance on the instinctual part of the brain while the area for rational thought, the frontal lobes, develops.

Further evidence that the nervous system produces all behavior would come from observing different behaviors corresponding to varying neural connections within the frontal lobes. If interactions of the nervous system are responsible for producing behavior, signals routed through different neurons would be expected to produce different types of behavior. Since the frontal lobes are still forming myelinated connections between neurons during adolescence, environmental factors can influence which connections develop. Dr. Giedd calls the period between ages 13 and 18, when connections are made, the "use it or lose it" principle (3). He says that the activities teens participate in will influence the connections made in the brain (3). If connections between neural processes are not properly made through sufficient stimulus, reduced function of the frontal lobes can result; different environmental inputs can thus influence the development of teenage frontal lobes (3). If Dr. Giedd is correct that connections can be influenced by different stimuli, then monitoring a child's behavior, setting rules, and seeing that they are obeyed should promote the development of regulatory connections in the frontal lobes. An individual who grows up in an environment where regulation of emotions is encouraged would be expected to have different myelinated processes than an individual raised where such activity is not promoted. Environmental inputs may therefore play an important role in forming the connections between neurons that lead to increased reasoning ability and self-regulation of emotional behavior. More studies are needed to support Dr. Giedd's theory; one potential study would map white matter, the myelinated processes, in children who grew up in various household environments.

The structure of the adolescent brain provides an explanation for the perceived teenage behavior of irrationality and impulsiveness. This behavior can be attributed to activation of the amygdala, the region of the brain responsible for emotional behavior. Mature frontal lobes may regulate the actions of the amygdala, allowing individuals to reason through situations instead of acting on instinct. Poor connections between neurons formed during adolescence may lead to less emotional regulation in adulthood. Since differences between adult and adolescent brains can be correlated with different types of behavior, the variations in the brains of adolescents and adults provide evidence that behaviors are produced by the activity of the nervous system. Future studies to further correlate adolescent behavior with functional brain differences could include functional magnetic resonance imaging of the use of the amygdala versus the frontal lobes, and further tests of whether the frontal lobes do regulate the activity of the amygdala.


References

1)The Study of Abnormal Psychopathology in Adolescence, This is the web version of Dr. Steinberg's paper that outlines some normal and abnormal adolescent behaviors.

2)Teenage Brain: A Work in Progress, This site from the National Institute of Mental Health presents several studies on the development of the teenage brain, mainly through MRI imaging.

3)Adolescent Brains are Works in Progress, This site from Frontline presents data obtained from Dr. Jay Giedd's studies of the development of the adolescent brain. Dr. Giedd focuses on prefrontal cortex development study, but also addresses Corpus Callosum and Cerebellum development.


4)Frontal Lobes, This site gives some background on frontal lobe structure and function. Some research on possible frontal lobe abnormalities and consequences are also presented.


5)Deciphering the Adolescent Brain, This is a web version of an article published in the Harvard University Gazette that presents the research performed by Dr. Deborah Yurelun-Todd. She studies the use of the amygdala as opposed to the frontal lobes in children and adolescents.


6)Emotional Circuits and Computational Neuroscience, This site is the online version of a paper by Dr. Jean-Marc Fellous and colleagues Jorge L. Armony and Joseph E. LeDoux. They determine that many emotional responses originate from the amygdala.


7)Brain's Inability to Regulate Emotion Linked to Impulsive Violence, Research conducted by Dr. Davidson on the regulatory role that the frontal lobes play is presented in this article.


The Many Aspects of the Ancient Egyptian "Self"
Name: Ariel Sing
Date: 2004-02-23 22:54:54
Link to this Comment: 8414


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

For the Ancient Egyptians the "afterlife" was a very important concept. Once a person died there were a number of steps that needed to be taken to ensure their continued existence. Mythologically the deceased person came before the god Osiris and denied having committed any offenses in their lifetime. The most famous and telling trial was the weighing of the heart. In this ceremony the feather of Ma'at, the goddess of truth, was weighed against the heart of the deceased. If the heart was not heavier than the feather the person was able to continue on into their afterlife, however if, because of sins, the heart was heavier than the feather, the soul of the person was devoured by a chimeric amalgam of hippopotamus, crocodile and lion. (1)

For each step of the journey to the after-world the Ancient Egyptians believed that their soul or "self" had a different aspect. There were five parts: the ka, the ba, the akh, the name and the shadow or shade. (2) Each of the aspects of the Ancient Egyptian "self" was unique, yet interrelated with the other four elements.

The ka was depicted in two ways: in many instances it was simply a smaller version of the individual. It is thought that this represented not a distinct facet of the individual but a way of depicting the ka as being within the person. (3) Alternatively, the ka was depicted as two upraised arms. Sometimes, such as in text, this symbol would be seen alone, though often it was attached to the top of the individual's head. There is no accurate way to translate the meaning of ka, but for the sake of discussion it is often referred to as "sustenance". (4) It was the primary factor differentiating a living person from a dead one. At birth, every ancient Egyptian received their ka, and it would stay with them until their death. (5) It was believed that when the god of creation, Khnum, formed the person on his potter's wheel, he also formed their ka. (6) After the person had died the ka still required food; this was supplied in the tomb, either in the form of actual food or symbolically as tomb paintings. The ka did not so much eat the food as absorb the life-energy of the offerings. At the moment of death the ka became dormant, and stayed thus until the end of the mummification process, when it was rejuvenated and the ba came to join it in the afterlife. (7)

The ba is the closest manifestation of the modern idea of a "soul". It was always illustrated as a bird with a human head, and sometimes with human arms. (8) Because of this avian depiction the ba is often connected with migratory birds, which were thought to be people's bas traveling from the tomb to the afterlife and back. (9) Humans were not the only creatures with bas; gods possessed them as well. For example, the Benu bird was considered the ba of Re, as the Apis bull was that of Osiris. (10) The ba comprised all of the nonphysical aspects of a person that defined them; it is sometimes considered the modern equivalent of personality. It was the role of the ba to travel to the ka in the afterlife in place of the body, which was unable to make this journey. Once it had reached the ka, the two joined aspects of the "self" were transformed into the akh. It should also be noted that without the ba, the body of the deceased could not survive, and thus the person's entire being would die. Two things were required for the ba to endure. First, it had to return to the body every night. Second, it required the same sustenance as a person; to supply this, food and drink were left in the tomb and their depictions were painted on the walls. (11)

The akh was the combination of the ba and the ka. This was the form of the "self" that lived in the after-world. (12) The akh was believed to have direct influence on the world of the living, for good or ill. In fact, when people believed themselves to be suffering from malice, they would write letters to the akh of dead people to ask for their forgiveness and beg pardon. (13) After the heart of the deceased person had been weighed and had been accepted into the after-world, the ka was allowed to join with the ba creating the akh. This new form was often portrayed as a mummified figure. However, the hieroglyphic form that describes it is the crested ibis. This akh was believed to be the link between the human and the divine, in fact, dead ancestors, who were not royal, were often given a place of exaltation in the house. (14) The akh was one of the aspects of the "self" that was allowed to freely wander the land, and thus able to interact with the living. The akh was believed to be forever the same, it never changed or perished. (15)

Names were given at the moment of birth to all children, for without a name a child never really existed and was thus unable to live. Often the name given was adapted from the name of a local deity or a god that was particularly powerful at the time. (16) The only way for a person's name to be preserved was to have it inscribed, either on texts within their tomb or directly onto the tomb wall itself. In fact, if one wished to eliminate a person's akh, indeed their entire being, one would remove all mention of the dead person, scratching out their name where it was carved in stone and destroying any textual reference to them. (17) One of the most famous examples of this is Hatshepsut, whose probable son (or stepson) ordered all examples of her name to be annihilated. Because of the power that the ancient Egyptians believed true names to hold, gods' true names were often never known. A god might have hundreds of names, but none would be the power-bearing true name. Conversely, if one knew the name of an evil spirit, it could be vanquished; the ritual words used were "I know you and I know your names." (18) One of the most telling examples of the power of the true name was the belief that Ptah, one of the creator gods, brought everything into being simply by speaking the name of each thing. (19)

The shadow (or shade, as it was also known) was a form of the "self" often represented by a darkened painting of the individual. Apparently it was imperative to protect the shadow from any harm, (20) although it was itself considered a form of defense for the individual. This protection was well known; even in the Valley of the Kings the tombs were built with the shadow of the sun taken into account. (21) This reverence for the shade is understandable given the intensity of the sun in Egypt: anything could rapidly become burned, so something that protected against that heat would be considered powerful. (22) In a similar vein, pharaohs were often depicted under the shade of a fan made of feathers or palm leaves. The final defining quality of the shadow was that it moved with tremendous speed and contained great power. (23)

As is clear from the above information, each aspect of the "self" was viewed as unique; each had its own purpose and use. There are, however, also many ways that these aspects are interwoven.

The ka and the ba are the most closely related. Both represent different portions of a person's personality. They are, in fact, so closely related that after death they become joined into one, the akh. It is thus clear how these three aspects relate to one another, and how without one, the rest would be powerless. The name and shadow are less obviously integrated. Both of these aspects were more closely tied to the world of the living than the ka, the ba, or the akh. The name was a continuous link to the living, just as the ka was; both could be affected by the actions of the living. The name was also similar to the ka because the two things a newborn child received were its name and its ka. The shadow was more closely associated with the ba. Both the shadow and the ba were thought to stay with the body after death, and through their presence the body was sustained and protected.

It can now be seen that the ancient Egyptian "self" was a complicated and intricate idea. By losing just a single aspect, the dead person was doomed; they would never enter the after-world but would simply cease to exist. It was this interdependence that created such a strong sense of self and unity among the ancient Egyptian people.

A beautiful example of this unity and power is the name of the pharaoh Akhenaten. When translated, the name conveys the idea that the pharaoh is the akh of the god Aten, the lord of light who creates shadow. Thus he combined into his name all five elements of the soul: the name itself, the ka and the ba joined as the sacred akh, and the shadow formed by the passing of the sun, Aten.

References

1) The Spirits of Nature: Religion of the Egyptians, a summary of the basic tenets of Ancient Egyptian religion

2)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

3)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

4)Ka, a summary of the ka

5)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

6)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

7)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

8)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

9)Ba, a summary of the ba

10)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

11)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

12)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

13)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

14)Akh, a summary of the akh

15)The Afterlife, a summary of the aspects of the ancient Egyptian afterlife

16)Name and Shadow, a summary of the two aspects of the soul, the name and the shadow

17)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

18)Names, a summary of the concept of names in ancient Egypt

19)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

20)Death in Ancient Egypt, a summary of the funerary customs of Ancient Egypt

21)Shewet, a summary of the shewet or shadow

22)The Concept of the Soul in Ancient Egypt, a summary of the five forms of the Ancient Egyptian soul

23)Name and Shadow, a summary of the two aspects of the soul, the name and the shadow


The Effect of Video Games on the Brain
Name: Eleni Kard
Date: 2004-02-23 23:03:13
Link to this Comment: 8415


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The effect of video games on the brain is a research area gaining popularity as the percentage of children and adults who play video games is on the rise. Some people believe violence in video games and in other media promotes violent behavior among viewers. While there is not sufficient data to validate this claim, there are a number of studies showing that video games can increase aggressive behavior and emotional outbursts, and decrease inhibitions. From a few of these studies, and from my own observations of children playing video games, it is quite obvious that the video games do have at least some effect on the behavior of the player. The extent and long range consequences of these behavior changes after one has turned off the video game are not so easily deduced. One source states that "While research on video games and aggressive behavior must be considered preliminary, it may be reasonably inferred from the more than 1,000 reports and studies on television violence that video game violence may also contribute to aggressive behavior and desensitization to violence" (1). Another study reports that "Hostility was increased both in subjects playing a highly aggressive video game and those playing a mildly aggressive video game. Subjects who had played the high-aggression game were significantly more anxious than other subjects" (2).

I had a chance to observe the effects of video games first hand on two boys, ages eight and ten, when I babysat them earlier in the semester. They were playing the video game "Mario Kart," which is really not a very violent game; the object is to win a car race by coming in first while maneuvering through different courses. When the younger brother won, the older brother got up and started kicking him and yelling insults! Later that day, the younger brother was playing another video game by himself, and when he could not beat the level, he threw down the controller, screamed at the TV screen, "Why are you doing this to me...?!" and burst into tears. I was shocked by this reaction and was not quite sure how to handle the situation. This game had brought an eight-year-old boy to tears, right in front of me. "Certainly, video games can make some people go nuts. You just have to look at some enthusiasts playing video games on their cellular phones, mumbling to themselves heatedly even though others are around them. At game centers (penny arcades), frustrated people punch or kick game machines without regard to making a spectacle of themselves" (3). From the above descriptions, it seems that players get somewhat "sucked" into the video game, becoming oblivious to their surroundings and much less inhibited about expressing their emotions. What types of changes occur in the brain to activate the behavior one exhibits when "sucked" into a video game?

Akio Mori, a professor at Tokyo's Nihon University, conducted a recent study observing the effects of video games on brain activity. He divided 260 people into three groups: those who rarely played video games, those who played between 1 and 3 hours three to four times a week, and those who played 2 to 7 hours each day. He then monitored "the beta waves that indicate liveliness and degree of tension in the prefrontal region of the brain, and alpha waves, which often appear when the brain is resting" (4). The results showed a greater decrease in beta waves the more one played video games. "Beta wave activity in people in the [highest amount of video game playing] was constantly near zero, even when they weren't playing, showing that they hardly used the prefrontal regions of their brains. Many of the people in this group told researchers that they got angry easily, couldn't concentrate, and had trouble associating with friends" (4). This suggests two important points: first, that the decrease in beta wave activity and in usage of the prefrontal region of the brain may correlate with aggressive behavior, and second, that the decrease in beta waves continued after the video game was turned off, implying a lasting effect. Another study found similar results, reporting that "Youths who are heavy gamers can end up with 'video-game brain,' in which key parts of the frontal region of their brain become chronically underused, altering moods" (5). This study also asserts that underuse of the frontal brain, encouraged by video games, can change moods and could account for aggressive and reclusive behavior. An important question arises: if the brain is so impacted by video games as to create behavioral changes, does that mean the brain perceives the games as real?
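The alpha and beta waves Mori compared are simply frequency bands of the EEG signal (roughly 8-13 Hz and 13-30 Hz, respectively). As a minimal sketch of what such a comparison involves, the code below computes band power from a synthetic signal; the sampling rate, amplitudes, and the signal itself are invented for illustration, and real EEG analysis uses recorded data and more careful spectral estimation.

```python
import numpy as np

# Synthetic "EEG": a strong 10 Hz (alpha-band) component, a weaker 20 Hz
# (beta-band) component, and some noise. These values are illustrative only.
fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)   # 4 seconds of signal
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 10 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.standard_normal(t.size))

# Spectral power via the FFT of the real-valued signal.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(eeg)) ** 2

def band_power(lo, hi):
    """Total spectral power between lo (inclusive) and hi (exclusive) Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

alpha = band_power(8, 13)     # alpha band: associated with a resting brain
beta = band_power(13, 30)     # beta band: associated with alert activity
print(f"beta/alpha power ratio: {beta / alpha:.2f}")
```

A study like Mori's would, in effect, compare ratios of this kind across subject groups and recording sessions; a beta band that stays near zero relative to alpha would indicate little prefrontal activation.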

Perhaps looking at what effects video games have on autonomic nerves can begin to answer that question. "'Many video games stir up tension and a feeling of fear, and there is a very real concern that this could have a long-term effect on the autonomic nerves,' Mori commented" (6). Autonomic nerves are those connected with involuntary internal organ processes, such as breathing and heart rate. "Heart rate can be altered by electrical signals from emotional centers in the brain or by signals from the chemical messengers called epinephrine (adrenaline) and norepinephrine. These hormones are released from the adrenal glands in response to danger..." (7). Multiple studies have reported that playing video games can significantly increase heart rate, blood pressure, and oxygen consumption. If studies show that heart rate is increased when playing video games, then it seems that the brain is responding to the video game as if the body is in real danger. Does repeated exposure to this "false" sense of danger have an effect on what the brain then perceives as real danger?

From the above studies and observations, video games do affect players in some ways, since it appears that players get so wrapped up in the game that they forget their surroundings and begin to see the game as a real quest. Studies have shown that playing video games can increase heart rate and blood pressure, as well as decrease prefrontal lobe activity while the person is playing. This could account for changes in the player's mood and cause him or her to become more aggressive or emotional. However, findings on the extent of these effects once video game playing has ceased are preliminary and need to be confirmed.

References

1)Mediascope website, highlights data from various scientific studies concerning video games.

2)Mediascope website, violent video games causing aggression.

3)Japan Today News website, an interesting news site and discussion board.

4)Mega Games website, a hardcore gaming site, including cheats, demos, and facts.

5)Beliefnet website, centers around spiritual, religious, and moral issues.

6)Sunday Herald online, a news resource.

7) Freeman, Scott. Biological Systems. New Jersey: Prentice Hall Inc., 2002.


Body Dysmorphic Disorder- A Brain Disease?
Name: Nicole Woo
Date: 2004-02-23 23:09:33
Link to this Comment: 8416


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Though Body Dysmorphic Disorder, commonly known as BDD, was first documented in the nineteenth century, it is not a well-known disorder. Despite this lack of recognition, however, BDD is not rare, affecting two percent of the population (3). As scientists attempt to discover more about this illness in order to learn how to treat it, psychological and sociocultural factors have been considered as possible causes of BDD.


When discussing the origins of BDD, scientists and patients have been inclined to attribute BDD to psychological factors. Many have felt that BDD arises from childhood trauma, resulting in channeled feelings of conflict, shame, or guilt (2). However, though such psychological factors do not seem to be causal on their own, it would be foolish to deny their influence on someone with a genetic predisposition towards BDD.


In addition to the psychological factors, sociocultural factors seem to influence BDD as well, mainly by exacerbating it. Many would be inclined to attribute the presence of BDD to the images with which our modern society is constantly bombarded, namely images of ideal beauty. On every magazine cover and every television channel, the message of the ideal, for both men and women, is displayed constantly. How could these impossibly perfect images not affect how individuals perceive themselves in comparison? There is no doubt that these images can increase the anxiety felt by anyone, particularly those with BDD, when compared to their own bodies. While images of ideal men and women could make anyone feel dissatisfied with their bodies, where does one draw the line between the desire to look more attractive and the obsession of those who suffer from BDD? While it is generally accepted that people like models and ballet dancers obsess over their bodies, there is another profession, though less well known for this preoccupation, that also has high rates of BDD: those who are involved in the arts (5). While no doubt these environments can increase the attention placed on the body, I would suggest that many dancers, models, and art historians are drawn to their respective professions because the focus is on appearance. While it may be unconscious, perhaps the people involved in these professions are already preoccupied with their bodies, and thus an occupation which demands constant surveillance of appearances appeals to them. As an art history major, ballet dancer, and former model, I ask myself, "Is my involvement in these industries a suggestion that I am predisposed towards this disorder?"
Though there may be an inclination on behalf of the modern observer to claim that the media has caused an unnatural obsession with appearances, the fact that cases of BDD were documented as early as 1886 is evidence that BDD predates the era of the supermodel (2). In addition, clues to where the causes of BDD originate can be seen in how patients respond to treatment. Currently, there is evidence that BDD responds to medications known as serotonin-reuptake inhibitors, suggesting that BDD results from a dysregulation of serotonin.

References

1)BDD Central, a helpful website discussing various aspects of BDD, including a forum where one can read the writings of those who suffer from BDD


2) Phillips, Katharine A. The Broken Mirror. New York:Oxford University Press, 1996.


3) Body Dysmorphic Disorder, a good resource for basic information on BDD


4) Facts Sheets: Realising Human Potential , a good source for statistics about BDD


5) Body of Work: art career linked to image , an article discussing occurrence of BDD among certain professions


Dreaming Through the i-box and the id-box
Name: Amar Patel
Date: 2004-02-23 23:47:21
Link to this Comment: 8418

<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dreaming has always been an enigma plaguing the studies of psychology and biology. Each of these fields offers a different interpretation of the reason for dreams and their effects on our own consciousness. From the start, one needs to define consciousness in terms that can be identified in both of the fields through which we will analyze dreams. When one looks at the nervous system and its general function as an input/output mechanism, one can interpret it through the "box" theory developed by Paul Grobstein, Ph.D. (1). The theory explicates the nervous system and its relation to consciousness. In this theory, the entire nervous system is a box in which a stimulus (input) travels through a complex pathway and appears as some output. There are many other intricacies, such as inputs which produce no output, or outputs which arise from no input, that are explained through self-initiating boxes within the nervous box. Additionally, there is an I-box which functions as the section of the nervous system that correlates to consciousness. This consciousness is where an individual holds his/her sense of "self" (1). Beginning with the psychological (Freudian) viewpoint and then continuing into the biological (physiological/developmental) interpretations of the dream state, we will come to understand their individual effects on the I-box theory.

When examining thoughts about dreaming from a psychological standpoint, one must look at the works of Sigmund Freud, a pioneer in the interpretation of dreams. During his time, little was known about the science behind the study of dreaming, which meant that there was more clinical speculation and less proven lab work behind his theories. Freud was only able to examine dreaming through patients who tried to recollect dreams after waking up, which proved to be inconsistent and rare. When he did get accounts of dreams, Freud was able to develop the notion that dreaming was the "royal road" to the unconscious (2). Freud saw the rare occurrence of dreams as forms of recalling the earliest events in one's life, with the undertone of one's desires and passions being fulfilled. He called this his "wish fulfillment" theory (3).

When this theory is applied to the notion of the I-box, we see some complications. Where is the unconscious in relation to the conscious box? One may say that the unconscious exists as a separate entity, another box which has its own inputs and outputs. Although this may be a temporary solution to Freud's interpretations, one understands that in dreams all the inputs (senses) and emotions are intact. Additionally, in the case of lucid dreams, the conscious extends far enough to gain control of the unconscious and fulfill its desire or will. These notions force a strong link between an I-box and another "unconscious" box. The most sufficient way to explain this theory, through the psychological standpoint of the ego (consciousness) versus the id (unconsciousness), would be to place the I-box inside this "Id" box. Since psychological accounts of consciousness state that the Id is the predecessor to the ego, the ego being merely an evolution of control imposed by society on the Id, the I-box can be seen as this ego. Since the Id is the precursor to the ego, one must also note that it holds greater importance in the sense of "self." This hierarchy places the Id in a more prominent space, around the evolved ego or I-box. Although this is quite a controversial step, it accounts for any of the psychotic-like episodes people experience in their dreams. Phenomena such as out-of-body experiences mean that people are thinking within the Id-box, but not within the context of the I-box. The idea of the Id-box containing the I-box supports Freud's and other psychologists' claim that the Id is present from birth and the ego is a product of the environment, meant to tame the Id.

Looking at the I-box function and its relation to dreams through the scope of physiology will better explicate the notion of an I-box within the Id-box. First of all, one must examine recent scientific knowledge of dreaming. The state of sleep most associated with dreaming is REM (rapid eye movement). Scientists discovered that this state of sleep can be measured with an EEG, which records the theta waves that the neocortex produces. In the REM stage of sleep, the theta waves are comparable to those of a waking person (2). Research has been conducted to help differentiate the state of consciousness in REM from that of a person in the waking state. The results show that "brain activation during waking is associated with noradrenalin, 5-hydroxytryptamine (5-HT) and acetylcholine-mediated neuromodulation, brain activation during REM is exclusively cholinergic..." (4). Essentially, the types of chemicals that are active during the dreaming state are distinguishable from those present in the waking state. Another important result from the same study shows that the role of the prefrontal cortex in dreaming and waking can explain some distinctions between the states of consciousness. The reduction of activity within components of the frontal lobe is what contributes to the change from the waking to the dreaming state. Additionally, when an individual enters the REM stage of dreaming, select portions of the posterior and medial prefrontal cortex are activated (4). Aside from the prefrontal cortex, the brainstem and occipital lobe (vision center) show increased activity, which also leads to the REM stage of dreaming (5). What these studies outline is the notion that there are, in fact, different areas of the brain that correspond to dreaming.
Additionally, there have been findings showing that prefrontal cortex activity in fact varies with the amplitude and frequency of the theta wave, indicating different stages of dreaming. These intricacies of the dream state maintain the notion that dreams must encompass some specialized functions in our brain, such as an I-box.
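The theta-wave comparisons these EEG studies rely on come down to measuring how much of a signal's power falls in a frequency band. As a rough illustration only (the 4-8 Hz theta band, sampling rate, and synthetic signals below are assumptions for the sketch, not the cited studies' methods):

```python
import numpy as np

def theta_band_power(signal, fs, band=(4.0, 8.0)):
    """Estimate power in the (assumed) 4-8 Hz theta band of an EEG trace."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return psd[in_band].sum()

# Synthetic traces: a 6 Hz oscillation buried in noise stands in for a
# theta-rich (REM-like) recording; pure noise stands in for a quiet one.
fs = 256                      # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(0)
rem_like = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
quiet = 0.5 * rng.standard_normal(t.size)

print(theta_band_power(rem_like, fs) > theta_band_power(quiet, fs))  # True
```

A recording dominated by theta-frequency oscillations scores much higher on this measure, which is the sense in which REM theta activity can be called "comparable to waking."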

Many scientists have provided evidence that dreaming states are nothing more than the brain attempting to unlearn any useless memories it acquired during the day (2). Although these studies are well documented, one notion that has not been addressed is how this would correspond to dream states in which there is little difference between the dream and reality. Jonathon Winson, Ph.D., therefore looked back at the evolutionary biology behind the anatomical components of REM and found that as evolution progressed in mammals, so did the process of REM. Dr. Winson established the theory that REM sleep was, in effect, a useful tool in animals because it was a process of relearning traits that were not coded in genes but which were still important to functioning in an animal's environment (2). These ideas help to establish the notion that perhaps the unconscious is something that incorporates the I-box, something essential to the behavioral patterns of all animals.

After examining the notion of dreaming through the Freudian definitions of the conscious and unconscious states, one can make the clear argument that the I-box is indeed supported within this state of unconsciousness, or "Id-box". The idea that dreams are always relative to the sense of an individual self contributes to the notion of an enveloped I-box. When taking this idea further through the biological aspects and noting the differences between the Id and I-boxes, one can see how dependent the Id becomes on the I-box. This leads to a fundamental conclusion that the Id-box does envelop the I-box, but only because the Id-box is the entirety of the nervous system. In stepping back from the struggle of the conscious versus the unconscious, one must note that there cannot be any action or input/output that does not lie within these two states. Dreaming is therefore the act of experiencing the Id-box with little to no support from the I-box.

References

1) "Getting It Less Wrong, the Brain's Way: Science, Pragmatism, and Multiplism," Paul Grobstein, Ph.D.

2) The Meaning of Dreams, Winson, Jonathon, Scientific American Online, 2002.

3) Interpretation of Freud's work: Domhoff, G. W. (2000). Moving Dream Theory Beyond Freud and Jung. Paper presented to the symposium "Beyond Freud and Jung?", Graduate Theological Union, Berkeley, CA, 9/23/2000.

4) The prefrontal cortex in sleep, Hobson, J. Allan, Muzur, Amir, et al., TRENDS in Cognitive Sciences, Vol. 6, No. 11, pp. 475-481.

5) General physiological interpretation of dreaming, R. Joseph, Ph.D.


The Gaps between Science and Behavior in Understan
Name: Debbie Han
Date: 2004-02-23 23:59:26
Link to this Comment: 8419


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In March of 2003, my sister, Christine was in a horrible accident. She tripped off the
platform of the subway station near our New York City apartment and fell into the gap
between the platform and the train; as a result she lost all of the tissue and skin on and
below her knee and is today a below-knee amputee. During the 5 operations needed to
remove the tattered limb and close the wound, the vascular orthopedic surgeons were able
to successfully save her upper leg and create a "residual limb" or "stump." The limb
remained swollen and discharged blood for a couple of months, but gradually, raised
blood vessels and a neuroma, a ball of nerve fibers, formed at the end of her stump (1).

Following the accident, I spent between 5 and 10 hours with Christine every day. I
monitored her convalescence as well as her initiation into a new life as a below-knee
amputee. Immediately after the amputation, she experienced phantom sensations in her
residual limb, which is common among amputees. It is believed that 50% to 80% of
amputees experience phantom pain (2). In an attempt to better
understand the behaviors of both my sister and her phantom limb, I researched the
scientific explanations for her behavior. To what extent is science helpful in
understanding my sister's case?

Phantom sensations vary in type and in degree. The types of different sensations felt by
amputees include warmth, itching, pressure, shocking, wetness, and the feeling that the
limb is in a certain position, among others. When they become cramping, stabbing, and
intense shocking, they are classified as phantom pains (2).

One of the original explanations of phantom sensations is rooted in the somatosensory cortex, the part of the brain presumed to produce sensation. According to this hypothesis, neuromas continue to create impulses, which travel through the spinal cord and thalamus to the somatosensory cortex. After a limb is amputated, the nerve paths still exist; therefore, stimulation anywhere along the nerve path to the homunculus (a part of the somatosensory cortex resembling a miniature map of the human body (3)) can elicit the same sensation as when the limb existed (2). This would imply that the brain is hard-wired and does not realize that the amputated limb no longer exists.

A subsequent hypothesis initiated by Ronald Melzack proposes that the origin of
phantom limbs is in the brain and more focused in the cerebrum than the somatosensory
cortex. According to Melzack, the brain has a neuromatrix (network of neurons) that
creates impulses indicating one's own body, which he calls the "neurosignature" (2). The matrix consists of 3 subunits: the classical sensory pathway,
the limbic system which manages emotion, and the cortical systems which recognize self
and assess sensory signals.

Melzack believes that sensory signals received from the periphery are evaluated by all
three systems and generated into a single output which then receives its specific
neurosignature. The neurosignature is determined by the neurons in the matrix and their
connectivity. The connectivity is determined for the most part by genes and less so by
experience (4). According to Melzack, neuromas can generate an
input which will subsequently travel through the same neuromatrix as a traditional
external input. As a result, a similar output would be generated and the limb would be
perceived to exist.

A new train of thought among scientists is that once an appendage is severed, the
receptive fields go silent and then become active again through other parts of the body.
Vilayanur Ramachandran at the University of California in San Diego has most
extensively studied this theory of cortical reorganization. Through experimentation with
this theory, Ramachandran found that while brushing the body surface of an amputee
with a Q-tip, he was able to evoke sensations in the phantom limb. There were localized
reference areas which yielded responses in the lost appendage. More specifically, Ramachandran found an area of the chest which corresponded to a lost leg and areas of the face and chin which corresponded to a lost arm. The localized field was not specific to a patient; rather, Ramachandran found the field on the chin area in a majority of the patients with arm amputations with whom he worked. Pressure and water on the
reference area would elicit responses in the phantom, as well (5).

Ramachandran also developed the mirror box technique. The mirror box technique
consists of a box which is halved by a mirror. The patient can only see one half of the
box. Once the patient puts his "good" leg into the box, the mirror produces a
"stereoisomeric image" (5) of the other leg. For example, if the
participant has an amputated right lower leg, he would put both legs on opposite sides of
the mirror and then the right half would be covered. The mirror mimics the left leg's
actions and the participant perceives this manipulated reflection as his right leg. When
participants kept their eyes open, 4 out of 5 patients claimed that they felt relief from
being able to move their once-phantom limb in or out of positions. This would imply that
the phantom limb is a creation of the brain and that relief can come from satisfying the
brain by maneuvering the image and making oneself believe that the phantom limb
actually exists.

In order to test each of these hypotheses, I compared the theories to my sister Christine's
actual behavior. Regarding phantom sensations, Christine most commonly feels
shocking and itching. The shocking is throughout her entire right leg, and the itching
emanates from what she perceives as her right foot. A few times, Christine has actually
felt as though her leg was wet like she had stepped in a puddle. According to the original
hypothesis on phantom sensations, neuromas can generate random signals and cause the
same sensations that had occurred prior to amputation. What would explain the feeling
of moisture covering her right foot when the neuroma at the end of Christine's stump was
not wet? It is plausible that random firings could cause feelings of shock and pressure,
but random signals have not yet been shown to cause the feeling of wetness in amputees.
This is still a mystery which science has not been able to answer.

If the brain is hard-wired but occasionally malleable to experiences, what type of
experience would interrupt the hard wiring? On numerous occasions, Christine has
tripped and tried to maintain her balance by landing on her right leg and has
unfortunately crashed down on her residual limb. Her brain tells her that she still has a
right lower leg, but when she looks down at her leg it is not there. In split-second
decisions, such as trying to break a fall, Christine instinctively tries to land on her right
leg. If the brain cannot be re-wired to recognize that her lower leg no longer exists, what
type of life experience merits resculpting of the neuromatrix? Is this a matter of habit
rather than a faulty neuromatrix?

According to Ramachandran's theory, sensations are referred to different locations
following amputation of a limb. The Q-tip method was intriguing and to test its findings,
I blindfolded Christine and brushed a Q-tip along her chest. If the chest was a reference
area for the lost leg, Ramachandran's theory could possibly explain phantom wetness.
The Q-tip method did not arouse any sensations in Christine's phantom limb. In addition,
the wet Q-tip test did not yield any results.

Proponents of the principles behind Ramachandran's mirror box technique believe that
phantom sensations are attributed to the brain. If the principles are valid, my sister
should receive a sense of satisfaction in believing that her limb receives the attention it
needs. For example, when her leg is itchy, if she can convince her brain that her leg is
being scratched, even though it is not, she should feel a sense of relief. In trying to
mimic what the mirror box provides for participants, I recommended the following
technique to Christine: I asked her to imagine that her leg was still intact and to scratch
where the foot would be located when the foot was itchy (when Christine has phantom
sensations and pains, she can envision where the feeling is radiating from). This method
provided no relief for Christine. Instead, she tapped and rubbed the bottom of her stump.
Since that method was unsuccessful, I asked her to monitor her own actions when she had
the itchy sensation in her phantom limb. Typically, when she is wearing her prosthesis,
she will unconsciously scratch the part of the prosthetic leg which corresponds to the part
of her leg or foot which feels itchy. She noticed that she would reach down, scratch, and
then realize afterwards that her leg was prosthetic when there was no relief from her
scratching. In Christine's case, Ramachandran's hypothesis was incorrect. Even when
she was "tricked" into believing her prosthetic leg was her own right leg, scratching it
offered no solace.

Although I have studied only Christine's case extensively, I asked other amputees to
contribute their own experiences while I was conducting my research. Three additional
amputees reported that the Q-tip test did not work and that nurturing the prosthesis did
not provide any relief for phantom pain. This leads me to believe that there is a gap
between the scientific explanations for phantom sensations and what I have witnessed in
Christine's behavior towards her phantom limb.

One of the hypotheses that seems reasonable for Christine's case is in the same vein as
Melzack's theory and is credited to Timothy Pons of the National Institute of Mental
Health. His studies indicate that other locations which were previously dormant along
the nerve path of an amputated limb are unmasked and that a "neural reorganization (6)" occurs following amputation. In addition, since Ramachandran
did have many successful case studies in both his aforementioned projects, the Q-tip test
and the mirrored box technique, it is plausible that Ramachandran's science helps to
understand the behavior of a certain population of amputees.

At this point in time, each scientific explanation for phantom limbs, sensations, and pains
seems to have credit-worthy aspects, as well as flaws in trying to understand phantom
sensations for amputees as a whole. The aforementioned research leads to further questions as to what extent the brain is pre-wired. Christine's behavior suggests that there is a lack of communication between physical reality and conscious and subconscious understandings, though experiences of amputees differ. It may therefore be worthwhile
for scientists to study phantom limbs on an individualized basis. The origin of phantom
sensations could be dependent upon the type of injury or the specific cause for
amputation. Comparing each explanation to Christine's actual behavior leads me to
believe that different amputees experience phantom sensations for diverse reasons and
that varying sensations are potentially caused by different mechanisms, as well.


References

1) Amputee-Related Terms, A glossary of amputee-related terms

2) Electromyography webpage, An overview of specific phantom pains

3) Neurological Theories , An interesting discussion on
phantom limbs in a less technical voice

4) "Phantom Limbs," Scientific American, April 1992, 120-126, a good foundation for understanding phantom limbs

5) Phantom Limb Disorder , A thorough overview of phantom limbs and
phantom sensations and research regarding the topics

6) Touching the Phantom , A fascinating description of the research
of Melzack, Pons, and Ramachandran

Further Reading


BBCi Website
, BBC interview with Vilayanur Ramachandran

Mirror Box Technique, A description of the mirror box technique


Can Science Replace Religion? Analyzing the Neurob
Name: Bradley Co
Date: 2004-02-24 00:17:38
Link to this Comment: 8421


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"As for Heaven and Hell, they exist right here on earth. It is up to you in which you choose to reside."

-Tom Robbins

Religion is a societal entity that has subsisted since the earliest record of man's existence. There are a multitude of religions as well as varying degrees of faith. Many religious convictions are based on spiritual knowledge or simple belief. Science, however, often searches for a physical and mechanical understanding. There are many issues on which science and religion clash, ranging from the beginning of life and evolution versus creationism to the idea of existence after death. As the advancement of science continues, physical explanations for life's occurrences are presented. Do these explanations disprove religious accounts? Will science eventually disprove religion and render it useless? This question is analyzed here through the occurrence of Near Death Experiences (NDE's).

An NDE is defined as "a lucid experience associated with perceived consciousness apart from the body occurring at the time of actual or threatened imminent death (1)." Death is the final, irreversible end (2); it is the permanent termination of all vital functions. The occurrence of an NDE is not a rarity. Throughout time and across the globe, NDE's have been described by many, and these accounts share several similarities. The commonalities of an NDE include a feeling of peace and connection with the universe, a sense of release from the body (often called an Out-of-Body Experience, or OBE), movement down a dark tunnel, the vision of a bright light, and visions of deities or other people from one's life (2). Not every NDE contains each of these events; these are merely the most commonly described. An NDE can range in magnitude from having all of these events occur to having none of them occur (2). There are two theories explaining the similarities among NDE's. The scientific explanation describes a situation in which a mixture of effects due to expectation, administered drugs, endorphins, anoxia, hypercarbia, and temporal lobe stimulation creates a unified core experience (3). The religious explanation claims that NDE's are a glimpse of existence after death: the unified core experience arises because there is a destination after the body dies, with a similar path for all. These two theories debate whether an NDE is simply neural activity preparing the body for death or a preview of the beyond. To further understand the occurrences of an NDE, neurobiological research is believed to have mapped the neural activity of an NDE.

The most common similarity of NDE's is the feeling of peace, tranquility, spirituality, and oneness with all (3). This occurrence has been discovered to be associated with the release of endorphins as well as interactions between the right and left superior parietal lobes (4) (5). The right portion of this area of the brain is known to be responsible for the sense of physical space and body awareness; it is responsible for orienting the body. The left portion of the parietal lobe is responsible for awareness of the self. During an NDE, neural activity in these areas shuts down. The result is an inability of the mind to distinguish between the self and non-self. All of space, time, and self becomes one (4) (5). Essentially, one feels oneself to be the infinite, rather than part of the infinite, because there is no realization of self. However, other parts of the brain are still functioning and thoughts are still occurring. These other thoughts are believed to be associated with the visions perceived (4). If a person's thoughts are focused on a deity or personal relation, then without the ability to comprehend self, time, and space, the person may in fact see an image of that focused thought, because visual neurons are still intact. It is the relation of neural inactivity in the parietal lobe combined with other activities within the human brain that is responsible for most aspects of an NDE (2) (3) (4).

The understanding of neural relationships during NDE's has culminated in the ability to reproduce each phenomenon in a controlled setting. It has been found that the intravenous administration of 50-100 mg of ketamine can safely reproduce all features of an NDE (2), and electrical stimulation of the right angular gyrus of the brain can safely reproduce an out-of-body experience (6). Scientific research has even explained why religion is emphasized during an NDE. Activation in the temporal lobe region, known as the "God Spot" (7), during an NDE is reported to stimulate religiously themed thoughts (8). This research has major implications in the battle of science versus religion. It provides evidence that specific brain activity can create the perception of religion and divinity. If this is true, then this brain activity could be turned off and, in effect, remove religion from our lives. Many wars would be stopped, borders would open up, life as we know it would change completely. However, there are many faults in this theory. The major error in the idea that understanding the mechanical brain activity of NDE's and religion makes them useless is the assumption that the experience exists only within the brain. Begley (5) uses an example of apple pie to illustrate this point. Upon the sight of a pie, the neural activity linking sight, smell, memory, and emotion can all be mapped quite clearly. However, this mapping of activity does not disprove the existence of the pie. This is precisely why the existence of God or any other religious deity or belief cannot be disproved. It is just as simple to believe that viewing the mechanics of the brain during an NDE or religious experience is like getting a glimpse of the tool or hardware used to experience religion (9). However, this does not prove the existence of a God, or any other belief, either.
It is the principle that understanding the neurobiological mechanics of religion cannot disprove or prove the existence of God, religion, or spirituality that makes it improbable that science will eliminate religion.

Believing that science will eventually do away with religion wrongly assumes that knowledge of the mechanics of the brain and universe is capable of eradicating the importance of religion to humankind. Religion is present in society for a plethora of reasons branching far beyond the mere belief in the existence of a God. The multitude of religions, deities, and even atheism is evidence of this. Among many, the reasons for religion include fear, comfort, stability, and tradition. The NDE provides an excellent example of one of the important roles of religion: belief in existence after death. Existence after death refutes the idea that we are simply organic material organized in a certain fashion with a certain time span of functionality. The religious belief that an NDE is a glimpse of our existence beyond life is valuable for people's behavior in life, not just as evidence of a theory. In very few NDE's do negative feelings occur. People often describe a "heavenly" light rather than a hell (1) (10). This may be because of the power of suggestion (3), in that it is a common societal belief that when a person dies they are supposed to see a tunnel, a light, an angel, and heaven. So when an NDE occurs, this is what the person sees because it follows their thought process. Not many people believe that when they die they are going to go to hell. The idea of the existence of a better place after death comforts and eases the pain of many who suffer in life. It can provide them with hope through troubling times, whether they believe in Jesus, Buddha, Elijah, or no God at all. Religion is a tool of mankind to sustain a belief. The reasons for that belief vary among people and religions, but the importance is in believing. Having a belief can instill a sense of pride, confidence, comfort, strength, and much more in a person. A single belief can provide a purpose for life. The actual beliefs of each religion are only important to the individual.
However, the idea of belief itself is important to the foundations of religion. The importance of religion to mankind makes it improbable that society will ever allow scientific understanding to overrule religion. Science may disprove religious stories such as Moses' parting of the Red Sea, but the importance of religion goes beyond the stories. Religion is indispensable because it is a belief. For this reason science is incapable of eliminating religion.


References

1)Near-Death Experience, Religion, and Spirituality, a religion and spirituality article related to NDE's
2)Ketamine Model of the NDE, Drug induced replication of the NDE
3) Blackmore, Susan. "Near Death Experiences," Journal of the Royal Society of Medicine. Vol. 89. February 1996, pp. 73-76.
4)Why God Won't Go Away: Brain Science and the Biology of Belief, Excerpts from the author
5) Begley, Sharon. Religion and the Brain. Newsweek, May 7, 2001, p. 50.
6) Blanke, O., Ortigue, S., Landis, T., Seeck, M. "Stimulating Illusory Own-Body Perceptions," Nature. Vol. 419. September 19, 2002. pp. 269-270.
7)God on the Brain, An article on the cross between neurobiology and faith
8)Meridian Institute, Transformational experiences
9)Tracing the Synapses of our Spirituality, Examination of brain and religion
10)Susan Blackmore Home Page, Experiences of Anoxia


Dreams
Name: Allison Ga
Date: 2004-02-24 00:45:22
Link to this Comment: 8422


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dreams are a product of the brain in ways that science cannot fully explain. I have always been fascinated by the ability of dreams to be extremely vivid and realistic. Almost anyone can identify with having a dream from which they awoke feeling as if they had just been active in some way, although they were at home in bed.

The most vivid and active dreams occur during the REM (Rapid Eye Movement) stage of sleep which occurs every ninety minutes (1). It has been found that during this cycle, brain activity is comparable to being awake. The implication of this information is that during sleep, the brain still processes and reacts to information without external influence. This enables the existence of intense imagery and the formulation of situations, dialogue, etc. The body is immobile although mental activity is extremely high (2). The restriction of the body prevents the dreamer from acting out any physical activity that may occur in the dream.

The subject of dreams and their role as part of the brain's functions cannot be discussed without Sigmund Freud's take on the dream world. In his Interpretation of Dreams he outlines dreams as wish-fulfillment, sending the message that the brain formulates images of something that is lacking in one's waking life (3). While it cannot be decided whether or not this is fundamentally the purpose or meaning of dreams, it is interesting to think that our brains may be trying to communicate a way to fulfill an existing void. Freud also attributes dream bizarreness to the mind's effort to cover up the true meaning of the dream and the subconscious desires that the conscious mind cannot deal with. Although most dreams may be "strange" or "weird", why would the subconscious go to such lengths to disguise true desires? I believe that Freud's psychoanalytic take on dreams is valuable in trying to understand our subconscious, but do his ideas necessarily apply to every dream? He believes that our hidden desires are trying to break through to our consciousness, but does that include dreams that simply depict a situation in life that is normal to the dreamer? For example, if someone anticipates a major event, such as giving a presentation or throwing a party, and they dream about this event either being a disaster or a success, does this necessarily communicate desires that are unacceptable to the conscious self? I do think that these "normal" dreams communicate hidden, or even obvious, anxieties and hopes, but Freud's focus on the dark side of the psyche might not always be applicable.

The extensive study of the symbolism of dreams has always fascinated me since it logically follows to wonder how the brain utilizes symbolic imagery to communicate to the conscious self. One example of symbolism from a dream book, which intends to help decipher and understand dreams, states that if a dream includes keys they represent power and access (4). While it may be somewhat obvious for us to think of possessing keys as having access, or wanting access to something—it is fascinating to think that the brain will substitute access with the possession of a key. If we do not have this conscious association with keys in our everyday life, how does the subconscious identify it in this way? I think that everyday objects have subconscious associations that we may not be consciously aware of.

The ability to dream communicates that the brain functions actively without the need to receive input from the external world. In our dreams, we create an alternate universe into which situations, places and people in our everyday lives take on symbolic value. The knowledge of REM sleep and Freud's interpretation of dreams contribute to further our understanding of the dream world and how dreaming involves our brain and the subconscious.

References

1)American Psychoanalytic Association, A helpful article on the current scientific stance on REM sleep and dreams.

2)The MIT Encyclopedia of Cognitive Sciences,A searchable reference outlet that contains more information on sleep, dreams and Freud.

3)Interpretation of Dreams, Freud's interpretation of the meaning of dreams.

4)Dreams, A book to help decipher dreams.


Parkinson's Disease
Name: Shirley Ra
Date: 2004-02-24 01:17:41
Link to this Comment: 8424


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The disease now known as Parkinson's disease was first described as the "shaking palsy" in 1817 by James Parkinson, for whom it was named. Parkinson's disease is a central nervous system disorder that affects approximately 1.5 million people in the United States alone. The disease results in a progressive and chronic loss of motor coordination, tremors, bradykinesia, and other severe impairments (1). Research has shown that men are slightly more prone to the disorder than women, though the exact reason has yet to be discovered. In addition, the onset of the disorder is usually seen in people around 60 years of age. The disorder has also been known to affect younger people; however, the rates of Parkinson's disease are extremely low in people under 40 years of age (1).

James Parkinson left us with the challenge of resolving the connection between Parkinson's disease and the nervous system. As a result of his work, we now ponder why it is that a person suffering from Parkinson's disease cannot control their movement, despite trying to do so. It would be interesting to explore how this disease affects the I-function.

Soon after James Parkinson described this "shaking palsy," researching the cause of the disease became a goal. Through postmortem examination of the brains of Parkinson's patients, it was hypothesized that the substantia nigra was involved in this loss of motor control and coordination (2). Researchers arrived at this conclusion by observing considerable amounts of apoptosis in the midbrain, specifically in the substantia nigra. With time, knowledge grew about neurotransmitters and their role in neurotransmission in the nervous system. This knowledge revealed that dopamine in the striatum of postmortem Parkinson's brains was 80% lower than in healthy individuals (2). The fact that Parkinson's patients experience low levels of dopamine and apoptosis in the substantia nigra led many scientists to hypothesize that the substantia nigra generates dopamine, and that the low levels of dopamine paired with apoptosis produce the symptoms of Parkinson's disease.

In short, Parkinson's disease is caused by the degeneration of neurons in the substantia nigra, which results in a decrease of dopamine. In addition, monoamine oxidase-B (MAO-B) breaks down dopamine in the synapse, further diminishing the dopamine that is left (3). Dopamine is vital for normal movement because it allows messages to be transmitted from the substantia nigra to the striatum, which then initiates and controls the ease of movement and balance (3). Furthermore, the loss of dopamine causes the neurons in the basal ganglia to fire randomly, accounting for involuntary movements.

Acetylcholine is another neurotransmitter that is needed to produce smooth movements. In normal individuals there is a balance between acetylcholine and dopamine. In Parkinson's patients there is not sufficient dopamine to maintain the balance with acetylcholine (3). This irregular disproportion results in a lack of movement coordination leading to the more overt symptoms of Parkinson's.

It seems as if our brain is controlling the movement of our bodies without the individual having control over the disease. It would be great if there were an explanation for the reduction of dopamine in the substantia nigra, but unfortunately there is no concrete answer. There are many theories which seek to explain the cause of Parkinson's. For example, some state that the disease is genetic (the "Parkin" gene), and others believe it is due to environmental toxins such as MPTP (4). MPTP has caused Parkinson's-like symptoms in drug abusers, as seen through PET scans. Other studies conducted in rural areas have shown a higher frequency of Parkinson's in locations where herbicides and pesticides are prominent (5). Additional suggested explanations for why dopamine-producing neurons degenerate are mitochondrial dysfunction and excitotoxicity (4). Extensive research is being conducted all over the world in an attempt to discover the definitive cause of Parkinson's disease. This is significant because once we identify what causes Parkinson's disease we can hope to prevent future occurrences of this disease as well as ultimately find a cure.

As mentioned earlier, there is no cure for Parkinson's disease. Therefore, the immediate goal of scientists is to find a drug that mimics dopamine, since dopamine itself cannot cross the blood-brain barrier. Researchers have thus far been successful in mapping the biological pathway of dopamine in the effort to replace the degenerating dopamine in the substantia nigra. This pathway shows that dopamine is derived from the amino acid tyrosine, which is converted into L-Dopa with the aid of the enzyme tyrosine hydroxylase. L-Dopa is then converted to dopamine by the enzyme L-aromatic amino acid decarboxylase (L-AADC).

This biological pathway allowed scientists to discover that L-Dopa is able to cross the blood-brain barrier, giving them hope that L-Dopa might be converted to dopamine once it arrived in the brain. L-Dopa was found to be effective in reducing the harsh symptoms of Parkinson's, meaning that it is in fact converted to dopamine in the brain. L-Dopa is effective in the brain because the nervous system becomes up-regulated and therefore craves the drug; in other words, the individual becomes highly sensitive to it. Unfortunately, L-Dopa also has severe side effects, such as nausea and vomiting. It was later found that these side effects were caused by the exposure of L-Dopa to L-AADC in the gastrointestinal tract. This was corrected by creating an L-AADC inhibitor that is unable to pass through the blood-brain barrier. The L-AADC inhibitor allowed dopamine to successfully increase in the brain. There are many drugs for Parkinson's disease, but L-Dopa seems to be the most effective.

The issue of administering drugs in order to decrease the symptoms of Parkinson's disease is relatively controversial, since such administration can create tolerance to the drugs. As a patient's tolerance increases, the drug becomes less effective and higher doses are required to control the symptoms of Parkinson's. This leads to a dilemma: when does a doctor prescribe L-Dopa, given that, due to the patient's progressively increasing tolerance, it cannot work forever? Does a doctor administer the drug during Parkinson's early stages, when symptoms are becoming apparent, or wait until Parkinson's is at its peak? It would be a tremendous success if there were a drug that could delay Parkinson's disease until the symptoms became severe, at which point L-Dopa could be administered to restore dopamine in the substantia nigra.

Surgeries and implantations of embryonic cells have also been suggested to control the symptoms of Parkinson's disease, but none has been proven effective thus far (6). Still, these ongoing efforts give us hope that the disease can be made as controllable as possible.

In essence, Parkinson's disease is a horrible disorder that afflicts many people all over the world. Unfortunately there is no cure for this disease, but many efforts are being made to control its prevalence. On a positive note, thanks to research on Parkinson's disease we have learned a lot about the human body and its intricacies. It is interesting to understand that malfunctions at a neuronal level can affect a person's life completely, in this case preventing people from controlling their movement. This research topic has allowed me to appreciate how complex we are as humans and how fortunate I am to be healthy. Furthermore, while researching Parkinson's disease I started thinking about the brain and behavior dichotomy. In this case it seems as though brain malfunctions are controlling behavior. So does brain actually equal behavior?

References:
1)National Institutes of Health , General Information on Parkinson's

2)Home Page, General Information on Parkinson's

3)Home Page, Brain and Parkinson's

4)Home Page, Causes of Parkinson's

5)Home Page, General Information

6)Home Page, Treatment of Parkinson's

7)Home Page, General Information

8)Home Page, Parkinson's and Pesticides


Intelligence Quoi?
Name: Amanda Gle
Date: 2004-02-24 01:21:27
Link to this Comment: 8425


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


"The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function."

-F. Scott Fitzgerald 1936 (Bartlett's Familiar Quotations 694:17)


IQ. Intelligence quotient is defined as "the ratio of tested mental age to biological age." (1) Intelligence is defined as "a. The capacity to acquire and apply knowledge. b. The faculty of thought and reason. c. Superior powers of mind." (1) The IQ allows people to quantify intelligence and to create a scale that everyone can understand. While people rarely stop to think about what intelligence actually is, it plays an important role in society. What is IQ, and why is it important in our society? What affects the outcome of an IQ test?

IQ is a measure that does not factor in talent, outward achievement, or accumulated knowledge. All the results are based on a closed test during which one's intelligence is assessed. Most tests are made up of several different sections. The WAIS test I took had four composites: the reading composite (word reading, reading comprehension, and pseudo-word decoding), the mathematics composite (numerical operations and math reasoning), the written language composite (spelling and written comprehension), and the oral composite (listening comprehension and oral expression). (2) The scores are then compiled into four indices: verbal comprehension, the perceptual organization index, the working memory index, and the processing speed index. These are all reported not just as scores but as percentiles. For example, on my POI I got a 97, which means that my score was higher than those of 97 out of 100 adults my own age. (2) These are combined into one's actual IQ. It is important that IQ is only compared among people of the same age, because brains are thought to continue developing until about age twenty-nine. (3)

The scores of these tests are placed on a curve. People of the same age are compared, and the score is calculated "in a proper sense with the mental age in the numerator and the chronological age in the denominator." (3) The test is what determines the mental age. The number that comes from this ratio determines the classification of the results. The classification varies from test to test, but in one scheme a person is below average if the IQ is under 85, average if it is between 85 and 115, and above average if it is above 115. An IQ between 75 and 85 is classified as debility, between 35 and 70 as imbecility, and below 35 as oligophrenia or feeble-mindedness. (3)
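The ratio definition quoted above lends itself to a short worked example. The sketch below is illustrative only: it assumes the conventional scaling by 100 and uses the three broad bands given in the text (below 85, 85 to 115, above 115); the function names are my own.

```python
# Ratio IQ: tested mental age divided by chronological age, scaled by 100.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return 100 * mental_age / chronological_age

# The three broad bands described in the text.
def classify(iq: float) -> str:
    if iq < 85:
        return "below average"
    if iq <= 115:
        return "average"
    return "above average"

# A ten-year-old who tests at a mental age of twelve:
iq = ratio_iq(12, 10)
print(iq, classify(iq))  # 120.0 above average
```

Note that modern tests no longer compute this ratio literally; they fit scores to a normal curve by age group, which is why the percentile comparison above matters more than the raw number.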

What is a genius? To be a member of Mensa, one must be in the top two percent of IQ scores. Depending on the test, this means being above a certain score: above 148 for the Cattell, 132 for the Stanford-Binet, 130 for the WAIS, and 132 for the Otis-Lennon, among others. (4) Geniuses are those who leave marks on history through their intellectual gifts to the world. Today, they are those who are pulled aside early in school and go on to win Nobel Prizes. Gifted people are encouraged through special schooling and supportive families. Good resources can help increase the results of an IQ test. A brain can be trained to be more intelligent.

The opposite end of the scale is mental retardation. The condition of mental retardation is defined by these criteria: "intellectual functioning level (IQ) is below 70-75; significant limitations exist in two or more adaptive skill areas; and the condition is present from childhood (defined as age 18 or less)" (6). Those with mental retardation (at times known as oligophrenia) can develop it for a number of reasons, including genetic conditions, problems during pregnancy, problems at birth, problems after birth, and poverty and cultural deprivation. (6) Some of the same factors can influence the opposite end of the IQ scale as well.

There are many studies to determine what affects the intelligence quotient. One suggestion is that birth order does. In 1973, the first such study was done in Holland by Lillian Belmont and Francis Marolla on family size, birth order, and IQ. They found that children from larger families did more poorly on tests; that the firstborns of any family size always scored better than later-borns, in a pattern declining with birth order; and that as family size increased, performance decreased. (5)

As was mentioned before, another suggestion is that one's environment affects the way one turns out. This may be truer for those with lower IQs: it is difficult to raise IQ a large amount through diet, but a poor diet can lead to mental retardation.

Whatever affects IQ, it is important in the way it grades one's intelligence. We must remember, though, that while a person may be extremely smart by the books, on the street or socially it can be a different story. Intelligence, not the quantitative IQ, is what truly matters.

References


1) The American Heritage College Dictionary, Fourth Edition. Boston: Houghton Mifflin, 2002.

2) Dr. Thomas Brown; WAIS IQ test taken January 2003.

3) Intelligence=IQ

4) Mensa International

5) Human Intelligence

6) Introduction to Mental Retardation


In the Blink of an Eye: A Look at Locked-in Syndro
Name: Shadia Be
Date: 2004-02-24 01:38:34
Link to this Comment: 8426


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the Blink of an Eye:
A Look at Locked-in Syndrome

Shadia Bel Hamdounia

"Twelfth Night", "Freaky Friday"--we are all familiar with the many scenarios that depict a common fear—being trapped in another's body. But there exists a bigger nightmare. Imagine the horror of being trapped in one's own body. For those with locked-in syndrome (LIS), that fear is a reality.

LIS describes one of the most debilitating conditions in which a person retains consciousness. The result of head injury, brain-stem stroke, or neurological diseases like ALS, locked-in syndrome is caused by a lesion in the nerve centers that control muscle contraction or by a blood clot that blocks circulation of oxygen to the brain stem (6). First introduced in 1966 by Plum and Posner, the term has since been redefined as "quadriplegia and anarthria, with preservation of consciousness" (1). (Anarthria refers to the neurologic inability to speak, as opposed to an unwillingness to speak.) Unable either to move or to speak, yet fully cognizant of the world around them, these individuals are virtually locked in. An accurate diagnosis of LIS depends on the recognition that the patient can open his eyes voluntarily, rather than spontaneously as in the vegetative state (4). Although horizontal eye movements are usually lost, the ability to open the eyes and blink is retained (4). Therein lies the key to communication with the outside world.

I first learned about this extremely rare condition while helping a friend with a French paper. The subject, Jean-Dominique Bauby's "The Diving Bell and the Butterfly," piqued my interest. On December 8th, 1995, Bauby, a 42-year-old father of two, was test-driving a new car when he suffered a massive stroke. He awoke from a coma two months later to find himself paralyzed and speechless, but able to move one muscle: his left eyelid.(3) Due to his privileged position as an author and editor of a popular French magazine, he was afforded the opportunity to do the unimaginable—share his experience with the outside world. With the aid of a secretary and an elaborate alphabet in which the letters were recited to him in order of the frequency with which they occur in the French language, he was able to blink out his book.(3)
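Bauby's frequency-ordered alphabet has a simple rationale: if common letters are recited first, the assistant speaks fewer letters on average before each blink. The toy sketch below illustrates this; the five-letter "alphabet" and its frequencies are invented for illustration and are not real French statistics.

```python
# Expected number of letters the assistant recites before a blink selects one:
# the rank of each letter, weighted by how often that letter is needed.
# Frequencies below are made-up values for a toy five-letter alphabet.
freq = {"e": 0.40, "a": 0.25, "s": 0.15, "t": 0.12, "z": 0.08}

def expected_recitations(order, freq):
    # A letter at position i (0-based) costs i + 1 recitations to reach.
    return sum(freq[ch] * (i + 1) for i, ch in enumerate(order))

by_frequency = sorted(freq, key=freq.get, reverse=True)  # e, a, s, t, z
alphabetical = sorted(freq)                              # a, e, s, t, z

print(expected_recitations(by_frequency, freq))  # about 2.23
print(expected_recitations(alphabetical, freq))  # about 2.38
```

Even in this tiny example, frequency ordering saves effort on every selection; over an entire book dictated letter by letter, the savings are enormous.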

It was Alexandre Dumas who first described LIS when he created Monsieur Noirtier de Villefort in his 1844 novel, The Count of Monte Cristo. He described his character as a "corpse with living eyes"(1), but Bauby's tale contradicts this commonly held notion. He recounts his struggle with the realization that he is trapped within a paralyzed body—the diving bell—in which his mind flies like a butterfly:

"I am fading away, slowly, but surely. Like the sailor who watches his home shore gradually disappear, I watch my past recede. My own life still burns within me, but more of it is reduced to the ashes of memory. Since taking up life in my cocoon, I have made two brief trips to the world of Paris medicine to hear the verdict pronounced on me from medical heights. I shed a few tears as we passed the corner café where I used to drop in for a bite. I can weep discreetly, yet the professionals think my eye is watering." (3).

In his memoir, Bauby continually addresses the sense of alienation and exclusion from society that is shared by all who are severely handicapped. How worthy are these individuals to our society? Those with profound neurological disabilities such as LIS or tetraplegia, or who are in a persistent vegetative state, have been the subject of substantial medical and ethical debate. Many feel that the allocation of resources to maintain their lives is too high a price. After reading Bauby's book, I sought to better inform myself about this rare condition, but the paucity of available information and research was disheartening. An interview with Roger Rees, Director of the Institute for the Study of Learning Difficulties at Flinders University in Adelaide, explains that "from an economic rationalist's view of rehabilitation or of a simplistic absolute view that a person is either cured or not cured people in the locked-in state are considered of no account." (3) Although there are no statistics available on the number of patients with LIS, the locked-in population is growing due to advances in artificial respiration.(2) How, then, to convince those responsible that the benefits of sustaining these individuals far outweigh the monetary sacrifices?

Niels Birbaumer, a German neuroscientist and leading expert in the field, works on brain-computer interface (BCI) research in an attempt to give those who are locked in a voice, so that they might be involved in the decisions that affect their lives. One of his "patients," Elias Musiris, a wealthy Peruvian casino owner, suffers from Lou Gehrig's disease, which has induced a locked-in state. Using BCI, electrodes were attached to his scalp, producing a moving white dot across a screen—Musiris was looking at his own EEG, whose up-and-down motion represented his brain activity. His task—to willingly change the electrical activity of his brain by changing his thoughts, and in doing so to control the white dot by keeping it in one half of the screen. Birbaumer had previously developed a similar technique to train epileptics to fend off impending seizures.(2) He hoped that teaching Musiris to influence his EEG would then enable him to "respond" to simple yes-no questions by moving the dot to a certain half of the screen. The results? After a week of intensive practice, Musiris was able to produce answers that, through repetition, had reached a statistical safety level of more than ninety percent.(2) Through this new method his family learned that he wished to buy new pool tables and keep the old slot machines in his casino, which they were about to sell. For the first time in five years, Musiris began to have a deliberate impact on his world and his business—without having to move a single muscle.
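The "statistical safety level" reached through repetition can be illustrated with a small simulation: even if each yes/no trial is only modestly reliable, repeating the question and taking a majority vote pushes the combined answer's reliability much higher. This is a minimal sketch; the 0.7 per-trial accuracy is assumed purely for illustration and is not a figure from Birbaumer's work.

```python
import random

# One noisy yes/no reading of the EEG-controlled dot.
def single_trial(true_answer: str, per_trial_accuracy: float = 0.7) -> str:
    if random.random() < per_trial_accuracy:
        return true_answer
    return "no" if true_answer == "yes" else "yes"

# Repeat the question an odd number of times and take a majority vote.
def answer_by_majority(true_answer: str, repetitions: int = 21) -> str:
    votes = [single_trial(true_answer) for _ in range(repetitions)]
    return "yes" if votes.count("yes") > repetitions / 2 else "no"

random.seed(0)
correct = sum(answer_by_majority("yes") == "yes" for _ in range(1000))
print(correct / 1000)  # far above the 0.7 single-trial accuracy
```

The design choice here is the same one the repetition protocol exploits: independent errors rarely all point the same way, so aggregating trials converts a shaky signal into a trustworthy answer.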

The stories of Bauby and Musiris not only put a human face on locked-in syndrome, they offer insights into the question of a mind/body dualism discussed in class. The body is inextricably linked to one's sense of self, however, physical suffering need not steal one's sense of self. There remains a wealth of thoughts, feelings, memories and dreams to be generated and recalled. Bauby's tale is a poignant testimony of human resilience in the face of adversity; it demonstrates that the loss of one's last faint muscle movement does not somehow eliminate the will to be heard.


References


1. http://web5.infotrac.galegroup.com/itw/infomark/301/818/45683206w5/; very comprehensive research on "Impairment, activity, participation, life satisfaction , and survival in persons with LIS"(Jennifer Doble)

2. http://web5.infotrac.galegroup.com/itw/infomark/301/818/; article on brain-computer interface research; (Ian Parker).

3. http://www.abc.net.au/rn/science/ockham/stories/s10275.htm ; interview with Prof. Roger Rees on LIS

4. http://www.jnnp.com/cgi/content/full/71/suppl_1/i18?RESULTFORMAT=1&eaf; contrasts LIS with coma

5. http://jnnp.bmjjournals.com/cgi/content/full/63/6/759 ; a look at ERP's in patients with LIS

6. http://www.questdiagnostics.com/kbase/nord/nord472.htm; gives the basics of LIS


Theories on Left-handedness and Laterality
Name: Hannah Mes
Date: 2004-02-24 01:54:56
Link to this Comment: 8427


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Every time I walk into a classroom I am faced with the same challenge. It's not that I haven't done my homework or that the professor is boring, but that I can't find the right chair to sit in. Some may argue that the problem is inherently caused by a neurobiological difference. Others might say that it is the work of the devil. My crime be told, I am left-handed.

For 20 years I have suffered the writing discomfort associated with right-handed desks and notebooks. I have also had difficulty playing the guitar and have been forced to try to play sports in a right-handed fashion. Scissors are always a battle unless they are specifically designed for left-hand use, and power tools are not recommended, as they have an unusually high accident rate among left-handed users. I have managed to overcome most of these minor obstacles without much difficulty, but perhaps this is due to my socio-cultural background, which is relatively understanding of left-handed differences.

Throughout history, people with left-handedness have been persecuted for a variety of reasons. Deemed social deviants or mentally defective, left-handers have been told that their undesirable attribute can be "corrected" through persistent repetition of right-handed behaviors. Adroit, according to the Merriam-Webster dictionary, is derived from the French word droit (translated as 'right') and in contemporary use means skillful or clever. 1 Comparatively, the French word gauche, literally 'left,' is used to describe someone who is "lacking social grace or is not tactful". 2 Being left-handed has historically been grounds for discrimination, although in recent years these biases have grown more subtle. As I considered the issue of being left-handed more deeply, I was convinced that it clearly had not only physical and socio-cultural implications but was also related to neurology, specifically the lateralization of certain functions in the cerebral hemispheres.

According to Dr. M.K. Holder, the Director of the Handedness Research Institute at Indiana University, left-handedness can be understood in terms of brain lateralization and the functional specialization of areas for speech. Although there is no single definitive explanation of why an individual has a specific handedness, there is evidence that several factors contribute, including genetic background, socio-cultural influences, and neurobiology.

For example, the Kerr clan of Scotland is famous for its predisposition towards left-handedness. 3 This feature, known to the Kerrs as being "Corry-Fisted" or "Kerr-Handed," can be understood as a culturally reinforced trait that was supposed to aid these Scottish warriors in warfare. Due to their common gene pool, however, this trait can also be understood as a genetic characteristic of the Kerr clan.

According to Oldfield (1971), the statistics for left-handedness are also higher in males than in females. Geschwind and Galaburda (1987) developed the "G-G theory," which builds upon the notion of sex differences, arguing that higher levels of testosterone can affect cerebral lateralization by causing the normal dominance pattern to change. 4 For the majority of right-handed individuals, language is associated with dominance in the left hemisphere and visuo-spatial skills with the right hemisphere. Gorski et al. expand upon this idea, noting the important role that levels of testosterone can have on lateralization:
The hormone can affect the growth of many tissues, and has an inhibitory effect on the growth of immune structures, such as the thymus gland and the bursa of Fabricius. Testosterone is also capable of changing the structure of specific nuclei in the hypothalamus and limbic system. (Gorski, 1986)

From what I understood, the G-G theory argues that testosterone levels can increase for many reasons, and one of the effects can be a delay in the growth of the left hemisphere. This delay can in turn produce what neurologists have called "anomalous dominance," which is characterized by "left-handedness, right hemispheric language dominance, left-hemispheric visuo-spatial dominance.." 4 In short, when there are higher levels of testosterone, the normal patterns of dominance associated with language in the majority of the population are switched. This explanation can be categorized as a chemical model, based on the changing variable of testosterone, for the creation of specific functional lateralization with regard to handedness. Although the G-G theory is widely supported, I would argue that it is not the definitive explanation for left-handedness but rather one of many important factors in determining this disposition.

The French neurologist Paul Broca (5) and the German neurologist Carl Wernicke both made important discoveries in the 19th century, identifying areas of the left hemisphere (in the frontal and temporal cortex, respectively) that are primarily used for speech production and comprehension. Compared to other primates, these areas of the human brain are greatly enlarged (6). Although handedness once served as a basis for inferring which lateralization individuals had for language, it became clear with the sodium amytal (Wada) tests of the 1960s that language in many left-handed individuals is nonetheless lateralized to the left hemisphere (7). By injecting sodium amytal into one carotid artery, one hemisphere can be briefly anesthetized, allowing clinicians to determine which hemisphere supports language and memory (8). The explanation for this pattern still remains unknown.

This point raised various questions for me on a personal level. As a left-handed person, it may be significant that my aptitude for fine arts and foreign language is high, abilities that according to my research seem to be associated with the right hemisphere. In line with the G-G theory, this can be understood as overcompensation in the right hemisphere, a "compensatory growth mechanism," because growth of the left hemisphere has been delayed.

Whether the G-G theory's correlation between testosterone and specific functional lateralization proves causation is debatable. From the research that I have done, I would argue that left-handedness cannot be understood in terms of neurology, genetics, or socio-cultural factors alone, but only as a combination of all of these. The G-G theory also fails to explain why the left hemisphere is more sensitive to levels of testosterone, or whether there are more testosterone receptors in this area of the brain.

References

1)Merriam-Webster Dictionary

2)Merriam-Webster Dictionary

3)Kerr Clan Lineage

4)Theories About Handedness Causation

5)Biography of Paul Broca

6)Lateralization and Language II

7)Medical College of Georgia, MCG Wada Protocol: Clinical Core

8)The Biological Basis for Language


Cochlear Implants: A Bionic Sensory Experience?
Name: Lindsey Do
Date: 2004-02-24 02:01:08
Link to this Comment: 8428


<mytitle>

Biology 202
2004 First Web Paper
On Serendip



"Hearing is the soul of knowledge and information of a high order. To be cut off from hearing is to be isolated indeed" (1).

What does it mean to hear? Imagine what it might be like if your perception and recognition of sound had changed three times during your lifespan. Phases one, two and three encompass a full spectrum of hearing, with various technological aids (in phases two and three) triggering a range of psychological and physiological repercussions. An in-depth look at the relationship between the hearing organ and the auditory processing center of the brain might illuminate hearing as an integration of audition and cognition. As someone who has experienced full hearing, deafness and rehabilitated hearing via an electronic prosthesis, how do my experiences contribute to the notion of a personalized auditory experience, an awareness which draws the distinction between the sensation and interpretation of sounds?

The ear contains complex organs that allow sound to be converted into an electrical signal, which is transmitted to the brain for interpretation. The mechanical input of sound waves is transduced in the cochlea into an electrical response: the basilar membrane vibrates with the movement of the surrounding perilymph, which bends the hair cells, inducing depolarization and triggering action potentials. The hair cells within the organ of Corti serve as the receptors; the ganglion cells that innervate them respond to particular frequencies according to topographical (tonotopic) organization. The auditory nerve connects to the brain stem (a bilateral pathway), synapsing in the cochlear nucleus. Here, the information is separated into the ventral cochlear nucleus (time-sensitive localization) and the dorsal cochlear nucleus (sound quality) (2). The auditory pathway projects into the cerebral cortex, specifically the primary auditory cortex located on the dorsal surface of the temporal lobe (3). Furthermore, these auditory nuclei project into other parts of the brain that constitute a neural net, a schema that allows for the functional organization of language, music, memory and knowledge (4).
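The tonotopic organization mentioned above can be made concrete: each place along the basilar membrane responds best to a particular frequency. The paper does not give the mapping, but Greenwood's classic place-frequency function is one standard quantitative model of it; the constants below are Greenwood's published human fits, included here as an illustrative sketch rather than anything drawn from the paper's sources.

```python
def greenwood_frequency(x):
    """Greenwood place-frequency map for the human cochlea.
    x is the fractional distance from the apex (0.0) to the base (1.0);
    returns the characteristic frequency in Hz at that place.
    Constants A=165.4, a=2.1, k=0.88 are Greenwood's human fits."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Low frequencies map to the apex, high frequencies to the base,
# spanning roughly the 20 Hz - 20 kHz range of human hearing.
apex_hz = greenwood_frequency(0.0)
base_hz = greenwood_frequency(1.0)
```

The exponential form is why equal distances along the cochlea correspond to roughly equal musical intervals rather than equal frequency steps.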

Hearing loss may be caused by the destruction or degeneration of hair cells in the cochlea (sensorineural), or by damage to or malformation of the apparatus that transmits sound energy (conductive) (5). Hearing aids are one corrective device, used to amplify sound; the cochlear implant, however, is a fairly new innovation that targets sensorineural hearing loss by bypassing the damaged cochlea. The instrument consists of an external microphone feeding a speech processor, which acts as a spectrum analyzer, deconstructing complex sounds into component frequencies. The resulting electrical signals are carried to a transmitter held against the head, which conveys the coded information through the skin to a receiver implanted in the bone (5). The stimulator relays the signal down an electrode array wound through the cochlea, activating specific frequency locations that coincide with the tonotopic organization of the auditory nerves. The implant "mimics" a sound by stimulating the corresponding neurons, producing the "sensation" of hearing. Consequently, the cochlear implant is a controversial device that raises ethical questions: what does it mean to replace our "natural" senses with an artificial sensory experience through electronics?
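The spectrum-analysis stage described above amounts to measuring how much energy a sound carries in each of several frequency bands, so that each band can drive the electrode at the matching tonotopic location. A toy illustration of that band-analysis step (the band edges and sample rate are arbitrary choices for this example, not the specification of any real implant's processing strategy):

```python
import numpy as np

def band_energies(signal, sample_rate, band_edges_hz):
    """Measure the energy of a signal in each frequency band,
    analogous to a cochlear implant's spectrum-analysis stage."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    energies = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        energies.append(float(spectrum[in_band].sum()))
    return energies

# A pure 440 Hz tone: nearly all its energy should land in the
# 250-1000 Hz band (band index 1 of the edges below).
rate = 16000
t = np.arange(rate) / rate            # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
edges = [100, 250, 1000, 4000, 8000]  # hypothetical electrode bands
energies = band_energies(tone, rate, edges)
```

In a real device this analysis runs continuously on short windows of sound, and each band's energy modulates the stimulation level of its electrode.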

As a unilateral cochlear implant recipient, I have come to think of my CI as an extension of myself; without it I feel helpless and vulnerable. My personal experience as a two-year user of this fairly new medical device may be relevant to exploring the cognitive aspect of the auditory process. I was born with full, normal hearing; however, between the ages of three and four, a congenital defect (Large Vestibular Aqueduct Syndrome) resulted in a bilateral sensorineural hearing loss that left me profoundly deaf. Hearing aids boosted what little hearing I had until the age of 18. As time passed, I began to notice that I was struggling more than I used to with my hearing aids. My observations were confirmed: my discrimination (the ability to make sense of what I heard) had been declining as a result of reduced stimulation of my auditory nerve cells. This phenomenon is common among those who are born hearing and later suffer hearing loss. The neural pathways of my auditory memory did not disappear, but they failed to sustain my previous recognition of sounds and words. Embarking on the third phase, I hoped to make use of my vestigial sensory ability by getting a cochlear implant, which would directly trigger and invigorate the ganglion cells connected to the auditory nerve. Ironically, the implantation meant destroying any residual hearing function left in my hair cells. Having to undergo a third adjustment to my hearing, relearning sound through artificial stimulation, was incredibly disorienting and frustrating.

The auditory neural net in my brain continues to be reorganized and reshaped to this day, in order to adjust to an entirely different sensation of sound. An electrical perception translates into an altered recognition and interpretation of sound, in the sense that my familiarity "stemming from a contact between an external event and an internal reception of a previous experience of that event" is rendered inadequate. My success with the implant is likely because I am able to draw not on the audition (the activities of the hearing organ proper, the actual stimulus) enabled at birth, but on the experience of auditory processing that encompasses cognition. By cognition, I refer to Reiner Plomp's definition: "the top-down processing stressing the significance of concepts, expectations, and memory as a context for stimulus perception" (6). However, this definition raises certain tensions, as I often felt that I was starting from scratch, having to practice auditory memory, language production, processing and interpretation. I continue to think of hearing as an active experience, an acquired knowledge in which I file away every new sound I hear, attaching labels such as "train whistle," "bird song" or "shhhh" rather than remembering what I heard naturally 17 years ago. Indeed, this suggests that hearing does not rely on the simple stimulation of specific neurons in the auditory cortex; rather, hearing is a holistic "exercise" involving conscious and unconscious extrapolations that construct sound as a subjective perception. Our interpretation of sounds must mediate between automatic and voluntary processing (4). I often confuse one sound for another: my brain may automatically mistake a train for music, for example, but given visual cues I will voluntarily recognize otherwise.

My cochlear implant experience as well as those of thousands of others validates the plasticity of the brain. I no longer rely on 90 percent of my eyesight for information; rather I have come to recognize specific voices, music, sounds. I still lack the ability to localize and pick out sounds from a noisy environment as a result of unilateral hearing. Do I need more time to develop this ability or is this a function inherent to my natural audition?

With the exponential pace of scientific advancement, fully implantable devices are on the horizon, in competition with hair cell regeneration research. Ray Kurzweil raises the fascinating issue of the "coming merging of mind and machine," in which the biological authenticity of the human brain may be undermined as the brain is reverse-engineered, enhanced and expanded artificially (7). If I have a brain that contains a bionic device substituting for my hearing, does that mean that my behavior is still "human"? To what degree can we make the distinction between human and machine? Will the brain still equal behavior if electronic devices are responsible for our sensory experiences?

1)Helen Keller Quotations

2)Auditory Transduction

3) Kandel, Schwartz and Jessell. Principles of Neural Science. McGraw-Hill, 2000.

4) McAdams, Stephen and Bigand, Emmanuel. Thinking in Sound. Oxford: Clarendon Press, 1993.

5)Sound From Silence, The Development of Cochlear Implants ; overview of cochlear implants

6) Plomp, Reiner. The Intelligent Ear: On the Nature of Sound Perception. New Jersey: Lawrence Erlbaum Associates, Publishers, 2002.

7) Kurzweil, Ray. "The Coming Merging of Mind and Machine." Scientific American, 1999.

Other Helpful Sources:

8)Turned On ; another personal account from a Cochlear Implant recipient

9)Introduction to Cochlear Implants


I have become Comfortably Numb: Depression and Per
Name: Chelsea Ph
Date: 2004-02-24 02:08:33
Link to this Comment: 8429


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Without emotion, man would be nothing
but a biological computer. Love, joy,
sorrow, fear, apprehension, anger,
satisfaction, and discontent provide
the meaning of human existence."
Arnold M. Ludwig---1980 (1)
Questions and Introduction

Depression is one of several serious mental health conditions that together affect over 450 million people worldwide. Is there a universal experience of depression? If so, can that universal experience lead to a deeper understanding of concepts of the self across cultural boundaries?

Facts, Statistics and Symptoms
Symptoms of depression include:
* Depressed mood - most of the day, every day
* Mood swings - one minute high, next minute low
* Lack of energy and loss of interest in life
* Irritability and restlessness
* Disturbed sleep patterns - sleeping too much or too little
* Significant weight loss or gain
* Feelings of worthlessness and guilt
* Difficulty concentrating and thinking clearly
* Loss of sex drive
* Thoughts about death and the option of suicide (2)
"Mental problems are common to all countries, cause immense human suffering, social exclusion, disability and poor quality of life. They also increase mortality and cause staggering economic and social costs" (3). Depression does not distinguish by ethnicity, gender or age, though it is twice as likely to occur in women, and it often goes undiagnosed or is misdiagnosed in developing countries without the resources to fund mental health programs (3). In addition, cultural associations with depression frequently prevent sufferers from seeking help.
In China, stoicism is a highly valued character trait; seeking help for depression would indicate a weakness in one's character (4). The same perception is observed in African-American culture, particularly pertaining to women (4). Information gathered on depression in Hispanic culture indicates that depression is expressed somatically, in chronic aches and pains, in addition to the "common" symptoms. Linguistic evidence shows that the somatic theme is also present in China: the literal meaning of the word "depressed" in Chinese is a closed and drowning heart, and "depression" is a worrying (heart-troubling) disease (5).

Expressions of Depression
While the above symptoms are naturally important in diagnosis and in determining treatment, the personal testimony of those with depression matters when attempting to understand perceptions of the self. Personal testimonies on depression range from completely detached to hysterical and everything in between, including an affinity with the experience, a desire to stay depressed. These testimonies almost always indicate a loss of self, though this may be welcomed in some cases. It is essential to understand that by "self" I mean a person's perception of his or her normal cognitive state.
"I have become comfortably numb." -Pink Floyd (6)

"No pain remains, no feeling..." -VNV Nation (6)

"...my mind lay limp in an empty world." -Despair, V. Nabokov (7)

"Wake me up inside...
before I come undone,
save me from the nothing I've become." -Evanescence (8)
"...and you're watching moving shadows live instead of you ...
suicidal tendencies, but no will to interfere.
feel it coming over you ... indifference ... indifference ..." -Wolfsheim (6)

"all the weights that keep you down seem heavier than before.
they hit me in my face, though you feel nothing..." -Apoptygma Berzerk (6)

"This is when I feel dead: when I lie in the dark (or sit or stand anytime, anywhere) and can feel how insignificant taking the next breath is...It doesn't hurt not to, there's no panic, only a mild, detached observation that this might be what it feels like to die."
-Anonymous

"Depression is merely anger without enthusiasm"
- Unknown

"...And then I heard them lift a box,
And creak across my soul
With those same boots of lead, again,
Then space began to toll..." -Emily Dickinson, #112 (9)

"It is hopelessness even more than pain that crushes the soul." -William Styron, "Darkness Visible" (10)

Each of these quotes and testimonies is striking in its repetition of the same themes: loss of feeling, numbness, death. "Save me from the nothing I've become." Is "nothing" Ludwig's biological computer? To lose emotion is to lose an essential part of the self as identity. Does whatever makes our emotions, then, make our "selves"?
Chemical Theory
Although the exact cause of depression is unknown, theories about chemical imbalances in the brain have led to the development of medications capable of eliminating or reducing symptoms. Some of these medications are known as SSRIs, or Selective Serotonin Reuptake Inhibitors. Serotonin is a neurotransmitter produced in the brain which affects many things, including appetite, emotion and sleep patterns, and promotes feelings of calm, contented well-being. When too much serotonin is reabsorbed by the presynaptic neurons, the resulting depletion disrupts the normal cycles serotonin regulates; SSRIs work by blocking this reuptake, leaving more serotonin available in the synapse. (11)
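The logic of reuptake inhibition can be made concrete with a deliberately oversimplified one-compartment model (my own illustration, not a pharmacological simulation): if serotonin is released at a constant rate and reabsorbed in proportion to its synaptic concentration, the concentration settles where release balances reuptake, so slowing reuptake, as an SSRI does, raises the steady-state level.

```python
def steady_state_serotonin(release_rate, reuptake_rate):
    """Toy model: ds/dt = release_rate - reuptake_rate * s,
    which settles at the steady state s* = release_rate / reuptake_rate."""
    return release_rate / reuptake_rate

baseline = steady_state_serotonin(1.0, 0.5)    # arbitrary units
with_ssri = steady_state_serotonin(1.0, 0.25)  # reuptake rate halved
# Halving the reuptake rate doubles the steady-state synaptic level.
```

Real pharmacology is far messier (receptor feedback, transporter saturation, delayed clinical effect), but the sketch captures why inhibiting reuptake increases available serotonin.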
Conclusion
Coupling this knowledge with the personal experiences naturally leads to the question: are chemicals the "self?" An imbalance in this chemistry leads to the feelings of "numbness," of "not being oneself," and so on, because perceptions of the "normal" self have their roots in the typical chemical make-up of each individual brain. "I'm not happy like I usually am" becomes "My serotonin levels are usually higher than this" or "My dopamine levels aren't usually this erratic." Does the self really exist in combinations of chemicals, somewhere beyond the "I"-box yet containing it: a fluid, perpetually moving self?

References

1)Dr. Ivan's Depression Central

2)Befriender's International

3)The World Health Organization

4)Depression Screening.org

5)Online Chinese/English Dictionary

6)Song lyrics

7)Nabokov, Vladimir. Despair. New York: Vintage Books, 1989.

8)Song Lyrics Search Engine

9)Serendip Website, a Web Paper on depression and serotonin

10)The Quote Cache

11)More from Serendip


Autism: In a world of dreams and shadows
Name: Geetanjali
Date: 2004-02-24 03:46:15
Link to this Comment: 8433


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Autism is a neurological disorder that is interesting, in part, for its potential to shed light on how we perceive and understand the world around us, and how we are able to relate to other human beings. It demonstrates what happens when a person has trouble with perception and understanding and is unable to relate to others. The abilities and complexity of the human brain can be seen most clearly when the brain is damaged and vital abilities have been lost. It is often only when one sees the debilitation caused by the loss of an ability that one can appreciate the importance of that ability, and fully understand it.

Autism is characterized by problems in three specific areas: communication, imagination, and socialization.(5) Autistics generally have very poor verbal skills, and can be so unresponsive to speech and noise in general that they are sometimes mistakenly thought to be deaf.(7)(2) Autistics also have trouble understanding the meanings of intonations in a sentence, and have difficulty speaking with the proper intonations themselves. Autistic children often don't appear to engage in imaginative play. They tend to be very socially withdrawn, and unresponsive to human contact as children.(1)

There are other, less general characteristic symptoms. Autistics have trouble making eye contact (7)(2)(1) and show an aversion to physical affection, such as hugging (1)(2). Motor problems, such as a lack of coordination, often accompany autism (4). Autistics often show an obsession with order and a resistance to change (7), and a tendency to focus on parts of objects instead of the entire object (5).

Quite a bit is known, then, about the outwardly observable characteristics of autism; to describe them all in detail would take several pages at least. Autism is, though, a disorder that can unfortunately only be defined by outwardly observable behaviour. Its wide range of symptoms is classed together as one disorder simply because the symptoms co-occur too often for there not to be some link between them. It is believed that a common neurological problem (or a group of related neurological problems) lies at the core of autism. However, one reason why autism is still something of a mystery and remains surrounded by controversy is that, after decades of research, its biological basis is still not known for sure. One possibility is that damage to the amygdala is linked to autism; however, only about 50% of autistics show damage to the amygdala in MRI scans, so other structures are evidently involved as well (4).

It is known, though, that autism does have an entirely biological basis.(4) Although it was thought for many years that autism was a psychological disorder, it is now known that autism is caused by a combination of genetic and environmental factors.(2) And although the specifics of its biological basis are not known, the neurological damage that lies behind autism creates specific cognitive defects that in turn cause the outward symptoms of autism. Two theories about such cognitive defects are the theory of mind hypothesis, and the theory of central coherence.

A theory of mind allows one to make deductions about what another person might be thinking based on his or her outward behaviour. It allows people to attribute separate beliefs and mental states to another person, and to link these mental states with outward behaviour. A hypothesis was put forth by Baron-Cohen et al in 1985 claiming that autistics lack a theory of mind.(3)

This claim was based on an experiment first done in 1985, where autistic children were given what is called a "false-belief" test. The test went roughly as follows. A girl named Sally hides a marble in a basket, and then leaves the room, so that she can no longer see the basket. Her friend Anne then takes the marble out of the first basket and puts it in a second basket. The question is: when Sally comes back to the room and looks for her marble, where will she look?

In order to correctly reply that Sally will look in the first basket, the subject has to be able to grasp the concept that Sally does not know everything that the subject knows, and therefore believes that her marble is still where she left it. In order to attribute such a false belief to another person, the subject would need to have a theory of mind. Only 20% of the autistic children tested were able to answer the question correctly. The other 80% replied that Sally would look for her marble in the second box. Also, when the 20% that were able to answer the question correctly were given more advanced tests to further test their theory of mind, the majority of them failed.

It has been generally accepted since then that one of the major cognitive deficits in autism is the lack of a theory of mind. However, although the majority of autistics failed the more advanced tests, not all did. This, combined with the fact that 20% were able to pass a simple false-belief test, shows that not all autistics completely lack a theory of mind. It would imply that there are other cognitive defects contributing to autism, since most autistics who pass false-belief tests are still undoubtedly autistic (although they are generally high-functioning). The fact that much autistic behaviour has no obvious relation to the theory of mind further supports this (5). Other theories have therefore been put forward, one of which is that autistics have weak central coherence (5).

Central coherence is, in colloquial terms, the ability to see the big picture instead of getting lost in details. It is the ability to read a story and then be able to remember the gist of the story afterwards, even if individual details are lost. The theory put forward by Happé claims that autistics, although they have this ability to a certain degree, still have trouble with central coherence. Several experiments seem to support this claim, but simple observations of autistics support it as well: one widely observed characteristic of autistics is that they have a tendency to focus on the parts of an object over the whole.(5)

One experiment carried out by Happé tested the ability of autistic children to judge the meaning of a word based on the context of the sentence. For example, they were asked to read aloud the sentences "There was a big tear in her eye" and "In her dress there was a big tear", to see if they could judge which pronunciation of the word "tear" was appropriate for which sentence. In general it was found that they had difficulty with such judgements; they tended simply to use the more common form of the word, regardless of context. Another experiment showed that autistics have a remarkable ability to spot "embedded figures" within an image, which also supports this theory (5).

The theory of mind and central coherence are not aspects of the human brain that one would necessarily even think about under normal circumstances. Most people don't think twice about their ability to read a book and understand it, somehow grasping both each individual word and the complex ideas that the words combine to form. Most people don't wonder about their ability to look into a person's eyes and know what that person is feeling. Both tasks are astounding. And yet it is human nature not to notice our own everyday abilities, no matter how astounding they are, simply because they are so common. It is often only when we lose an ability, or learn about people who lack it, that we take notice of our abilities and the miracle that is the human brain. Autism shows what happens when something as basic and necessary as the ability to relate to other human beings is lost through damage to the brain. The sheer isolation of autism is described well by an autistic woman named Donna Williams, in her autobiography:

Staring into nothingness since time began
There and yet not there she stood.
In a world of dreams, shadows, and fantasy,
Nothing more complex than color and indiscernible sound.
With the look of an angel no doubt,
But also without the ability to love or
Feel anything more complex than the sensation of cat's fur
Against her face.(8)

References

1) Autism Resources maintained by John Wobus. (accessed February 15, 2004)

2) Autism Society of America (accessed February 17, 2004)

3) Baron-Cohen, S., Leslie, A.M. and Frith, U. (1985) "Does the autistic child have a 'theory of mind'?" Cognition, 21, 37-46

4) Frith, Uta and Hill, Elisabeth. (2003) "New techniques yield insights on autism". (accessed February 18, 2004)

5) Happé, Francesca. (1997) "Autism: Understanding the mind, fitting together the pieces" (accessed February 17, 2004)

6) Rimland, Bernard. (1997) "Genetics, Autism, and Priorities". Autism Research Review International, Vol. 11, No. 2, page 3.(accessed February 17, 2004)

7) Sterling, Lisa. (2002) "Autism and Theory of Mind" (accessed February 16, 2004)

8) Williams, Donna. (1992) Nobody Nowhere, New York, New York: Avon Books.


What Makes a "Monster"?
Name: Erica Grah
Date: 2004-02-24 03:57:55
Link to this Comment: 8435


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The recently released film "Monster" is based upon the story of the female serial killer Aileen "Lee" Wuornos. The movie raises several interesting questions about the nature of homicide, specifically homicide carried out in cold blood, and its causes. What makes people commit murder? Is it a question of underlying aggression? If so, how does this differ among individuals? Combining aspects of the movie with those of Wuornos' real life, this paper attempts to identify several factors in the lives of both the fictionalized and the actual murderer that may provide some explanation, or at the very least a discussion, of certain neurobiological and psychological factors and their roles in homicide and aggression among the general population and, more specifically, among women.

"Monster" in some ways portrays the life of a victim. Through flashbacks to her childhood and the mention of it later in the film, the audience is made aware of the sexual abuse that the young Aileen endured. Research has shown that several aspects of an abusive childhood remain with the child for the rest of her life. The impact of child abuse alone, whether physical, sexual or emotional, can over time result in disruptions of mood, including depression and anxiety, and in antisocial traits such as aggression, criminal behavior and poor impulse control (1). In his article on the neurobiology of child abuse, Martin Teicher discusses the effects that child abuse is hypothesized to have on certain areas of the brain, particularly the limbic system. This system is described as the area of the brain essential to the development of emotional responses and the recollection of memories. Within the limbic system are the hippocampus and the amygdala, which are thought to be key components of "the formation and retrieval of both verbal and emotional memories" and of "creating the emotional content of memory – for example, feelings relating to fear conditioning and aggressive responses," respectively (1). Research has led to the discovery of a correlation between an early history of abuse and decreases in the size of the adult hippocampus and amygdala. The smaller these brain structures, the greater the likelihood of over-stimulation, to the effect that the individual would be more traumatized by the memories and more closely attached to the emotions recalled. In addition, given a person's history of maltreatment, the responses brought forth by any perceived threat, regardless of its severity, could be representative either of those continually initiated in the past or of the desire to react differently. In the latter case, an overly aggressive response blatantly disproportionate to an event, however slight, could occur to counter a past threat to which the individual was unable to respond aggressively.

Although the true account of her childhood is sketchy, owing to the several different versions Lee gave, professionals who testified on her behalf confirmed that she had borderline personality disorder (2),(3). Individuals suffering from this disorder have difficulty regulating their emotions, which can swing to opposite extremes from one moment to the next. Acute yet ephemeral anger and aggression is sometimes a by-product, and impulsivity can prove to be a problem as well. Research has shown that a history of abuse, neglect or separation is common among a large percentage of individuals with borderline personality disorder, particularly those who have suffered sexual abuse (4). Thus, it is likely that Wuornos' real life was marred by such maltreatment.

Under this assumption, there are neurobiological reasons that may explain why and how Lee developed borderline personality disorder in the first place. It has been found that the middle part of the corpus callosum – which is essentially the bridge that allows information to travel between hemispheres of the brain – in females who endured sexual abuse tended to be much smaller than in individuals who reported no abuse. This then reduces the amount of communication or integration that can take place between hemispheres at any given time. Lack of integration forces one hemisphere to dominate the emotions of the individual; presumably, the dominating source of emotion can change almost instantaneously and randomly, resulting in rapid fluctuations in perception that are notably characteristic of the borderline personality (1).

In conjunction with this reduced size of the corpus callosum may be a significantly decreased flow of blood in the cerebellar vermis – the middle part of the cerebellum – which plays a role in controlling the presence of norepinephrine and dopamine in the brain. These are neurotransmitters that govern the shift to "a more right hemisphere-biased (emotional) state," and "a more left hemisphere-biased (verbal) state," respectively (1). Research has shown that people with an abusive past exhibit a diminished amount of blood flow in the vermis, which disrupts its ability to regulate the production and discharge of the above neurotransmitters, thus increasing the risk for sporadic hemispheric shifts, leading to the borderline behaviors described earlier. As was previously stated, a history of separation is prevalent among those with borderline personality disorder. Wuornos was abandoned by her mother when she was a toddler, only to be raised by an alcoholic grandfather and her grandmother (3). Perhaps this too led to the neurological development that fosters such a disorder.

Although borderline personality disorder may lead to bursts of extreme anger, it does not completely explain how or why Wuornos came to commit the heinous crimes that she did. She began prostituting in her early teens, and the film portrays a brutal rape, taking place in her early thirties, in which she is attacked by one of her johns. This marks the beginning of the end: in self-defense, she kills her rapist. It is plausible enough to believe that self-defense led her to kill. However, the same does not hold true for the five or six other murders that she committed. Is it possible to believe that one day she just snapped? Viewed from the film's point of view, it is. Women who have been sexually assaulted are likely to develop post-traumatic stress disorder (5), symptoms of which may include flashbacks, emotional numbness and sporadic, spontaneous occurrences of anger (5),(6). "Monster" shows Lee having a flashback to her rape during her second murder. Individuals with PTSD tend to have relatively low levels of cortisol, a stress hormone that regulates the release of norepinephrine. Norepinephrine, which as stated previously governs emotional responses, tends to be elevated in people suffering from PTSD; it is activated by the presence of stress, and it triggers the hippocampus to store the stressful input in long-term memory (1),(6). This is believed to explain why highly emotional events can be recalled so vividly. More dangerous, however, are highly traumatizing events, during which malfunctions may occur to the extent that memories are formed more strongly than normal, leading to flashbacks or other visual recollections (6). As she continues to murder and rob various men, she seems to do so with an air of stoicism. Supposing she was suffering from PTSD, such emotional numbness can be explained by the increased presence of stress-linked hormones, such as natural opiates. The levels of these opiates in individuals with PTSD tend to be abnormal, disguising pain for longer periods of time than would normally occur (6).

However, the portrayal of events on film and the perceptions of the police who interviewed her, the reporter who researched her life and the jury who condemned her to death (7),(3) lead to a different question about Lee's killing spree. What if she knew exactly what she was doing and planned each murder? Given her record, this should come as no surprise: women convicted of homicide in which the victim is someone other than a family member usually have a pre-existing criminal record and are considered to follow the male blueprint for criminal behavior (8). Given this information, we are forced to look at other biological factors that may have played a role in such a tragic expression of aggression. A possible explanation is the existence of abnormalities in the orbitofrontal cortex or in the amygdala, both of which have been cited as functioning abnormally in the brains of murderers and/or psychopaths (9),(10). The orbitofrontal cortex is part of the prefrontal cortex, which works to inhibit impulsivity (11) and plays a role in decision-making. Individuals with noted anomalies in this area tend to have problems controlling aggression (9) and difficulty correctly associating certain behaviors with being either good or bad. The amygdala, as mentioned above, regulates fear responses. A malfunctioning amygdala would therefore most likely fail to produce the fearful and empathic responses (9) that would prevent a person from committing a crime such as murder, and repeatedly so, allowing aggression to be acted out without inhibition.

Simple genetics may also have contributed to Wuornos's predisposition toward remorseless actions. Her biological father was reported to be a child molester with an extreme case of antisocial personality disorder (7),(3). Individuals with personality disorders have been found to have lower cerebrospinal fluid (CSF) levels of 5-hydroxyindoleacetic acid (5-HIAA), which correspond to high levels of aggression. It has been found that low levels of CSF 5-HIAA may be genetically inherited and thus may confer a susceptibility to aggression (12).

There is nothing that can unquestionably determine the actual cause of aggression and its more severe consequences. There are certain factors that can be observed, but the biological, psychological and social aspects of a person's life are intertwined so intricately that it would be impossible to fully understand or to answer the simple question of why. Science is a collection of observations, and the life and death of Aileen Wuornos is indicative of this.


References

1)The Neurobiology of Child Abuse, from Scientific American

2)Washington Post, article on Aileen Wuornos by her biographer

3)Crime Library , another story about Aileen Wuornos: The Myth and the Reality

4)Borderline Personality Disorder, from the National Institute of Mental Health

5)The Consequences of Violence Against Women, from Scientific American

6)Posttraumatic Stress Disorder, from the National Institute of Mental Health

7)Serial Homicide, Mind of a Killer: an Investigation of Serial Homicide-Aileen Wuornos

8)Wiley InterScience Journals, Journal of Clinical Psychology article on Homicidal Women

9)Into the Mind of a Killer, from Nature journal

10)Predicting Behavior, from Nature Journal

11)Society for Neuroscience, characterizing Violent Brains

12)The American College of Neuropsychopharmacology, the Neurobiology of Aggression


Memory or Imagination: Where Does the Brain Draw a
Name: Mridula Sh
Date: 2004-02-24 06:10:39
Link to this Comment: 8436


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The creation of false memories has recently been the focus of many experimental investigations and has sparked much debate and controversy. The phenomenon has been studied extensively in view of its impact on related issues such as memory repression and its recovery through psychotherapy. False memories are created when events that were originally imagined or intensely thought about are experienced as real on subsequent recollection (7). Such falsely implanted memories have called the accuracy of memory into question. More importantly, they have provoked serious ethical questions regarding the legitimacy of psychotherapy and other intrusive therapeutic procedures. Suspected perpetrators of sexual abuse and murder have been convicted in courts of law based on "evidence" provided by memories that were nonexistent until the victim underwent therapy (1). This paper will discuss the phenomenon of False Memory Syndrome (FMS) and attempt to trace the neurological pathways that lead to its creation.

Nadean Cool, a nurse's aide, went into therapy to help her cope with the effects of a traumatic event experienced by her daughter. Repeated sessions with the psychotherapist, involving hypnosis and other suggestive techniques, resulted in the resurfacing of memories of abuse that she herself had supposedly experienced. She came to believe that she had more than 120 personalities and had been subjected to severe sexual and physical abuse as a child. Once Nadean realized she was a victim of FMS, she sued the psychiatrist for malpractice. Her case was settled out of court for $2.4 million (3). Nadean is just one of many women who have developed False Memory Syndrome as a result of questionable therapy. Studies have shown that under the right conditions, guided misinformation can very easily blur the boundaries between reality and imagination.

The classic profile of an FMS victim is a white, middle-class woman undergoing long-term psychotherapy for relief from emotional problems (6). She comes to a psychotherapist who, in an effort to correlate these emotions with past abuse, often promotes the development of FMS. The rationale behind such an association lies in the theory that victims of childhood sexual abuse suppress memories soon after such events occur. These repressed memories are said to induce emotional and physical ailments in adulthood, resulting in the development of what some term Incest Survivor Syndrome. While there is no scientific evidence supporting this theory, therapists often induce the patient to take part in Recovered Memory Therapy (RMT) (6). Techniques of RMT include age regression, hypnosis, art and trance therapy, and guided visualization (1). Other techniques include group therapy sessions and the reading of other accounts by women who have recovered traumatic memories of such abuse. Such "therapeutic sessions" pressure the subject to find memories of abuse even when none exist. While such manipulative and confusing procedures "recover" disturbing mental and bodily memories of sexual abuse, their purpose is questionable. Misinformation interferes with accurate recollection of the actual event. Such memories, misunderstood by the patient and miscomprehended by the therapist, result in the creation of false memories leading to FMS (6). In essence, RMT is a technique therapists use to generate a diagnosis, often based on evidence conjured by the mind of the patient in response to misinformation fed to it.

The development of FMS impacts the psychological as well as the social spheres of the patient's life. The patient is encouraged to distance herself from the perpetrator (often her father), members of the family and skeptical friends. Instead she derives support from other victims of abuse (6). She gradually loses her sense of the real world and encloses herself within an environment that supports the FMS state. The subject can develop multiple personality disorder, discovering hidden personalities ("alters") whose characteristics differ significantly from one another. In some extreme cases the patient believes she is a victim of Satanic Ritual Abuse involving the participation of relatives motivated by clandestine satanic beliefs (6).

FMS raises a number of questions regarding the authenticity of memories of childhood abuse remembered later in life. Where and under what conditions are such memories generated? Are there ways of differentiating a true memory from a false one? Can one erase false memories created as a result of misinformation? These questions have been the focal point of experimental research in areas related to the repression and restoration of traumatic memories and the creation of false memories. The study of false memories has generated evidence that indicates a complex connection between memory and emotion. While strong emotions can either weaken or strengthen real memories, false memories can provoke strong emotion, thereby simulating the creation of real memories (5). Studies also show that false memories created as a result of the "misinformation effect" vary depending on both the person and the memory. The only apparent connection is that people experiencing lapses of attention are more vulnerable to memory distortion (5).

Researchers working with split-brain patients have made some fascinating observations regarding the nature of memory processing in the two hemispheres of the brain. When people are given information, their recollection of it is based largely on their experience. Often some parts of the recollection turn out not to be truly part of the experience. When split-brain patients are presented with this information, it is found that the left hemisphere is responsible for the creation of false reports, whereas the right hemisphere gives a more factual description (5). While this demonstrates that the two hemispheres respond to data differently, it also opens up avenues for determining how and where false memories are created.

One theory supports the view that false memories result from erroneous processing of past experience. People create an outline of proceedings and then fill in false events that fit the outline to develop a recollection of the original experience. Several observations support this view. The left hemisphere specializes in generating such schemata and has the ability to put the memory into context. In an attempt to interpret pieces of information within the larger context, the left hemisphere is constantly seeking meaning and reason behind events. However, when presented with information that is inconsistent with the schemata, the left hemisphere, unable to differentiate between true and false data, constructs an artificial past in place of the original one (4). These findings are supported by the demonstration that left prefrontal regions of the brains of normal subjects are activated when false memories are recalled. In another experiment to determine the neurological pathway involved in the creation of memory, experimenters PET-scanned the brains of volunteers. It was found that while both true and false memories activate the hippocampus, only true memories activate the superior temporal lobe (2). However, PET scans cannot be relied on for accuracy: as a result of repeated misinformation, false memories may come to ignite the sensory apparatus of the brain just as true memories do (2).

Once false memories are implanted, it is often hard to remove them. Yet studies have shown that propranolol, a beta blocker used in the treatment of patients with PTSD, might prove effective in erasing false memories. Propranolol "interferes with the neurochemical pathway thought to be responsible for making emotionally arousing events more memorable – the beta adrenergic system" (5). Hence, if the creation of false memories relies on activation of this system, then propranolol administration could be effective in the treatment of FMS. However, false memories created as a result of fantasies or outright fabrications would be immune to the drug (5).

This paper has attempted to discuss the phenomenon of False Memory Syndrome and to define the neurological processes behind its creation. While this area has seen an explosion of research in recent years, the specific neurological mechanisms that underlie the construction of such memories are yet to be determined. On a cautionary note, it is important not to completely dismiss the legitimacy of buried memories. While it is true that memories can be implanted, it does not necessarily follow that all hidden childhood memories recovered after therapy are fabricated. Thus the big question is: will research eventually allow one to correctly distinguish between an accurate memory and a false one?

References

1)Dr. Elizabeth F. Loftus, "Remembering Dangerously," Skeptical Inquirer (March 1995): an article that traces case studies of questionable techniques in psychotherapy.

2)Sharon Begley, "You Must Remember This (False Memories)," Newsweek (July 15, 1996): investigates parts of the brain activated by memory.

3)Dr. Elizabeth Loftus, "Creating False Memories," Scientific American (September 1997): research article showing how suggestion and imagination can create false memories.

4)Michael Gazzaniga, "The Split Brain Revisited," Scientific American (September 1998): scientific article on research into brain organization and consciousness.

5)"We Can Implant Entirely False Memories," The Guardian (December 4, 2003): article on research conducted to determine the nature of false memories.

6)John Hochman, M.D., "Recovered Memory Therapy and False Memory Syndrome," Skeptic, vol. 2, no. 3, 1994, pp. 58-61: an article that investigates techniques of RMT and the creation of FMS.

7)Christine McBrien and Dale Dagenbach, "The Contributions of Source Misattributions, Acquiescence, and Response Bias to Children's False Memories," American Journal of Psychology (Winter 1998).


Can You Make Yourself Laugh
Name: Elissa Set
Date: 2004-02-24 07:48:37
Link to this Comment: 8437


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

People often say that laughter is the best medicine. But how could someone administer laughter to oneself? Most people define laughter as a response to something funny or humorous. What most people do not realize, though, is the complexity that lies behind the ability to laugh. Laughter has two aspects: the neurological part, and the physical part that produces sounds and gestures (4). There are various stimuli that make people laugh, but all of them produce the same effect in the parts of the brain that control laughter. In most cases, however, laughter can only be triggered by an external source. People generally cannot simply make themselves laugh, just as they cannot tickle themselves. Examining the neurological aspect can help explain why.

Laughing is a complicated matter. Fifteen facial muscles are involved in laughing. The larynx and epiglottis of the respiratory system also play a vital role in making the gasping noises associated with laughter. If someone laughs hard enough, they may also form tears (4). What is particularly interesting is the cause of these actions. Laughter is stimulated through many parts of the brain. One of the main parts is the frontal lobe, one of the brain's largest regions, which controls one's emotional reactions. Activity is also observed in the cerebral cortex, which analyzes the structure of the humor and helps the brain understand it; the occipital lobe, which processes visual signals; and the motor sections of the brain, which produce the actual physical response of laughter (4). This distributed process sets laughter apart from other emotional responses, which are usually concentrated in activity in a specific area of the brain (4).

A great deal of recent research has been conducted to study the stimuli of laughter. In 1998, Nature published a paper on how electric stimulation caused laughter in a 16-year-old girl. Researchers were trying to map her brain because she was having epileptic seizures (5). They located an area in her left superior frontal gyrus, measuring about 2 cm x 2 cm, that always produced laughter when stimulated with an electric current (2). During the test, they had her do various activities, such as reading a story, naming objects, and making hand movements. Whenever her superior frontal gyrus was stimulated, she would laugh and attribute the laughter to the activity she was doing (2). Regardless of what the activity was, she thought it was funny because of that stimulus. Therefore, any kind of stimulus in that region of her brain made her laugh, because they all followed the same pathway.

A similar conclusion was reached when a group of neuroscientists studied laughter using episodes of Seinfeld, a comedy sitcom, and The Simpsons, a cartoon show (1). There are two main differences between the shows. One is that Seinfeld uses live actors, while The Simpsons uses animation. Another is that Seinfeld uses a laugh track, a recording of people laughing played during funny parts of the show, while The Simpsons does not. Using a magnetic resonance imaging machine, researchers found that both shows set off the same nerve pathway in viewers' brains (1). The study also found that different parts of the brain respond to different parts of a joke. When a participant saw something funny, the posterior temporal cortex and the inferior frontal cortex showed signs of activity, and a few seconds later, when the person responded to the joke, the insula and the amygdala showed activity (1).

Laughter can also be contagious, which is likely one of the reasons shows using live actors also use a laugh track: one is more likely to laugh when other people are laughing (6). In a study done by Robert Provine, people who were by themselves were 30 times less likely to laugh than people in a social situation (6). In his laughter studies, Provine had a group of undergraduate psychology students listen to a toy called a laugh box that played the sound of laughter for 18 seconds. Provine played the laugh box ten times and had the students report how they responded to the laughter. The first time the laugh box was played, half the students laughed, and 90 percent of them at least smiled (6). However, by the tenth trial, only 3 of the 128 students laughed at the laugh box (6). Hearing and seeing the other students laugh made the laugh box seem funny. It was the combined stimulus of the laugh box and the laughter of other students that evoked continued laughter among the group. However, the students could only take so much of the same stimulus; by the tenth trial, most of them had found the laugh box obnoxious (6).
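Provine's figures lend themselves to a quick back-of-the-envelope comparison. Below is a minimal Python sketch of the habituation effect; the trial-1 and trial-10 counts come from the study as described above, while the function and variable names are my own illustration, not part of the original study.

```python
# Habituation in Provine's laugh-box study: fraction of the 128
# students who laughed on the first vs. the tenth playback.
# Counts are taken from the study as summarized above.

def laugh_rate(laughers, group_size=128):
    """Return the fraction of the group that laughed."""
    return laughers / group_size

trial_1 = laugh_rate(64)   # "half the students laughed" on the first trial
trial_10 = laugh_rate(3)   # only 3 of the 128 laughed by the tenth trial

print(f"Trial 1:  {trial_1:.1%} laughed")
print(f"Trial 10: {trial_10:.1%} laughed")
print(f"Drop: roughly {trial_1 / trial_10:.0f}-fold decrease")
```

A roughly twentyfold drop in laughter across ten playbacks of an identical sound illustrates how quickly the response fades once the element of surprise is gone.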

All of these studies have helped formulate the following deductions. One is that there must be a stimulus in order for laughter to occur. Another is that laughter requires activity in multiple lobes of the brain. A third can be deduced from the conclusions of those studies: laughter must involve some element of surprise. In Provine's study, once the element of surprise was removed, the students found the laugh box annoying. In the case of a joke, the audience does not expect the outcome, and that unexpectedness is part of what makes it funny. When the outcome of a situation is unpredictable, the audience is stimulated by the surprise, causing laughter. This is one possible reason why people cannot tickle themselves. Some scientists believe that laughing is a built-in reflex to stimulation of one's skin, yet people cannot tickle themselves (5). Although the signal sent from one's skin to the spinal cord and brain should be the same as when someone else does the tickling, the effect changes, because there is no tension or surprise (5). One's brain is aware that the stimulation is going to happen, so the action is expected.

Laughter is a topic that should continue to be researched. As recent studies have shown, laughter is the effect of an external stimulus networked through various parts of the brain. Future studies could examine how people who have suffered strokes can have episodes of uncontrollable laughter or lose the ability to laugh completely (3). Understanding the brain's response to humor can also help researchers understand mental illnesses such as depression (3). Science has already established that certain parts of the brain control specific functions of the body. Laughter, because it activates so many parts, could be an encompassing topic of study that illuminates the relationships among the various lobes and regions.


References

1) Brain's Funny Bone , a study about laughter using television

2) Electric Current Stimulates Laughter , a scientific paper from Nature magazine

3) Finding the Brain's Funny Bone , a study about laughter using MRI scans

4) How laughter works , an explanation for the mechanism of laughter

5) Neuroscience for Kids – Laughter and the Brain , an overview of laughter

6) Provine Laughter , a groundbreaking in-depth study on laughter


Dynamic Mimicry of the Indo-Malaysian Octopus
Name: Michelle S
Date: 2004-02-24 08:31:43
Link to this Comment: 8438


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Recently, researchers have discovered an extremely unusual octopus. The species, known as the Indo-Malayan octopus, can alter its shape, form, and color pattern to mimic or imitate other sea creatures in order to avoid predation (2). The discovery of the mimic octopus is noteworthy because no other type of cephalopod is known to have impersonation abilities. Nor is the octopus limited to one imitation: researchers have observed up to eight different formations. The alterations depend on the octopus's appetite, its surrounding environment, and the proximity of the predators it encounters (1). In analyzing the formations, behaviors, and predators of the mimic octopus, it is important to isolate the origins of this exclusive and highly intelligent defense mechanism. Is this means of protection an evolutionary development, one that allows the cephalopod a better chance of survival? Or is it the result of observed behavior, whereby the mimic octopus becomes aware of the relations occurring in its environment and successfully imitates a species based on that species' ability to survive dangerous predators?

The mimic octopus is restricted to the islands of Indonesia, specifically off the coasts of Sulawesi and Bali (3). Surprisingly, the octopi have been observed during daylight hours, generally residing near sand tunnels and holes (1). The octopi favor these mounds because they provide a significant source of food, including small worms, fish, and crustaceans. The octopus uses its arms to feel for prey, then captures the food with its expanded webs. However, when the animal is attempting to hide itself from possible enemies, the Indo-Malayan octopus can transform itself into a variety of organisms, including fish, sea snakes, and anemones. If the octopus observes a cluster of damselfishes, it will change into a lionfish by swimming above the ocean floor with arms extended beyond the body (2). The lionfish is known to possess poisonous spines, which successfully deter the damselfish from preying upon the mimic octopus. Another possible transformation is the sole fish: the octopus propels itself in a similar manner by forming a leaf-shaped form that moves it across the ocean floor effortlessly. The octopus's arms are also useful in impersonating the sea snake. Two arms are waved around to appear like a pair of snakes, while the other six are hidden from view; the octopus also changes its color, creating yellow and dark bands across the exposed arms. Other variations employed by the mimic octopus include the sea anemone and the jellyfish (3).

The phenomenal behavior of the Indo-Malayan octopus has left researchers wondering how this trait developed or was acquired. The ability has not been observed in any other species of cephalopod, despite their lack of a strong internal or external skeleton, a body type ideal for imitation. Studies and observations of these animals within their habitat point to a wide variety of possible explanations, both evolutionary and behavioral. The Indo-Malayan octopus copies animals known to produce poisons, such as fish with toxic glands, and anemones and jellyfish known for their stinging powers. This selectivity appears to support a behavioral influence on the octopus's capacity for imitation, since the animal has singled out species known to contain toxins (4). Researchers have also explored the idea that the characteristic serves not primarily for defense but to attract sexual mates (2), the idea being that females are more likely to mate with males able to transform into a larger number of sea creatures. The problem with this theory is that both female and male octopi displayed mimic mannerisms even when isolated from each other; impersonations occurred without any potential mate present. Therefore, the trait is much more likely to be something the species has examined and observed over a long period of time.

However, there is considerable evidence that supports the idea of evolutionary development. Cephalopods are known to have the ability to match the surrounding environment by creating colors and patterns similar to the background. For example, the reef squid can camouflage itself among a group of parrot-fish. Yet none of these organisms can accurately mimic so many different types of sea creatures. Since the group began with the aptitude to emulate an environment, evolutionary theory would explain a new advancement in the area of predatory defense. The progression of mimicry is based upon an organism that reveals innovative formations that have not occurred within the species before (7). The octopus has developed the ability not only to mimic its surroundings but to mimic a number of other creatures. This dynamic mimicry gives the Indo-Malayan octopus choices, allowing it to tailor its behavior to specific adversaries. This also explains why the trait is such a rare occurrence: as more creatures obtain the ability to imitate, the less effective the trait becomes at deterring enemies, since predators will eventually become aware of imitation and develop the ability to spot charlatans (2).

Evolutionary assumptions also help explain the relative toxicity of the mimic octopus. It is currently unknown whether the octopus is poisonous, and whether its level of poison changes with its appearance. Theorists assume that the mimic has the same potential for poison whether it is perceived as a lionfish or a sea snake, because the entire act of imitation indicates that the animal is engaging in predatory deterrence. It is most likely that the octopus imitates in order to avoid encounters; it does not have the toxins available to be a true danger.

The evolutionary theory seems to explain more of the octopus's behavior and development. If we assume that the mimic octopus acquired the behavior through instinctual means, then possible lines of inquiry include what advancements in mimicry will occur over the next thousand years, and what behavioral traits predators will develop in order to defeat camouflage defenses. Is water a more encouraging environment for camouflage behaviors? Are the qualities found in the octopus's imitation similar to the imitations that occur in cellular diseases, such as cancer? The support for evolution as a basis for the growth of mimicry merely provides a foundation for future work in the area, since most of what is known of these octopi is conjecture.


References

1)ABC's News Article Homepage, General Article

2)The Royal Society Articles Server, Indo-Malaysian Octopus Article

3)National Geographic Website,General and Related Articles

4)News Scientist Website, General Article

5)Science News Website, General Article

6)For Romeo Website, Small Article with Good Picture

7)UniScience Website, General Article


"On Becoming A Person: A Therapist's View of Psych
Name: Jennifer S
Date: 2004-02-24 09:18:36
Link to this Comment: 8440


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

On Becoming A Person: A Therapist's View of Psychotherapy
Carl Rogers
"Life, at best, is a flowing changing process in which nothing is fixed." ((1),222)

Today most bookstores have entire sections designated for self-help books, consumer's guides to psychological illnesses, and how-to guides for recovery from mental illness. In the 1960s, however, mental health remained a veiled science. People spoke about psychology in an unfamiliar language; it was a topic that made many uncomfortable. Freud felt that therapy had to be frustrating and had to increase the patient's anxiety in order for him to improve, and it was generally assumed that therapy would be difficult and unpleasant. Carl Rogers's book, "On Becoming A Person", revolutionized the psychological literature. While his intent in publishing the book was simply to make his material more widely available to other practitioners, he found that everyone from housewives to lawyers sought it out, and over a million copies were sold. Rogers had not expected this, and his reputation was adversely affected by the book's success among those outside the behavioral science fields.
The book, "On Becoming A Person", was originally released in 1961. At that time, the Freudian school of thought prevailed. Studies were demonstrating that psychoanalysis actually made many patients less reflective, less comfortable, and less able to function at a high level socially. In spite of these findings, the silent psychoanalyst behind the couch continued to thrive throughout the fifties and into the early sixties. Studies were beginning to explore behavioral science, but the science was in its infancy. While little was known about neurochemistry, scientists were exploring stimulus-response conditioning, with experiments that attempted to show that patients simply reacted to the rewards they received. Robots were constructed to reward patients in mental institutions for good behavior, and psychiatrists believed this was the wave of the future. Rogers examined these studies and found them dehumanizing and flawed. He felt out of place in psychology, as he questioned the ideals of the time, often to the dismay of his supervisors. Rogers proceeded to explore behavioral science in a mode similar to the pedagogy of Serendip, the idea of "getting things progressively less wrong."((2)) While Rogers recognized that the methods of psychotherapy current in the 1950s were inherently flawed, he worked to apply the concepts he learned in his formal study to his independent research to find a less wrong way of treating patients. Rogers participated in many lectures and freely admitted his ignorance in behavioral science, but his desire to learn and his enjoyment of the process of learning inspired others and allowed him to make a lasting impact.

In a debate with Dr. B. F. Skinner, who was eager to objectively map out the brain and to diagnose and determine physiological reasons for everything, Rogers argued that all studies man ever completes will be subjective. Rogers felt that, because the hypotheses and ideas a man brings to research inevitably affect its direction and outcome, research can never be completely objective. Rogers was questioning the core of science: the idea that man could objectively acquire and amass information. His argument resonates in our time as clearly as it did in the fifties. Man always has a choice, a choice to pursue what interests him, and in pursuing these interests, he brings along a set of core values he has learned and chosen to accept as his own. This inherent subjectivity empowers us to pursue ideas that appeal to us.

Research that sought to label every action of man as somehow out of his personal control, as a biological impulse or reaction, seemed to strip man of free will and dignity. Determining the basis for behavior is a debate that spans nearly all disciplines. Philosophers, theologians, and ordinary people alike ponder this point: what causes behavior? In discussions, people are often vehemently split on this point. Rogers, hoping to preserve the enigma of humanity while researching in behavioral science, felt that research in the behavioral sciences should be based on the following values:
"Man as a process of becoming; as a process of achieving worth and dignity through the development of his potentialities;
The individual human being as a self-actualizing process, moving on to more challenging and enriching experiences." (1, 396)
His fears, in a time before Prozac, before the frantic rush to label active children with ADD and medicate them, are quite similar to the fears raised today about over medication. He feared that the study of behavioral science, if done with values that did not promote the individuality of man and preserve the idea of free will, could lead to the elimination of creativity and a conforming society. How similar is this to a parent's fear that his child's individuality will be eliminated or altered by the administration of Ritalin? While Rogers did not live to see the psychopharmacology craze of the 1990's, he certainly recognized the power of behavioral science to create a happy, docile, homogenous society. Rogers concludes his lecture hopefully, stating:

"Unless as individuals and groups we choose to relinquish our capacity of subjective choice, we will always remain free persons, not simply pawns of a self-created behavioral science." ((1), 401).

Rogers's beliefs and study seem to center around the inherent value in the part of the mind that is not understood. He didn't label the unknown territory of the brain, what we've come to call the I-function, but he respected it. His contemporary, Dr. Skinner, felt that what we consider to be free will is just the part of the brain that we cannot explain yet. Similarly, in a class forum the I-function was called a default system, a place to put the ideas we couldn't explain. ((3)) The I-function is a known function whose process is not understood. It is the core of our humanity: our belief in our free will and in our ability to evaluate our decisions independently is what we see as setting us apart from other animals. This raises the question that still puzzles scientists and philosophers: what constitutes behavior? If biology does not equal behavior, what unknown elements allow us to behave the way we do? Rogers seems to have felt that research in the behavioral sciences was only useful to the extent that it bettered humanity. Mapping the entire brain would be an incredible feat, if ever accomplished, but what would this do to society? Stripped of free will, how would we come to terms with life?

Rogers's research sought to explore man's life as a process, a continuum, that would never be completely understood, but hoped the discoveries made could better the lives of many without stripping them of their individuality. Rogers independently researched the outcomes of psychotherapy as quantitatively as possible, and was one of the first psychotherapists to record sessions for further analysis. In his research, which was based entirely on psychotherapy, not medication, he sought to preserve the idea that man was a unique and independent entity. Displeased with what he had learned from psychoanalysis and other prevailing theories, Rogers set out to do what he called "negative learning": when the ideas provided to him through formal education failed him, he pursued other options. Rogers coined the term "client-centered therapy", a term still in use today. His overarching belief was that, through a constructive relationship with a patient in which he was "real", he would be able to help patients learn things about themselves and change how they acted in their other relationships, becoming more successful and happy in life. He set out several models to explore what aspects had to be present in order for the relationship to be therapeutic. If a patient felt he was working cooperatively with the psychotherapist to solve a problem; if the psychotherapist was trustworthy and communicated this to the patient; if the patient could be allowed to express his thoughts free from external evaluation; and if the therapist could view the goal as a process of becoming, then, he stated, therapy would succeed.

The ideas Rogers raised are helpful not only in therapy but also in education. While recognizing the flaws of the education system, Rogers applied the concepts he saw as effective stimuli for learning in the classroom. A class without teachers, lectures, or examinations was his ideal, but this ideal wasn't readily approved by any university. Instead, Rogers attempted to create an environment where students and faculty seek a solution to a problem or problems, pushing collectively away from flawed ideas and using resources collaboratively to advance to a less wrong idea. Rogers suggests we see examinations not as markers of the material we've learned, but as necessary tickets for entrance into points in life, such as graduate school. If we as learners could come to value the process of learning more than the examinations and final grade, what could we achieve together? Rogers states that the ideal is not a stasis but a constantly flowing process that we can allow ourselves to become engaged in, and that in becoming part of this process, we can achieve what he considers to be the good life. ((1), 184-196). Learning, too, is a process of constant change, of growth in our own knowledge and in the generally accepted ideas of society. By accepting knowledge as a fluid concept, we can further our enjoyment of life and our academic pursuits.

References

1) Rogers, Carl R. "On Becoming A Person: A Therapist's View of Psychotherapy." Houghton Mifflin Company, New York, 1961.

2) "Science as getting it less wrong." Paul Grobstein, Bryn Mawr College Serendip website.

3) Class discussion on the I-function, forum for Biology 202, on the Serendip website.


You couldn't catch it if I threw it at you: A neu
Name: Erin Okaza
Date: 2004-02-24 09:51:31
Link to this Comment: 8447


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

I was 15 and volunteering at my brother's school for children with learning and developmental disabilities when I met Blake Matheson. Unlike most of the other kids, Blake had cerebral palsy (CP) and was confined to a wheelchair. At the time, I didn't know anything about CP and remember nothing but sweaty palms and racing thoughts as I approached him. After an awkward moment of introductory silence, he started asking me questions. Not only did we become friends, we introduced each other to aspects of our worlds we would never otherwise have known. Though my family's move the following summer ended my time at the school, my humbling experience with Blake remains a constant reminder of people's tendency to make assumptions (however innocent) about others based on shallow observations of outward physical appearance and behavioral differences, and, in the end, to walk away with nothing.

The goal of the following discourse is to provide a useful way of thinking about cerebral palsy in the context of the nervous system. I hope that such an examination will enhance our understanding of behavior associated with CP and ultimately demystify common misconceptions. First, I will explore why box models of the nervous system are useful in explaining CP. Next, I plan to investigate how recognized differences in the nervous system provide useful ways of thinking about specific CP mechanisms and treatment. Finally, I will close with an evaluation of the nervous system's limitations in defining behavior. In effect, this discussion will offer support for the view that cerebral palsy is yet another condition consistent with the notion that brain = behavior.

CP occurs as a result of irreversible damage, before, during or after birth, to the networks of brain cells (neurons) and connecting "cables" (white matter) that control movement. In effect, it is not a disease that can be "caught," but a medical condition dealing with muscle control that affects posture and movement (1). CP is a generic term that covers four distinct cerebral palsies - spastic, athetoid, ataxic, and mixed. In addition, further classification of CP is characterized by body location: quadriplegia (all four limbs), hemiplegia (one side of the body) or diplegia (either in both legs or both arms) (3).

Thinking about behavioral outcomes in terms of boxes is especially helpful in the case of CP. Such analysis offers an explanation of the occurrence of different CP's, all within the realm of motor disability. Behaviors associated with the various types of CP change in relation to the severity and location of brain damage. Athetoid cerebral palsy, caused by damage to the basal ganglia, is characterized by a lack of coordinated smooth movements; ataxic cerebral palsy, evidenced when there is damage to the cerebellum, hinders depth perception and balance; spastic cerebral palsy, mainly caused by damage to the motor cortex, results in stiff, difficult movement; children with mixed cerebral palsy may display a combination of two or more of the above types (2). This suggests that behavior is highly dependent on brain organization. Damage to the white matter of the brain does not result in a random expression of behaviors; instead, such damage severs specific internal interconnections linking "boxes," producing a very specific behavioral expression consistent with damage only to that compartmentalized region.
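
The "box" logic of this paragraph can be caricatured as a simple lookup table. This is a toy illustration only, not a clinical model: the region-to-outcome pairings follow standard attributions (spastic CP is here linked to motor-cortex damage, its commonly cited source), and `predict_cp` is a hypothetical helper invented for this sketch.

```python
# Toy "box model" of CP: damage to a specific brain region ("box") maps
# to a specific motor outcome, not a random one. Pairings are standard
# attributions; this is an illustration, not a diagnostic tool.
CP_BY_REGION = {
    "basal ganglia": "athetoid: uncoordinated, writhing movement",
    "cerebellum": "ataxic: impaired balance and depth perception",
    "motor cortex": "spastic: stiff, difficult movement",
}

def predict_cp(damaged_regions):
    # Collect the outcome for each damaged "box"; more than one
    # damaged region yields the "mixed" type described in the essay.
    outcomes = [CP_BY_REGION[r] for r in damaged_regions if r in CP_BY_REGION]
    if not outcomes:
        return "no modeled motor region damaged"
    if len(outcomes) > 1:
        return "mixed: " + "; ".join(outcomes)
    return outcomes[0]

print(predict_cp(["cerebellum"]))
print(predict_cp(["basal ganglia", "motor cortex"]))
```

The point of the sketch is the determinism: the same damaged "box" always yields the same behavioral outcome, which is what distinguishes compartmentalized damage from random disruption.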

Further developing the usefulness of "boxes" in explaining CP is the notion that physical and behavioral differences of the nervous system do not necessarily imply mental retardation or a learning disability. In the case of Blake, I noticed that, with the help of communication devices, he was able to relay and communicate ideas which were often far more developed than those of his "normal"-looking peers who, unlike Blake, struggled with learning disabilities. The above discussion about specific "boxes" generating specific outcomes implies that intellectual outcomes are independent of motor outcomes. Thus, it cannot be assumed that an individual with behavioral differences automatically has cognitive disabilities. Only one-fourth to one-half of children with CP experience some type of learning problem such as a learning disability (1). It is important to note that individuals with learning disabilities are usually within the normal range of intelligence, as opposed to those with severe learning problems such as mental retardation, where intelligence is below the normal range (4). It is also important to note that many "tracts" run between different "boxes." This suggests the existence of many pathways to achieve the same outcome. In effect, it may be possible for other interconnections (i.e. axon bundles) to take over in the event that damage occurs in one "tract," still producing the same result. Only if the severity and location of the same motor-hindering brain injury also affect the internal interconnections between "boxes" of the brain specific to the facilitation of intellectual outcomes, and other non-affected white matter interconnections cannot compensate to recreate the output, might causality be determined (5).

The second area of examination investigates the extent to which we can use observations about the nervous system to explain why people with cerebral palsy behave differently. To conduct a thorough analysis, the focus will be placed on spastic CP, as it is prevalent in 80% of all CP cases (2). Spastic CP occurs as a result of abnormal motoneuron excitability (8). Under normal circumstances, muscles usually have enough tone to facilitate movement and maintain posture while adjusting for speed, gravity, and varying flexibility. This movement occurs as sensory nerve fibers communicate how much muscle tone the muscle has as it relays the information "to tense" to the spinal cord which then carries the message to the brain (7). The command to reduce muscle tone follows the opposite path of direction from nerves in the brain via the spinal cord. These two processes work in tandem to coordinate smooth muscle movement and strength. On the other hand, an individual with spastic cerebral palsy cannot control the muscle's amount of flexibility. In effect, the relay from the muscle floods the spinal cord and creates a muscle that is too tense (spastic) (6). The inability of the nervous system to facilitate coordination between the stretch receptors, sensory neurons and interneurons in the spinal cord creates stiff muscles, limits stretching, and hinders muscle range. Over time, spasticity becomes the major cause of physical deformities in limbs (1).
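
The feedback loop described above, sensory signals rising with muscle tension while a descending command reduces tone, can be caricatured in a few lines of code. This is a toy numerical sketch under invented assumptions, not a physiological model: `settle_tone`, the gain, and the inhibition values are all made up for illustration.

```python
# Toy stretch-reflex loop: stretch receptors fire more as muscle tone
# rises; a descending inhibitory command subtracts from the excitation.
# With intact inhibition the loop settles at a modest tone; with the
# inhibition removed (a crude stand-in for spasticity) the same loop
# settles at a much higher tone.
def settle_tone(stretch, reflex_gain, inhibition, steps=50):
    tone = 0.0
    for _ in range(steps):
        sensory = stretch + tone                       # receptor signal grows with tone
        command = reflex_gain * sensory - inhibition   # net motoneuron excitation
        tone = max(0.0, min(command, 10.0))            # bounded muscle tone
    return tone

normal = settle_tone(stretch=1.0, reflex_gain=0.5, inhibition=0.4)
spastic = settle_tone(stretch=1.0, reflex_gain=0.5, inhibition=0.0)
print(normal, spastic)  # the uninhibited loop settles at a higher tone
```

The design point is that spasticity in this caricature is not a different mechanism but the same loop with one term (descending inhibition) missing, which parallels the essay's account of the relay "flooding" the spinal cord.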

Knowledge of abnormal motoneuron excitability in the nervous system is used to create CP management techniques specific to various types of spasticity. The first technique, selective dorsal rhizotomy (SDR), is currently the only permanent procedure that reduces spasticity and is favored in young children with velocity-dependent spasticity (10). SDR involves cutting the hyperactive sensory nerve fibers that originate in the muscle and enter the spinal cord rootlets, so as to reduce message flow to the muscle (7). In effect, nerve cells in the spinal cord receive less information from the muscle's sensory neurons, resulting in a more even distribution of nerve cell traffic in the spinal cord. Another relatively new method is the intrathecal baclofen pump, used for patients with diffuse spasticity. It addresses the nervous system's failure to release gamma amino butyric acid (GABA), a chemical neurotransmitter that signals the relaxation of the lower back and leg muscles, producing an inhibitory effect on the thalamus (4). When baclofen is injected into the spinal cord, it mimics the functions of GABA, blocking abnormal nerve signals and allowing for greater muscle control (7). In the end, both treatments address the muscle neuron's inability to send controlled messages along an interneuronal mechanism, resulting in improvements in standing, sitting, walking, and balance control. Though the two methods clearly use different mechanisms, both have gained positive responses.

Despite the fact that neurobiological advancements have enhanced our current understanding of cerebral palsy, there are limits to the extent to which behavior can be explained by the nervous system. Currently, all treatment for cerebral palsy focuses on symptom management. Little is known about the exact nervous system interactions that cause the death of white matter tissue or why CP primarily affects motor function (7). Prevention of cerebral palsy can only be addressed once researchers understand the process of normal brain development and which mechanisms go awry during development, causing the nervous system anomalies that are observed as behavioral differences (9). Once this is understood, comparisons might be made between brain and nervous system function in CP and non-CP development to investigate the exact mechanisms leading to brain damage, and the possibility of prevention. The key to understanding brain development lies within fetal development. We can apply our observations about the box model's usefulness in characterizing cerebral palsy behavior to ask questions about what happens during this time of rapid cell division. At what point do brain cells specialize into different types? How do they know where to assemble in their respective parts of the brain? We can deepen these developmental questions by asking about the process by which white matter develops and the nature of the connective branches that form crucial connections with other brain and nervous system cells.

CP presents us with yet another example of how the "brain" generates sets of behaviors unique to its construction and organization. People with CP lack the ability to control their motor faculties due to neurodevelopmental impairments caused by damage to specific areas in the white matter of the brain. This behavior is consistent with the severity and location of the damage, as generalized by the four major types of palsies. The CP brain registers the differences caused by brain damage and produces a slightly different set of behaviors depending on the extent of the damage. Though the nervous system is useful in explaining CP behavior, it does not account for all aspects of behavior. This leaves us with suggestions about where we should look in the nervous system, particularly in the area of developmental neurobiology and the implications such research might have for CP prevention. Cerebral palsy offers a unique look at the neurological triumphs of medicine while simultaneously offering a humbling reminder that we are all at the mercy of our own misunderstandings. Though scientific shortcomings are reconciled through trial and error, education is the only way by which clarification and personal understanding are achieved. Maybe this discussion about CP has, in a small way, continued where Blake and I left off 7 years ago.


References

1) Miller-Dwan Regional Rehabilitation Medical Center, specifically devoted to providing information about spastic CP
2) University of Virginia Medical School, Children's Center, tutorial for cerebral palsy
3) About Cerebral Palsy, information focused on specific types of cerebral palsy
4) American Association on Mental Retardation, provides distinguishing characteristics between mental retardation and learning disabilities
5) Cerebral Palsy Resource Center, information and links about treatment, diagnosis, care, etc.
6) University of Alabama, defines the mechanisms of spastic CP
7) St. Louis Children's Hospital, surgical treatment options for spastic CP
8) Kennedy Krieger Institute, general overview of CP and current research initiatives
9) National Institutes of Health, general overview of CP
10) Ontario Federation for Cerebral Palsy, information about spastic CP


The Mind's Eye? A Look at Optical Illusions
Name: Ghazal Zek
Date: 2004-02-24 10:14:56
Link to this Comment: 8449


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Epicharmus, a Greek poet and originator of Sicilian Comedy (1), is credited with saying that "the mind sees and the mind hears. The rest is blind and deaf." (2) Although Epicharmus' idea was conceived around 450 BC, it is interesting to apply it to our modern understanding of optical illusions, if we understand an optical illusion to mean a "false visual perception" (3). One type of optical illusion that specifically interests me elicits the illusion of motion. These "motion perception" (4) illusions provide exceptionally striking visual effects, usually of a stationary figure appearing to rotate. Using motion perception illusions as a model, can we use Epicharmus' notion that the "mind sees" and the "rest is blind" (in this case, the eyes) to explain the phenomenon of optical illusions?

It will first prove helpful to understand how the eye works. When we see an image, our eyes are actually receiving light, which enters through the cornea. The cornea bends the rays of light before they reach the pupil. The rays of light then pass through the lens and bend toward the retina. (2) The retina, however, captures an inverted image. There is a layer of photoreceptors (among other types of neurons) on the retina which are used to measure light intensity in a way that then allows the rest of the nervous system to understand the signals. In humans, as well as most animals, nerve cells found in the eye are organized into a "lateral inhibition network." Before the signals are sent to the brain through the optic nerve, the lateral inhibition network, along with other organizations of neurons on the retina, process them. (5)

The lateral inhibition network actually "throws away" a significant amount of information. So, is lateral inhibition helping or hurting our ability to see? Lateral inhibition consists of excitatory input from some photoreceptors and inhibitory input from other photoreceptors. Equal levels of illumination of the excitatory and inhibitory photoreceptors generate the same output signal. However, when there is a contrasting dark/light border, different output signals are generated. (5) In general, lateral inhibition is able to "fill in" much of the information that it "throws away." In this case, lateral inhibition does not hurt our ability to see. On the other hand, sometimes the wrong information is filled in, and we see the illusion of another image.
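
The behavior described here, identical outputs under uniform illumination but exaggerated outputs at a dark/light border, can be sketched in a few lines. This is a toy one-dimensional caricature of lateral inhibition, not a model of the retina; the function name and the inhibition weight `k` are invented for illustration.

```python
# Toy 1-D lateral inhibition: each cell's output is its own (excitatory)
# input minus a weighted average of its neighbors' (inhibitory) inputs.
# Edge cells use their own value in place of the missing neighbor.
def lateral_inhibition(light, k=0.4):
    out = []
    for i, center in enumerate(light):
        left = light[i - 1] if i > 0 else center
        right = light[i + 1] if i < len(light) - 1 else center
        out.append(center - k * (left + right) / 2)
    return out

# Uniform illumination: every cell reports the same (reduced) value.
print(lateral_inhibition([10, 10, 10, 10]))
# A dark/light border: the cell just on the dark side undershoots and
# the cell just on the light side overshoots, exaggerating the contrast.
print(lateral_inhibition([0, 0, 10, 10]))
```

The undershoot/overshoot at the border is the network "filling in" an edge more strongly than the raw light levels warrant, which is the same mechanism the essay credits with producing illusory images when the wrong information is filled in.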

The motion perception model (also called the "peripheral drift illusion") that I would like to discuss is called "rotating snakes." (6) Here, the viewer sees the image of rotating coils of snakes, whereas the actual image is quite stationary. It is important to note that there exist many different regions of color contrast in this illusion, and that it relies heavily on peripheral drift. In general, illusory motion has a pattern of moving from a black region to an adjacent dark gray region, or from a white region to an adjacent light gray region. Factors such as curved edges and shorter edges enhance the peripheral drift. (7)

Although Epicharmus could not explain the phenomenon of peripheral drift or the lateral inhibition network, his idea that the "mind sees" and the "rest is blind" still raises some interesting points. With regard to peripheral drift illusions, the image the viewer sees is in large part a product of the "brain." The eyes, therefore, do not behave as a camera does; they cannot simply capture an image independent of a lateral inhibition network, independent of the brain's involvement. However, simply because the brain may be involved in our sight does not mean that seeing is necessarily a "conscious" effort. For example, the lateral inhibition networks work as a part of the unconscious brain (5). In effect, no matter how hard one tries to avoid being fooled into motion perception, one cannot do it (unless one is an appreciable distance from the image, thereby lessening the strength of its light/dark regions).

So, while it is the mind, or part of the mind that is deciphering the rays of light picked up by the eyes into meaningful images, it may be working semi-independently from other parts of the brain which are used for logical thinking or problem solving. In other words, when it comes to peripheral drift illusions, we cannot think our way out of seeing something that is not there. On the other hand, we can know that what we are seeing is in fact, an illusion (although not necessarily instantaneously).

Clearly, no two brains are alike, so we can infer that no two people see something in exactly the same way. By and large, however, the patterns of vision are similar, especially with regard to motion perception illusions, because of the way the eyes (and brain) work. Knowing that what we see is not exactly a snapshot of the world can be a disheartening notion. However, when we realize that the way in which we view the world is unique and subject to a system as complex and evolved as the human brain, our view of the world does not seem so disheartening after all.


References

1)Encyclopedia Britannica: Epicharmus
2)Are you seeing what I'm seeing? By Keith Gaudet, A simplified explanation of how the eye works and perceives illusions.
3)Encarta.msn.com dictionary definition of "optical illusion"
4)Sap Design Guild's Optical Illusions, A nice web resource for the different types of optical illusions
5)Serendip: Lateral Inhibition, A rich resource from Bryn Mawr College about how the eye works
6)Optical Illusions: Rotational motion, A website containing rotation motion optical illusions, namely the "Rotating Snake Illusion"
7)Phenomenal Characteristics of the Peripheral Drift Illusions: Vision: Vol. 15 No. 4 261-263, 2003, An article from the Journal "Vision" explaining the phenomenon of peripheral drift.


Scrutinizing Timmy and Lassie: A Behavioral Explor
Name: Ginger Kel
Date: 2004-02-24 10:45:13
Link to this Comment: 8450


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"A dog teaches a boy fidelity, perseverance, and to
turn around three times before lying down (1)."
-Robert Benchley

In United States homes, people do not dominate; pets do. Today, Americans own 377.8 million domesticated animals, 65 million of which are dogs (2). When surveyed about why their families own pets, words like "companionship, love, company, and affection" frequently popped up (2). In recent times, as the above quotation demonstrates, pets, but more specifically dogs, have been bestowed with near humanity. They are parts of our families, our guardians, and our best friends. What causes humans to venerate dogs so? What is it in our nature that makes us compatible with such a different species? The answers to these queries lie in the behavioral common ground between man and dog.

Canine roommates are not a recent phenomenon. It has been estimated that dogs were domesticated as far back as 15,000 years ago in East Asia (3). Dogs were a form of livestock. "People must have gained some advantage by having this domestic animal at that early time...Dogs may have been used as sentinels, for transport, and for herding in hunts (3)." The task of taming wolves required energy, energy that could have been used in some other venue of daily life, and that makes domestication a costly process. However, the function performed by dogs made it a worthwhile cost. Why is this? "Humanity's first obligation is to ensure humanity's survival (4)." The use of canines afforded human beings a hereditary advantage.

Why did man form a partnership with canines despite all the other animals available? The origin of the dog/man attraction is rooted in similar lifestyles. The wolf/dog is a predatory creature, and therefore naturally exists in packs. Its home life is a result of its profession. The pack ensures safety for individuals as well as allowing for more profitable hunts (5). In other words, the convention of a pack makes survival easier for the wolf. The pack social system is a hierarchy, consisting of an alpha male, an alpha female, and a pecking order of subordinates (6). The alphas (certain dogs are instinctually inclined toward this role) are extremely aggressive and have to be that way in order to defend their position. Their reward for their paranoia over being usurped is to eat first at kills. Does this all sound somewhat familiar? There's a reason for it; the wolf pack has a great deal in common with the human family. Families, too, are hierarchies, consisting of alpha figures and subordinate offspring. They provide protection and resources for the members of the family. As offspring mature, they in turn become alphas that reproduce and support their families. For Americans, this is the quintessential American Dream. For canines, this cycle is life. Humanity shares its basic social system with canines (6).

How can these patterns be drawn out? Widespread behavioral patterns emerge across species due to the presence of instincts. An instinct "is a behavior that animals exhibit independent of the wide range of learning and experiences of different individuals (7)." Instincts are available to organisms from birth. For example, puppies know to knead at their mother's breast in order to release milk. They are blind and deaf at birth, so there is no way to have learned that behavior; it must simply be part of their initial programming. Instinctual behaviors have evolved over evolutionary time to ensure the survival and reproduction of the species (7). In addition to giving the organism basic survival skills, instinct serves as a control device. Nature does not favor those who are unhealthy. Instincts, being such primitive signals, can almost be directly translated from one organism to another. The mounting of a submissive dog by a dominant dog (7) can be equated to a bully checking a smaller child into a wall. By urinating on a tree, a dog is doing little more than a human marking his property with a fence.

Instinct is the beginning and end of behavioral correlations between men and canines. Our anatomy, especially our neural structure, is vastly different from our dogs'. The greatest contrast is sheer size, along with structural differences in the brain. The human brain is roughly 18 times the size of a dog brain (10). The human brain has an exaggerated forebrain with numerous folds to increase surface area, a model better suited for memory. Although similar, the dog brain is more hindbrain-focused; it is better adapted to certain sensory work (i.e. smelling). The brain creates another difference as well: the "genes controlling brain-cell activity are very different between the species (4)."

Behavioral differences between humans and dogs are made clear through learned behaviors. "Learned behaviors are shaped by experience (7)." For humans, learning how to walk or crawl would be a learned behavior. For dogs, learning how to hunt would be a learned behavior; the desire to hunt, however, is an instinct. That's why, despite centuries of repression, even the smallest poodle loves to fetch toys. This ability to adapt behavior is necessary for survival (7). The domestication of wolves into dogs relied on adapting learned behavior. Certain behaviors, like barking, still exist due to adaptation on the dog's part. All dogs have a very strong territorial instinct to protect their den. When they became companion animals, the dynamic of their pack shifted: humans became the alphas, while the dogs were subordinates. As subordinates, their duty remained one of protection. "This explains why dogs often bark at intruders at home... This behavior is often reinforced since the intruder tends to go away, thus convincing the dogs that its protective, territorial behavior works (8)." Learned behaviors are very reflective of the environment and circumstances of the organism.

It has been demonstrated how man and dog are dissimilar. Common traits have also been pointed out to explain why human beings would want canines in their lives. However, a connection has yet to be established that shows why dogs are granted a "soul" by humans. Somewhere in the 15,000 years together, dogs began to "converge on some of our thought processes (3)." The proximity of living space allowed humans to notice the airs and quirks of dogs. Canines had been forced to accept their owners as members of their pack. When dogs came to be seen as part of the family, humans bestowed personalities upon them.

During World War II, British Sgt. Cyril Jones was helplessly caught by his parachute in a tree in the jungles of Sumatra, Indonesia. A wild monkey, perhaps recognizing Sergeant Jones' hunger and vulnerability, gathered bananas and bamboo shoots and fed them to the soldier for 12 days straight. Even after Jones finally managed to cut himself loose, the monkey stayed with him, continuing to provide fruit as Jones searched for his regiment (9).

Morality in dogs, and in animals generally, is a scientific wormhole at the moment. There is no way to communicate with animals (9), and therefore no way to prove or disprove an animal's moral capacity. Do animals have true thoughts and true emotions? Did the monkey take pity upon Sergeant Jones? "Animals, like humans, are capable of experiencing really strong feelings. They can choose to express their emotions through behavior that is virtuous and moral (9)." Another circulating school of thought holds that animals ultimately look out for themselves: if they act in an unselfish manner, it could be because they are acting instinctually, expect a favor in return, or are making sure their pack survives (9).

Dogs and humans made a deal 15,000 years ago: in return for their freedom, dogs have had the survival and dispersion of their species ensured by humans. Humans spend over $31 billion a year on their pets (2). They claim dogs bring numerous health benefits, including lower blood pressure, prevention of heart disease, reduction of stress, and even lower health care costs (2). There is something unique about the bond between man and dog. It forces us to face our primitive aspects, and that in itself is healthy. There is something raw, but true, in our differences, in our similarities, and in communicating with something outside our species. Even if dogs are just cute, fuzzy parasites, humanity will be arm in paw with them until the end.

References


1) Mridula Shankar, mshankar@brynmawr.edu, "Quotations about Dogs," 19 February 2004, forwarded email (19 February 2004).

2) APPMA Industry Statistics & Trends, from the American Pet Products Manufacturers Association

3) Stone Age Man Kept A Dog, written by Kendall Powell for Nature News Service

4) Animal-Based Research: Our Human Obligation, written by Dr. Adrian Morrison in the BioOne database

5) Herd/Pack Behavior, written by Tom Rittenhouse

6) ThatDarnDog.com - Understanding Pack Behavior, from ThatDarnDog.com

7) Basic Animal Behavior in Domesticated Animals, by Kimberly J. Workinger for the Yale-New Haven Teachers Institute

8) Instinct & Behaviour, from the ACT Companion Dog Club

9) Unbeastly Behavior, by Sara Steindorf for the Christian Science Monitor

10) Comparative Brain Anatomy



Ion Channels and Cystic Fibrosis
Name: Kimberley
Date: 2004-02-24 12:20:02
Link to this Comment: 8452


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Ion channels are a crucial part of all cells. They are responsible for allowing ions in and out of the cell, permitting such things as muscle contraction to occur. But do these gated structures ever malfunction? If so, what causes these problems in the channels, and how are they manifested? It was through the disease cystic fibrosis that I attempted to answer these questions.

Cystic fibrosis is a genetically inherited disease in which defective chloride transport is the root cause of the symptoms. The most easily detectable symptom of cystic fibrosis (CF), and the least detrimental, is excessively salty sweat, chloride being one component of salt (NaCl). (1) Other more harmful manifestations of the disease are abnormal heart rhythms and thick mucus, which amasses in the lungs and intestines. The mucus cannot drain normally because of its high viscosity and therefore becomes a breeding site for bacteria. People with CF generally acquire respiratory infections as well as other breathing difficulties; complications involving lung function are the primary cause of death among CF patients. Additional symptoms include enlarged and rounded digits, abdominal discomfort, and poor weight gain. (2)

Treatment of CF generally includes ingestion of digestive enzymes to reduce the abdominal problems, taking antibiotics to prevent lung infections, and thinning the mucus in the respiratory system for more efficient drainage. These treatments have transformed the prognosis of patients from certain death during childhood to an average life span of 30 years. (2) However, these treatments only reduce the symptoms and do not eliminate the cause. The reason these treatments are not able to eradicate the disease from a patient is the nature of what causes CF.

The disease is caused by an alteration in a single gene on chromosome 7. (3) This gene produces a protein that regulates transmembrane conductance; upon discovering the gene and the protein it encodes, researchers named the protein the cystic fibrosis transmembrane conductance regulator (CFTR). In the most common form of CF, the gene is missing three nucleotides, the codon for a single phenylalanine residue, so the defective CFTR protein is built lacking that phenylalanine. Every time this CFTR is made, the defect is detected in the endoplasmic reticulum (which is responsible for protein synthesis and for insertion of proteins into the cellular membrane) and the protein is marked for degradation, never making it to the cell membrane. Other forms of the disease manifest themselves in slightly different ways. (3)
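
Why a three-nucleotide deletion removes exactly one amino acid, rather than scrambling the whole protein, can be illustrated with a short sketch. The sequence below is a made-up fragment, not the real CFTR gene; it simply shows that deleting a whole codon drops one residue (here, phenylalanine) while leaving the reading frame intact:

```python
# Minimal codon table covering only the codons used in this toy example.
CODON_TABLE = {
    "ATG": "Met", "AAA": "Lys", "GAA": "Glu", "TTT": "Phe", "GGT": "Gly",
}

def translate(dna):
    """Translate a DNA string codon by codon (no stop-codon handling)."""
    return [CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3)]

normal = "ATGAAAGAATTTGGT"             # Met-Lys-Glu-Phe-Gly
mutant = normal.replace("TTT", "", 1)  # delete the 3 nucleotides coding Phe

print(translate(normal))  # ['Met', 'Lys', 'Glu', 'Phe', 'Gly']
print(translate(mutant))  # ['Met', 'Lys', 'Glu', 'Gly'] -- one residue lost
```

Because the deletion is a multiple of three, every codon downstream still reads correctly; only the single phenylalanine is missing from the finished protein.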

Some CF patients are able to produce CFTR that is inserted into the cell membrane. However, the protein still malfunctions because of disruptions at its nucleotide binding sites. One such mutation alters the amount of time the channel stays open, so that it closes faster than normal CFTR. (4)

In normal CFTR, nucleotide triphosphates are required for proper function. Two nucleotide binding folds are present in the ion channel, each of which has unique functional traits. Nucleotide binding fold (NBF) 1 is involved in both opening the ion channel and closing it; NBF 2 is involved in opening the channel but not in closing it. Adenosine triphosphate (ATP) must bind to CFTR in order for ion gating to occur, but CFTR has many more binding sites for ATP than are necessary for the protein to function correctly. This observation would imply that ATP is important for the extra negative charge that prepares the protein for ion gating. (5)

Much is unknown about the nature of CFTR dysfunction and its relation to lung infections in CF patients. The molecular mechanisms are still being studied, and although many hypotheses have come forth over the years, none fully explains how the viscous mucus is produced and why bacteria propagate in it. Clearly, the thick nature of the mucus is due to dehydration; if the secretions had more water in them, they would be of a more normal consistency. However, studies have not shown differences in the chloride concentration of airway-surface mucus between people with CF and those without the disease, so salt concentration may not be the cause of the fluid's dehydration. The dysfunction in CFTR may instead be an inability to clear fluid from the surface of the lungs. (6) Again, the mechanism is still unknown, making any hypothesis a speculation.

Even though the exact molecular cause of CFTR disruption and the effect of poor chloride ion regulation in epithelial cells are not known, research is being done on ways to cure CF. One approach looks at the regulatory domain within the CFTR protein and its interactions with NBF 1; researchers hope to find a way to keep the ion channel open longer in order to allow more time for ion exchange. (6) This research would only benefit those with the mutant type in which CFTR actually makes it to the cell membrane. The vast majority of CF patients would not gain from it, because their CFTR is degraded before it can ever reach the cell membrane. For this majority, research involves altering viruses to carry the normal gene for functioning CFTR and infecting patients with the modified virus. Progress is slow, however, because patients can build up immunity to the virus, and since the patient must be infected many times, this proves a great hindrance. (2)

In answer to the questions posed at the beginning: yes, ion channels can malfunction, and malfunctions can even have a genetic cause. Ion channels can malfunction through improper formation, which makes it impossible for the protein to reach the membrane surface. (2) They can also have gating problems, so that the channel does not stay open for the normally prescribed amount of time. (6) In the case of CF, these problems cause a build-up of fluid in the lungs and intestines, resulting in chronic infections that lead to death. (1)

During the course of writing this paper, a connected but different question arose. Since CF is a genetic disease, what are the ethics of two people reproducing who both knowingly carry the recessive trait? (7) There is a one-in-four chance that their child could have CF, facing an average life expectancy of about 30 years and a life of physical pain. Is it wrong for two people to become parents when they know that there is a strong possibility that their child could suffer for most of his or her life?
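
The one-in-four figure follows from simple Mendelian arithmetic: each carrier parent passes on the recessive allele with probability 1/2, and the child is affected only when both do. A quick enumeration of the Punnett square (a generic sketch of recessive inheritance, not tied to any CF dataset):

```python
from itertools import product

# Each carrier parent has one normal (N) and one recessive CF (c) allele.
parent = ["N", "c"]

# Enumerate the four equally likely allele combinations in the offspring.
offspring = list(product(parent, parent))
affected = [pair for pair in offspring if pair == ("c", "c")]
carriers = [pair for pair in offspring if pair.count("c") == 1]

print(len(affected) / len(offspring))  # 0.25 -> the "one in four" chance
print(len(carriers) / len(offspring))  # 0.5  -> half are unaffected carriers
```

The same enumeration also shows that half the couple's children would be expected to be unaffected carriers, which is how the recessive allele persists in the population.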


References

1)Symptoms of cystic fibrosis, for general questions about CF

2)Welsh, M. (1995, December). Cystic Fibrosis. Scientific American, 52-59.

3)Cystic fibrosis gene

4)New Insights Into Cystic Fibrosis Ion Channel

5)Molecular Structure and Physiological Function of Chloride Channels

6)Pier, G. (2002). CFTR mutations and host susceptibility to Pseudomonas aeruginosa lung infection. Current Opinion in Microbiology, Vol. 5, Issue 1, 81-86.

7)Andre, J. (2000). On being genetically "irresponsible." Kennedy Institute of Ethics Journal, Vol. 10 No. 2, 129-146.


Alliance Strategies in Bottlenose Dolphins
Name: Emma Berda
Date: 2004-02-24 12:45:07
Link to this Comment: 8454


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Dolphins have long been considered some of the smartest animals next to humans. They exhibit complex behaviors such as social hierarchy, formation of alliances, what appears to be suicide (1), and cooperative behavior. (2) This paper will deal with alliance formation in particular. Why do dolphins form these alliances? Is it simply helpful for survival, or is it more complex? How do these alliances compare with human behavior?

Researchers have studied the bottlenose dolphins (Tursiops sp.) in Shark Bay, Western Australia for quite a long time because they are habituated to humans. They have observed male-male alliances that seem very stable. Male alliances are usually groups of two or three males and can last many years. The association coefficient for some pairs of males is in the same range as that found for mothers and their nursing calves (3).
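
The paper does not spell out which association coefficient is meant, but a measure widely used in cetacean field studies is the half-weight index: the fraction of survey periods two animals spend together, corrected for periods in which only one was seen. A minimal sketch with made-up sighting counts (the function name and numbers are illustrative, not from the cited studies):

```python
def half_weight_index(x, ya, yb, yab=0):
    """Half-weight association index for a pair of animals A and B.

    x   -- sampling periods in which A and B were seen together
    ya  -- periods in which only A was seen
    yb  -- periods in which only B was seen
    yab -- periods in which both were seen, but apart
    """
    return x / (x + yab + 0.5 * (ya + yb))

# Hypothetical counts: two males seen together in 8 of 12 surveys,
# each seen alone twice.
print(half_weight_index(x=8, ya=2, yb=2))  # 0.8 -> strongly associated pair
```

An index near 1 indicates a pair that is nearly always together, which is the kind of value reported for allied males and for mother-calf pairs.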

So why do males form these alliances? The answer seems to greatly reflect human behavior: to gain access to females. Male alliances typically "herd" females for anywhere from a few minutes to months. (4) These herding events are not usually enjoyed by the females: herding is often forcible, with escape attempts and violence involved. (3) In a herding event males will surround the female or chase her. Aggression toward the female is common and can include hitting with the tail, head-jerks, charging, biting, or body slamming. (3) Should the female try to escape, which often happens, the males will more often than not chase her. Of course, the ultimate goal of a herding event is sex, and the males in the alliance will take turns to make sure everyone has an equal share. If the alliance has three members, only two will herd the female while the third stays behind. However, the individual who is left behind changes with every herding event, so again all members have an equal chance at mating. (3)

What has just been described is a primary alliance. However, bottlenose dolphins also form secondary alliances, which again are between males. (3) Suppose we have a primary alliance A consisting of two males. It may have a secondary alliance with another primary alliance B, which has three. Now suppose there is a third primary alliance, C, with three males, unaffiliated with A or B. If C has just herded a female that alliance A or B wants, then A and B will join together and forcibly take that female. (3) If A or B took on C alone, it is unlikely that they would succeed, because they would be evenly matched; working together, however, it is five against three, and the secondary alliance will succeed. The two primary alliances do not both mate with the stolen female at once: perhaps alliance B will claim her this time, but that means that next time alliance A will get the female. (3) Again we see equal sharing of the "spoils." In the reverse situation, if alliance C comes to reclaim its female from alliance B, B will call upon A, and A will help defend the female from alliance C. (3)

I will briefly touch on the third type of alliance, the super-alliance. (5) A super-alliance is made up of stable alliances and labile alliances. (5) Stable alliances are like the primary alliances described above; labile alliances are ones in which males change partners frequently. The observed super-alliance consists of 14 males, each of whom has 5 to 11 alliance partners from within the super-alliance. (5) Although in theory the males should have no preference for one male over another in alliance formation, in reality there are preferences and avoidances. (5) The super-alliance is another example of the social complexity found in these dolphins.

We know the dolphins form these alliances to gain access to females, but are they looking for sex or to reproduce? Alliances are likely to herd non-pregnant females that are likely to be in estrus. (3) So we can assume that although fun may be had, the overall goal is reproduction. Since the female is shared equally, the theory of inclusive fitness would lead us to expect males in an alliance to be related: if the males are related, then a member of the alliance would still increase his own fitness when one of the other alliance members took his turn with the female. Research shows that males in primary and secondary alliances are indeed likely to be somewhat related. (4) However, males in a super-alliance are usually not related at all. (5) Why then would a male choose to be in the super-alliance? One answer could be that since the super-alliance is so big, it can take on virtually all of the primary and secondary alliances and steal many females. The males in the super-alliance would therefore have more access to females, perhaps making up for the fitness lost by not allying with related dolphins.

Do these revelations mean that dolphins may be close to humans on an intelligence level? We can definitely say that dolphins have complex social structures. In fact, nested alliances are quite rare and are really only found in dolphins and humans. Much of dolphin social behavior and structure is also similar to that of primates, which again suggests that dolphins are close to the human intelligence level. (5) But let us compare these alliances with human society. In human society both males and females form alliances with each other (friendships), and these alliances can last for long or short periods of time; in dolphin society it is only the males that form these alliances. In human society one of the many things these alliances do is approach members of the opposite sex. The same is true in dolphin society, except that dolphins often approach the females aggressively, while the equivalent behavior in humans (gang rape) is much less common. In dolphins, alliances that go after females are likely to be related; in humans this is less common.

Finally, the last issue I will address is the idea of sex for fun. In my opinion, an animal that has sex for purposes other than reproduction is probably more likely to be related to humans intellectually. Earlier I stated that alliances are more likely to herd non-pregnant females, so reproduction is one of the goals. But dolphins do appear to enjoy sex: they have been recorded having homosexual sex, where there is no chance of reproduction. (6) So perhaps the dolphins in the alliances are also having sex for fun, and since they do not have the worries of fatherly duties, they may as well have sex with non-pregnant females.

In conclusion, dolphins are remarkably social, intelligent, and complex animals. Their social complexity indicates that they may be near the intelligence plane of human beings. I think that the more we study these animals the more we will realize that they are closer than we think.


References

1)Dolphin fact page

2)Seaworld Bottlenose Dolphin Fact Sheet

3)Connor, R.C., Smolker, R.A, & Richards, A.F. 1992a Two levels of alliance formation among male bottlenose dolphins (Tursiops sp.). Proc. Natl. Acad. Sci. USA 89:987-990

4)Krutzen et al. 2002 Contrasting relatedness patterns in bottlenose dolphins(Tursiops sp.) with different alliance strategies. Proceedings of the Royal Society of London series B. 270:497-502

5) Connor, R.C., Heithaus, M.R., & Barre, L.M. 2001 Complex social structure, alliance stability, and mating access in a bottlenose dolphin 'super-alliance'. Proceedings of the Royal Society of London series B 268:263-267.

6)Gay Marine Animals


Heart Attacks: Cause And Effects
Name: Laura Silv
Date: 2004-02-24 14:29:22
Link to this Comment: 8457


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Part of maintaining a healthy lifestyle is to know, at each stage of one's life, what diseases or dangers one faces. Infants, for example, are extremely susceptible to colds because their immune systems are not fully developed. Children under the age of 10 have a very high chance of getting the chicken pox. Men and women between the ages of fifteen and twenty-five are at high risk of becoming infected with HIV/AIDS. Adults over the age of forty become increasingly at risk for having a heart attack. Since February is American Heart Month, I thought this would be an excellent time to research some of the causes and effects of heart attacks.

A heart attack is caused by the build-up of plaque: fatty substances, cholesterol, calcium, and other material that accumulates within the inner linings of the larger arteries. Plaque can begin to build up in childhood, but it usually takes thirty years or more for the build-up to escalate to dangerous levels. This process of plaque build-up is called atherosclerosis, a process quickened by high blood pressure, high cholesterol, diabetes, or especially smoking.

Over time the build-up of plaque severely limits the flow of blood to the heart, specifically to the myocardium, the middle layer of the wall of the heart (the outer layer is called the epicardium, and the inner layer the endocardium). The myocardium is the main heart muscle, whose contraction pumps blood in and out. In fact, according to the American Heart Association, "the medical term for a heart attack is myocardial infarction." (1)

Because less blood is getting through to the heart, oxygen, which is carried within the blood cells, also becomes limited. If one or more arteries become completely blocked, a heart attack follows. If immediate treatment, usually surgery to clear the arteries, is not administered, the muscle of the heart becomes permanently injured, causing the patient to die or become disabled.

Less frequently, a heart attack can also be caused by a severe spasm or tightening of a coronary artery, which temporarily cuts off blood flow to the heart. While the causes of artery spasms are not widely agreed upon, it is believed that they may be triggered by smoking cigarettes, heightened stress, or taking certain illegal drugs like cocaine. (2)

Warning signs of a heart attack are varied and usually do not precede an attack by more than five minutes, so it is necessary to act quickly. Some such warning signs are prolonged or recurring (over a period of a few minutes) discomfort or irritation in the chest or arms, shortness of breath, which is usually preceded by the aforementioned discomfort, and a feeling of being lightheaded.

Treatments for heart attacks vary depending on the severity of the condition and how far in advance it was discovered. Most common is an angioplasty procedure, in which a small tube is placed inside an artery in order to reinstate and facilitate blood flow to the heart. Medications likewise vary from case to case, but most commonly beta blockers are given to patients to, according to the National Heart, Lung and Blood Institute, "decrease the workload on your heart ... [and] to prevent additional heart attacks." (3)

A new approach to preventing heart attacks years before they start is now emerging. In November 2003, Dr. Eric Topol of the Cleveland Clinic and his team of scientists located the first gene known to directly cause heart attacks. The discovery was made with the help of an Iowan family, the Steffensens, who had suffered from heart attacks for generations. (4) Out of ten siblings, nine had their first heart attack between the ages of 59 and 62, and many have had more than one; the one sibling exempt from the heart attacks was found not to have the gene. This particular gene "creates weak artery walls," which makes heart attacks a practical guarantee. And now that the gene has been identified, it can be tested for, and its effects potentially prevented.


References

1) American Heart Association Online

2) National Heart, Lung and Blood Institute

3) National Heart, Lung and Blood Institute

4) CBS News


Brain Modularity: Links between Evolution, Intelli
Name: Prachi Dav
Date: 2004-02-24 15:57:00
Link to this Comment: 8460


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Questions that arise during an examination of brain structure seem always to proliferate from a point of origin, that is, from the first question asked. One of the simplest questions may be: is the brain modular? The simplest answer is almost certainly yes. However, it is the nature and origin of such modularity that should concern us, although the characterisation of each module is presently beyond our grasp. More recently, with the marriage of psychological and evolutionary concepts, brain modularity has been depicted as structure arising from evolutionary forces, such that the brain constitutes a compilation of adaptations, evolved as solutions to the various adaptive problems in the environments faced by our ancestors. The amalgamation of evolutionary principles and ideas about brain structure is primarily the work of Leda Cosmides and John Tooby, who argue that human reasoning is not a generalised but a specialised ability, and that the reasoning mechanisms (reflected in modules) are devoted to the management of social problems (1). However, such an approach begs certain questions. For example, interaction with the social world and its problems requires mechanisms that can remember and track changes in the environment. As Henry Plotkin asserts, this mechanism is intelligence (2). The relationship between intelligence and the modular structure of the brain on the one hand, and their relationship to human culture as we see it today, in its vast and wonderful complexity, on the other, are linkages that evolutionary psychologists are currently grappling with and which will be the subject of the following paragraphs.

The field of "Evolutionary Psychology" has come to be associated primarily with certain fixed principles and the work of Tooby, Cosmides and Pinker, among others (1). Given that evolutionary psychology strives to discover and explain psychological adaptations and their functions, such psychological adaptations must be characterised in the brain. Because they developed in response to environmental problems so different from one another that they could not all be solved by some abstract, generalised mechanism, they were posited to be represented in the brain as a collection of "modules," which are specialised problem-solving domains. The brain is conceptualised as a container for enormous numbers of these modules, to the extent that some extreme proponents of the theory defend the concept of "massive modularity," which maintains that the brain is riddled through and through with such modules (3).

The modules proposed by Tooby and Cosmides are purported to have evolved to contend with a variety of adaptive problems encountered by our Pleistocene ancestors, such as alliance formation, kin relations, sexual attraction and so on (since evolution as a cumulative process requires vast swathes of time, human psychological adaptations are not in accordance with modern life). As these module functions are listed, an obvious problem arises: the difficulty of obtaining evidence for their existence. The desire to discover the universal structure that links together all members of our species is thereby severely obstructed.

Given the obvious existence of cultural and attitudinal variance, evolutionary psychologists hold no expectation of finding common human behaviours and beliefs through the discovery of modular commonality, but instead hope to uncover similarities in "cognitive Darwinian algorithms" (1), which are then expressed through different, context-dependent behaviours among humans. The assumption that a wide-ranging sample of behaviours may be explained through the indiscriminate application of such modules and evolutionary principles to modern life often leads individuals astray; this endeavour is known as adaptationism. Exaptations are behavioural expressions of functionally empty traits, or of traits that evolved for different uses, while spandrels are traits which developed as byproducts of others, had no original function, and yet became applied toward a different adaptive function. One claim regarding modern human behaviour originates from Gould (1), who argues that most "mental properties" are not adaptations but spandrels. This concept is given credence by the difficulty of explaining reading, writing, and consciousness of one's mortality as shaped by natural selection. Evolutionary psychologists do not refute these claims; they hold that the complex design clearly evinced by the brain is typical of adapted structure, while the spandrels are products of evolved mechanisms, and the spandrels may be what we see today in the remarkable complexity of human culture.

Human culture, however, concerns another capacity whose evolution engenders yet more questions of increasing complexity. The capacity referred to here is intelligence, described by Plotkin (4) as a "special kind of adaptation that generates adaptive behaviour by altering brain states." This assertion is made in the context of intelligence as a mechanism through which individuals track changes in their environment as they occur and generate behaviours which, in turn, result in learning. Learning, which is known to result in changes in the brain, then results in the storage of information within the organism, such that the experience may be applied to future dilemmas. Acquired knowledge, however, is not passed from generation to generation biologically, while the evolved structure of the brain is. Such a structure, as posited by Plotkin (4), constrains the kind of learning organisms engage in through the creation of specialised modules. Nevertheless, the contribution of human intelligence to the phenomenon of human culture, which Plotkin likens in importance to the evolution of self-replicating molecules, is great.

Human intelligence allows for extensive learning in various fields. Culture, however, depends on the sharing and communication of that which is learnt by means of "intelligent" mechanisms. The knowledge that is shared is not isolated and fixed but is forever modified, metamorphosing into different knowledge and practices; this is perhaps the consequence of communication through the complexity of human language and the mediation of intelligence. Furthermore, the existence of intelligence perhaps allowed for the development of a theory of mind among humans, the ability to attribute intentions and mental states to others, which then allowed for the construction of social entities. It therefore seems possible that intelligence allowed for the development of the cultural intricacies that are observed especially in human societies.

In essence, evolutionary psychology is a burgeoning field of study whose proponents believe that almost any experience or instance of human behaviour arises from evolved structure in the form of modules in the brain. This conception persists and is at risk of falling into adaptationist modes of thought. Regardless, most evolutionary psychologists believe that a great deal of human behaviour is not a direct product of evolved structure, and yet this places them in another conundrum: the delineation of the function of each module. Still, the theory that the brain is compartmentalised in such a way, at least to some extent, allows for theorising about the construction and propagation (with modification and addition) of human culture, whose study is fascinating and of interest to most who ponder the origins of such complexity.

References


1) http://host.uniroma3.it/progretti/kant/field/ep.htm; David J. Buller, Evolutionary Psychology, Northern Illinois University.

2) http://www.iisg.nl/research/plotkin.html; Henry Plotkin, The Evolution of Culture

3) http://www.dan.sperber.com/modularity.htm; Dan Sperber, In defense of massive modularity

4) Plotkin, Henry. The Imagined World Made Real: Towards a Natural Science of Culture. New Jersey: Rutgers University Press, 2003.


The Tenuous Past: Memory and the Ways it Fails
Name: Dana Bakal
Date: 2004-02-24 21:51:12
Link to this Comment: 8469


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"I remember it like it was yesterday!" you say. But how well do you really remember it? How well do you remember yesterday? Here's a quick quiz: What time did you have lunch yesterday? What exactly did you eat? What did you say? What did the people around you say? If you read the paper yesterday, name all the stories you read and summarize them briefly.

Don't remember yesterday as well as you thought? Don't worry, nobody does. Our memories are often thought of as recording devices, mechanically noting what has happened during the day and replaying these events like a tape. In truth, memory is a function of the brain, which is constantly in flux, organic, and does not behave like a machine. Your memory can be affected in many ways by many things, which can cause you to forget, to change memories around, to repress memories, and even to invent completely new ones!

This is of no small importance, because our only evidence that the past occurred comes from our memories. In what ways, then, can memory fail us?

Dr. Daniel Schacter of Harvard University lists "7 Sins of Memory," ways in which our memories fail us: transience, absentmindedness, blocking, suggestibility, bias, persistence, and misattribution (5). Most of these sins are things we experience in everyday life. When something you read last week isn't as clear now as it seemed then, that's transience. When you forget where you put your book or forget that you have to be somewhere, that's absentmindedness. Blocking is the "temporary inaccessibility of stored information," such as a person's name or a word. Suggestibility and misattribution go together, since memories can incorporate misinformation and also BE misinformation. Suggestibility is the "incorporation of misinformation into memory due to leading questions, deception and other causes," and misattribution consists of 'remembering' something that did not occur. Persistence is slightly more abnormal; the inability to get a thought out of your head that it describes is common in post-traumatic stress disorder.

To this list, some would add "repression," the conscious or unconscious suppression of traumatic memories. Repression was first conceived of by Freud, who felt that people could push memories out of their awareness (1). This theory enjoyed new fame in the 1990s, when hundreds of people, mostly women, 'recovered' repressed memories of abuse, fueling a Satanic Ritual Abuse scare during which many people were convicted of heinous crimes they may not have committed (8).

Michael C. Anderson et al. conducted a study to see whether repression had any physical signature, whether the brain changed when people tried to repress a memory. They set up an experiment wherein subjects looked at a pair of words, memorizing the association. Then, after performing either a task involving thought or one not involving thought, subjects were shown one half of the word pair and asked either to think of its complementary word or to suppress thought of it. The researchers took MRI scans of the subjects throughout the process and found that "Controlling unwanted memories was associated with increased dorsolateral prefrontal activation, reduced hippocampal activation, and impaired retention of those memories," and that "Both prefrontal cortical and right hippocampal activations predicted the magnitude of forgetting" (1). This means that there is a physical mechanism for repressing memories. This is important, as it means that memories can be buried and lost, impairing the ability to remember entire portions of life.

On the flip side of repression is memory fabrication. This is affected by the 'sins' of suggestibility and bias, but is really a case of misattribution. Sometimes we remember things that someone else told us about, things that we dreamed, or things we just made up. University of Washington memory researchers Jacquie Pickrell and Elizabeth Loftus conducted an experiment wherein they showed people a fake advertisement in which the reader is described as visiting Disneyland and meeting Bugs Bunny. Later, one third of participants reported that they knew they had shaken, or remembered shaking, Bugs' hand. This, of course, cannot be true, since the Bugs Bunny character is a trademark of Warner Brothers and not Disney (2). This is quite significant in everyday thought and in advertising. If imagination or suggestion can give rise to memories as real as those of actual events, how can we tell what has actually occurred and what has not?
Loftus points out that this is a memory process that advertisers use when creating "nostalgic ads." A company such as Disneyland or McDonald's can prompt consumers to create false memories of having had positive experiences with its products and services in the past, increasing their likelihood of returning (2).

Besides the more everyday ways memory fails, there are many diseases which can affect it. Alzheimer's is probably the best known of these. Alzheimer's impairs judgment and changes personality as well as affecting memory (6). It occurs most often in older people, who make up about 50% of the population with the disease, and is very rare in individuals under 40 (7). The memory loss in this disease, as in other brain-altering diseases, comes from changes in the physical structure of the brain, rather than from normal brain mechanisms.

Overall, then, our memories, which we depend on to report the past and to form our personalities, are in fact extremely mutable. They can be affected and changed by things we think, things we see, and diseases we get, and they can be fabricated out of suggestion or imagination. Since these flawed memories are all we have, we must form a world view based on the premise that they are a more or less accurate interpretation of the past; this premise is usually useful and necessary, but can sometimes cause problems. How much should we trust eyewitness reports of crimes, for example? Or reports of a repressed abuse memory?
How can advertisers manipulate us using these memory flaws? And who are we really, if our memories of our selves and our interactions with others are so changeable?

I leave you with those thoughts; but remember, you don't remember yesterday as well as you might have thought!


References


1)Science Magazine, Anderson, Michael C. et al. "Neural Systems Underlying the Suppression of Unwanted Memories".

2)University of Washington, "'I Tawt I Taw' A Bunny Wabbit at Disneyland."

3)American Psychological Association, "People Think They Remember."

4)Psychiatric Annals, Loftus, Elizabeth. "The Formation of False Memories."

5)APA Online, Murray, Bridget. "The Seven Sins of Memory."

6)WebMD Health,"Alzheimer's Disease: An Overview."

7)WebMD Health, "Who is Affected by Alzheimer's Disease?"

8) Loftus, Elizabeth. The Myth of Repressed Memory. New York: St. Martin's Press, 1994.


Can Hope Heal?
Name: Millicent
Date: 2004-02-24 23:32:25
Link to this Comment: 8480


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

A positive outlook during a time of suffering, particularly during an illness, may help one heal faster. It is often believed that a person can fight disease with the mind. The idea that we can combat sickness with our attitude is not one that has had much scientific proof behind it; until recently, little scientific research was available on the effects of hope in the healing process. Recent studies, however, provide significant evidence suggesting that hope may have an effect on the body during illness.

On the surface there is practical evidence that a hopeful outlook can help a person heal. Someone who believes that he or she might eventually get better when afflicted with a life-threatening disease is more likely to take care of his or her body. Taking care of one's body in the hope that it might heal can keep a person alive until better methods of treating a specific disease come along. This type of hope is believed to have helped some patients first diagnosed with HIV: as these individuals took various steps to stay as healthy as possible, scientific advances made living with the virus for a longer period of time viable (2). While this observation is interesting, this type of hope does not actually help a person heal.

As Abraham Verghese writes in his article "The Way We Live Now: Hope and Clarity," there is a belief in our society that with hope and a positive outlook one can fight off a disease such as cancer. He writes, "If you accept the war metaphor... then a diagnosis of cancer becomes a call to arms, an induction into an army and it goes without saying that in such a war optimism is essential. Memoirs of Cancer Centers state this as a creed: a 'positive attitude' influences survival" (2). He goes on to argue that this belief is not backed by substantial scientific research and therefore adds pressure on patients to always appear positive when the realities of their situations warrant some realistic grief. Verghese cites a study from Australia suggesting that a positive attitude, or hope, did not have a substantial effect on the survival rate or health of the participating lung cancer patients (2). He uses this study to show that hope cannot make a sick person magically better.

Despite Verghese's points, many scientists, patients, and medical doctors believe that a hopeful outlook can help a sick person overcome a serious illness. These proponents of the power of hope argue that in a person who believes he will get better, the pituitary gland releases endorphins and enkephalins, which can prevent feelings of pain in the body from reaching the brain (Groopman, p. 170). The research of Jerome Groopman, M.D., is some of the most conclusive about the effects hope has on sick people. His research attempts to show how the brain aids the body's ability to heal and cope during illness. Manipulations of the nervous system, sparked by the emotions associated with hope, begin a chain of events which may help sick people recover.

Groopman uses the placebo effect to help explain how the nervous system, with the help of hope, combats pain (Groopman, p. 175-190). The placebo effect is widely accepted by medical doctors and scientists. It shows that in some cases a placebo, or fake cure, can satisfy patients and make them believe they are cured when in reality no medicine, surgery, or treatment has been given. For example, a doctor who prescribes a sugar pill or a pill containing no medicine to a group of patients suffering from an illness, without telling them that the drug is a placebo, will have some patients who report that their symptoms have faded. It may seem that these particular patients were not really that sick to begin with, but the placebo effect is actually thought to be a result of "belief and expectancy" (Groopman, p. 176). The patient believes and expects the medication to cure her ailments. Belief is encouraged as the patient trusts that a doctor will be able to identify an illness and then find the appropriate medication to treat the problem. The patient then expects the treatment to work. This combination of belief and expectation can sometimes be enough to help a person recover from their symptoms.

Groopman argues that the type of belief present in the placebo effect is similar to that created by a hopeful outlook. Within the body, the theory that hope can heal is based in the nervous system. When a person is hopeful, the body produces endorphins and enkephalins, chemicals which alter the messages sent to the brain through the nervous system. The types of endorphins believed to be produced in a hopeful person's pituitary gland include beta-endorphin, which is thought to improve one's mood by blocking pain (6). According to Groopman's study, in a hopeful patient's body the endorphins prevent the brain from recognizing the message of pain sent through the nervous system. Without the message of pain, the body is able to exert the energy necessary to recuperate from an illness. The endorphins and enkephalins are also thought to help improve the immune system. If the body is not preoccupied with the pain of an illness, it might be able to fight off a life-threatening disease.

The production of endorphins and enkephalins alone cannot explain the positive effects of hope on ill people. A hopeful person benefits from a positive outlook because his body is less likely to produce the chemicals associated with a negative outlook, which can prolong an illness. To explain how hopelessness can prolong an illness, Groopman looks to the effects of Substance P and cholecystokinin, also known as CCK (Groopman, p. 176). These chemicals, when released in the central nervous system, have the opposite effect of endorphins and enkephalins: CCK helps send the messages of pain to the brain, thus increasing one's hopelessness and suffering. Groopman argues that these two chemicals are produced when a person is constantly reminded of an illness and the grave circumstances of their infirmity. This is common in patients who have serious illnesses with low survival rates. The pain creates a cycle which is hard to escape (Groopman). Groopman argues that this cycle can be broken with hope.

If we accept the theory that hope triggers endorphins and enkephalins that act in a fashion similar to painkillers, blocking pain from the brain, we are left with the fact that some very hopeful patients never heal and that some very negative thinkers survive the worst of illnesses. The answer to this problem is that while hope may help a person survive, or at least feel better, it is not a cure for disease. It is simply another tool that can help on the way to recovery. Hopefully more research will come along to refine and improve on Groopman's observations, but for the time being Verghese's belief that hope is not a cure remains. Positive thinking and the mind do not have the power to completely overcome pain. However, thanks to Groopman we now know that our minds and bodies together have the ability to protect us from certain pains, which could eventually help seriously ill people heal.


Sources


1) Groopman, Jerome. The Anatomy of Hope. New York: Random House, 2004.
2) Verghese, Abraham. "The Way We Live Now: Hope and Clarity." The New York Times Magazine, February 22, 2004. New York Times Web Site.
3) Web site dealing with the issues faced by those with serious illness, a rich resource from Bryn Mawr College.
4) Acumen Journal Web Page, a life science journal.
5) Acumen Journal Web Page, a life science journal.
6) Web page on beta-endorphin.


Individual versus Group Behavior
Name: Sonam Tama
Date: 2004-02-25 20:44:31
Link to this Comment: 8504

<Individual versus Group Behavior> Biology 202
2004 First Web Paper
On Serendip

"The passions released are of such an impetuosity that they can be restrained by nothing...Everything is just as though he really were transported into a special world, entirely different from the old one where he ordinarily lives, and into an environment filled with exceptionally intense forces that take hold of him and metamorphose him"

Emile Durkheim on Group Consciousness (1965)

The discussions we have been having in class about the brain and the self, as well as the idea that we are constantly changing, led me to think about group versus individual behavior. Although some people may feel otherwise, I think we are all influenced, in different degrees, by others. All of us, at one time or another, have known what it is like to be part of a group. However, there often seems to be a negative feeling towards the group, with more focus put on negative group behavior. This paper is an exploration of group and individual behavior and thought. Are certain people more group oriented? Are others more individually minded? Which one is the "real" self?

According to an article I read, the biological explanation for why we behave differently in a group than on our own is that when a person joins a crowd, the limbic system in the brain, which is involved with emotional activity, dominates the person's actions and thinking and suppresses the neo-cortex, the logical thinking part of the brain. The person therefore acts irrationally because he or she is under "emotional pressure" (1). The author of the article uses the stock market as an analogy, stating that the reason markets crash after a sudden boom, in various societies, rich or poor, is that people tend to follow crowds (1). This analogy leads me to ask whether non-Western societies are therefore irrational, since they are regarded as collectivist. And if stockbrokers in various societies act in the same irrational manner, does that not prove that individuals in different societies behave the same? These two ideas seem contradictory to me, and they raise many questions regarding the idea that joining a group means loss of rationality.

When I studied psychology, it was always made clear that Western and non-Western psychology were different because of the Western emphasis on individuality and the collectivist nature of non-Western societies. Statements like these make clear and definite comparisons between Western and non-Western societies: "Western societies often define adjustment by one's level of individuality, independence, and achievement promoting emotional detachment from social groups...Contrary to Western cultures, many Eastern cultures endorse a communal view of society and do not conceptualize a person apart from his or her relationships" (2). Furthermore, it was stated that social hierarchy, social support, and interdependence are highly valued in these (non-Western) cultures. These different views lead researchers to believe that Western groups would endorse antisocial coping strategies (strategies targeting independence and self-advancement) and that non-Western groups would be more likely to endorse prosocial coping strategies (strategies targeting joining with others for support and considering the needs of others).

But is this really fair? I was raised in a non-Western society, but I do not feel that I have no individual self. My American friends are no less loyal in their friendships, and they consider the needs of others. Responding to an unrelated study, performed on groups of boys and girls, which concluded that there are gender differences in the way we learn, Dr. Grobstein, professor of neurobiology at Bryn Mawr College, stated that "Population differences, while real are of no use whatsoever in characterizing a given male or female...For any particular measure, a given male may be more 'male-like' or more 'female-like' than a given female." Thus, comparing groups may show major differences between boys and girls, but comparing individuals may have different results. In the same way, a given non-Western person may be more individualistic than a given Western person.

When children reach adolescence, parents and teachers warn them about "peer pressure" and the dangers of choosing the "wrong" kind of group. Images of teenagers smoking and drinking excessively or joining gangs are presented as consequences of "peer pressure." Even images of football "hooligans" creating all sorts of trouble after games, repeatedly shown on television, add to the idea that even a non-violent person may somehow become violent in a group. A relatively recent news article about teenagers who performed violent attacks on strangers and videotaped these "pranks" caused concern regarding group pressures. In the article, Jay Reeve, a psychologist at Bradley Hospital at Brown University in Providence, states: "Group pressure can override common sense fairly easily for these folks. ... Teens tend not to have developed a clear sense of right and wrong, apart from their peers." The immediate result, he concludes, is that teens are more prone to impulsive, violent behavior. Additionally, Dr. Alice Sterling Honig, professor emerita of child development at Syracuse University in New York, agrees that violence is often linked to peer acceptance, stating that "murderous feelings and triumph of physical power are glorified and held up as splendors by society" (3). To be an individual is praised; to be in a group is to be dependent and to cause trouble. The message too often is that to be easily influenced by others is to be weak and in a dangerous position.

I still did not find a clear explanation of how or why we behave differently in groups. Then I read the French sociologist and philosopher Emile Durkheim's view on group behavior, or more specifically "group consciousness." Durkheim feels that attempts to explain "irrational" behavior are "post facto attempts to explain socially generated compulsions which cannot be understood nor controlled." I agree with Durkheim's statement because linking group behavior with irrationality seems too clear-cut. Also, is positive group behavior considered irrational then? Durkheim also states that "social psychology has its own laws that are not those of individual psychology" and that there is a "conflictual ebb and flow between singularity and community, self and group...on the one hand is our individuality – and, more particularly, our body in which it is based; on the other it is everything in us that expresses something other than ourselves...(These) mutually contradict and deny each other" (6). Both Durkheim and the German sociologist Max Weber not only agreed that individual and collective states of mind are different but also saw being in a group, as opposed to being alone, as "transcendence," an extraordinary altered state of consciousness among individuals in a group, which Durkheim called "collective effervescence." Unlike the idea that non-Western people are collectivists and Western people are individualists, Durkheim proposes that the individual and the collective states of mind are within all people and that there is a constant struggle between the two. Furthermore, Durkheim and Weber see the individual as egoistic and immoral but subdued within the transformative grip of the social (1).

An interesting question was raised by a former Biology 202 student, who wanted to know if terrorists are as crazy as we think they are and whether their brains function very differently from ours. Of course, most of us would assume that terrorists are crazy. But what she found was that terrorists are, in fact, like us. Clark McCauley, Professor of Psychology at Bryn Mawr, notes that terrorists are not crazy and that "psychopathology and personality disorder [are] no more likely among terrorists than among non-terrorists from the same background" (5). This caught my attention because terrorists are a perfect (negative) example of individuals who crave membership in a group or organization where the members are like family to each other, "each with their role, and each providing support for their fellow terrorists" (5). The thought of many individuals giving up their lives for a cause or a group of people seems downright crazy to many. This "blind loyalty" (5) may signal irrationality, but I also think that these terrorists may have individual interests in mind. After all, didn't the terrorists of September 11 kill in order to enter paradise? Is this not individual interest? Terrorists are also said to "crave power" (5) and perhaps even fame and notoriety. And what about the fact that stockbrokers follow crowds in the interest of the individual?

I had hoped to find more information on this topic, but the lack of (online) information on positive group behavior indicates that group behavior is generally thought of as irrational. It would have been great to find out more about positive group behavior – for instance, my favorite bands would not create wonderful music if there were no such thing as "group consciousness." I have reached the conclusion that all the answers I have found are too clear-cut. I agree most with Durkheim in believing that both the individual and group modes exist within us, struggling with each other – and maybe even working with each other. Maybe how we behave in groups reflects how we are as individuals, or how we would like to be as individuals.

I would like to conclude this paper by mentioning that studies are being done on collective decision-making by ants (of the genus Leptothorax), which look at "how individual cognitive abilities are designed to optimize group behavior" (4). I think studies like these are a great starting point for understanding why we humans behave the way we do.


WWW Sources

1) Psychology is the Key

2) How antisocial and prosocial coping influence the support process among men and women in the U.S. Postal Service

3) ABC News Online, "Punch-Drunk Teens: Experts Say Peer Pressure, Media Fuel Youth Violence"

4) Department of Ecology and Evolutionary Biology, Princeton University, "Collective nest site choice by ant colonies"

5) The Serendip Website, "Terrorists: How Different Are They?" by Stephanie Habelow

6) Dept. of Anthropology, Boston University, "Charisma, Crowd Psychology and Altered States of Consciousness" by Charles Lindholm


The Beta-Amyloid Peptide, the Gamma-Secretase Comp
Name: Jean Yanol
Date: 2004-02-25 21:29:15
Link to this Comment: 8506


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In many neurological diseases, problems in cellular signaling pathways cause the onset of the major physiological symptoms associated with the disease. Alzheimer's disease (AD) is a neurodegenerative disorder that affects millions of people by inducing dementia. There are two forms of the disease, sporadic and familial. Familial Alzheimer's disease usually affects people earlier in life than its sporadic counterpart. Even though the major hallmarks of both sporadic and familial AD are extracellular senile plaques, intracellular neurofibrillary tangles, and subsequent neuronal and synaptic loss (1), the proposed cellular mechanisms by which these two forms of AD arise are different. Because familial AD is genetically linked, there have been significant findings elucidating its pathogenic cellular mechanisms.

The extracellular senile plaques and intracellular neurofibrillary tangles associated with AD have been the major focus of research. The neurofibrillary tangles are mostly composed of the hyperphosphorylated tau protein (4), and the senile plaques are composed of the deposited 42-amino-acid β-amyloid peptide (3). While the complete mechanisms of synthesis of both structures are unknown, the production of the extracellular amyloid plaques is one major point distinguishing familial from sporadic AD. Moreover, mutations in the components that generate the β-amyloid peptide cause most cases of familial Alzheimer's disease.

The β-amyloid peptide exists in two predominant forms: a 40-amino-acid peptide and a 42-amino-acid peptide. The difference in peptide length arises from differential cleavage of the amyloid precursor protein (APP), from which the various forms of the β-amyloid peptide come (5). The 42-amino-acid β-amyloid peptide, which forms the senile plaques, comes from APP cleaved by both β- and γ-secretases (Figure 1, modified from Sinha and Lieberburg 1999). The principal β-secretase in neurons is the aspartic protease BACE1 (β-site APP Cleavage Enzyme 1), which performs the first APP cleavage to release the NH2 terminus of the β-amyloid peptide from its precursor. Subsequent cleavage by the γ-secretase releases the COOH terminus of the β-amyloid peptide (6). The γ-secretase is a high-molecular-weight complex composed of Presenilin 1 (PS1), mature Nicastrin, APH-1, and Pen-2 (7). Elucidating the formation of this complex is key to finding pharmaceutical treatments for Alzheimer's disease because mutations in the gene that codes for presenilin 1 cause half of all familial AD cases (8) (other cases are caused by mutations in the APP substrate). The γ-secretase is also thought to be involved in the cleavage of ErbB4 (9), the intracellular domain of Notch (10), and other similar proteins, showing that this secretase is important in other pathways as well.

PS1 mutations have been shown to increase the amount of secreted 42-amino-acid β-amyloid peptide (11)(12). PS1 is an aspartyl protease (meaning that the active site consists of two conserved aspartate residues, D257 and D385, located on the 6th and 8th hydrophobic regions of PS1) and has between six and eight transmembrane domains (most researchers believe there are eight; Figure 2 from Kim and Schekman 2004), which are important to its function and interactions in the γ-secretase complex (13). This protein is localized primarily in the ER (endoplasmic reticulum) and Golgi complexes. In the ER, PS1 exists as an uncleaved holoprotein (a protein that functions in the presence of a non-protein cofactor), which is thought to be inactive, but in the Golgi region PS1 exists as a heterodimer with the NTF (N-terminal fragment) and CTF (C-terminal fragment) separated but closely associated in a 1:1 stoichiometry (14)(15). The mechanism by which PS1 is cleaved into its respective NTF and CTF is not known, but it is speculated that the other members of the γ-secretase complex, Nicastrin, APH-1, and Pen-2, are needed for formation of the stable γ-secretase complex and for PS1 maturation (16). Nicastrin is a type 1 transmembrane protein that spans the membrane once and interacts in the γ-secretase complex after it is N-glycosylated in the ER (this glycosylation is what makes Nicastrin mature) (17). In a low-molecular-weight subcomplex, nicastrin interacts primarily with APH-1, which is predicted to traverse the membrane seven times (18). This nicastrin/APH-1 subcomplex is then predicted to interact with the PS1 CTF. Pen-2, which spans the membrane twice, is believed to interact with the PS1 NTF and to facilitate its maturation. In this model there are two subcomplexes, one composed of nicastrin, APH-1, and PS1 CTF, and the other composed of Pen-2 and PS1 NTF (Figure 3 from Fraering et al.) (19).
These subcomplexes interact through the heterodimeric state of the PS1 NTF and CTF. In yeast, mammalian, and Drosophila cells, the presence of PS1, nicastrin, APH-1, and Pen-2 was enough to reconstitute γ-secretase activity (7)(20)(21). Once the stable γ-secretase complex is formed, it can cleave APP into the 42-amino-acid β-amyloid peptide. γ-secretase activity is believed to occur in the ER, late Golgi/TGN, endosomes, and plasma membrane, and where in the cell APP is cleaved is thought to determine whether the peptide is secreted or not. However, it is debated which factors lead the 42-amino-acid β-amyloid peptide to form plaques. The role of non-secreted β-amyloid in AD is also debated, and some researchers think that intracellular β-amyloid is generated by a distinct presenilin-independent γ-secretase (22).

One new avenue of research has opened up very recently: the role of a PS-related protein called IMPAS 1 in presenilin 1 holoprotein cleavage. In cells transiently transfected with IMPAS 1 and PS1 holoprotein there was little to no indication of such cleavage, possibly due to the limitations of Western blot analysis (23); it remains possible, however, that IMPAS 1 or one of the other proteins in its recently discovered family is responsible for PS1 holoprotein proteolysis. Further analysis must be performed in order to establish any cleavage interaction between IMPAS 1 and PS1. Because IMPAS 1 is thought to be able to cleave type 1 transmembrane proteins (23), it may also be part of other similar pathways. Other mechanisms have recently been proposed to function in AD, such as inositol trisphosphate (IP3)-gated calcium ion channels, because PS1 is known to modulate IP3-mediated calcium ion liberation (24). It has been shown that in cells with familial-AD-linked mutations in the gene that codes for presenilin 1, there is an increase in calcium ion transients, which serve many signaling functions. Recent studies have shown an elevation in ER excitability due to calcium transient elevation caused by a specific PS1 mutation, but subsequent inhibition at the plasma membrane, which disrupts cell-to-cell signaling (24). This implies that PS1 affects AD not only through its role in the cleavage of the amyloid precursor protein but also by elevating specific ion transients that disrupt responsiveness to certain synaptic signaling.

While many factors are thought to contribute to familial Alzheimer's Disease, the g-secretase complex is one of the most unknown and most researched components, due to its implications in other pathways and its novel interactions which have a substantial impact on the formation of the disease. Indeed further analysis of the interaction and stoichiometry of the components is needed in order to fully understand the complex and how it is functional in familial Alzheimer's Disease. By researching the mechanisms of the disease's formation, we can hope to apply this information one day to pharmaceutical treaments that can be used for familial Alzheimer's disease patients and to use this information to possibly elucidate the formation of similar neurodegenerative disorders.


.



References




1.Selkoe, D.J. The molecular pathology of Alzheimer's disease(1991) Neuron 6, 487-498

2.Kang, J., Lemaire, H.-G., Unterbeck, A., Salbaum, J. M., Masters, C. L., Grzeschik, K.-H., Multhaup, G., Beyreuther, K., and Muller-Hill, B. The precursor of Alzheimer's disease amyloid A4 protein resembles a cell-surface receptor (1987) Nature 325, 733-736

3. Roher, A. E., Lowenson, J. D., Clarke, S., Woods, A. S., Cotter, R. J., Gowing, E. & Ball, M. J. beta-Amyloid-(1-42) is a major component of cerebrovascular amyloid deposits: implications for the pathology of Alzheimer disease.(1993) Proc. Natl. Acad. Sci. USA 90, 10836-10840

4. Grundke-Iqbal, I., Iqbal, K., Tung, Y.C., Quinlan, M., Wisniewski, H.M., Binder, L.I., Abnormal phosphorylation of the microtubule-associated protein tau in Alzheimer cytoskeletal pathology (1986) Proc. Natl. Acad. Sci. USA 83, 4913-4917

5. Price, D.L., Sisodia, S.S., Mutant genes in familial Alzheimer's disease and transgenic models. (1998) Annu. Rev. Neurosci. 21, 479-505

6. Sinha, S., Lieberburg, I., Cellular mechanisms of b-amyloid production and secretion. (1999) Proc. Natl. Acad. Sci. USA 96, 11049-11053

7. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Wenjuan, Y., Wolfe, M.S, Selkoe, D.J. g-Secretase is a membrane protein complex comprised of presenilin, nicastrin, aph-1, and pen-2 (2003) Proc. Natl. Acad. Sci. USA 100, 6382-6387

8. Cruts, M., van Duijin, C.M., Backhovens, H., van Den, B.M., Wehnert, A., Serneels, S., Sherrington, R., Hutton, M., Hardy, J., George-Hyslop, P.H., Hofman, A., van Broeckhoven, C., Estimation of the genetic contribution of presenilin-1 and -2 mutations in a population-based study of presenile Alzheimer's disease. (1998) Hum. Mol. Genet. 7, 43-51

9. Lee, H.J., Jung, K.M., Huang, Y.Z., Bennett, L.B., Lee, J.S., Mei, L., Kim, T.W., Presenilin-dependent g-Secretase-like Intramembrane Cleavage of ErbB4. (2002) J. Biol. Chem. 277, 6318-6323

10. Kimberly, W.T., Esler, W.P., Ye, W., Ostaszewski, B.L., Gao, J., Diehl, T., Selkoe, D.J., Wolfe, M.S., Notch and the amyloid precursor protein are cleaved by similar gamma-secretase(s). (2003) Biochemistry 42, 137-44.

11. Borchelt, D.R., Thinakaran, G., Eckman, C.B., Lee, M.K., Davenport, F., Ratovitsky, T., Prada, C.M., Kim, G., Seekins, S., Yager, D., Slunt, H.H., Wang, R., Seeger, M., Levey, A.I., Gandy, S.E., Copeland, N.G., Jenkins, N.A., Price, D.L., Younkin, S.G., Sisodia, S.S., Familial Alzheimer's disease-linked presenilin 1 variants elevate Abeta1-42/1-40 ratio in vitro and in vivo. (1996) Neuron 17, 1005-13.

12. Mehta, N.D., Refolo, L.M., Eckman, C., Sanders, S., Yager, D., Perez-Tur, J., Younkin, S., Duff, K., Hardy, J., Hutton, M., Increased Abeta42(43) from cell lines expressing presenilin 1 mutations. (1998) Ann Neurol. 43, 256-8

13. Kim, J., Schekman, R., The ins and outs of presenilin 1 membrane topology. (2004) Proc. Natl. Acad. Sci. USA 101, 905-906.

14. Capell A, Grunberg J, Pesold B, Diehlmann A, Citron M, Nixon R, Beyreuther K, Selkoe DJ, Haass C. The proteolytic fragments of the Alzheimer's disease-associated presenilin-1 form heterodimers and occur as a 100-150-kDa molecular mass complex.(1998) J Biol Chem. 273, 3205-11.

15. Thinakaran G, Regard JB, Bouton CM, Harris CL, Price DL, Borchelt DR, Sisodia SS., Stable association of presenilin derivatives and absence of presenilin interactions with APP. (1998) Neurobiol Dis. 4, 438-53.

16. Hu Y, Fortini ME. Different cofactor activities in gamma-secretase assembly: evidence for a nicastrin-Aph-1 subcomplex. (2003) J Cell Biol.161, 685-90.

17. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Ye, W., Wolfe, M.S., Selkoe, D.J., Complex N-linked Glycosylated Nicastrin Associates with Active gamma-Secretase and Undergoes Tight Cellular Regulation (2002) J. Biol. Chem. 277, 35113-35117

18.Fortna RR, Crystal AS, Morais VA, Pijak DS, Lee VM, Doms RW., Membrane topology and nicastrin-enhanced endoproteolysis of APH-1, a component of the gamma-secretase complex.(2004) J Biol Chem. 279, 3685-93.

19. Fraering PC, LaVoie MJ, Ye W, Ostaszewski BL, Kimberly WT, Selkoe DJ, Wolfe MS., Detergent-dependent dissociation of active gamma-secretase reveals an interaction between Pen-2 and PS1-NTF and offers a model for subunit organization within the complex. (2004) Biochemistry. 43, 323-33.
20. Takasugi N, Tomita T, Hayashi I, Tsuruoka M, Niimura M, Takahashi Y, Thinakaran G, Iwatsubo T., The role of presenilin cofactors in the gamma-secretase complex. (2003) 422, 438-41.

21. Edbauer D, Winkler E, Regula JT, Pesold B, Steiner H, Haass C., Reconstitution of gamma-secretase activity. (2003) Nat Cell Biol. 5, 486-8.

22. Wilson CA, Doms RW, Lee VM. Distinct presenilin-dependent and presenilin-independent gamma-secretases are responsible for total cellular Abeta production. (2003) J Neurosci Res. 74, 361-9.

23. Moliaka YK, Grigorenko A, Madera D, Rogaev EI., Impas 1 possesses endoproteolytic activity against multipass membrane protein substrate cleaving the presenilin 1 holoprotein. (2004) FEBS Lett. 557, 185-92.

24. Stutzmann GE, Caccamo A, LaFerla FM, Parker I., Dysregulated IP3 Signaling in Cortical Neurons of Knock-In Mice Expressing an Alzheimer's-Linked Mutation in Presenilin1 Results in Exaggerated Ca2+ Signals and Altered Membrane Excitability (2004) J Neurosci. 24, 508-13.



The Beta-Amyloid Peptide, the Gamma-Secretase Comp
Name: Jean Yanol
Date: 2004-02-25 21:31:11
Link to this Comment: 8507

<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In many neurological diseases, problems in cellular signaling pathways cause the onset of the major physiological symptoms associated with the disease. Alzheimer's disease (AD) is a neurodegenerative disorder that affects millions of people by inducing dementia. There are two forms of the disease, sporadic and familial; familial Alzheimer's disease usually strikes earlier in life than its sporadic counterpart. Even though the major hallmarks of both sporadic and familial AD are extracellular senile plaques, intracellular neurofibrillary tangles, and subsequent neuronal and synaptic loss (1), the proposed cellular mechanisms by which these two forms of AD operate are different. Because familial AD is genetically linked, there have been significant findings elucidating its pathogenic cellular mechanisms.

The extracellular senile plaques and intracellular neurofibrillary tangles associated with AD have been the major focus of research. The neurofibrillary tangles are mostly composed of hyperphosphorylated tau protein (4), and the senile plaques are composed of deposited 42 amino acid long β-amyloid peptide (3). While the complete pathways by which both structures are synthesized are unknown, the production of the extracellular amyloid plaques is one major point distinguishing familial from sporadic AD. Mutations in the components that generate the β-amyloid peptide also cause most cases of familial Alzheimer's disease.

The β-amyloid peptide exists in two predominant forms: a 40 amino acid long peptide and a 42 amino acid long peptide. The difference in length arises from differential cleavage of the amyloid precursor protein (APP), from which the various forms of the β-amyloid peptide derive (5). The 42 amino acid long β-amyloid peptide, which forms the senile plaques, comes from APP cleaved sequentially by the β- and γ-secretases (Figure 1, modified from Sinha and Lieberburg 1999). The principal β-secretase in neurons is the aspartic protease BACE1 (β-site APP Cleavage Enzyme 1), which performs the first APP cleavage to release the NH2 terminus of the β-amyloid peptide from its precursor. Subsequent cleavage by the γ-secretase releases the COOH terminus of the β-amyloid peptide (6). The γ-secretase is a high molecular weight complex composed of presenilin 1 (PS1), mature nicastrin, APH-1, and Pen-2 (7). Elucidating the formation of this complex is key to finding pharmaceutical treatments for Alzheimer's disease, because mutations in the gene that codes for presenilin 1 cause half of all familial AD cases (8) (most other cases are caused by mutations in the APP substrate itself). The γ-secretase is also thought to cleave ErbB4 (9), the intracellular domain of Notch (10), and other similar proteins, showing that this secretase is important in other pathways as well.
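The two-step processing described above can be sketched as a toy model. Everything here is illustrative: the sequence, the cleavage index, and the function names are invented placeholders, not real APP coordinates or an actual bioinformatics API.

```python
# Toy sketch of sequential APP processing (illustrative indices, not real APP coordinates).

def beta_cleave(app: str, site: int) -> str:
    """First cut: a beta-secretase such as BACE1 exposes the N-terminus of A-beta."""
    return app[site:]

def gamma_cleave(fragment: str, length: int) -> str:
    """Second cut: gamma-secretase releases the C-terminus, fixing the peptide length."""
    return fragment[:length]

# Hypothetical precursor: 30 flanking residues, a 42-residue A-beta region, 28 more residues.
app = "X" * 30 + "A" * 42 + "Y" * 28

stub = beta_cleave(app, 30)          # N-terminus of A-beta now exposed
abeta40 = gamma_cleave(stub, 40)     # the more common, more soluble species
abeta42 = gamma_cleave(stub, 42)     # the plaque-forming species

print(len(abeta40), len(abeta42))    # 40 42
```

The point of the sketch is only that the γ-cleavage position, not the β-cleavage position, determines whether the 40- or 42-residue peptide is produced.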

PS1 mutations have been shown to increase the amount of secreted 42 amino acid long β-amyloid peptide (11)(12). PS1 is an aspartyl protease (its active site consists of two conserved aspartate residues, D257 and D385, located in the 6th and 8th hydrophobic regions of PS1) and has between six and eight transmembrane domains (most researchers favor eight; Figure 2, from Kim and Schekman 2004), which are important to its function and interactions in the γ-secretase complex (13). The protein is localized primarily in the ER (endoplasmic reticulum) and Golgi complexes. In the ER, PS1 exists as an uncleaved holoprotein (the full-length, unprocessed form), which is thought to be inactive; in the Golgi region, PS1 exists as a heterodimer in which the NTF (N-terminal fragment) and CTF (C-terminal fragment) are separated but closely associated in 1:1 stoichiometry (14)(15). The mechanism by which PS1 is cleaved into its respective NTF and CTF is not known, but it is speculated that the other members of the γ-secretase complex (nicastrin, APH-1, and Pen-2) are needed for formation of the stable γ-secretase complex and for PS1 maturation (16). Nicastrin is a type 1 transmembrane protein that spans the membrane once and participates in the γ-secretase complex after it is N-glycosylated in the ER; this glycosylation is what makes nicastrin "mature" (17). In a low molecular weight subcomplex, nicastrin interacts primarily with APH-1, which is predicted to traverse the membrane seven times (18). This nicastrin/APH-1 subcomplex is then predicted to interact with the PS1 CTF. Pen-2, which spans the membrane twice, is believed to interact with the PS1 NTF and to facilitate its maturation. In this model there are two subcomplexes, one composed of nicastrin, APH-1, and PS1 CTF, and the other composed of Pen-2 and PS1 NTF (Figure 3, from Fraering et al.) (19).
These subcomplexes interact through the heterodimeric association of the PS1 NTF and CTF. In yeast, mammalian, and Drosophila cells, the presence of PS1, nicastrin, APH-1, and Pen-2 was sufficient to reconstitute γ-secretase activity (7)(20)(21). Once the stable γ-secretase complex is formed, it can cleave APP to yield the 42 amino acid long β-amyloid peptide. γ-secretase activity is believed to occur in the ER, late Golgi/TGN, endosomes, and plasma membrane, and where in the cell APP is cleaved is thought to determine whether the peptide is secreted. However, the factors that lead the 42 amino acid long β-amyloid peptide to form plaques are still debated. The role of non-secreted β-amyloid in AD is also debated, and some researchers think that intracellular β-amyloid is generated by a distinct presenilin-independent γ-secretase (22).
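The reconstitution finding cited above (7)(20)(21), that the four core components together suffice for γ-secretase activity, can be restated as a minimal membership check. This is a sketch of the logical claim only; the function name and set representation are my own, not anything from the cited papers.

```python
# Minimal sketch: gamma-secretase activity requires all four core components.
REQUIRED = {"PS1", "nicastrin", "APH-1", "Pen-2"}

def can_reconstitute(components: set) -> bool:
    """True if the expressed components include the full core complex."""
    return REQUIRED.issubset(components)

print(can_reconstitute({"PS1", "nicastrin", "APH-1", "Pen-2"}))  # True
print(can_reconstitute({"PS1", "nicastrin", "APH-1"}))           # False: Pen-2 missing
```

The check captures the "necessary and sufficient" reading of the reconstitution experiments: dropping any one member breaks activity, and no fifth member is needed.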

One avenue of research that has opened up very recently concerns the role of a PS-related protein called IMPAS 1 in presenilin 1 holoprotein cleavage. In cells transiently transfected with IMPAS 1 and PS1 holoprotein, however, there was little to no indication of such cleavage, possibly due to the limitations of Western blot analysis (23); it nevertheless remains possible that IMPAS 1, or one of the other proteins in its recently discovered family, is responsible for PS1 holoprotein proteolysis. Further analysis must be performed before any cleavage interaction between IMPAS 1 and PS1 can be concluded. Because IMPAS 1 is thought to be able to cleave type 1 transmembrane proteins (23), it may also be part of other similar pathways. Other mechanisms have recently been proposed to operate in AD, such as inositol trisphosphate (IP3)-gated calcium channels, because PS1 is known to modulate IP3-mediated calcium ion liberation (24). It has been shown that in cells with familial AD-linked mutations in the gene that codes for presenilin 1, there is an increase in the calcium ion transients that serve many signaling functions. Recent studies have shown an elevation in ER excitability due to calcium transient elevation caused by a specific PS1 mutation, but a subsequent inhibition at the plasma membrane that disrupts cell-to-cell signaling (24). This implies that PS1 affects AD not only through its role in the cleavage of the amyloid precursor protein, but also by elevating specific ion transients that disrupt responsiveness to certain synaptic signaling.

While many factors are thought to contribute to familial Alzheimer's disease, the γ-secretase complex is one of the least understood and most heavily researched components, owing to its involvement in other pathways and to novel interactions that have a substantial impact on the formation of the disease. Further analysis of the interactions and stoichiometry of its components is needed in order to fully understand the complex and its function in familial Alzheimer's disease. By researching the mechanisms of the disease's formation, we can hope one day to apply this information to pharmaceutical treatments for familial Alzheimer's disease patients, and possibly to elucidate the formation of similar neurodegenerative disorders.


References


1.Selkoe, D.J. The molecular pathology of Alzheimer's disease(1991) Neuron 6, 487-498

2.Kang, J., Lemaire, H.-G., Unterbeck, A., Salbaum, J. M., Masters, C. L., Grzeschik, K.-H., Multhaup, G., Beyreuther, K., and Muller-Hill, B. The precursor of Alzheimer's disease amyloid A4 protein resembles a cell-surface receptor (1987) Nature 325, 733-736

3. Roher, A. E., Lowenson, J. D., Clarke, S., Woods, A. S., Cotter, R. J., Gowing, E. & Ball, M. J. beta-Amyloid-(1-42) is a major component of cerebrovascular amyloid deposits: implications for the pathology of Alzheimer disease.(1993) Proc. Natl. Acad. Sci. USA 90, 10836-10840

4. Grundke-Iqbal, I., Iqbal, K., Tung, Y.C., Quinlan, M., Wisniewski, H.M., Binder, L.I., Abnormal phosphorylation of the microtubule-associated protein tau in Alzheimer cytoskeletal pathology (1986) Proc. Natl. Acad. Sci. USA 83, 4913-4917

5. Price, D.L., Sisodia, S.S., Mutant genes in familial Alzheimer's disease and transgenic models. (1998) Annu. Rev. Neurosci. 21, 479-505

6. Sinha, S., Lieberburg, I., Cellular mechanisms of b-amyloid production and secretion. (1999) Proc. Natl. Acad. Sci. USA 96, 11049-11053

7. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Wenjuan, Y., Wolfe, M.S, Selkoe, D.J. g-Secretase is a membrane protein complex comprised of presenilin, nicastrin, aph-1, and pen-2 (2003) Proc. Natl. Acad. Sci. USA 100, 6382-6387

8. Cruts, M., van Duijin, C.M., Backhovens, H., van Den, B.M., Wehnert, A., Serneels, S., Sherrington, R., Hutton, M., Hardy, J., George-Hyslop, P.H., Hofman, A., van Broeckhoven, C., Estimation of the genetic contribution of presenilin-1 and -2 mutations in a population-based study of presenile Alzheimer's disease. (1998) Hum. Mol. Genet. 7, 43-51

9. Lee, H.J., Jung, K.M., Huang, Y.Z., Bennett, L.B., Lee, J.S., Mei, L., Kim, T.W., Presenilin-dependent g-Secretase-like Intramembrane Cleavage of ErbB4. (2002) J. Biol. Chem. 277, 6318-6323

10. Kimberly, W.T., Esler, W.P., Ye, W., Ostaszewski, B.L., Gao, J., Diehl, T., Selkoe, D.J., Wolfe, M.S., Notch and the amyloid precursor protein are cleaved by similar gamma-secretase(s). (2003) Biochemistry 42, 137-44.

11. Borchelt, D.R., Thinakaran, G., Eckman, C.B., Lee, M.K., Davenport, F., Ratovitsky, T., Prada, C.M., Kim, G., Seekins, S., Yager, D., Slunt, H.H., Wang, R., Seeger, M., Levey, A.I., Gandy, S.E., Copeland, N.G., Jenkins, N.A., Price, D.L., Younkin, S.G., Sisodia, S.S., Familial Alzheimer's disease-linked presenilin 1 variants elevate Abeta1-42/1-40 ratio in vitro and in vivo. (1996) Neuron 17, 1005-13.

12. Mehta, N.D., Refolo, L.M., Eckman, C., Sanders, S., Yager, D., Perez-Tur, J., Younkin, S., Duff, K., Hardy, J., Hutton, M., Increased Abeta42(43) from cell lines expressing presenilin 1 mutations. (1998) Ann Neurol. 43, 256-8

13. Kim, J., Schekman, R., The ins and outs of presenilin 1 membrane topology. (2004) Proc. Natl. Acad. Sci. USA 101, 905-906.

14. Capell A, Grunberg J, Pesold B, Diehlmann A, Citron M, Nixon R, Beyreuther K, Selkoe DJ, Haass C. The proteolytic fragments of the Alzheimer's disease-associated presenilin-1 form heterodimers and occur as a 100-150-kDa molecular mass complex.(1998) J Biol Chem. 273, 3205-11.

15. Thinakaran G, Regard JB, Bouton CM, Harris CL, Price DL, Borchelt DR, Sisodia SS., Stable association of presenilin derivatives and absence of presenilin interactions with APP. (1998) Neurobiol Dis. 4, 438-53.

16. Hu Y, Fortini ME. Different cofactor activities in gamma-secretase assembly: evidence for a nicastrin-Aph-1 subcomplex. (2003) J Cell Biol.161, 685-90.

17. Kimberly, W.T., LaVoie, M.J., Ostaszewski, B.L., Ye, W., Wolfe, M.S., Selkoe, D.J., Complex N-linked Glycosylated Nicastrin Associates with Active gamma-Secretase and Undergoes Tight Cellular Regulation (2002) J. Biol. Chem. 277, 35113-35117

18.Fortna RR, Crystal AS, Morais VA, Pijak DS, Lee VM, Doms RW., Membrane topology and nicastrin-enhanced endoproteolysis of APH-1, a component of the gamma-secretase complex.(2004) J Biol Chem. 279, 3685-93.

19. Fraering PC, LaVoie MJ, Ye W, Ostaszewski BL, Kimberly WT, Selkoe DJ, Wolfe MS., Detergent-dependent dissociation of active gamma-secretase reveals an interaction between Pen-2 and PS1-NTF and offers a model for subunit organization within the complex. (2004) Biochemistry. 43, 323-33.

20. Takasugi N, Tomita T, Hayashi I, Tsuruoka M, Niimura M, Takahashi Y, Thinakaran G, Iwatsubo T., The role of presenilin cofactors in the gamma-secretase complex. (2003) Nature 422, 438-41.

21. Edbauer D, Winkler E, Regula JT, Pesold B, Steiner H, Haass C., Reconstitution of gamma-secretase activity. (2003) Nat Cell Biol. 5, 486-8.

22. Wilson CA, Doms RW, Lee VM. Distinct presenilin-dependent and presenilin-independent gamma-secretases are responsible for total cellular Abeta production. (2003) J Neurosci Res. 74, 361-9.

23. Moliaka YK, Grigorenko A, Madera D, Rogaev EI., Impas 1 possesses endoproteolytic activity against multipass membrane protein substrate cleaving the presenilin 1 holoprotein. (2004) FEBS Lett. 557, 185-92.

24. Stutzmann GE, Caccamo A, LaFerla FM, Parker I., Dysregulated IP3 Signaling in Cortical Neurons of Knock-In Mice Expressing an Alzheimer's-Linked Mutation in Presenilin1 Results in Exaggerated Ca2+ Signals and Altered Membrane Excitability (2004) J Neurosci. 24, 508-13.



Quantifying Intelligence
Name: Maria
Date: 2004-02-26 01:55:00
Link to this Comment: 8515


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Quantifying intelligence is something that makes people anxious. Most people, when asked, cannot pinpoint what exactly it is about assigning a number or value to a person's intelligence that makes them so uncomfortable. What most people do know is that there is something about intelligence that causes it to take precedence over other culturally valued traits such as athletic prowess or physical attractiveness: out of all the characteristics that humans value, intelligence is the one that matters most. It is not an incorrect assessment: while athleticism might give one an edge in a sports game and appearance might help one in social situations, intelligence helps one navigate the broader and more complex challenges of everyday life. Much of the anxiety over trying to quantify intelligence stems from the conflict between the deeply held cultural belief that all people are born with equal opportunity and the reality, demonstrated by the IQ test, that some people are born with greater intellectual potential than others. While our society might aspire to be egalitarian, the fact remains that intellectual ability varies from individual to individual (1). It is an important difference, too: research conducted over many years confirms that intellectual ability of the type measured by the IQ test has a profound and widespread impact on the way a given individual lives his or her life (1).


In an individual's professional life, there appears to be a strong (and not terribly surprising) correlation between IQ and the type of employment he or she is able to sustain. Those in the top five percent of the adult IQ distribution (above 125) are able to enter whatever profession they choose (1). Individuals with average IQ are not competitive for most high-level jobs, but are able to perform the majority of jobs in America (1). Individuals in the bottom five percent of the IQ distribution (below 75) are not competitive within the workforce (1). The government recognizes the correlation between ability and IQ: during World War II, Congress banned the enlistment of those with an IQ below 80 because they were too difficult to train (1).
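The five-percent figures above can be checked against the conventional IQ scaling of mean 100 and standard deviation 15. That scaling is a standard convention I am assuming here; the paper itself does not state it.

```python
# Check the "top/bottom five percent" claims under the usual IQ scale (mean 100, SD 15).
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

top = 1 - iq.cdf(125)   # fraction of the population scoring above 125
bottom = iq.cdf(75)     # fraction scoring below 75 (symmetric about the mean)

print(f"above 125: {top:.1%}, below 75: {bottom:.1%}")  # above 125: 4.8%, below 75: 4.8%
```

Both tails come out near 4.8%, consistent with the paper's rounding of "above 125" and "below 75" to the top and bottom five percent.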


The effect of IQ is not limited to the professional arena. There is an undeniable correlation between low IQ scores and negative social experiences, probably due at least in part to the strong correlation between an individual's IQ and his or her socio-economic status. Individuals with IQs somewhat below average are seven times more likely to be jailed than those with somewhat higher than average IQs (1). They are eighty-eight times more likely to drop out of high school and 50 percent more likely to be divorced (1). Obviously one cannot make assumptions about any individual based on these numbers; after all, there are many people with high IQs who are divorced. It would also be erroneous to assume that from this information one could attribute the poverty, single motherhood or divorced state of any given individual to a lack of intelligence. Rather, these statistics suggest that the lives of those who are not as well equipped to deal with intellectual complexity tend to be more difficult in today's society in economic, social and personal matters.


In order to understand why an individual's ability to do well on the seemingly odd tasks that make up an IQ test is so closely linked to success in life, one has to understand what trait causes the correlation in the first place. At the turn of the last century, the British psychologist Charles Spearman noticed a pattern of correlation when analyzing the results of IQ tests. The IQ test is made up of subtests on a variety of unrelated topics, yet an individual who did well on one subtest was likely to do well on all of them, no matter how disparate the contents of the various subtests were. This observation led Spearman to conclude that there was another force at work, a "general intelligence" or g, that accounted for this consistency in performance. It is important to note that g is not simply the cumulative result of someone being good at literature and math and spatial exercises. Rather, it is its own separate function, which has recently been shown to take place in the lateral frontal cortex of one or both hemispheres (2). This is the area that high-g tasks call on, rather than a wide variety of cognitive functions (2).
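Spearman's observation, that scores on unrelated subtests all correlate positively and that a single common factor accounts for much of their shared variance, can be reproduced with simulated data. The loadings, noise levels, and number of subtests below are arbitrary choices for illustration, not values from any real test battery.

```python
# Simulate Spearman's "positive manifold": subtests that share one latent factor g.
import numpy as np

rng = np.random.default_rng(0)
n = 2000                          # simulated test-takers
g = rng.normal(size=n)            # latent general ability

# Four "unrelated" subtests: each is part g, part its own independent noise.
subtests = np.column_stack([0.7 * g + 0.7 * rng.normal(size=n) for _ in range(4)])

corr = np.corrcoef(subtests, rowvar=False)
print(bool((corr[np.triu_indices(4, k=1)] > 0).all()))  # True: every pair correlates positively

# Share of total variance captured by the first principal component (the statistical "g").
eigvals = np.linalg.eigvalsh(corr)                      # eigenvalues in ascending order
print(round(eigvals[-1] / eigvals.sum(), 2))            # one factor captures well over half
```

With these loadings the theoretical inter-test correlation is 0.5, so the largest eigenvalue of the 4x4 correlation matrix sits near 2.5 and the first component explains roughly 60 percent of the variance, which is the pattern that led Spearman to posit g.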


The existence of g is not simply an abstract concept created by scientists. The notion that some people are more able at some things than others is familiar to all of us and used by all of us from the time we are small children. People intuitively sense the existence of g; they just have different names for it. Someone might be considered 'bright' or 'smart', which could just be another way of saying that they are able to handle cognitive complexity. That someone is 'quick' is a particularly interesting word choice in light of recent testing showing that it does indeed take less time and less energy for the brains of those with high IQs to solve problems (2), which also makes it rather appropriate that people with lower IQs were often called 'slow'. Precisely how to define g is difficult. Simply stated, g can be defined as "the ability to deal with cognitive complexity" (1). Being able to interpret information, recognize similarities and differences, and understand ideas and concepts are all hallmarks of the intelligent person, and are also the abilities that constitute g (1).


The discomfort that most people feel at the idea of having a value assigned to their intelligence is a natural reaction, given that in many ways the results of an IQ test seem not so much predictive as prophetic. Given that intellectual ability affects so much, it is easy to understand how an individual could feel that there is little room left for autonomy or self-determination. It would indeed be disconcerting if one felt that the results of a single test determined one's fate. I think that viewing g, and the continued study of what affects and is affected by g, in such a way would not only be incorrect but also a serious mistake. The concept of g is a useful one, but only if it is seen for what it is: one way of quantifying an individual's ability to function with relative ease in the world. The test itself exists in a cultural context, a culture that highly values the results of tests. As with any test that claims to make broad judgements about an individual's future, its results could be self-fulfilling (5). As is the case with most differences between people, it is not the differences themselves that pose a potential problem; rather, it is the value judgements that others make based on those differences that are problematic.


The notion of g is not an egalitarian one; few things about human makeup are egalitarian. While we may not think of it in those terms, we all accept and make our peace with this inequality every day (4). That I am not skilled at tennis like Venus Williams, musically gifted like Billie Holiday or beautiful like Julie Christie is not news to me or to the other 99.9% of the world for whom the same can be said. And while I admire in others the ability to do what I cannot, I don't feel that it detracts in any way from the capabilities that I do have. The same principle applies to intelligence, even though the effect of intelligence on one's life is more far-reaching than that of musical skill or athleticism. Every day we all tacitly acknowledge the existence of g: in whose advice we seek, in whom we consider competent to perform a given task, and in the assumptions we make about people based on their profession, socio-economic status or lifestyle. As is true of any valued trait, the ability to quantify it carries the concern that we will begin to allow the value society places on the trait to determine the value society places on the individual who happens to possess (or not possess) it. There are a lot of ways that people can be extraordinary, and there are a lot of ways that people can lead productive, useful lives regardless of how they score on a test. If correctly approached, the study of g can help us understand why people have the experiences they do in life, and can ultimately help us as a society to accept the mixed bag of skills and weaknesses that is each person.


References

1) Gottfredson, Linda. The General Intelligence Factor. A very thorough article detailing the implications and importance of g in daily life.

2) Duncan, John. A Neural Basis for General Intelligence. Science Magazine. Vol.289; 21 July 2000.

3) Article by Ari Berkowitz on Serendip. Quite a good discussion of the role genetics plays in IQ, among other things.

4) An Article from Science on the International Society For Intelligence Research, General thoughts on uses of intelligence and such.

5) Letter to the Editor of NY Times Book Review by Professor Grobstein Pretty much what the title says what it is: a letter to the editor of the NYT Book Review by PG


Brain Dependence: The Debate Over the Addictive Pe
Name: MaryBeth C
Date: 2004-02-26 02:54:02
Link to this Comment: 8519


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Though alcoholism and other damaging addictions can often be traced to depression and other emotional distress, the relatively new notion of the "addictive personality" has a significant community of supporters. According to its supporters, the addictive personality is a distinct psychological trait that predisposes particular individuals to addictions. While the nature and the very existence of this trait are still actively debated in the medical, neurobiological and psychological communities, there are identifiable changes in the brain that contribute to addiction. Also important to this debate are issues of gender in relation to addiction, and how these are and are not compatible with the addictive personality theory.

Addiction, as typically defined, is a reliance on a substance or behavior that the individual has little power to resist. This definition, however, fails to address the neurological aspects of the phenomenon. Alan Leshner, PhD, of the National Institute on Drug Abuse describes addiction instead as "a brain disease" and "a chronic relapsing disease," in that there are visible alterations in the brains of addicted individuals and these effects are long-lasting within their neurological patterns. (1) Also important in describing addiction is addressing the types of addiction and substance abuse that are often attributed to the addictive personality. There are two primary forms of addiction: one substance-based, the other behavior-based.

The substance-based addictions, such as alcoholism and nicotine, prescription and narcotic addictions, are more easily explained and identified neurologically. Particular drugs, such as crack and heroin, cause massive surges of dopamine in the brain, with sensations ranging from invincibility and strength to euphoric and enlightened states. Use of these substances almost immediately changes particular aspects of the brain's behavior, making most individuals immediately susceptible to future abuse or addiction.

Also common are the behavioral addictions, including gambling, shopping, eating, and sexual activity. These addictions are not as easily explained neurologically, but are generally included in the addiction susceptibility attributed to the personality trait. Common, too, are combined addictions, that is, addictions with both substance-based and behavioral aspects, most notably the addiction to nicotine, whether smoked or chewed. This particular addiction combines a physical dependence on nicotine with a mental facet, the repeated routine of the behavior, such as a cigarette after meals.

Another issue interestingly related to addiction is the relationship between these abuses and addictions and gender. A collection of recent studies has shown that male adolescents are more active in early drug and alcohol experimentation, and that men in general are four times more likely to become dependent on alcohol, twice as likely to routinely use marijuana, and one and a half times more likely to become addicted to cigarettes. Conversely, female adolescents are far more likely to engage in the activities associated with behavioral addictions, and women far outnumber men in addictions to eating, binging and purging, thus developing eating disorders at a greater rate. (2)

Depending on one's perspective, this stratification may either evidence a key link between the addictive personality and gender, or it may discredit the theory as a whole. It has been shown with other diseases, cancers, and genetic traits that particular disorders favor one gender over another, so these statistics may reveal an interesting aspect of the genetic or neurobiological nature of the inherited trait. On the other hand, the differences between the addictions of men and women are often traced to societal values and the images presented to young men and women. In one interesting element of this debate, the popular image of alcohol consumption among Americans, as seen in mass advertising, is largely geared toward men. Some of the symptoms of alcohol consumption and drunkenness, such as uncontrolled behavior, lessened inhibitions, and weight gain, are less acceptable for women than for men. Popular images associated with cigarettes have a similarly masculine undertone: the primary face of the tobacco industry, the "Marlboro Man," embodies popular American manhood like few other icons.

While no one has succeeded in proving the existence of a true addictive personality, many experts now believe that the predisposition to addiction is more accurately a combination of biological, psychological, and environmental factors. Certainly, as with all issues of psychology and behavior, the distinct contributions of genetics and inheritance must be balanced against an acknowledgment of environmental factors, and the biology of addiction is no exception.

References

1)Sommerset Medical Service Website: The Science of Addiction

2)Hendrick Health System Website: Addiction


The Relationship Between Epilepsy and the Brain
Name: Chevon Dep
Date: 2004-02-27 00:30:08
Link to this Comment: 8537

Uncontrollable shaking, tongue biting, and rolling eyes have frequently been associated with demonic spirits. In the film "The Exorcist," the little girl displays these behaviors and they are labeled demonic. Unfortunately, such labeling was not limited to films but also occurred in the medical field. For example, epileptic patients were once characterized as possessed, because they exhibited such behavior and it was unexplainable. As more information was gathered about the relationship between the brain and these episodes, this notion began to disappear. Since my mother has been an epileptic patient for quite some time, it is important to me to understand the brain's role in her recurring seizures.

Neurons communicate with one another by firing tiny electrical signals that pass from cell to cell. The firing pattern of these signals reflects how busy the brain is at any moment, and their location indicates what the brain is doing, such as thinking, seeing, feeling, hearing, and controlling the movement of muscles.(1) Epilepsy is a brain disorder that occurs when the electrical signals in the brain are disrupted.(1) Disturbance occurs when the firing pattern of the brain's electrical signals becomes abnormal and intense, either in an isolated area of the brain or throughout the brain.(2) More specifically, epilepsy is a condition that involves having repetitive seizures.(2) Two or more seizures must occur before a person can be diagnosed as having epilepsy.(3)

One of the most serious types of seizures is the grand mal, or generalized, seizure. This type occurs when changes in the electrical signals spread through the entire brain at once.(1) Once the entire brain is affected, there can be a loss of consciousness and shaking of all limbs.(1) According to Scott, an epileptic attack can be divided into three parts: the warning, the actual fit, and the recovery.(4) When my mother experiences an attack, there is usually no warning; she seems to go immediately into the attack. Air is forced through the larynx, and the cry that is produced indicates that the attack has begun, not that the person is in pain.(4) The gagging sound does make it seem that the person is in pain, because breathing becomes difficult. Once breathing has ceased, no oxygen enters the lungs, and therefore none reaches the brain.(4) Yet during an epileptic fit, the brain's oxygen consumption may increase by up to fifty percent.(4) In order to function normally, the nervous system requires the vitamins and oxygen that are carried to the brain.(4) Therefore, if a person has numerous seizures, a serious problem can arise because so little oxygen reaches the brain.

The tonic phase leads into the clonic stage, in which many things occur, such as loss of bowel or bladder control due to the violent contractions of the body. Epileptic patients are unable to control their movements because of the disrupted signaling within the brain. It is not unusual for the person to remain unconscious, or to fall into a deep sleep, for anywhere from a few minutes to several hours.(4) After a seizure, the person rarely has any memory of it.

In most cases, the cause of epilepsy is unknown. The term used is "idiopathic," because there is no definite abnormality of the brain. My mother's grand mal seizures are characterized as idiopathic because she did not experience any short-term or lasting scarring or damage to the brain from head injury or serious brain infection. The electroencephalogram (EEG) is a test that records the electrical activity of the brain.(4) This test is helpful in diagnosing epileptic patients because it reveals unusual brain activity, and it is sometimes used to determine the nature of the abnormality causing the seizures.(3) Those with epilepsy have brain cells with disordered electrical function; these cells are less able to suppress electrical discharges, and this leads to seizures. Although the cause of many epileptic episodes is unknown, there are things that trigger seizures, such as stress, lack of sleep, starvation, and flashing lights.(1)

In order to control seizures, many patients are prescribed some type of medication. The type of prescription one receives depends on the type of seizure. For major attacks such as grand mal seizures, phenobarbitone and Dilantin are widely used. There are, of course, side effects to these medications; drowsiness and skin rashes are the most common.(4) The purpose of the medication is to control the number of seizures. In my mother's case, however, she constantly has to switch medications, because she frequently has grand mal seizures. First she was on phenobarbitone, but that did not seem to work, so now she takes Dilantin. The frequency of her seizures can be the result of the triggers and not necessarily of the medication.

Epilepsy is the second most common neurological disease in the United States, affecting approximately two million people.(2) More importantly, each year 125,000 to 150,000 people are diagnosed with epilepsy.(2) Serious cases of epilepsy prohibit certain activities such as driving. Employment is sometimes difficult for epileptic patients to find, because employers feel that the patients are liable to accidents and will be more likely to take time off work.(3) Epilepsy is a serious disorder that not only affects the brain but also limits the activities one can perform. When people watch "The Exorcist," it stirs up a lot of eerie feelings. Imagine living with a disorder that prevents you from controlling your actions. Which is scarier: watching it, or living with it?

WWW Sources
1)Trileptal Home Page, A Good Web Source

2)MayoClinic Home Page, A Good Web Source

3)Aetna Intelihealth Home Page, A Good Web Source

4)Scott, Donald. About Epilepsy. New York: International University Press, Inc., 1973


Anxiety: Simply Stage-Fright or a Daily Demon
Name: Maja Hadzi
Date: 2004-02-27 14:56:05
Link to this Comment: 8543



Anyone who has ever been on a rollercoaster knows the sound of metal hitting metal as the safety bar bangs shut in front of you. A heavy sensation develops at the pit of your stomach as you are pulled up against gravity to the top of the ride. Fear and the weightless feeling of dropping at inhuman speeds soon follow. Your heart races, the palms of your hands sweat, and you know you have no control over your fate at this point. Now imagine going through that every time you attempt a mundane daily task.

Anxiety is a part of the normal human palette of feelings and emotions, and everyone has experienced it at one point or another, whether as the butterflies in one's stomach before a large performance, getting weak-kneed and tense before a date, or the fear of an approaching snake. Feelings of anxiety appear to be a normal part of biology and the human experience, a type of defense mechanism of the sympathetic nervous system, which operates on a "fight or flight" basis. For some people, however, these moments of anxiety are not the brief, mild, rare, and isolated incidents they are for most. Instead, anxiety is a constant and dominating force that goes far beyond occasional nervousness and severely disrupts their quality of life. Anxiety disorders are chronic, relentless, and can grow progressively worse if not treated (2).

Anxiety disorders are the most common mental illness in the United States, affecting 19.1 million American adults. The disorder manifests itself in a number of distinct but related forms that all share extreme, debilitating anxiety at their core. The different types of anxiety disorders are as follows: Generalized Anxiety Disorder (GAD), Obsessive-Compulsive Disorder (OCD), Panic Disorder, Post-Traumatic Stress Disorder (PTSD), Social Anxiety Disorder (Social Phobia), and Specific Phobia (5).

Generalized Anxiety Disorder is characterized by excessive, unrealistic worry that lasts for at least six months. It is chronic and fills one's day with exaggerated worry and tension, even though there is little or nothing to provoke it. GAD symptoms also include trembling, muscular aches, abdominal upsets, insomnia, irritability, and dizziness. GAD rarely occurs alone; it is usually accompanied by another anxiety disorder, depression, or substance abuse (1). "I always thought I was just a worrier. I'd feel keyed up and unable to relax. At times it would come and go, and at times it would be constant. It could go on for days. I'd worry about what I was going to fix for a dinner party, or what would be a great present for somebody. I just couldn't let something go" (2).

Obsessive-Compulsive Disorder involves anxious thoughts or rituals the individual feels they can't control. They are plagued by persistent and unwelcome thoughts or images and the urgent need to engage in certain rituals. Most people recognize that what they're doing is senseless, but they can't stop it. There is no pleasure in carrying out the rituals that they are drawn to, only temporary relief from the anxiety that grows when they don't perform them (1). "I couldn't do anything without rituals. They invaded every aspect of my life. Counting really bogged me down. I would wash my hair three times as opposed to once because three was a good luck number and one wasn't. It took me longer to read because I'd count the lines in a paragraph. When I set my alarm at night, I had to set it to a number that wouldn't add up to a "bad" number" (2).

People with Panic Disorder have feelings of terror that strike suddenly and repeatedly with no warning. Symptoms include heart palpitations, chest pain or discomfort, sweating, trembling, tingling sensations, a feeling of choking, fear of dying, fear of losing control, and feelings of unreality. When people's lives become so restricted that they avoid normal everyday activities such as grocery shopping or driving, the condition is called agoraphobia (1). "For me, a panic attack is almost a violent experience. I feel disconnected from reality. I feel like I'm losing control in a very extreme way. My heart pounds really hard, I feel like I can't get my breath, and there's an overwhelming feeling that things are crashing in on me" (2).

Post-Traumatic Stress Disorder is a debilitating condition that can develop following a traumatic event. There are three main clusters of symptoms associated with PTSD: reliving the traumatic event in the form of flashbacks and nightmares; avoidance behaviors, emotional numbing, and detachment from others; and physiological arousal such as difficulty sleeping, irritability, or poor concentration (1). "Then I started having flashbacks. They kind of came over me like a splash of water. I would be terrified. Suddenly I was reliving the rape. Every instant was startling. I wasn't aware of anything around me, I was in a bubble, just kind of floating. And it was scary. Having a flashback can wring you out" (2).

Social Anxiety Disorder, also called Social Phobia, involves overwhelming anxiety and excessive self-consciousness in everyday social situations. People who suffer from it have a persistent, chronic, and intense fear of being watched and judged by others and of being embarrassed and humiliated by their own actions. While many recognize that their fear may be excessive or unreasonable, they are unable to overcome it (1). "In any social situation, I felt fear. I would be anxious before I even left the house, and it would escalate as I got closer to a college class, a party, or whatever. I would feel sick at my stomach-it almost felt like I had the flu. My heart would pound, my palms would get sweaty, and I would get this feeling of being removed from myself and from everybody else" (2).

A Specific Phobia is an intense fear of something that poses little or no actual danger. The level of fear is usually inappropriate to the situation and is recognized by the sufferer as irrational. This inordinate fear can lead to the avoidance of common, everyday situations (1). "I'm scared to death of flying, and I never do it anymore. I used to start dreading a plane trip a month before I was due to leave. It was an awful feeling when that airplane door closed and I felt trapped. My heart would pound and I would sweat bullets. When the airplane would start to ascend, it just reinforced the feeling that I couldn't get out. When I think about flying, I picture myself losing control, freaking out, climbing the walls, but of course I never did that. I'm not afraid of crashing or hitting turbulence. It's just that feeling of being trapped" (2).

It is interesting to examine how something that evolved to help us survive can, when out of balance, impede our daily life. Studies have shown that people with panic disorders might have a serotonin deficiency, or that serotonin isn't being used correctly by the body. Experts believe that anxiety disorders are caused by a combination of biological and environmental factors (4). In general, there are two types of treatment available for anxiety disorders: medication and psychotherapy, the latter including behavior therapy, cognitive therapy, and relaxation techniques. The goal of behavior therapy is to modify and gain control over unwanted behavior. Cognitive therapy is aimed at changing unproductive or harmful thought patterns. Relaxation techniques help individuals develop the ability to deal with the stress that triggers anxiety, as well as with some of the physical symptoms associated with it (5).

However, because anxiety is something that everyone experiences at some point, many people and certain cultures do not consider it an illness or a problem. They may see it as a personality flaw or a lack of self-control and willpower. As Irina Moissiu argued in her paper, "If it was not enough to subject adults to these ridiculous, socially constructed illnesses, we have decided to put our children through the same traumas" (3). As with any other disease, it is entirely possible for anxiety problems to be misdiagnosed. However, I think it would be more traumatic for a child to have the problem ignored and to become overwhelmed as the anxiety escalates than to have it recognized early and to be taught, through psychotherapy, how to deal with it. Those who argue that anxiety disorders are not a problem do not realize that the frequency, intensity, and type of anxiety a person with an anxiety disorder experiences is very different from the usual nervousness most stressed-out individuals feel from time to time. Thus it is unfair to compare the two on the same level.


References

1) Anxiety Disorders Association of America

2) National Institute of Mental Health

3) Anxiety Disorders

4) The Physiology of Panic Disorders, Part II

5) Treatments for Depression, Anxiety Treatments, and Stress Relief

6) Anxiety Disorder Association of Ontario

7) The Anxiety Panic Internet Resource


The Self; Social Yet Biological
Name: La Toiya L
Date: 2004-02-28 04:05:19
Link to this Comment: 8550



Socialization affects self-image in many ways. Socialization is how we learn and internalize the norms of our culture, taking in the values and beliefs we are supposed to follow in order to develop a sense of who we are. Scientists and sociologists debate whether nurture or nature does more to shape our sense of self, and whether intertwined sociological and biological theories better explain how and what is affecting us. What about these effects contributes to how you feel about yourself?

Nature versus nurture: which one forms our self-image? In the late nineteenth and twentieth centuries, many scientists believed the stronger case lay with nature. Many supported Charles Darwin's theory of the survival of the fittest. This theory is often misinterpreted as meaning survival of the strongest, but it is much more than that. What Darwin actually meant by "fittest" was the best possible fit between organism and environment: the organism that fits best is the one most capable of adapting and using its strengths to meet the challenges presented to it. (1)

In today's constantly evolving society, humans, the social beings we are, must rethink and reevaluate how we socialize and how we equip ourselves to do so. In our rapidly changing societies, the fittest persons will be those who survive through adaptation to social norms, knowledge, and conceptualizations. In this light, knowledge, and how we use it to socialize and adapt, is key. Thomas Spencer, a behavioral psychologist, has said, "The average worker of today will probably have to relearn his job five different times in his career." And he could be underestimating significantly. Marshall McLuhan put it another way: "The future of work now consists of learning a living rather than earning a living."(1)

A study was done on twins who were separated at an early age. This experiment was supposed to reveal how heredity and social environment help form behavior; it concluded that nature and nurture shape us equally. Combining ideas and theories in this way helps us understand and clarify things. Another way of approaching this question is through the lens of sociobiology, in which people - including Darwin himself - have speculated on how our social behaviors (and feelings, attitudes, and so on) might also be affected by evolution.(3) Sociobiology integrates theories and research from biology and sociology in an effort to better understand human behavior. Its main idea is that biology - genetics and physiology - helps develop our characteristics. An example that demonstrates this is the process of early childhood socialization. At conception and during prenatal development, our DNA can already tell us what our sex, race, skin color, hair color, and eye color will be. Researchers believe that during the first two to three years of life, a baby's brain is like a vacuum, ready to receive any knowledge available.

How does socialization affect self-image? Self-image is based on personality: a person's attitudes, feelings, and behavior. There are three different parts to personality: the id, the ego, and the superego. In Freudian theory the superego is the division of the unconscious that is formed through the internalization of the moral standards of parents and society, and that censors and restrains the ego.(2) The id consists of the basic drives essential to life; the ego is what balances the demands of the id against the restraints of the superego; and the superego acts as the conscience. According to Freud, socialization was due to internal factors, not the environment.

Socialization is different for every person. For women, for instance, socialization has changed: women were once viewed as passive objects, but have since become role models. The norms of the culture once held that women should not think too much and should just stick to their daily chores, and their lives would be fulfilled. But women are now socialized to do things for themselves in society, and they have aspirations in life.

In my culture, the norms and beliefs that we were taught are opposite to the type of person I am now. When you're a child you are taught that everything your parents teach you is right and that they are never wrong, even though they know you'll eventually realize no one is always right. As a child I was a bit troublesome with all my questions and my testing of what I was told. For example, when I was told I couldn't wear pants to church, I was the brat who would rebel and sit with my legs open in a skirt until my mom let me wear pants to church. That personality, in a much more mature form, is still a part of who I am. My ability to think freely is what liberates me, and it also gives me the strength and foundation to challenge myself in different situations. How I carry myself, and more importantly how I challenge myself, reflects not only my socialization but also my behavior. Socialization affects us in many ways far beyond the visible. Our individual socialization patterns shape our mentalities. The things we individually experience in society directly affect our minds, which explains why our minds register and react differently to the incidents and situations we encounter.


References

1)Waking Up in the Age of Information,

2)Dictionary.com,


3)Sociobiology, Excellent site!!! Very well written :)

The writings of Charles Darwin on the web


Sociobiology Another great site if you like - or want to like - sociobiology.


LSD- Origins and Neurobiological Implications
Name: Michael Fi
Date: 2004-02-28 18:14:45
Link to this Comment: 8555



D-Lysergic acid diethylamide, commonly known as LSD, was discovered serendipitously in 1938 by Dr. Albert Hofmann during an attempt to synthesize coramine, a circulatory and breathing stimulant. (1) The compound was considered chemically uninteresting and was ignored until 1943, when Hofmann, while reopening his research on lysergic compounds, accidentally ingested some of it and "suddenly became strangely inebriated. The external world became changed as in a dream. Objects appeared to gain in relief; they assumed unusual dimensions; and colors became more glowing. Even self-perception and the sense of time were changed." (2) How was it that Hofmann, who subsequently became the father of psychopharmacology, hallucinated after ingesting d-lysergic acid diethylamide? How was his perception of reality changed? Most importantly, how did LSD affect his central nervous system, physically and otherwise, in order to bring about these effects, and what do these effects imply about the central nervous system and the neurobiology of behavior as they relate to an alteration or a divergence of consciousness?
Physical Response
LSD is a molecule comprised of four fused rings and three notable functional groups: two ethyl groups and a methyl group. Its structure bears a striking similarity to that of serotonin, a molecule principally responsible for the determination of mood. (3) This structural similarity offers a useful explanation for the brain's receptivity to LSD. Carbon-14 labeling of ingested LSD shows that about 10% of the LSD molecules ingested by a subject pass through the blood-brain barrier and bind to serotonin receptors in the hypothalamus. (4) The hypothalamus is part of the limbic system, which has a diverse array of functions associated with homeostasis, movement, and, more importantly here, emotion and the organization of responses. (5) Once an LSD molecule binds to a serotonin site, it alters the responsiveness of the subject's neurons. A hallucinogen produces the sensory distortion known as hallucination by lowering the threshold at which nerves produce a response signal: neurons that normally require a large chemical stimulus to produce a signal instead fire at the slightest chemical prompting. (6) This increased volume of neuron activity and signaling means more sensory information is being sent to the brain than it can handle.
The consequence of this mechanism is that LSD molecules, when introduced into the system, can become inhibitors of serotonin. This may cause depression, depending on other factors. However, non-hallucinogenic LSD derivatives such as 2-brominated LSD can be used as serotonin inhibitors to control chemically based psychological disorders. (7)
Consciousness and Mind Expansion
If the hypothalamus, a center of organizational control and emotion, is adversely affected by the binding of LSD to its serotonin receptor sites and begins functioning irregularly, the outward effects of LSD seem sensible. However, this explanation of neurochemical phenomena barely begins to address the idea of altered and different forms of consciousness. Once one becomes able to see sounds and hear smells, and to experience a trip outside of one's normal neurological configuration, one could truly say one has experienced a different form of consciousness. (8) Could thoughts generated during an acid trip have been generated under "normal" conditions? If consciousness is merely a function of the pattern or manner of impulse generation and reception, can consciousness be electrically manipulated?
The most profound manifestation of this difference in consciousness is the flashback, in which an individual returns unexpectedly to the mental state of an acid trip. It is unclear whether residual LSD molecules are involved, but a flashback, with its deviation from an individual's perceived reality, provides an excellent juxtaposition between the individual's normative consciousness and the consciousness generated by LSD. The flashback also introduces the idea of an LSD placebo of sorts: a brain can generate an LSD-like state of consciousness without the aid of the drug itself, showing an ability to redirect the processing of neuronal impulses in ways usually thought to be automatic.
Ultimately, the barrier to LSD research is the inherently philosophical nature of the drug itself (not to mention its illegality). The realms of consciousness reserved for psychology have yet to be blended with the realms of neurophysiology and biochemistry. LSD is peculiar among drugs in that it produces emotions and sensations that bend ordinary human conceptions of consciousness and defy chemical and scientific description at our current level of scientific advancement.


References

1) "Stanislav Grof interviews Dr. Albert Hofmann, Esalen Institute, Big Sur, California, 1984," MAPS, Volume XI, Number 2, Fall 2001.

2)Hofmann, Albert. LSD- My Problem Child. McGraw Hill: New York, 1980.

3)C. D. Nichols, J. Ronesi, W. Pratt and E. Sanders-Bush, "Hallucinogens and Drosophila: Linking Serotonin Receptor Activation to Behavior," Neuroscience, Volume 115, Issue 3, 9 December 2002, Pages 979-984

4) "Stanislav Grof interviews Dr. Albert Hofmann."

5) David B. Givens, "The Hypothalamus, "Center for Nonverbal Studies, 2001.

6) Anna Bacon, Heather Cagle, Paul Mikowski, Michael Rosol, "The Effect of LSD on the Human Brain," Michigan State University, 1996.

7)See "Stanislav Grof interviews Dr. Albert Hofmann," as well as Watts, Val J.; Lawler, Cindy P.; Fox, David D.; Neve, Kim A.; Nichols, David E.; Mailman, Richard B. "LSD and structural analogs: pharmacological evaluation at D1 dopamine receptors." Psychopharmacology (Berlin) (1995), 118(4), 401-9.

8) National Institutes of Health, "NIDA factsheet," 2003.


In Search of the Neural Substrate of Humanity
Name: Emily Haye
Date: 2004-03-01 18:17:11
Link to this Comment: 8597



Introduction: Ontogeny Recapitulating Phylogeny

The idea that ontogeny recapitulates phylogeny is both a catchphrase and a backbone of evolutionary study. In my search for the neural root of what we call "humanity," I assume that this assertion is true: that the development of an individual organism in many ways mirrors the evolution of its species. If so, then there must be some point in human neurological ontogeny that defines us, a point at which our own development stops mirroring that of our closest mammalian cousins, the chimpanzee and the other higher apes, and begins to mirror the next leg of our phylogeny: the evolution of the extinct hominids. By the same logic, there must then be a point in our ontogeny at which we separate from even the most recent of these ancestors. This point, the one mirroring the moment at which we became fully human, or distinct in derived characteristics from our hominid ancestors, must arise from a specific neurodevelopmental event, likely the full maturation of a specific and unique brain structure. It is this point that can lead us to the neural substrate of humanity. What is this derived characteristic? Are there many? What are their neural correlates? This is what I hope to discover.

Human Development: Ontogeny ((1), (2))

I will use the evolution of a child's play to represent the development of general cognitive function. I do this because, as a child's play evolves with age, it becomes increasingly symbolic, and we will see that this symbolism is integral to finding the neural substrate of humanity.

From the age of eighteen months to approximately three years, a child's play develops within certain parameters. Toddlers are very physically oriented; they do not so much play as do. They imitate: people, dogs, birds, cars, anything exhibiting interesting actions or sounds. This imitation is not symbolic, however. The child is not pretending to be a dog. Rather, she is making the sound a dog makes. She is getting to know what "dog" is in her environment. This may seem to be a minor distinction, but it is an important one.

During the toddler phase, a child is also beginning to develop the capacity for language (3). This, too, follows a pattern of imitation and doing. The language of a toddler progresses from babbling and repetition to the expression of simple mental states, like hunger, in simple verbal phrases, usually lacking syntax (i.e., "want milk"). This lack of syntax correlates with a lack of the fully symbolic nature of language. The child is very oriented, both in speech and play, around the present. She imitates things she has recently seen or heard and names things within her immediate environment. She may recall these things from memory, but her relationship with the world is not yet fully matured, and therefore her capacity for symbolic cognition is not fully developed.

Around age three, a child begins to play pretend. She is still tied to the present in that she needs props for her games; she does not mime a phone but rather needs a play phone in order to pretend to have a phone conversation. In this way, she is not yet fully removed from the constraints of time and place, as are cognitively mature humans. At this stage, the child will also begin to assume roles in her play, but they are ones of concrete and immediate importance: mommy, daddy, and baby. This development is significant because symbolic representation, or the association of meaning with arbitrary symbols, be they auditory or pictorial, is a capacity only of the evolved intellect ((18)). There is, however, still some confusion between reality and fantasy. She is beginning to be able to leave the present in her games of pretend, but they are not sustained, they require props, and she does not cognitively distinguish them fully from reality.

It is important to the understanding of this stage to know what is happening in the child's development of language. During this time, the child's language is becoming more complex, her syntax more complete. She now expresses her desire for milk by saying "give me milk" or "I want milk": sentences rather than phrases, with subjects (one of them implied), verbs, and direct and indirect objects. This is far more advanced than the "want milk" of a toddler. While this example does not correlate directly to the examples of play development above, the more complex language it represents does. One of the major results of human language is that it allows us to be free of present time and space, in that we can discuss things absent, past, future, and intangible ((19)). This allows us to think (which we do in language) about what is going on, to project into the future the consequences of a present action, and to evaluate risk and benefit. In other words, we are freed, to some degree, from the reflexive and instinctive reactions of other animals. We are able to willfully control our responses to certain stimuli. If you extend this idea of freedom from the immediate, you will see the basic outline of how humans migrated out of the tropics, tamed the environment through agriculture, and developed art, religion, philosophy, etc. So again, while the "give me milk" of a three-year-old may seem a menial developmental step, it is an important one on her path to full cognitive maturity.

The ages of four and five bring the culmination of the child's cognitive development. By the end of these phases, she has the basic capacity for mature human cognition; in other words, her capacity for symbolic representation is complete.

Around age four, a child's distinction between reality and pretend, which was cloudy at age three, solidifies. She begins to exhibit "sophisticated role-taking;" the family in a game will expand to include the dog and cat ((1)). She becomes less physically constrained in her play. She no longer needs a toy phone to hold an imaginary conversation, but may instead pretend that a banana or a block is a phone. Also, with the full language of her age, the child can engage in "cooperative play" ((1)) in which the idea for a game is communicated among and shared by all the players. Up until this point, play was "parallel;" while several toddlers may be playing in the same vicinity, they are not sharing their games. The full language capacity that comes around age four is integral in the development of cooperative play; the children can now express their own imaginings and plans to the others in order to engage them.

Age five sees more complex games of pretend and cooperative play, but also the important arrival of our final developmental step: the ability to solve problems verbally. At this age, a child is able to put complex mental activity, like the desire for a toy (rather than a survival basic like food), into words and to use her words to obtain said toy (to solve a problem, namely not having the toy). To an adult, this seems an obvious use for language, but in the development of a child it is a huge step. Having developed the ability to use the full symbolic character of language in communicating about imaginary games and in solving problems, we will call the child, for our purposes, cognitively mature. There are many other steps in cognitive development leading to a fully mature adult mind, but for our purposes these can be ignored. We will see why shortly.

What We Know About Apes and Hominids: Phylogeny

The cognitive faculties of a human two-year-old have been compared to those of a chimpanzee, in that both operate using a "general intelligence," or "simple, general-use computer program" about the world ((4)). Like a two-year-old, a chimpanzee may know what a phone is and what a banana is, but neither would use a banana to represent a phone. Here, we see a divergence in the ontogeny and phylogeny of humans: there is evidence that our closest living mammalian relative is the chimpanzee, but very early in our own development (at age four) we diverge cognitively from this close relative. This means that the human capacity to "use" a banana as a phone (in other words, our ability to pretend or imagine) is a derived characteristic; it is not shared with our close relative. But is this characteristic derived only from the chimpanzee, or from our hominid ancestors as well?

This is not an easy question to answer. We cannot put an extinct hominid in a lab, expose him to bananas and telephones, and then see if he talks into the banana as he had seen people do into a phone. However, there is evidence we can use to deduce whether a hominid would have been cognitively capable of doing this.

Language is the tool I will use to deduce whether a hominid would have been capable of the banana/phone trick. I will do this in a roundabout fashion, without analyzing the endocasts of various extinct hominid species. Rather, I will use two major pieces of archaeological evidence to glean a general idea of hominids' capacity for language.

KNM-WT 15000, or Nariokotome Boy ((5)), is a hominid specimen that has clarified, for some, the issue of whether his species, Homo ergaster, was linguate (the word KNM-WT 15000 expert Walker (6) uses to mean "having the capacity for full language"). It was originally thought that a Broca's area was sufficient evidence for linguacy in hominids. It was known that Broca's area played some role in human language, and it was therefore assumed that a bump in the region of Broca's area on a hominid endocast was evidence of language in that species.

Walker's study of Nariokotome Boy changed this hypothesis. With the advent of PET scans, it was discovered that while Broca's area is involved in human language, it is not the only center of high metabolic activity during language tasks; in other words, Broca's area is not solely responsible for language. This meant that Nariokotome Boy, though he had a Broca's area, was not necessarily linguate. Close osteological study revealed that the foramina in KNM-WT 15000's thoracic vertebrae were significantly smaller than those in modern humans. This implied that Nariokotome Boy did not have the capacity for the complex muscle control, specifically of the diaphragm, needed to produce the full range of sounds involved in human speech. If he did not have the capacity for human speech, then he certainly did not have the capacity for human language. (6)

The next piece of evidence is the center of heated anthropological debate: the date of the appearance of anatomically modern humans. (I refer to the early members of our species as "anatomically moderns" in order to avoid entanglement in the Homo sapiens v. Homo sapiens sapiens argument (7).) Anatomically moderns have been dated by some as early as 100,000 years ago in Africa and Asia, where the evidence for this speciation is the appearance of tool industries not associated with preceding hominid forms (7). The emergence of anatomically moderns in Europe is generally dated about 50,000-60,000 years later, when they replaced or evolved from Neandertals ((7), (8)). For my purposes, I assume that anatomically moderns appeared at the later date, 40,000 years ago, subscribing to the school of thought that describes human evolution in terms of two "out of Africa" waves and implying that Homo neanderthalensis is a distinct and extinct side branch of human evolution, rather than a direct evolutionary predecessor to H. sapiens. Within this context, while the appearance of bone and more advanced stone tool industries is evidence of higher cognitive function in their makers, it is not significant enough to place anatomically moderns at the dates coinciding with these tools (7). Rather, the event marking the appearance of anatomically moderns occurred in Europe around 40,000 years ago: what is known as a "cultural explosion" ((9)). In simplest terms, art appeared.

This is why art, rather than advanced tool making, is the defining behavior of anatomically moderns: art is symbolic representation. Symbolic representation is a cognitive behavior possible only with fully developed language (itself evidence of symbolic representation, as discussed in the section on ontogeny). Nariokotome Boy, who, remember, was not capable of human language, was a member of Homo ergaster, a species often grouped within Homo erectus. In the timeline of hominid evolution ((10)), H. erectus is the species directly preceding H. sapiens. It has been concluded that the evolution of the human vocal tract, necessary for full speech and therefore for language, would have been slow ((11)). Also, as the appearance of the human vocal tract would be a derived characteristic, worthy of attributing any specimen with a human vocal tract to a species separate from H. erectus, I conclude that this species is H. sapiens. Anatomically moderns, and they alone, are capable of full human speech and therefore of human language. It follows, then, that only anatomically moderns are capable of the symbolic representation that removes them from chronological, spatial, and biological immediacy (see the section on Ontogeny), and therefore they are the creators of the art appearing 40,000 years ago. In other words, we have found our derived characteristic: symbolic representation and its resulting independence from chronological, spatial, and biological immediacy. This would have allowed for agriculture, which emerged approximately 10,000 years ago ((12)), as well as science, philosophy, religion, etc., all of which require language and the capacity for symbolic representation.

Conclusions: Recapitulation and Neural Correlates

I have demonstrated that symbolic representation, as manifested in art and language, is the derived behavioral characteristic of humanity, as it is the basis for all other things which we consider to be "human": science, math, philosophy, religion, theology, civilization (which is based on agriculture), etc. The appearance of the full capacity for symbolic representation late in human ontogeny (not until ages 4-5) implies that this capacity arose late in human phylogeny. I have demonstrated that this is indeed true, with the capacity for symbolic representation being a faculty of anatomically modern humans alone. I have discussed some behavioral manifestations of symbolic representation: language, art, religion, etc. This raises the question: If brain equals behavior, then what is the neural correlate of these behaviors? What is the neural substrate of these solely human behaviors?

Recent research has demonstrated that the human brain has no more cerebral cortex than would be expected of a primate of our brain size ((13)). However, the human encephalization quotient (EQ) is 7.44, which means that, for body size, humans are more than seven times as encephalized as would be expected ((14)). But simply having a lot of brain can't account for symbolic representation; there must be something unique about this large quantity of brain that is the correlate.
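The EQ figure cited above comes from a simple ratio: observed brain mass divided by the brain mass expected for a typical mammal of the same body mass. A common form of the expected-mass rule is Jerison's allometric formula, expected mass ≈ 0.12 × (body mass)^(2/3), with masses in grams. A minimal sketch follows; the human brain and body masses plugged in (1350 g and 60 kg) are round illustrative assumptions, not the exact values behind the 7.44 figure:

```python
def expected_brain_mass_g(body_mass_g: float) -> float:
    """Jerison's expected brain mass for a typical mammal, in grams."""
    return 0.12 * body_mass_g ** (2.0 / 3.0)

def encephalization_quotient(brain_mass_g: float, body_mass_g: float) -> float:
    """EQ = observed brain mass / expected brain mass for that body size."""
    return brain_mass_g / expected_brain_mass_g(body_mass_g)

# Approximate human values, assumed for illustration: 1350 g brain, 60 kg body.
human_eq = encephalization_quotient(1350, 60_000)
print(round(human_eq, 2))  # lands in the neighborhood of 7, near the cited 7.44
```

Different sources use slightly different coefficients and body-mass estimates, which is why published human EQ values cluster around 7 rather than agreeing exactly.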

Significant research has been done on the prefrontal cortex in humans and great apes in an effort to discern differences. In 2001 Semendeferi et al. published their findings regarding Area 10 of the prefrontal cortex, one of the regions to which "higher cognitive functions such as the undertaking of initiatives and the planning of future actions" have been attributed ((15)). Semendeferi et al. discovered that the GLI (gray-level index) of humans is unique among hominoids (humans and great apes). This means that humans have more room for connections among neurons than do the great apes. To me, this implies the following: that higher cognitive functions, facilitated by symbolic representation, arise from the huge number of connections in the human brain. Somehow, this must relate to, if not solely be, the neural substrate of humanity.

While research on the uniqueness of the human brain seems to be concentrated in the prefrontal and visual cortexes, I would assert that the temporal lobe may yield interesting findings, as well. Dr. V.S. Ramachandran is conducting fascinating research at the University of California, San Diego, about the role of the temporal lobe in human spirituality ((16), (17)). I have argued that the practice of religion is one of the uniquely human behaviors made possible by symbolic representation. As spirituality is the foundation of religious practice it is likely that some important findings regarding symbolic representation could result from further study of the temporal lobe.

I am in no way educated enough in the methods and knowledge of modern neuroscience to be able to draw a highly credible conclusion about what I am calling the neural substrate of humanity. In accordance with the research that I have done, however, it seems to me that all things which we consider to be human, all things we do in excess of survival, are facilitated by or directly associated with symbolic representation and language. It makes sense to me, then, that the neural correlates for these behaviors, being the result of a complex and advanced cognitive function, would lie in the areas associated with higher cognitive functioning, namely the frontal, and as I have suggested, temporal lobes. It seems, also, that the huge EQ of humans and the large degree of connection shown by Semendeferi et al. would have something to do with the generation of these higher functions.


References


1) Dehouske/Schomburg, educational chart on human cognitive development, Carlow College, 3/15/80.

2) Personal interview with Nancy Hayes, Masters Equivalent in Early Childhood Education, 2/26/04

3) The Development of Children, 2nd Ed, Michael and Shelia Cole. Scientific American Books, New York: 1993.

4) Patricia Greenfield, in The Prehistory of the Mind, Stephen Mithen. Thames and Hudson, New York: 1996.

5) Nariokotome Boy. A description of specimen KNM-WT 15000.

6) The Wisdom of the Bones: In Search of Human Origins, Alan Walker and Pat Shipman. Knopf, New York: 1996.

7) Human Evolution: Summary of the Debate

8) Indiana University, Archaeology Page

9) The Prehistory of the Mind, Stephen Mithen. Thames and Hudson, New York: 1996.

10)

11) "On the Nature and Evolution of the Neural Bases of Human Language," Philip Lieberman. Published in Yearbook of Physical Anthropology 45:36-62, 2002.

12) a paper on the Neolithic Agricultural Revolution

13) Development of the Cerebral Cortex

14) Comparative neuroanatomy site on Serendip

15) "Prefrontal Cortex in Humans and Apes: A Comparative Study of Area 10," Katerina Semendeferi et al. 2001. Available at:

16) "A 'God-module' in the human brain?" Published in: Perspectives: A Journal of Reformed Thought v.14 n.2 (1999) p. 17, 23. Available at:

17) Phantoms in the Brain, V.S. Ramachandran, M.D., Ph.D, and Sandra Blakeslee. William Morrow and Company, Inc, New York: 1998.

18) Davis, Rick, November 22, 2002. Class notes from Anthropology 101 at Bryn Mawr College, Bryn Mawr, PA.

19) The Ape That Spoke: Language and the Evolution of The Human Mind, John McCrone. William Morrow and Company, Inc, New York: 1991.


SSRI's: Successes and Questions
Name: Mariya Sim
Date: 2004-03-05 18:59:51
Link to this Comment: 8713


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


If you go to any pharmacy on the Main Line and look behind the counter, where they keep the frequently-asked-for drugs, you will see that, apart from one or two birth control brands, all the dispenser labels read "Amitril," "Prozac," "Zoloft," "Paxil," "Lexapro," etc. There is nothing special about the Main Line; it does not have an uncanny ability to attract emotionally imbalanced people. Its pharmacies simply reflect a worldwide and perpetually worsening epidemic called depression. Called by some "the cancer of emotions," depression affects approximately twelve percent of American women and eight percent of American men in their lifetimes (1). Although the causes of depression are unknown, a range of effective antidepressants is available and is widely used by psychiatrists to treat various subtypes of depression. Moreover, it is frequently said that pharmacological treatment of depression has a great advantage over a purely psychotherapeutic approach, and that lasting effect and remission can only be achieved if the patient combines an antidepressant with therapy (2).

There are several classes of antidepressants, but I would like to focus on the selective serotonin reuptake inhibitors (SSRIs), which were developed in the late 1980s (3) and which are, perhaps, the most popular antidepressants currently in use. The reason for this popularity is not so much their effectiveness as compared with other drugs (for example, monoamine oxidase inhibitors or tricyclic antidepressants) – studies show that their efficacies are similar (4) – but, rather, the SSRIs' significantly smaller range of side effects (5). Patients taking SSRIs are more likely to complete the full course of treatment and, therefore, are more likely to reach remission.

This relative safety and tolerability of SSRIs are due to their selective action. Most antidepressants work by reestablishing communication between neurons through increasing the available level of neurotransmitters in the synaptic cleft. While other antidepressants affect several factors in the communication process (and some of their actions are unclear to researchers), the SSRIs' action is focused strictly on the reuptake of serotonin by the presynaptic neuron, leaving less leeway for possible side effects. By inhibiting the work of the serotonin reuptake transporter of the presynaptic cell, SSRIs increase the level of serotonin in the synaptic cleft, thus increasing both the time during which serotonin can bind to the postsynaptic cell's receptors and the quantity of serotonin molecules in the cleft. (3)
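The reuptake mechanism described above can be caricatured in a toy kinetic model (this is not a pharmacological simulation; the release rate, rate constant, and inhibition fraction are all invented for illustration): synaptic serotonin rises at a constant release rate and is cleared in proportion to its concentration, so blocking a fraction of the reuptake transporters lowers the effective clearance constant and raises the steady-state level in the cleft.

```python
def steady_state_serotonin(release_rate: float, reuptake_k: float,
                           inhibition: float = 0.0) -> float:
    """Steady state of ds/dt = release_rate - (1 - inhibition) * reuptake_k * s.

    `inhibition` is the fraction of reuptake transporters blocked (0 = no drug).
    All numbers are illustrative, not physiological.
    """
    effective_k = (1.0 - inhibition) * reuptake_k
    return release_rate / effective_k

baseline = steady_state_serotonin(release_rate=1.0, reuptake_k=0.5)
with_ssri = steady_state_serotonin(release_rate=1.0, reuptake_k=0.5, inhibition=0.6)
print(baseline, with_ssri)  # blocking 60% of reuptake raises the steady state
```

The model also hints at the puzzle discussed later in the paper: the steady-state shift is immediate once the transporter is blocked, so the weeks-long therapeutic lag must come from slower downstream adaptations, not from the reuptake kinetics themselves.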

Although patients treated with SSRIs often reach recovery, the assertion that serotonin deficiency is at the root of depression is not only arguable but most likely false. Several studies have shown that, although the connection between serotonin and depression is evident, it is by no means clear how the level of this neurotransmitter affects the condition, or whether it is even an important factor for all patients. Thus, the tryptophan depletion test, which allows researchers to reduce the level of serotonin in test subjects, shows that only 50 percent of healthy subjects with a prior history of depression suffered a relapse after their level of serotonin dropped. Evidently, this parameter is essential only for about half of depressed patients. Moreover, depression was not induced in the healthy subjects who had never had depression prior to the test, which suggests that serotonin deficiency is not the cause but, most likely, itself one of the effects of depression. (1)

Another interesting dilemma is the fact that symptoms of depression can be alleviated not only by inhibiting the reuptake of serotonin, thus increasing its level in the synaptic cleft, but also by enhancing reuptake, thus lowering the level of the neurotransmitter. For instance, tianeptine, a drug available in Europe, is as effective as most antidepressants, but its mechanism of action is the opposite of the SSRIs'. (1), (9) This once again highlights our current ignorance both of the cause of depression and of the exact pathways of effective treatments.

Yet another SSRI mystery is the time needed for them to take effect. In vitro studies show that they stop the reuptake of serotonin into presynaptic neurons as soon as the drug's level in plasma reaches the needed mark (which should take 2-8 hours) (4), but the actual therapeutic effect of alleviating depressive symptoms does not show until 2-6 weeks after the start of treatment. (7), (8) This discrepancy may, in fact, provide an insight into the various possible causes of depression that either stem from or influence the level of serotonin in the synaptic cleft.

One hypothesis that attempts to account for this time discrepancy proposes that the increased amount of serotonin in the synaptic cleft activates both the postsynaptic receptors and the autoreceptors of the presynaptic cell. The latter decrease the level of serotonin released by the presynaptic cell, not allowing serotonin to build up in the cleft. Over time, the autoreceptors become desensitized, the serotonin release is increased, and the therapeutic effect of the drug is noticeable. (11)

Another proposed account is that the therapeutic lag may be related to the number and sensitivity of postsynaptic receptors. In depressed patients, the 5-HT (serotonin) receptors of the postsynaptic cell are up-regulated to compensate for the lack of serotonin. Studies show that SSRI treatment causes down-regulation of the receptors at first, and then their activity finally becomes balanced. It is interesting that the time it takes for the receptors to begin functioning normally is consistent with the length of the therapeutic lag. (11) This hypothesis suggests that the causes of depression are connected not to the level of serotonin but, rather, to receptor activity.

But perhaps the most interesting story currently used to account for the SSRIs' therapeutic lag is the connection between serotonin and the hypothalamic-pituitary-adrenal (HPA) system, which is involved in the human stress response. Stressful events cause neurons in the hypothalamus to release corticotrophin releasing factor (CRF), a 41-amino-acid neuropeptide, into the blood. CRF affects the anterior pituitary, which responds by releasing adrenocorticotrophic hormone (ACTH), which is then transported to the adrenal gland, where glucocorticoids (cortisol in humans) are produced. Cortisol, in its turn, influences the anterior pituitary, the hypothalamus, and the hippocampus through glucocorticoid receptors – a negative feedback process that maintains a normal level of cortisol in the nervous system. In response to stressful events, cortisol levels rise, providing the organism with extra energy and increasing alertness. CRF-producing neurons are found not only in the hypothalamus but also throughout the central nervous system – in the cerebral cortex, the amygdala, and the brain stem (including the locus ceruleus and the raphe nuclei – the sites of origin of norepinephrine and serotonin neurons). In addition to regulating the release of ACTH, CRF appears to function as a neurotransmitter, mediating the endocrine, immune, autonomic, emotional, and cognitive responses to stress. (1), (10)

Studies document a substantial increase in the levels of CRF, ACTH, and cortisol in depressed patients, as well as an anatomical increase in the number of CRF-producing neurons. These constantly high levels cause the downregulation of glucocorticoid receptors ("glucocorticoid resistance"), and this imbalance may lead to the development of depressive symptoms, although the exact reasons for that are not clear. (1), (10) Imbalanced glucocorticoid receptor activity is also thought to decrease cell resilience, increase cellular death, and decrease neurogenesis, ultimately leading to decreased hippocampal volume (8). Laboratory animal studies show that antidepressants, including SSRIs, normalize glucocorticoid receptor activity and indirectly influence cell survival and cell plasticity. These effects of antidepressants take about 2 weeks to develop, which may explain their therapeutic lag in humans. (1), (8), (10) Antidepressants, including SSRIs, may not only eliminate depressive symptoms but also help reduce stress vulnerability. However, studies suggest that chronic antidepressant treatment is needed in order for these effects to be stable. (1), (8)

Several important lessons can be drawn from the history of the use of SSRIs. First of all, contrary to popular assumptions, the existence of a successful (or partially successful) drug does not imply that medical researchers are clear on the origin of the disease. It is far more likely, as is the case with antidepressants, that effective treatments will be developed as a result of chance observation, and that the existence of appropriate drugs and the ability to monitor their effect on the patient will ultimately lead to an understanding of the causes of the illness. Secondly, the widespread prejudice against "dirty drugs" is unwarranted: drugs that affect not one or two but several biological processes (often including poorly understood ones) are not necessarily worse than "clean drugs," which exert influence over a much smaller range of biological processes. Indeed, the "dirtier" tricyclic antidepressants may be more effective in some cases than the "clean" SSRIs. Thirdly, the inability of SSRIs to effectively cure about half of patients suggests that there cannot be a single universal medication, whether for depression or for other illnesses, due to biological differences between humans. Fourthly, the study of the successes and, more importantly, of the failures of SSRIs has led researchers to surmise that what we term depression may be not one but, in fact, multiple disorders, having distinct paths of origin and, consequently, necessitating different treatments. The popular assumption that SSRIs may be a cure-all for depression is thus challenged. Fifthly, and most provocatively, the inquiries into the SSRIs' actions suggest that there is no single cause even for individual subtypes of depression but, rather, that multiple processes – environmental, genetic, intracellular – in multiple parts of the brain combine (and this combination may differ between individuals) to produce depressive symptoms. 
Overall, serotonin imbalances may be just one of the "final pathways" (1) that the multiple causes of depression take, and it is lucky that modifying secondary serotonin imbalances may affect primary processes involved in depression. All of this taken into account, there is a need to continue searching for new understandings of the causes of depression and to develop new medications that retain serotonin-regulating effects but whose mechanisms of action also directly modify other imbalances involved in depressive disorders.

References

Web Sources:
1) Noha Sadek, MD, Charles B. Nemeroff, MD, PhD. "Update on the Neurobiology of Depression.", on MedScape site (from a collection of articles from Clinical Update, an online journal for continuing education of medical professionals).
2) All About Depression, a website with a general review of the causes and treatments of depression.
3) Charles B. Nemeroff, MD. "Neurobiology of Depression." Scientific American, June 1998. Web access to the archived article available from BMC campus.
4) Stuart A Montgomery. "Selective Serotonin Reuptake Inhibitors in the Acute Treatment of Depression.", on The American College of Neuropsychopharmacology site.
5) Barbui C. et al. "Treatment discontinuation with selective serotonin reuptake inhibitors (SSRIs) versus tricyclic antidepressants (TCAs).", on MedScape site (from a collection of articles from WebMD Scientific American® Medicine online textbook for continuing education of medical professionals).
6) Thomas AM Kramer, MD. "Mechanisms of Action.", on MedScape site (from a collection of articles from Medscape General Medicine online journal for continuing education of medical professionals).
7) "Depression: Beyond the Catecholamine Theory of Mood.", a comprehensive site developed for a University of Plymouth psychology course.
8) Husseini K. Manji et al. "The Cellular Neurobiology of Depression.", from Nature Medicine online.
9) David Gutman, BS, Charles B. Nemeroff, MD, PhD. "The Neurobiology of Depression: Unmet Needs.", on MedScape site (from a collection of articles from Clinical Update, an online journal for continuing education of medical professionals).
10) Juan F. Lopez, MD. "The Neurobiology of Depression.", on The Doctor Will See You Now website.
11) "Neurobiology of Depression.", a handout for a San Diego State University psychology course with a comprehensive discussion on neurobiology of depression.


An Ethical Minefield: Stem Cells
Name: Allison Br
Date: 2004-03-06 03:04:45
Link to this Comment: 8714


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Stem cells derived from either embryos or adults do not constitute human life. Therefore, stem cells should not be afforded the same protection as human life. The purpose of my analysis is to examine particular ethical questions surrounding stem cell research. Though I am fully aware of the benefits and risks of stem cell research, I am not going to explore the science or results of various research studies.

Stem cells provide the foundation from which every organ, tissue, and cell in the body develops. Three major types of stem cells exist: totipotent, pluripotent, and multipotent. Totipotent cells contain the complete genetic information needed to manufacture all the cells of the body as well as the placenta. Totipotent cells are present only in the first stage after the egg has been fertilized; after three or four divisions the cells become increasingly specialized. This second stage of division results in pluripotent cells. These cells are extremely adaptable and have the capacity to develop into any cell type with the exception of the placenta. The further division of pluripotent cells creates multipotent cells. Multipotent cells are far more specialized than the previous two types of stem cells, and therefore can only produce limited cell types. Multipotent cells can generate hematopoietic cells, blood stem cells with the ability to create red blood cells, white blood cells, and platelets, but are unable to develop into brain cells. Terminally differentiated cells are the products of the chain of stem cell divisions. These cells are programmed to serve a specific function. Terminally differentiated cells comprise the embryo (1).

Stem cells can be obtained from embryos as well as from the adult body. Embryonic stem cells are derived from the inner cell mass of the blastocyst, an early embryo consisting of approximately 150 cells. Adult stem cells are commonly retrieved from bone marrow (2). Totipotent, pluripotent, and multipotent cells are all present in the embryo, but only pluripotent and multipotent cells can be found in adults. Embryonic stem cells are highly versatile and, scientists assert, have more potential for research than adult stem cells, because embryonic stem cells have the capacity to generate practically every type of cell in the human body (1). I will focus my analysis of human life only on embryonic stem cells because of the more controversial nature of that debate.

After examining the scientific components of stem cells, I can now analyze the bulk of my assertion, the controversy over human life. In a speech to the American public regarding stem cells, President Bush vowed to "foster and encourage respect for life" (3). The President's reason for not granting federal tax dollars for stem cell research was that "extracting the stem cell destroys the embryo, and thus destroys its potential for life" (3). Herein lies the problem: "potential for life" is just that, potential. It is not in and of itself human life. President Bush does not outright declare stem cells to be human life. To use a rudimentary example, a grape seed is not a grape. Under favorable conditions and with an elapsed period of time, the seed will become a grape, but it is simply not a grape when in the seed stage. It is important to differentiate between the two stages. I will concede that stem cells do hold the potential for life, but destroying the embryo ends this potential. Without a potential for life, stem cells cannot constitute human life regardless of how the potential was destroyed.

In a speech to the Vatican, Pope John Paul II denounces stem cell research based on the fact it, "destroys human life in its embryonic stage," (4). As previously noted, embryonic stem cells are extracted from a 150 cell blastocyst. I do not consider a cluster of 150 cells to be human life. Multipotent stem cells extracted from an embryo are designed to have a prescribed function, but because the cluster of cells as a whole is not developed further the multipotent stem cells cannot and are not functioning. Because the multipotent stem cells, the most specialized form of stem cells, are not functioning, the cluster of cells, in essence, is a blank slate as well as lacking the vital characteristics of human life.

Through analyzing President Bush and the Pope's comments, new questions arise, where does human life begin? Does human life begin when "potential" is realized? What is "potential"? What are the characteristics of human life?

I cannot attempt to provide concrete scientific answers any of these questions, but can explore my opinions and the implications of different opinions/answers. In my opinion, human life begins at birth. Not after a baby's first breath, because I consider stillborn babies to be human. Birth is the act of completely exiting the mother's body, with the exception of conjoined twins, birth implies complete physical separation from other human beings. Having said this, birth is not final until the umbilical cord is severed. A fetus is the term used before birth, for me this distinction elucidates the term potential. The potential for life is a fetus, birth is the realization of this potential, and therefore birth denotes the beginning of human life. From this assessment, I believe the basic characteristic of human life is birth. With the wide range of physical and mental disorders affecting a small percentage of babies, it would be almost impossible to attribute anything else such as: sight, sensing touch or pain, movement, or cognition, as a basic characteristic of human life.

From my political perspective, it is important human life is defined as beginning at birth. If human life was categorically defined before birth there would be sufficient cause to overturn the current federal abortion law. With the exception of state sanctioned murders, intentionally killing another human being is strictly prohibited by all 50 states. If an aborted fetus was determined to be a human, the doctor who performed the abortion would be subject to premeditated murder indictments, and the mother of the fetus would be subject to charges of conspiracy to commit murder.

In conclusion, stem cells provide the foundation for the entire body, but alone stem cells do not constitute human life. Human life is characterized by birth. All stages before birth, including totipotent, pluripotent, and multipotent stem cells, have the potential for human life. The potential for human life is realized at birth. Characterizing human life before birth would give the judicial system an adequate basis to overturn Roe v. Wade, and therefore restrict individual autonomy over one's body.


References

1) The Stem Cell Research Foundation.

2) International Society for Stem Cell Research.

3) A Whitehouse press release dated August 9, 2001, on the official government Whitehouse site.

4) A Vatican press release dated November 10, 2003, on the official Vatican website.


Behavioral Response to Smell: the answer may be un
Name: Sarah Cald
Date: 2004-04-05 00:04:33
Link to this Comment: 9157


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Of the five senses, smell is perhaps the least understood both mechanistically and behaviorally. There are many questions as to why people behave differently, if at all, to certain smells. This difference in behavior may be interpreted as being due to a physical characteristic of the human body. However, it remains to be seen what is responsible for this difference in behavior, the brain or an alternative organ?

General conclusions regarding olfaction can be made using observations, however such conclusions give little insight into the actual mechanism of olfaction and behavioral responses to smell. Regardless, they are a good starting point in exploring these issues. First, we can conclude that odors and smells are perceived in humans through a common pathway. We know this because on some basic level, all humans can agree that certain things smell. For example, we can all agree for the most part that a rose smells—we may not all agree on what a rose smells like, but it does have a scent. Along these lines we can also conclude, generally, that there are different odors which differ somehow in their chemical components causing them to be received differently. For example, the smell perceived from an orange can easily be identified as different from that of gasoline.

In addition to expanding and understanding further the aforementioned conclusions, this paper seeks to understand how humans can receive the same odor and behave differently to it. Gasoline is one example of an odor that elicits different behaviors in different people: many people despise the smell of gasoline saying it causes feelings of nausea, while others find the smell somewhat pleasant. This particular phenomenon intrigues me. More broadly speaking, what is responsible for the behavioral response to odor?

In order to fully explore this question, a better understanding of the mechanism of olfaction is needed. Odorants are collected in the sensory epithelia of humans, located in the upper regions of the nasal cavity ((1)). Odorant molecules are absorbed in the mucus layer of the sensory epithelia where they then travel to receptor cells, which are located on the cilia that line the nasal cavity. These cilia each comprise receptor proteins that are specific to certain odorant molecules ((1)). Binding of an odorant to a receptor protein results in an activation of a second messenger pathway. There are two known pathways to date. The most common, involves the activation of the enzyme adenyl cyclase upon the binding of an odorant molecule ((2)). This enzyme catalyzes the release of cyclic AMP (cAMP). The increase in cAMP levels causes ligand-gated sodium channels to open, causing a depolarization of the membrane. This depolarization results in an action potential ((2)). These electric signals are carried by olfactory receptor neurons through the olfactory bulb. The olfactory bulb then relays information to the cerebral cortex resulting in sensory perception of smell ((2)).

On average, humans can recognize up to 10,000 separate odors ((3)), yet only have about 1,000 different olfactory receptor proteins ((4)). Clearly, there is a step in the pathway of olfaction that allows for combinations of odorant molecules to be organized. This step was found to take place in the olfactory bulb. Within this organ, the activity of different olfactory receptors in combination is used to signal the brain for specific smells ((4)). Richard Axel M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this processing role of the olfactory bulb best:
The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15 and 54, so that must be jasmine ((4)).


Knowledge about the mechanism of olfaction now allows us to explore what is responsible for behavioral responses to odor. My initial answer to this question was the brain. One thing I have learned in our class discussions was that for the most part, behavior is the result of inputs and outputs from the brain and how they are processed. Accordingly, the brain should be responsible for the different behaviors observed in response to smell. However, after exploring and learning about olfaction on a more detailed level, I now believe that the source of behavioral response to odor may lie within the olfactory bulb. One role of the olfactory bulb is to receive signals from odorant receptors and relay that information to the brain. In this way the olfactory bulb is functioning to process and interpret the input signals from odorant receptors and produce corresponding output signals for the brain to subsequently interpret. It seems logical that in processing the inputs from odorant receptors, the olfactory bulb is also producing some type of output that results in a behavioral response.

Further investigation revealed evidence that may support this hypothesis. Signals sent from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought processes; but also signals the limbic system, which generates emotional feelings ((5)). This leads me to question whether the signals sent to the cortex and limbic system are identical or similar in any way? Also, is there a difference in the number of signals sent between the two locations in response to odorant reception? Meaning, do more signals get sent to the cortex when a person smells oranges, compared to the limbic system? All of these questions are worth pursuing; perhaps it is information in the signals sent to the limbic system, which is responsible for the behavioral responses to odor.

There is much about olfaction that remains unclear, particularly about the relationship between behavior and olfaction. To date, there is little evidence that suggests what portion of the body is responsible for behavioral response to odors. Further investigations involving the olfactory bulb may prove to be a worthwhile endeavor.


References

a name="1">1)Monell Chemical Senses Center, an overview of olfaction

2) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.

3)The Mystery of Smell: The Vivid World of Odors

4)The Mystery of Smell: How Rats and Mice—and Probably Humans—Recognize Odors

5)Sensing Smell


Synethesia and the Human Brain: Questions Answere
Name: MaryBeth C
Date: 2004-04-06 00:48:04
Link to this Comment: 9193

<mytitle> Biology 202
2004 First Web Paper
On Serendip

"It had never come up in any conversation before. I had never thought to mention it to anyone. For as long as I could remember, each letter of the alphabet had a different color. Each word had a different color too (generally, the same color as the first letter) and so did each number. The colors of letters, words and numbers were as intrinsic a part of them as their shapes, and like the shapes, the colors never changed. They appeared automatically whenever I saw or thought about letters or words, and I couldn't alter them."(1)

At some point, most people consider the way that they perceive the world and how these perceptions may vary from other people's perceptions. We may wonder how the same words sound to different people, or whether or not colors are the same in everyone's eyes. Though most of these differences will never be resolved due to the indescribable nature of sensory observations, one key difference in the perception of the world has been pinpointed, that is, the world of the synesthete. Synesthetes experience language and ideas differently from the average human brain. Ideas, words, letters, numbers and sounds become inherently linked with a color association that manifests itself differently from one synesthete to the next.

Originally, experts in science and psychology were skeptical of the very existence of this rare condition. A recent British study, however has shown that synesthetes were able to recall these complex color and shape associations for significantly longer periods of time than nonsynesthetes. Many experts speculate that these associates are simply the remnants of early methods of learning the alphabet, numbers, shapes and the like, such as colored letters in a book, or multi-colored refrigerator magnets.(2)These associations that survive could be evidence of a very visually-oriented learner, such as many with photographic or visual memories of learned ideas and concepts.

It seems, however, that the neurological patterns of synesthetes are variant from normal patterns in many more ways than association and visualization. Repeated studies show different aptitudes among synesthetes at particular associative exercises, suggesting completely different thought patterns. Some experts now believe that synesthetes actually have a rare "cross-wiring" of the regions of the brain that deal with numbers and computation and colors and visual perception, two regions that are located in close proximity in the brain. Dr. Jeffrey A. Gray has also done some brain scan research that has shown increased activity in the color region of the brain among synesthetes upon hearing words and letters than the control subjects also used in the experiment.(3)

Some synesthetes' also associate particular colors with emotions or experiences. For "Carol", orange is associated with pain, stress and anxiety. When she was experiencing the pain of a toothache and approached her dentist for root canal, she immediately regarded the tooth as being "orange". Further, as the dentist was performing the procedure, her eyes were flooded with the color orange.(4)

The synesthete phenomenon is an important and telling discovery in the field of neurology and brain behavior for many reasons. Firstly, the condition raises the question of what is, in fact, the "normal" perception of letters, numbers, words, and ideas. The very nature of the human senses is called into question. While similar color associations and visualizations may unite synesthetes under a similar experience, does this make nonsynethete experience also similar? Also raised are the questions concerning the source of these associations and what makes them different from other learned associations. While only one in every two-thousand people is regarded as a synesthete, many people of all ages are classified as "visual learners". Many people remember particular facts and experiences by where they were when they learned them or where the sentence was located on the page. Is this mode of memorization related in anyway to synesthesia? Are the associations of synesthesia simply a more complex manifestation of this "visual" way of thinking?

Secondly, the condition of synesthesia raises a new notion of the so-called "cross-wirings" or "cross-firings" in the brain. Popular theories in neurology have suggested that there are distinct areas of strength and weakness in each individual's brain. One particular theory within this notion is that of "left-brained" and "right-brained" individuals. Along these lines, synesthesia suggests a co-operation between two or more regions of the brain that are seemingly unrelated. Whereas the left- and right-brained theory suggests a separation between visual and artistic individuals from number-oriented individuals, synesthesia suggests that these areas are not oppositional, and further, that these regions of the brain may, in fact, work together.

For those with synesthesia, recent research and identification of the condition has provided some answers. The same research, however, has raised countless questions about the nature of the human experience and the differences in individual perceptions of and within this experience. As puzzling as this condition may be, it provides one more unique insight into the individual nature of the human brain.

References

1)Blue Cats and Chartreuse Kittens, A book by Patricia Lynne Duffy that describes her personal experiences with synesthesia.

2)Synesthetes Show Their Colors, An article by Lila Guterman that explores some of the scientific aspects of the condition, as well as some of the recent research.

3)Synesthetes Show Their Colors: Dr. Jeffrey A. Gray's Experiment, A discussion in Guterman's article that describes one of the recent studies and some of the results.

4)Audio Transcripts from Interviews with "Carol", Interviews with another synesthete that describe some of her unique experiences.


Un-Full House: The Story Of Amnesic Syndrome
Name: Akudo Ejel
Date: 2004-04-06 08:55:25
Link to this Comment: 9199

Un-Full House: The Story of Amnesic Syndrome
By: Akudo Ejelonu

Do remember the series finale for the television sitcom, Full House, in which the youngest daughter, Michelle fell off her horse while trying to jump a log and developed symptoms of amnesia? Luckily, for Michelle, her memory was restored and she returned to a full functioning state. We would all like for this happy conclusion to occur in the lives of those we know who are suffering from the memory loss deficit known as amnesia. However, we all know that images on television are sometimes false and are methods for producers to draw viewers away from the melancholy of their life and into the arena of happiness and goodness. Though Michelle's accident did make us aware of amnesia, some of us may not understand how one gets amnesia, its various types, and how it is cured. This will be explained in this paper. So sit back, relax and enjoy the show.

The brain performs main functions such as storing, processing and drawing on memory. "Amnesia is a profound memory loss which is usually caused either by physical injury to the brain or by the ingestion of a toxic substance which affects the brain...memory loss can be caused by a traumatic, emotional event."(1). Memory loss may result from bilateral damage to the limbic system and the hippocampus in the medial temporal lobe, which are parts of the brain that are vital for memory storage, processing, or recall. When someone has amnesia, tissues in the temporal lobes of the brain are destroyed along the medial borders. Amnesia is a symptom of various neurodegenerative diseases. Individuals having lost his or her memory are described as amnesiacs. Syndromes and diseases such as Wernicke-Korsakoff syndrome and herpes can cause amnesia by damaging the brain's memory centers from the use of substances such as alcohol or by infections in the brain tissue. Some medical treatments such as Magnetic Resonance imaging (MRI) and Psychological testing called neuropsychological testing can be very helpful in determining the presence of amnesia.

Amnesia is an inability to form or retrieve memories and is a defect in declarative memory. Declarative memory is known as cognitive system, stores facts and events that are accessible to conscious recollection. It is located inside the medial temporal lobe, medial thalamus, and orbital prefrontal lobe of the brain. The hippocampal system, which "contributes to (1) the temporary maintenance of memories and (2) the processing of a particular type of memory representation,"(2). is what is first affected during memory loss. It also plays a vital role in memory and learning because it secures the link between immediate memory and the long-term storage. Although amnesia results from medial temporal lobe damage, there have been cases in which severe amnesia can occur without the hippocampus being damaged. That is only if the cortical areas surrounding the hippocampus are infected.

When someone has amnesia, he or she has difficulties recalling old and/or new information. The three main types of amnesia are anterograde, retrograde and transient global amnesia. Anterograde is the inability to remember events that occurred after an incident. Though their short term memory many disappear, "victims can recall events prior to the trauma with clarity."(3). Common causes of this type of memory loss are Alzheimer's disease, stroke, and trauma. The patient cannot create new memories and can only recall what they know from the past. However, how is it that new memories of the present cannot be stored again by the brain? Anterograde is also called post-traumatic amnesia (PTA) because it usually follows a traumatic injury to the brain. When short-term memories are in the process of becoming long term, they go through consolidation. During consolidation, short-term memory is repeated and rooted for long-term admission. When one has anterograde amnesia, their short-term memory cannot be restored for access.

Retrograde amnesia is opposite from anterograde because it is the inability to remember events that happen before the incidence of trauma, "but cannot remember previously familiar information of the events preceding the trauma."(4). In other words, one cannot recall memories of the past. "A person who experiences physical trauma to the brain or an electroconvulsive shock may forget his past while retaining the ability to create new materials." (5). It can also encode memory for one's emotional behavior such as being happy, sad and ecstatic. If the hippocampus is damaged, the amnesiac will not be able to recall new memories but can recollect older memories. "Usually, when a person has a brain injury resulting in a memory disorder, there is some degree of both anterograde and retrograde amnesia. Often, the anterograde amnesia is more severe and more difficult to deal with." (6). The last type for amnesia is transient global amnesia, which is a brief cereval ischemia that produces sudden loss of memory that can last from minutes to days. Usually middle-aged to elderly people suffer from this. In severe cases, a person can be extremely confused and may experience retrograde amnesia that can last for several years.

What Michelle suffered from was retrograde amnesia; she was not able to remember events that happened before the accident. However, what was significant about her case as that she had a concussion after she fell off her "wild stallion". Concussion is a head injury that results in temporary loss of consciousness. Most often amnesia is usually caused by concussion. Michelle got amnesia because she only forgot what happened to her before her head injury. Therefore, if she had an argument with her father the night before, she would not have been able to remember it unless her father reminds her about it. When she was released from the hospital, Michelle's doctor did not diagnose her because the best way for her to recall her memories was being her family familiarizing her with the things she loves and having he go through her daily routine. This method works for some people but for other depending on the severity of their case, may be prescribed a drug called Amytal (sodium amorbarbital). Amytal helps them recover some lost memories. "Cognitive rehabilitation may be helpful in learning strategies to cope with memory impairment."(7). In addition, psychotherapy can be used for people who amnesia is caused by emotion trauma.

Memory is the persistence of learning over time. Memory impairment occurs with a variety of neurological conditions and is associated with symptoms such as cognitive and motor impairments, brain trauma and Parkinson's disease. Nevertheless, once one has amnesia, they have to try to relearn the things that they have forgotten and learn new information. "In the medical field, amnesia means a disturbance of long-term memory and a loss of memory caused by brain damage."(8). There are various prognoses for amnesia, depending on the type of amnesia and the severity of the case. The next time you get the rare opportunity to watch the last two episodes of Full House, you will be able to make links between what I have stated in this paper and how Michelle's behavioral pattern changes before, after, and during the memory loss.

1)Blueprint for Health: Amnesia, A Good Web Source

2)Two Component Functions of the Hippocampal Memory System, A Good Web Source

3)Amnesia., A Good Web Source

4)Blueprint for Health: Amnesia, A Good Web Source

5)Kinder, Annette and Shanks, David R. "Amnesia and the declarative/ nondeclarative distinction: A recurrent network model of classification, recognition, and repetition priming". Journal of Cognitive Neuroscience, July 1, 2001, v13 i5., A Good Book

6)What is amnesia? , A Good Web Source

7)What is amnesia? , A Good Web Source

8)Amnesia, A Good Web Source


Additional Sources:
9), A Good Book

10) Long. Charles J. PH. D. Physiological Psychology, 24. Memory., A Good Web Source

11)What is amnesia?, A Good Web Source

12)Cohen, Neal J. and Howard Eichenbaum. Memory, Amnesia, and the Hippocampal System. MIT Press: Cambridge, 1993, A Good Book

13)Memory., A Good Web Source


Health: Mind and Society II
Name: Aiham Korb
Date: 2004-04-07 21:01:28
Link to this Comment: 9241


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In the previous paper, Health: Mind and Society I, we argued that many different variables interact to influence health and disease. Through the principles of psychoneuroimmunology and the biopsychosocial model, we showed that the nervous system is the center of the interactions of these multiple factors (1). The connections and interplay between the neuro-endocrine and immune systems is the physiological basis upon which we will continue our study of how psychosocial factors can, and do, promote poor health. In this paper, we shall explore the biological associations between socio-economic status, stress, and disease. The links between these will shed more light on the social structures and atmospheres fostering such stress, rather than on the physical outcome of disease itself. But first, let us take a look at some of the leading causes of death in our society.

"Atherosclerosis, a disease of the large arteries, is the underlying cause of approximately 50% of all deaths in modern western society" (2). In fact, heart disease is the first leading cause of death in the United States, followed by cancer and deaths from iatrogenic causes (such as unnecssary surgery, medication errors, infections in hospitals, etc.) (3). Therefore, heart and artery diseases constitute a major health concern in our society. It is important to note that these diseases are more prevalent among people from low socio-economic classes. This interesting and distressing finding implies inevitable links between the environment and physical well-being. Besides predisposition, and access to health (or the lack thereof), it is clear that social factors contribute to these significant health problems. "There is a marked socioeconomic gradient in the incidence of CAD [Coronary Artery Disease] such that people of low socioeconomic status, as defined by occupation, education or income, have an increased risk of CAD and acute coronary syndromes" (4). These patterns have also been observed in monkeys. Living in dominance hierarchies, monkeys who are more socially subordinate were found to have higher levels of athersclerosis than the dominant monkeys (5). Given the greater complexities of human society, this may serve as an idea of how powerful socio-political and socioeconomic environments can be in influencing health, and even promoting disease. One can not help but ask whether socioeconomic systems based on cooperation might be healthier than those based on competition and hierarchies. This is only one of many hypotheses that attempt to account for the grave failures of the political and economic structures of our society. In any case, let us now turn to an interdisciplinary research study on socioeconomic status, stress, and its physiological outcomes.

The following study was a collaboration between the Department of Epidemiology and Public Health (Psychobiology Group) and the Department of Medicine at University College London, U.K. (2). Participants were divided into high and low SES (socioeconomic status) groups based on occupation grades. They were then administered two short stress-inducing mental tasks. The two SES groups did not differ at baseline. Yet, the results showed significant differences in the physiological responses to stress in the two groups. Following the test, those in the low SES group had a delayed recovery in blood pressure and heart rate than those in the high SES group (2). "Heart rate increased to the same extent following stress in both groups, however by 2h post-stress, it had returned to baseline in 75% of the high SES group compared with only 38.1% of the low SES group" (2). Another significant difference was in the delayed recovery in interleukin-6 levels experiences by the low SES group, as compared with that of high SES group. "Stress induced increases in plasma IL-6 in all participants, however, in the low SES group, IL-6 continued to increase between 75 min and 2h post-stress, whereas IL-6 levels stabilized at 75 min in the high SES group" (2).

It should be noted that interleukin-6 (IL-6) is a "circulating cytokine" associated with stress (6). Cytokines are chemical messengers which serve in the "bi-directional communication" between the CNS (central nervous system) and immune system. However, excessive amounts of cytokines can be toxic to nerves in the brain (6). Therefore, frequent and prolonged increases in IL-6 levels would have adverse effects on the body. Also, IL-6 stimulates the HPA (hypothalamus, pituitary and adrenal glands) axis. As we have seen in the previous paper, the overworking of the stress and neuro-endocrine responses causes a dampening of the immune system, and a negative outcome on health. Moreover, "HPA hyperactivity is associated with central obesity, hypertension, insulin resistance, and dislipidaemia, all risk factors for CAD" (2).

Thus, taking the results and relevant data, the experimenters came to the following conclusion: People of low SES have a "dysfunctional adaptive response" to psychological stress due to chronic stress-related increases in IL-6 and HPA activity. This chronic stress is understandable if one considers the psychosocial conditions that are more common in low SES groups. The study mentioned such conditions as "the exposure to adverse work characteristics, chronic life stress, social isolation, hostility, depression, and anxiety". All of these factors have been consistently identified as to increasing the risk of cardiovascular disease (2). This highlights again the relevance of the environment and its strong effects on health and the etiology of disease. Moreover, the study adds: "people of low SES tend to be more exposed to sources of chronic stress such as low job control, financial strain, and neighborhood stress, and generally have less social support" (2). Apparently then, the socioeconomic gaps are not such a benign outcome of our capitalist society. This experiment is one of many that have linked SES inequalities to heart disease and other ailments. In fact, longitudinal studies (which follow participants over several years) have also found that chronically stressful environments increase the chances of developing heart disease. Such examples are a small sampling of the accumulating evidence that support the relevance of psychosocial factors in defining and influencing health.

So far, we have seen that considering environmental factors is essential to a better understanding of their important effects on the origin and progress of pathology. Going a step further, and building on the issues raised thus far, the integration of psychosocial socio-political and socioeconomic factors into a broader formula of health should be possible. In the next paper, we will continue to follow the pathological effects of stress-related increases in IL-6. For example high levels of IL-6 have been associated with age-related conditions, general morbidity and mortality (2). We will also explore social isolation and its correlation with HIV progression. As we progress in our study, we become more aware of the role of the environment on our bodies and on our health. Being "social animals", the existence of human being necessarily involves intricate political, economic and social systems. It is more and more evident that these systems could be potential catalysts of disease. Therefore, it is our responsibility to create and monitor systems such that they would cater for a healthy population and society. Indeed, we can build psychosocial protective factors, such as social support and networks. So, perhaps we should consider again the question of an environment based on cooperation rather on competition. Which social structure is more likely to induce malady? And which one would cushion against pathology?


Sources:


1) Psychoneuroimmunology and health psychology: An integrative model, By Erin Castanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

2) Socioeconomic status and stress-induced increases in interleukin-6, By Brydon, Edwards, Mohamed-Ali et al. Brain, Behavior and Immunity 18. 2004. p. 281-290.

3) Is US Health Really the Best in the World? By Dr. Barbara Starfield. Journal of American Medical Association (JAMA). Vol 284, No. 4. July, 2000. p. 483-485.

4) Social class and coronary heart disease, By Marmot, M. and Bartley, M., in Stansfield, S., Marmot, M. (Eds.), Stress and the Heart. BMJ Books, London, 2002. p. 5-19.

5) Social status and coronary artery atherosclerosis in female monkeys. By Shively, C.A. and Clarkson, T.B. Arterioscler. Thromb. 14. 1994. p. 721-726.

6) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.


Schizophrenia
Name: Laura Silv
Date: 2004-04-07 23:38:28
Link to this Comment: 9242


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

One problem with the wonders of modern-day medicine, as a friend of mine in medical school tells me constantly, is that they tend to work so well that those taking the medicines begin to believe that they no longer need them and therefore cease to take them. I began to think about this and remembered hearing a similar comment about patients with schizophrenia, so I thought I would investigate further; a paper for this class seemed the perfect opportunity to do so.


So naturally, the best place to begin is probably the beginning – what is schizophrenia? It is a brain disease affecting one out of every one hundred people. While men and women have equal chances of developing the disease, men tend to develop symptoms earlier, sometimes as early as their late teens (1). Early symptoms are paranoia and emotional indifference, which make schizophrenia hard to distinguish from other brain disorders such as depression or bipolar disorder.


As the disease develops, two types of symptoms emerge: negative symptoms, in which formerly enthusiastic, lively, and social people suddenly become introverted, unemotional, and reclusive, and positive symptoms, which are more forceful and include strong hallucinations and delusions. These positive symptoms mark the "psychotic", or "acute", phase of schizophrenia (2). This is the phase of schizophrenia most often portrayed by the media, the phase you are most likely to find in movies or on TV. Many people in this phase are mistaken for being high or drunk, and indeed some patients begin to rely on illegal substances as self-treatment to keep some of the stronger symptoms in check.


While the exact cause of schizophrenia is unknown, there are indications that it is hereditary. According to Schizophrenia.com, people with a close relative who has schizophrenia run a higher risk - as high as 50% - of eventually developing it themselves. Scientists are looking for particular genes that may either cause or predispose one to the illness, much as was recently done for heart attacks. But schizophrenia is a brain disease, not a genetic one, and is generally thought to be caused by an imbalance between the brain chemical dopamine and other brain chemicals such as serotonin (3) or glutamate (4). Dopamine helps regulate emotion, and its pathways are also thought to affect attention and motivation. Serotonin controls sleep and appetite, and also acts as a stimulant of physical movement. Glutamate is the nervous system's main excitatory neurotransmitter between cells.


Schizophrenia is also thought to be associated with certain physical abnormalities within the brain. While this is not a reliable or fool-proof method of predicting who might become schizophrenic, the most common attribute among sufferers is enlarged ventricles. Ventricles are cavities within the brain through which cerebrospinal fluid circulates. The Surgeon General, in his 2002 report on the causes of schizophrenia (5), also cites "environmental factors" as one of the possible causes of schizophrenia and as a reason why family members of a sufferer run a greater risk of developing symptoms, but he fails to list what those factors might be.


For those diagnosed with schizophrenia, the most common and effective method of treatment is drug therapy, which treats the chemical imbalances previously described and keeps the psychotic symptoms – hallucinations, et cetera – from returning. Recommended dosages differ from patient to patient, as each case is different. Traditional medications include haloperidol (trade name Haldol), which treats hyperactivity and mania but is known to cause other problems such as lethargy, and trifluoperazine (trade name Stelazine), which treats anxiety and nausea but fails to address the social withdrawal. Other medications include loxapine, perphenazine, and fluphenazine (Prolixin), all of which treat only some of the symptoms and none of which cure schizophrenia (6). And, of course, the problem with treating the symptoms rather than the causes of the disease is that patients tend to think they have been cured, and therefore cease taking their medications. The disease is permanent – symptoms might disappear for a while but generally return.


While there is no cure for schizophrenia, a great deal of research is being invested in discovering more about the disease. Doctors, hospitals, charities, and medical societies are all donating time, effort, money, and resources to find better treatments for the disease, and perhaps, one day, a cure. Though the end may not yet be in sight, the outlook for schizophrenics and their families is good.


References


1) Schizophrenia.com


2) Mental Wellness Online: www.mentalwellness.com/


3) Mental Wellness


4) Glutamatergic Aspects of Schizophrenia


5) Schizophrenia.com


6) Schizophrenia.com


The Psychometric Approach to Intelligence: How Sma
Name: Bradley Co
Date: 2004-04-09 00:09:39
Link to this Comment: 9253


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Aristotle and Cicero were among the first great minds to contemplate, and even allocate a word to, the phenomenon now referred to as intelligence (1). History has been filled with people trying to pin the term down precisely, driven by the general belief that it names something exact. Over the past century the psychometric approach has been the primary method of studying intelligence (2). This method is based on the presumption that intelligence is a measurable factor, and thus the IQ test was born.

Over the past several decades, children across the globe have been given IQ tests at some point in their elementary education. A letter comes home, and parents are given a number comparing their child to all other children of the same age. This number is meant to measure the child's intelligence and is believed to play a role in determining his or her track in life. The downfall of these results, however, is that they have created the modern-day belief that intelligence is a finite characteristic. It is universally accepted that different people possess varying degrees of intelligence; what intelligence actually is, however, remains widely misunderstood.

IQ, or "Intelligence Quotient," was originally obtained by dividing a person's mental age by their chronological age and multiplying by 100 (2). More recently, IQ tests have been designed to measure specific abilities. These abilities include, but are not limited to, verbal ability, problem-solving ability, social competence, knowledge, motivation, dealing with abstract concepts, and the abilities to classify patterns, to modify behavior, to reason deductively, to reason inductively, and to understand (3) (4). There are literally hundreds of different skills or abilities that can be measured. These measurements are scaled in comparison with those of many other people of the same age. The scale is set to have a mean of 100 and a standard deviation of 15 (5). The belief behind the psychometric principle of measuring intelligence is that, as many modern psychiatrists and psychologists believe, intelligence is essentially what the tests measure (3). However, these measurements are merely data. The data are used to draw conclusions, and hopefully a definition of intelligence.
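Because the scale is defined entirely by its mean (100) and standard deviation (15), a score converts directly into a percentile rank under the normal distribution. A minimal sketch in Python (the function names here are my own, for illustration):

```python
from statistics import NormalDist

# Modern IQ scores are "deviation IQs": raw test results are rescaled onto a
# normal distribution with mean 100 and standard deviation 15, so any score
# can be read as a percentile rank relative to same-age peers.
IQ_DIST = NormalDist(mu=100, sigma=15)

def iq_percentile(iq: float) -> float:
    """Percentage of the population expected to score below `iq`."""
    return 100 * IQ_DIST.cdf(iq)

def iq_from_percentile(pct: float) -> float:
    """IQ score at a given percentile rank (0 < pct < 100)."""
    return IQ_DIST.inv_cdf(pct / 100)

print(round(iq_percentile(100)))      # 50 -- the mean sits at the 50th percentile
print(round(iq_percentile(115)))      # 84 -- one standard deviation above the mean
print(round(iq_from_percentile(10)))  # 81 -- the score at the 10th percentile
```

This also makes concrete what "standard deviation of 15" buys the test-maker: about two-thirds of the population is expected to fall between 85 and 115 by construction.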

The purpose of collecting intelligence data is to better understand the meaning of intelligence itself. Although the notion of intelligence is widely accepted and referred to, the definition is vague. At one symposium, twelve psychologists were separately asked to define intelligence, and twelve distinct responses were returned (1). This vagueness is evident in the fact that there are hundreds of separate IQ tests measuring many different abilities. Granted that an actual IQ test will cover several of these abilities, it is inevitable that many will be left out. To address this problem, Charles Spearman, in the 1920s, used a statistical technique called factor analysis to extract what he termed the general factor (g) (6). The g factor is a correlation among the varying IQ tests and mental abilities. Its significance is the claim that it explains the differences among ability tests and holds up regardless of the test type or the manner in which the test is administered (6). Many professionals do not doubt that the g factor enters, to varying degrees, into the countless mental activities that guide human behavior (1). The g factor is an extractable statistical factor of intelligence.

The implications of obtaining a correlation among all mental abilities, and therefore intelligence, are immense. It gives us the capability to set levels of intelligence, as well as to make predictions about such things as way of life, success in life, and even happiness. Although it has been found that a person's intellectual performance can vary from day to day, as well as among abilities (2), it has also been found that a person's intellectual ability is generally stable and unchanged after adolescence (6). This implies that intelligence is a set factor, and therefore so is one's intelligence-based fate. Studies have shown that a person's g factor and IQ positively correlate with success both in school and out of school (2) (6). Thus, the smart people will be successful and the stupid people will not. Not only will they not be successful, but they cannot be helped. The U.S. Army barred people with IQ scores below the tenth percentile from enlisting during WWII because it felt they could not be taught to be good soldiers (6). These correlations can even imply that, through an inability to succeed in life, a person's happiness is determined by intelligence. The intelligent will be rich and happy while the unintelligent will be poor and sad. These implications are extremes, but not far-fetched. The holes and misconceptions of the g factor and the psychometric principle become clear in the extremes of their implications.

The major fault of the psychometric principle - that intelligence is measurable - lies in its central assumption. An essential aspect of the theory is that, even though there are innumerable mental abilities constituting intelligence, there is a general g factor that correlates with them all and can be extracted. However, in admitting the multitude of mental abilities that comprise intelligence, the theory negates any single extractable characteristic as a sole representation of intelligence. In searching for a finite characteristic of intelligence, this theory provides evidence that in fact there is no fixed attribute. In the 1980s these problems sparked new theories of intelligence.

The evolution of intelligence theory began to acknowledge the vastness of what the term represents. People such as Howard Gardner and Robert Sternberg approached the concept arguing that the psychometric attempt leaves out much of what intelligence is. While analytical mental capabilities were measured, practical and creative aspects were ignored (1) (3) (5). This new train of thought was based on a perception that there are many types of intelligence, only some of which can be measured. There are so many aspects to intelligence that it is too complex to pin down. On this view, intelligence cannot be defined, only described, and a rich description portrays a more detailed image than any definition could provide (4). It is this idea that has fueled the search for and study of intelligence over the past few decades. The idea of "painting a clearer picture" is the motivating force behind the research.

Unfortunately, the aspiration of finding a certain aspect of intelligence that can be recognized as the primary factor has still fogged modern research. There have been studies both proving and disproving positive and negative effects on intelligence of such factors as knowledge, education, exercise, stress, and even listening to Beethoven (2) (5) (6). Correlations with such things as brain size, gender, and ethnicity have gone through cycles of being published, then disputed, then revoked, and republished (7) (8). The disputes over causes and effects are often about specific characteristics but also often deal with the issue of environment versus genetics. Which is the dominant factor in intelligence, or is it both? Research on issues such as stress and upbringing clearly emphasizes environmental factors in intelligence. However, genetics is often referred to as essentially important. Until recently, the beneficial aspect of this research was that both sides understood, and further argued, that it is most likely both aspects that affect intelligence.

In recent times, technological advances such as the completion of the Human Genome Project and advanced brain-scanning techniques like MRI have driven the research on and beliefs about intelligence full circle. We are now in a time when, once again, the search for a specific factor of intelligence is underway with new technology. Twin studies have tried to disentangle the factors of environment and genetics and, even more specifically, to find particular genes related to intelligence (9). Although it is understood that the relationship between genes and behavior is rarely a one-to-one correlation (9), this has not halted the search for linked genes. New measurable factors such as the degree of branching in cortical neurons, the rate of brain metabolism, and the number of neural connections are being studied with regard to intelligence as well (2). These studies simply bring new tools to the psychometric approach of measuring intelligence. It is not impractical to predict that the near future will bring new IQ tests that simply take a sample of DNA and a brain scan and report a new number for intelligence.

The psychometric approach has provided very good correlations among many varying aspects of intelligence. It has set standards and scales that future researchers can compare against and expand upon. However, it ignores the major flaw in its theory: the enormity of what intelligence really is - a conglomeration of many mental abilities, both measurable and immeasurable. The realization of this has been set aside because it creates great difficulty in validating studies of intellect. Yet every study and researcher makes note of it. In essence, future research would be wise to deal with the primary issue of the extensive nature of intelligence rather than the futile search for a straightforward quantifiable conclusion.


References

1)The Evidence for the Concept of Intelligence, A rich source of both history and intelligence theories

2)IQ and Intelligence, A Brain.com article on the relations between IQ and Intelligence

3)Genetics of Childhood Disorders, A good article demonstrating disagreements about intelligence

4)The Concept of Intelligence in Cognitive Science, A review of modern theories of intelligence

5)Intelligence: Knowns and Unknowns, An in depth look at intelligence theories and research

6)The general Intelligence factor, An explanation of the g factor and intelligence

7) Does Brain Size Matter? , Research relating brain size to intelligence

8)Cranial Capacity and IQ , Research relating cranial capacity to intelligence

9)Our genes, ourselves?, An in depth look into the role of genetics and environment in intelligence


Laughter: The Glue of Humanity?
Name: Kristen Co
Date: 2004-04-10 19:44:37
Link to this Comment: 9260


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

A sign on a repair shop door in England says: We can repair anything. (Please knock hard on the door - the bell doesn't work) (1).

Did you laugh? If you did you just expended calories, lowered your blood pressure, and increased the number of immune response cells in your blood. You unconsciously triggered a neural circuit in the brain which resulted in the physical response of laughter. Laughter is an unconscious behavioral response that results from complex interactions in the brain as a result of a stimulus which we deem to be "funny." The implications of laughter, however, extend much further than expressions of enjoyment. Laughter is a cultural mechanism that has evolved from the need for members of the same species to get along. It is an example of how the unconscious workings of the brain have an effect on the conscious workings of our lives.

Ernst Haeckel, a German Evolutionist, referred to laughter as a kind of reflex or response to "psychological tickling" in which vasomotor nerves were stimulated by either a physical or mental stimulus (2). Modern scientists have made further discoveries regarding the workings of the human brain during laughter, but the general idea that laughter is an uncontrollable response to some kind of mental stimulus remains. Laughter has been found to result from a signal that travels through a loop of connected neurons located in various regions of the brain. Studies have been done that monitor these electrical signals in the brain (3). If enough voltage occurs to create an action potential, the wave of activity will travel through different regions of the brain and will result in a laugh.

Laughter is a combination of three things: the intellectual "getting" of something humorous, the emotional response, and the physical response of laughter (3). The main part of the brain responsible for the correct interpretation of a joke is the frontal lobe. People with damaged frontal lobes don't laugh or smile as much when shown humorous material and, when given a test, often choose the wrong punch line to a written joke (4). Emotional interpretation of something humorous occurs mainly in the limbic system of the brain. This system is composed of several different parts which enable it to serve as the emotional center of the brain. The hypothalamus, in particular, deals with expressions of emotion such as laughter (5). The motor response of laughter occurs in the motor cortex, which sends signals to various muscles of the face and body. The link between the motor cortex and laughter was inadvertently discovered while testing an epilepsy patient: when the motor area was stimulated, the patient would smile or laugh uncontrollably (6).

These three portions of the brain, dealing with intellectual comprehension, emotional response, and physical response, work together to create what we know as laughter. The instigation of this event, however, is not under our conscious control. It is very difficult to laugh on cue without a stimulus, and there are several instances of pathological laughter in which the subject laughs without intending to (7). The most cited example of uncontrollable laughter was an epidemic which occurred in 1962 in Tanganyika and lasted for six months (2). Laughter can also be brought on by non-humorous stimuli such as laughing gas or alcohol (7). The fact that laughter can occur unconsciously through these neural circuits is significant. It implies that it is not necessarily a highly cognitive function and may have a more basic purpose.

Laughter is not only found in humans. Behavior similar to laughter can be observed in other mammals as well. Scientists have identified laugh-like behavior in rats at play. High-pitched vocalizations can be elicited by tickling and seem to indicate whether the rat is playing or fighting. Puppies are also known for a kind of laughing when they play. If a young dog lacks the ability to laugh, its actions will be interpreted as aggressive and it will get beaten up (2). Laughter, in this way, is a tool for survival. Chimpanzees and apes also exhibit laughter. It is not the definitive "ha ha ha" of the human, but more of a breathless panting noise. They produce this noise only in positive social situations such as physical play or tickling (7). Observing laughter in other species indicates that it seems to have evolved as a method of distinguishing friend from foe.

Humans take laughter to a level of sophistication beyond that of our biological ancestors. Other animals have the limbic system and motor areas of the brain, but lack the highly developed cortex which enables humans to perform more analytical processes. Instead of responding primarily to physical stimuli, humans also respond to stimuli that are visual or aural. The primary purpose of laughter, however, remains the same in both humans and other mammals. It is a form of communication that encourages social bonding within the species.

Laughter begins to develop at a very young age, between three and four months. It is a way in which a baby can communicate without using words (8). As development occurs, laughter is used during everyday speech as punctuation. It sends an additional message to those around us that we are in a good mood and want to "play." Laughter is most often heard in groups of children as they learn how to get along and work with each other (8). Adults continue to use laughter in social situations, improving relationships with those around them. People are 30 times more likely to laugh in groups than when alone and less than 20% of what we laugh at is pre-determined jokes. Most of the things we find funny are simple everyday phrases (7). Using laughter evokes trust and works to inhibit the fight-or-flight response (1).

Have you ever found yourself laughing for no apparent reason just because someone near you is also laughing? This is because laughter can be quite contagious. Some scientists believe that humans possess some kind of laugh detector which is triggered by particular species-specific vocalizations. This detector acts as a sensory receptor and sets off the series of neurons that results in laughter (7). The contagiousness of laughter lends itself to social bonding. When we laugh with someone, we feel instantly at ease. People often laugh in nervous situations in order to make others feel more comfortable.

Laughter has a great impact on social dynamics. Scientific observations have concluded that women laugh more than men, and that they laugh most in the presence of men (7). Also, in the office the boss tends to laugh more than the employees, and when employees laugh, it is generally in response to a joke told by the boss (2). Laughter, in this way, can be used to manipulate and control a relationship. Although most of its effect is positive, laughter can also be used to exclude or alienate. If people are socially ridiculed or laughed at, they might feel the need either to conform to or to leave a particular group (1).

It is clear that laughter has a great impact on our lives. It enables us to build or maintain relationships by releasing social tension. What is significant about laughter is the fact that it is an unconscious response to our social situations. It is an example of one of the many little rules which work together to allow the complicated emergent system of culture to operate. It demonstrates just how little conscious control we have over our lives, and stresses the close ties between brain and behavior.


References

1)LOL website, website dedicated to the health benefits of laughter
2)Our Ancient Laughing Brain by Sylvia H. Cardoso , about the evolution of laughter
3)How Laughter Works , general overview of how laughter works
4)Brain Briefing "Humor, Laughter, and the Brain, 2001 Society for Neuroscience newsletter
5)"Limbic System: Center for Emotions", by Júlio Rocha do Amaral, MD and Jorge Martins de Oliveira, MD, PhD. Overview of the limbic system and how it effects emotions
6)"Scientists Find Sense of Humor" , from BBC News Feb. 1998
7)"Laughter" , American Scientist article by Robert Provine from 1996
8)"A Big Mystery: Why Do We Laugh?", MSNBC article by Robert Provine from 1999


Are you being brainwashed by Muzak?
Name: Debbie Han
Date: 2004-04-11 11:13:00
Link to this Comment: 9263


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

People listen to music for various reasons. Some people use music in order to increase relaxation. Others use music as a form of energy. Music is heard in cars, in homes, at shopping malls, and at dentists' offices, among many other places around the world. Sometimes, a song gets into your head and you find yourself humming a tune all day long and then you realize that a stranger who had passed you hours ago had been whistling that song, or that you had heard 2 seconds of that song on your radio alarm that morning before pressing the snooze button. This is the idea behind Muzak.

In 1922, General George Squier invented Muzak, a system for delivering music from phonograph records to workplaces via electrical wires. He found that transmitting music in the workplace increased the productivity of his employees. Soon after, a study showed that people work harder when they listen to specific kinds of music. As a result, the BBC began to broadcast music in factories during World War II in order to rouse fatigued workers (1).

Muzak's patented "Stimulus Progression", which consists of quarter-hour groupings of songs, is the foundation of its success. Stimulus Progression incorporates the idea that intensity affects productivity. Each song receives a stimulus value between 1 and 6 - 1 is slower, and 6 is upbeat and invigorating. A contemporary instrumental song full of strings, brass, and percussion (27 instruments in total) would most likely receive a stimulus value of 5 (3). During a quarter hour, about six songs of varying stimulus values are played, followed by a 15-minute period of silence (2). A 24-hour plan is engineered to provide more stimulating tunes when people are most lethargic - at 11 a.m. and 3 p.m. - and slower songs after lunch and towards the end of the day.
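To make the block structure concrete, here is a toy sketch of one quarter-hour grouping. It assumes, as one plausible reading of the description above, that a block's roughly six songs are played in ascending order of stimulus value before the silence period; the song titles and values are invented for illustration:

```python
# Hypothetical Stimulus Progression block: six songs rated 1 (slow) to
# 6 (upbeat and invigorating), ordered from calmest to most stimulating,
# with 15 minutes of silence before the next block begins.
playlist = [("Song A", 4), ("Song B", 2), ("Song C", 6),
            ("Song D", 1), ("Song E", 5), ("Song F", 3)]

def build_block(songs):
    """Order a block's songs from calmest to most invigorating."""
    return sorted(songs, key=lambda song: song[1])

for title, value in build_block(playlist):
    print(f"{title}: stimulus value {value}")
print("-- 15 minutes of silence --")
```

The 24-hour plan described above would then vary which blocks are scheduled (more stimulating ones at 11 a.m. and 3 p.m., calmer ones after lunch), rather than changing the within-block ordering.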

Careful programming of Muzak has been shown to increase morale and productivity in workplaces, increase sales at supermarkets, and even deter shoplifting at department stores. Over 20 years ago, numerous department stores in the United States and Canada installed what was called "the little black box," which mixed music with anti-theft messages. The quick repetition of "I am honest. I will not steal." 9,000 times an hour at a barely audible volume curbed shoplifting at one department store by 37% during a nine-month trial (4).

More recently, Adrian C. North, a psychologist at the University of Leicester, measured the influence of music on decision-making. He and his colleagues tested the effect of in-store music on wine selections at a supermarket by setting up a wine shelf with French and German wines. On alternating days over a two-week period, French accordion music or German pieces played by a Bierkeller brass band were broadcast. Prices were similar, shelf ordering was reorganized daily, and if French music played the first Monday, German music was played the following Monday. In order to make the nationality of the wines clear, national flags were attached to the display adjacent to the wines. After the shoppers made their wine selections, an interviewer disguised as a shopper approached them to fill out a questionnaire regarding their purchase. The questions asked whether the respondent had a preference for French or German wines before the purchase, to what extent the music made him/her think of France or Germany, and whether the music influenced his/her wine selection. 82 shoppers bought wine from the display during the two-week period, and 44 agreed to complete the questionnaire (5).

The results indicated that music did indeed influence shoppers' wine selections. When French music played, 40 bottles of French wine and 8 bottles of German wine were purchased. When German music played, 22 bottles of German wine and 12 bottles of French wine were purchased (5).
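The size of the effect is easiest to see as a share of each day's sales. A quick check in Python, using only the purchase counts reported above (the variable and function names are my own):

```python
# Bottles purchased under each music condition, as reported in the study.
french_music_days = {"French": 40, "German": 8}
german_music_days = {"French": 12, "German": 22}

def french_share(counts):
    """Fraction of bottles sold that were French wine."""
    return counts["French"] / (counts["French"] + counts["German"])

print(f"{french_share(french_music_days):.0%}")  # 83% French wine on French-music days
print(f"{french_share(german_music_days):.0%}")  # 35% French wine on German-music days
```

In other words, French wine went from roughly five out of six bottles sold to about one in three, depending only on which music was playing.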

Researchers Charles Areni and David Kim have established a preference-for-prototypes model, which suggests that the mind is composed of closely packed, interconnected cognitive units which relate music to other structures and ideas (6). According to their model, music can stimulate the mind into thinking about ideas similar to the music. For example, French music conjures up images of France. In addition, the speed of music can influence behavior. For example, several studies have illustrated how fast music makes supermarket shoppers move around more quickly. Likewise, fast music causes diners to eat faster, and slow music slows eating down (and leads to more drinks being purchased at the bar) (6).

What is interesting about background music is that it is intended to be just that - noiseless noise. The notion that barely audible tunes can affect one's behavior raises the question of whether one's behavior can be manipulated by another individual without the person being aware of the manipulation. According to the research conducted by North, unobtrusive music selected by store managers, business managers, and companies like Muzak can affect a person's thoughts and actions without the person even knowing.

This is evidence that stimuli below the threshold of consciousness can influence thoughts, feelings, and actions without the I-function becoming involved or even knowing about it - that there is unconscious perception. It is likely that a number of other things can cause the same result. Events which an individual didn't realize were being witnessed could be interpreted by the person's unconscious and translated into behavior (7). This theory diminishes the power of the I-function because conscious recognition is not necessary to cause action. It suggests that there are lag times in relaying information to the I-function and that only selected information reaches it. As a result, the I-function does not get involved, and people do not consciously recognize that they are being manipulated by music while it is occurring. Therefore, it is the unconscious that is being manipulated. Is it even manipulation if you didn't know it was happening and didn't know it had an effect on you?

If the conscious is a sieve, then the unconscious is a vacuum. The influence music has on an individual's actions and behavior is evidence that the unconscious is substantially faster than the conscious mind. Sights and sounds that are not registered by the conscious are likely to be registered by the unconscious. It seems we should not be scared of subliminal messaging through music but, rather, amazed by the power of the unconscious mind.


References

1) Muzak Home Page, the Muzak website with some interesting information on the company's background

2) Article on Stimulus Progression, a good foundation for understanding Muzak's patented Stimulus Progression

3) Muzak Stimulus Progression graph, interesting graph on how Muzak chooses ratings for different instrumental songs

4) "Secret Voices: Messages that Manipulate," Time, September 10, 1979, 71. ~ Good background on subliminal messaging through music in department stores

5) Adrian C. North's homepage, fascinating research on the effect of French or German music on the selection of French or German wines

6) Areni, C.S., & Kim, D. "The influence of background music on shopping behavior: Classical versus top-forty music in a wine store," Advances in Consumer Research, 1993, 20, 336-340. ~ Additional research on the influence of music on shopping behavior

7) Committee for the Scientific Investigation of Claims of the Paranormal Webpage, a good resource on different types of subliminal perception

Further Reading

BBCi Web page, Muzak: Past, Present, and Future

University of Waterloo Department of Psychology Web site , Additional background on the influence on subliminal messaging on the unconscious mind


The Oracle at Delphi
Name: Eleni Kard
Date: 2004-04-11 20:51:09
Link to this Comment: 9271


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The city of Delphi, in which the ancient ruins of the Temple of Apollo still remain, lies on the southern slopes of Mount Parnassus, 100 miles northwest of Athens. The ancient Greeks considered Delphi the center of the universe. Legend has it that when Zeus unleashed two eagles from opposite ends of the earth in order to locate its center, the two eagles met over Delphi. (1) Delphi was also famous for its oracle: a priestess who could communicate with the gods and predict one's future in exchange for gifts. Both rulers and everyday people journeyed to Delphi to consult the oracle for her advice. (2)

The oracle worked in the following way: a priestess would sit in a small underground room and breathe vapors from the ground while drinking water or inhaling mist from the warm spring beneath the Temple of Apollo. She would enter an exalted state of mind and give advice as if in a trance. (3) While in this state, she would mutter words, in a somewhat cryptic manner, and a priest would translate them to the person seeking advice. Sometimes her words were just ramblings. Other times her answers went over her questioner's head. On at least one occasion, the priestess is said to have gone into seizures and died. (3)

The existence of this oracle is not disputed, but how and why it worked has been questioned. How was it possible for this young woman to enter a trance-like state and give people supposedly relevant advice and true predictions of their futures? A recent hypothesis holds that ethylene gas, a common hydrocarbon found in nature that has been detected in rocks and water near the Temple of Apollo, could have been responsible for producing the trance-like state. In high doses, ethylene gas can even cause death, which would account for the fate of at least one of the priestesses.

Ethylene (molecular formula H2C=CH2) is a sweet-smelling gas known to affect the nervous system. Ethylene gas is naturally emitted by fruits, flowers, and other vegetation; it is the substance that causes fruits to ripen. Among the many changes that ethylene causes is the destruction of chlorophyll. With the breakdown of chlorophyll, the red and/or yellow pigments in the cells of the fruit are uncovered, giving the fruit its ripened appearance. (4) Small amounts of ethylene are also found in volcanic emissions and natural gas. The production of ethylene from inside the earth led researchers to analyze the rocks and springs located beneath and surrounding the Temple of Apollo in search of an explanation for the workings of the oracle.

Geologist Jelle Z. de Boer of Wesleyan University in Connecticut and archaeologist John Hale of the University of Louisville led the research team at Delphi. The team conducted tests on the Delphi rock and on the water of a nearby spring; both contained methane, ethane, and ethylene. (5) They also examined pieces of travertine, a limestone stalactite deposited by an ancient spring, and detected measurable amounts of ethane and methane there as well. (3) The team concluded that the Temple of Apollo sits on crisscrossing geological faults. When the faults shift and rub against each other, a large amount of heat is given off, which causes hydrocarbons to vaporize and rise up through fissures in the ground. In this way, the gases can seep into nearby springs or fuse into crystalline rock formations.

The results from the tests indicate the presence of ethylene in the rocks and springs at Delphi, and the geology provides a logical explanation of how the gases could have risen to the surface. This opens the possibility that ethylene gas was present in the chamber where the priestess sat, since it was detected in the same vicinity. If this was the case, did the ethylene gas affect the priestess? Was it responsible for her trance-like state, and did it ultimately influence what she said?

The main threat of ethylene gas is that it can displace oxygen in the air, which can result in symptoms associated with oxygen deficiency. A lack of oxygen to the brain can cause symptoms such as rapid breathing, diminished mental alertness, impaired muscular coordination, faulty judgment, depression of all sensations, emotional instability, and fatigue. As asphyxiation progresses loss of consciousness may result, eventually leading to convulsions, coma, and even death. (6) High concentrations of ethylene, as well as ethane, propane, and propylene, may have anesthetic-like effects (central nervous system depression) causing drowsiness, dizziness, and confusion. (6)

Historically, ethylene was in fact used as an anesthetic until less flammable compounds were discovered. The process by which general anesthetics work is still unknown. The two main hypotheses are the "lipid theory," which proposes that anesthesia directly interacts with cell membranes involved in brain functions (7), and the "protein theory," which suggests that anesthesia blocks sodium channels in the nerve membrane, inhibiting nerve impulses. (8) Biophysicists Wu and Hu have proposed another theory in which anesthesia works by reducing oxygen to the brain. "In essence, their mechanism holds that anesthetics act as barriers to oxygen transport in both membranes and proteins, reducing oxygen availability to the brain." (7) This idea seems to draw from both of the above hypotheses: some function of or in membranes and proteins is altered (through lack of oxygen in this case). The concept of decreasing oxygen to the brain can also be applied to ethylene gas, for it displaces oxygen as well, which could account for why it worked as an anesthetic.

Attempting to explain how the oracle at Delphi worked through geological findings is one way to try to understand this mystical figure. It seems likely that ethylene gas was present in the chamber where the priestess sat, since it was detected in mineral formations and springs under and surrounding the Temple of Apollo. Was the ethylene gas indeed capable of producing the trance-like state? The strongest support for this argument is that ethylene gas affects the nervous system by displacing oxygen to the brain, and the symptoms of oxygen deficiency described above do include loss of consciousness. As one source writes, small doses of ethylene "produce a floating sensation and euphoria. In other words, just what an oracle needs to start having visions." (2) Its former use as an anesthetic also supports this idea, especially if Drs. Wu and Hu's hypothesis that anesthetics work by decreasing oxygen to the brain turns out to be correct, since that would imply that ethylene's displacement of oxygen could lead to anesthetic, trance-like effects. Until further research is conducted on anesthetics, however, the ethylene gas hypothesis remains only a possible explanation for what went on thousands of years ago at the oracle at Delphi.

References

1) Greece Taxi Tours information website, good background information on ancient Delphi as well as travel information to modern Delphi.
2) Wikipedia, a free online encyclopedia, provides good links to related concepts.
3) What You Need to Know About website, contains many geology-related articles.
4) Ethylene gas, provides interesting information on ethylene gas, particularly its relevance to the fruit industry.
5) Hallucinogens website, an article from the Washington Post with the history and geological findings at Delphi.
6) The BOC Group website, a material safety and chemical data sheet on ethylene gas.
7) Article from a UPI Science correspondent, describes the possible mechanisms for the workings of anesthesia.
8) Dr. Joseph F. Smith Medical Library online, a useful source to search for information on medical-related terms.


Behavioral Response to Smell: the answer may be un
Name: Sarah Cald
Date: 2004-04-12 01:25:54
Link to this Comment: 9277


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Of the five senses, smell is perhaps the least understood, both mechanistically and behaviorally. There are many questions as to why people react differently, if at all, to certain smells. This difference in behavior may be due to a physical characteristic of the human body. However, it remains to be seen what is responsible for it: the brain, or another organ?

Some elementary conclusions regarding olfaction can be made using general observations; however, such conclusions give little insight into the actual mechanism of olfaction and behavioral responses to smell. Regardless, they are a good starting point in exploring these issues. First, we can conclude that odors and smells are perceived in humans through a common pathway. We know this because on some basic level, all humans can agree that certain things smell. For example, we can all agree for the most part that a rose smells; we may not all agree on what a rose smells like, but it does have a scent. Along these lines we can also conclude, generally, that there are distinct odors which differ somehow in their chemical components, causing them to be received differently. For example, the smell perceived from an orange can easily be identified as different from that of gasoline.

In addition to expanding and understanding further the aforementioned conclusions, this paper seeks to understand how humans can receive the same odor and behave or respond differently to it. Gasoline is one example of an odor that elicits different behaviors in different people. Many people despise the smell of gasoline saying it causes feelings of nausea, and avoid smelling gasoline as much as possible. Yet there are others who find the smell somewhat pleasant, and go out of their way to smell more of it by taking longer to pump gas or by taking deeper breaths while doing so. This particular phenomenon intrigues me. More broadly speaking, what is responsible for the behavioral response to odor?

In order to fully explore this question, a better understanding of the mechanism of olfaction is needed. Odorants are collected in the sensory epithelia of humans, located in the upper regions of the nasal cavity (1). Odorant molecules are absorbed in the mucus layer of the sensory epithelium, where they then travel to receptor cells located on the cilia that line the nasal cavity. Each of these cilia carries receptor proteins that are specific to certain odorant molecules (1). Binding of an odorant to a receptor protein activates a second messenger pathway; two such pathways are known to date. The more common one involves the activation of the enzyme adenylyl cyclase upon the binding of an odorant molecule (2). This enzyme catalyzes the production of cyclic AMP (cAMP). The increase in cAMP levels causes cAMP-gated cation channels to open, depolarizing the membrane, and this depolarization results in an action potential (2). These electric signals are carried by olfactory receptor neurons to the olfactory bulb, which then relays information to the cerebral cortex, resulting in the sensory perception of smell (2).

On average, humans can recognize up to 10,000 separate odors (3), yet have only about 1,000 different olfactory receptor proteins (4). Clearly, there is a step in the olfactory pathway at which combinations of activated receptors are organized into distinct percepts. This step was found to take place in the olfactory bulb. Within this structure, the combined pattern of activity across different olfactory receptors is used to signal the brain about specific smells (4). Richard Axel, M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this processing role of the olfactory bulb best:
"The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15 and 54, so that must be jasmine' (4)."
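Axel's description amounts to a combinatorial code: an odor is identified by the pattern of active receptors, not by any single receptor. The sketch below illustrates the idea; the receptor indices and odor names (apart from Axel's jasmine example) are invented for illustration, not taken from the sources.

```python
from math import comb

# Toy model of combinatorial odor coding: an odor is identified by the
# *set* of olfactory receptors it activates. All patterns here except
# the jasmine example are hypothetical.
ODOR_CODES = {
    frozenset({1, 15, 54}): "jasmine",  # Axel's example pattern
    frozenset({1, 15, 80}): "rose",     # hypothetical: overlaps with jasmine
    frozenset({3, 22, 54}): "orange",   # hypothetical
}

def identify_odor(active_receptors):
    """Return the odor whose receptor-activation pattern matches exactly."""
    return ODOR_CODES.get(frozenset(active_receptors), "unknown")

print(identify_odor([1, 15, 54]))  # jasmine
print(identify_odor([80, 15, 1]))  # rose (order of activation is irrelevant)

# With ~1,000 receptor types, even three-receptor combinations vastly
# outnumber the ~10,000 odors humans can recognize:
print(comb(1000, 3))  # 166167000
```

The point of the last line is that a combinatorial scheme explains how so few receptor types can encode so many odors.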


Knowledge about the mechanism of olfaction now allows us to explore what is responsible for behavioral responses to odor. My initial answer to this question was the brain. One thing I have learned in our class discussions is that, for the most part, behavior is the result of inputs and outputs of the brain and how they are processed. Accordingly, the brain should be responsible for the different behaviors observed in response to smell. However, after exploring and learning about olfaction on a more detailed level, I now believe that the source of behavioral response to odor may lie within the olfactory bulb. One role of the olfactory bulb is to receive signals from odorant receptors and relay that information to the brain. In this way the olfactory bulb functions to process and interpret the input signals from odorant receptors and to produce corresponding output signals for the brain to subsequently interpret. It seems logical that in processing the inputs from odorant receptors, the olfactory bulb is also producing some type of output that results in a behavioral response.

Further investigation revealed evidence that may support this hypothesis. Signals from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought processes, but also to the limbic system, which generates emotional feelings (5). This leads me to question whether the signals sent to the cortex and to the limbic system are identical or similar in any way. Also, is there a difference in the number of signals sent to the two locations in response to odorant reception? That is, do more signals get sent to the cortex than to the limbic system when a person smells oranges? All of these questions are worth pursuing; perhaps it is information in the signals sent to the limbic system that is responsible for the behavioral responses to odor.

There is much about olfaction that remains unclear, particularly about the relationship between behavior and olfaction. To date, there is little evidence that suggests what portion of the body is responsible for behavioral response to odors. Further investigations involving the olfactory bulb may prove worthwhile in determining what is responsible for the behavioral response to smell.

References

1) Monell Chemical Senses Center, an overview of olfaction

2) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.

3) The Mystery of Smell: The Vivid World of Odors

4) The Mystery of Smell: How Rats and Mice-and Probably Humans-Recognize Odors

5) Sensing Smell


Artificial Intelligence: Is Data Really 'Fully Fun
Name: Dana Bakal
Date: 2004-04-12 10:34:18
Link to this Comment: 9283


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Every day, as I walk into Park Science Building and round the corner, I am faced with an intriguing poster. This poster asks about the possible personhood of machines. If a computer, robot, or android can pass the Turing test, the poster asks, can it then be considered a person? If it cannot pass, can its personhood be discounted?

Since humans began to develop complex machinery, and recently computers that mimic the human mind in many ways, they have been preoccupied by this question. Consider the science fiction series Star Trek: The Next Generation. One of the major characters is Data, an android. It (I will call Data "it" until we conclude its personhood satisfactorily) is generally treated as a person by its crewmates on the Enterprise, and people relate to it as if it were not only a person but a friend. But is Data really a person, and can we refer to it as "he"?

For this paper, I feel I need to define several terms, or the discussion will be very confusing. I am defining a "person" as an entity with "a sort of awareness - of self, of interaction with the world, of thought processes taking place, and of our ability to at least partially control these processes. We also associate consciousness with an inner voice that expresses our high level, deliberate, thoughts, as well as intentionality and emotion" (2). I will refer to members of the species Homo sapiens as "humans." "Humans" are not necessarily "persons," but many or most are, and all "humans" deserve the presumption of "personhood."

Alan Turing believed that personhood could be tested for. He devised a test wherein a human subject sits in one room and interacts indirectly, for example through a computer terminal, with two tentative persons. One of these tentative persons is a human, and one is a computer: an artificial intelligence. The subject is allowed to communicate with both tentative persons, to ask questions, state feelings, and so on. If the human subject cannot identify which of the terminals represents a human, or if she determines that the AI is a human, then the AI has passed the Turing test and must be considered a person (2).
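The structure of the test can be sketched in code. This is a rough illustration only: the canned respondents and the guessing judge below are hypothetical stand-ins, since a real Turing test involves open-ended conversation.

```python
import random

def human_respondent(question):
    return "Hmm, I'd have to think about that."

def machine_respondent(question):
    # A machine whose replies are indistinguishable from the human's.
    return "Hmm, I'd have to think about that."

def turing_trial(judge, question):
    """One trial: the judge sees two unlabeled answers and guesses which
    came from the human. The machine passes if the guess is wrong."""
    answers = [("human", human_respondent(question)),
               ("machine", machine_respondent(question))]
    random.shuffle(answers)
    guess = judge(answers[0][1], answers[1][1])  # index the judge calls human
    return answers[guess][0] != "human"

# When the answers are indistinguishable, no judge can beat chance:
random.seed(0)
guessing_judge = lambda a, b: random.randint(0, 1)
passes = sum(turing_trial(guessing_judge, "Do you dream?") for _ in range(1000))
print(passes / 1000)  # close to 0.5
```

The design point is Turing's: the judge has access only to the answers, never to what produced them.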

Let's say, then, that Data is subjected to a Turing test. If it passes (which it is almost certain to do, based on the way it is treated on the Enterprise), it will be a person, according to Turing. Can we then be sure that Data is a person and should have rights as such? Not really. One major argument against the Turing test providing indisputable proof of personhood is the Chinese Room Paradigm.

This thought experiment was suggested by Searle in 1980. He asks us to imagine a room containing one human and a code book. Chinese writing is pushed under the door of the room by humans outside. The human inside does not speak or read Chinese, but the humans outside do. The code book contains a complex set of directions detailing how to "correlate one set of formal symbols with another set of formal symbols" (1). The human in the room can thus provide the correct answers to questions in Chinese without having any understanding either of the questions or of his responses. To summarize, the person in the room has a codebook which allows him to produce output that looks like understood Chinese. Applied to an AI, this experiment claims that an entity like Data could process input and provide output such that its shipmates would perceive it as a person, but without having any consciousness or understanding of either the input or the output. Data could pass a Turing test, but pass it only because it is running a very convincing code.
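Searle's code book is, in computing terms, nothing more than a lookup table. A minimal sketch, using invented placeholder symbols rather than actual Chinese:

```python
# The "person" in the room applies rules mapping input symbols to output
# symbols. The mapping produces correct-looking replies with zero
# understanding of what any symbol means. All symbols are placeholders.
RULE_BOOK = {
    "SYMBOL-A": "SYMBOL-X",  # rule: when A is slipped under the door, write X
    "SYMBOL-B": "SYMBOL-Y",
}

def chinese_room(slip):
    """Answer purely by table lookup; Searle's point is that this may be
    all an AI is doing when it appears to converse."""
    return RULE_BOOK.get(slip, "SYMBOL-?")

print(chinese_room("SYMBOL-A"))  # SYMBOL-X: looks like a fluent reply
print(chinese_room("SYMBOL-Z"))  # SYMBOL-?: no rule applies
```

Nothing in this program understands anything, yet from outside the door its replies are indistinguishable from comprehension, which is exactly the worry the thought experiment raises about the Turing test.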

The Chinese room experiment cautions us not to conclude personhood when none may be present. There are several responses that challenge Searle's conclusion. The Systems Response claims that the human in the room cannot understand Chinese, but the room and the human taken as a total system can. The Robot Reply says that if we can get a robot to act as if it were perceiving, understanding, and so on, then it would be. This is a similar argument to the Turing test. These replies bring up interesting ideas, and there are many more of them to explore and consider.

Back to Data. If we cannot prove it is a person (it might just be a Chinese room), can we assume that it is not one? I would suggest that we must err on the side of caution and assume that he (I will now call Data "he") is indeed a person. I say this out of fear. What would happen if he were a person but was not considered such? What would the ethical implications of this be? What about humans who do not seem to be persons, who could not pass the Turing test or who show very little intelligence? If an autistic human is unable to pass a Turing test, should we deny that human personhood? The ethics would be appalling, and true persons would be denied their basic rights simply because we cannot prove their personhood.

"I think, therefore I am" is an interesting statement to apply to this discussion. Since we can perceive our own emotions and thoughts, we consider ourselves to be persons. We cannot directly observe the thought processes of other humans or of artificial intelligences, so we cannot prove that they are persons. In order to be safe, in order to keep society running, and in order to remain sane, we assume that other humans are persons, unless proven otherwise. Since we cannot prove that Data is not a person, we have the same evidence of his personhood and of the personhood of humans around us. The response to Searle that I want to emphasize is the other minds response. "If you are going to attribute cognition to other people you must in principle also attribute it to computers (1)."

So Data should be considered a person. But Data is a fictional Android created by a fictional mad doctor who took the secret of how to construct a person to the grave. Can we now construct artificial persons? Is it even reasonable to believe that we will ever be able to? If we cannot create artificial persons, even in theory, then their potential personhood is moot.

Perhaps the largest problem in artificial intelligence, and in computing in general, is the frame problem. This problem was described eloquently in 1984 by Daniel Dennett, a leading author in philosophy of mind. He tells a story in which scientists build a series of robots. The first, R1, fails in its task to survive because it does not anticipate the reactions that will be caused by its actions, or the secondary reactions caused by those, and so on. The second robot, R1D1 (robot-deducer), fails because it does consider all implications, and is locked in an infinite computation of all the possibilities. The third robot, R2D1, is programmed to decide which implications are relevant and which are not, and likewise fails as it sits rejecting the thousands it deems to be irrelevant. Dr. Westland of the University of Derby provides a more complete explanation of Dennett's story and of the frame problem on his website (4).

Westland explains that with robots, you start at zero. The things that seem obvious to a human, the things you never have to explain, need to be explained in detail to a robot. You do not have to tell a child, to use an example from Professor Grobstein, that opening the refrigerator door will not cause a nuclear holocaust in the kitchen. That possibility never occurs to the child; that is, it is rejected implicitly. With artificial minds, this implicit processing is not there, so the simplest tasks require the processing of impossible amounts of information (4).
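The scale of the problem Dennett's robots face can be shown with a bit of arithmetic. If a deducer must consider every subset of its beliefs as potentially relevant to an action, the work doubles with each belief added; the numbers below are illustrative only, not a model of any actual planner.

```python
def implication_checks(n_beliefs):
    """Number of belief subsets a naive deducer like R1D1 would examine:
    every subset of n beliefs is a candidate combination, so 2**n."""
    return 2 ** n_beliefs

for n in (10, 20, 30):
    print(n, implication_checks(n))
# 10 -> 1024
# 20 -> 1048576
# 30 -> 1073741824
```

Thirty beliefs already yield over a billion combinations, which is why R1D1 never finishes deducing and why R2D1 drowns in relevance-checking instead.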

But how do humans solve the frame problem? Where does our implicit programming come from? Nobody really knows. I would claim that since humans have solved this problem, the possibility exists, however remote and impossible it seems from here, that AI's could be developed which were not subject to it.

Data, to get back to our original example, seems to have solved it perfectly well, although he does sometimes need to be told simple things, taught like a child. Organisations such as IDSA and CSEM, and projects such as the SWARMBOT EU project, are doing just that (5): working on algorithms and neural networks that allow robots to learn.

Assuming we can develop intelligent robots that can learn and pass the Turing test, we should treat them as if they were people, because we do not know that they are not. In order to develop them, the frame problem must be mastered, perhaps through the use of learning algorithms. Who knows; one day we may be attending a march for robots' rights!

References

1) The Internet Encyclopedia of Philosophy, description of the Chinese room argument

2) Brain Web Entrainment Technology, introduction to artificial intelligence

3) Internet Encyclopedia of Philosophy, overview of AI

4) Dr. Westland's Site, description of the frame problem

5) Learning Robots, site of the IDSA robot project


Hypnotism: Entertainment or Science?
Name: Allison Ga
Date: 2004-04-12 13:53:35
Link to this Comment: 9284


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

Hypnosis has been referred to and observed as a mode of entertainment. Stage hypnotists make appearances at many college campuses and on television. Good Morning America featured Tom Deluca, a hypnotist who hypnotized a portion of the audience and had them perform his bidding: they laughed when he told them to, and one man was unable to feel the effects of ice water on his hand (1). Is hypnotism only to be used for entertainment purposes? This question has influenced me to look at the issue of hypnotism and explore its history, how it is perceived in the 21st century, and why it remains so controversial.

Everyone has been hypnotized at some point; becoming fully engrossed in a film or book is similar to a hypnotic trance. Have you ever been on your way home, down a familiar road, and suddenly you are at your destination without being totally sure how you arrived there? This experience is also similar to a hypnotic trance. The history of hypnosis can be traced back to ancient Egypt and even has a part in Greek myths; Greek oracles and soothsayers were said to reach this place of clarity through self-hypnosis (2). Franz Mesmer, a scientist of the mid-1700s, began the foray into the scientific uses of hypnotism through his belief that magnets held healing powers. Many believed that it was his overwhelming presence that induced his patients' trances; in this way, Mesmer brought about a resurgence of hypnotism. A surgeon by the name of James Braid followed in the steps of Mesmer and Mesmerism when he deduced a fundamental rule of hypnotism: the success of a hypnotic state comes from within the subject, not the hypnotizer. He also coined the term hypnotism, from the Greek word hypnos, which means sleep. In 1889 Albert Moll wrote the book Hypnotism, in which he insisted that it was a scientific subject to be included in the growing study of psychology. With the help of these men, the exploration of hypnotism's medical uses, as well as the debate over whether it is science or entertainment, found its beginnings. Consequently, the stereotype of hypnotists as evil mind-controlling figures developed and made its way into books and film. In 1894 George du Maurier's book Trilby included the character Svengali, who controlled Trilby with hypnotism. Soon after, in the early 1900s, films began to be made with this same Svengali character in the guise of an evil hypnotist (3). The perception that a hypnotized person is under the complete control of the hypnotist is quite a misconception.

In a hypnotized state, the person is hyper-attentive and still retains the ability to act freely (4). The imagination is piqued and the subconscious is tapped into, making the person open to things that their conscious self would not normally allow them to do or say. They are extremely suggestible and open to the ideas of the hypnotizer.

Today, the perception of hypnotism appears to be moving toward increased acceptance of its therapeutic possibilities, but it is still not taken completely seriously. Present-day films are an interesting source of information as to how the 21st century perceives this age-old practice. Two examples will be discussed: one from a comedic film, the other from a drama. In the 2001 film Shallow Hal, Jack Black plays a superficial man who does not feel the need to look beyond the surface of the women he encounters. He meets a self-help guru, Tony Robbins, who hypnotizes him to see the "inner beauty" of all women; those who were gorgeous are now repulsive, while 300-pound Rosemary, played by Gwyneth Paltrow (who, according to the film, is automatically unattractive due to her weight), is now stunning and skinny in Hal's eyes. Hypnotism in this example is used as a way to teach Hal a lesson, since he remains unaware of how his perception has been altered for a good portion of the film. This example presents hypnotism as entertainment, since Hal's inability to see things "as they are" becomes funny as well as ironic. Hypnotism becomes a non-scientific enterprise, which is made extremely evident by the fact that the process is prefaced by the phrase "Devils come out!" Hypnotism retains its entertainment value as well as a comparable Svengali character who runs the show.

Another look at hypnotism in the media comes from the 2003 film The Butterfly Effect. Ashton Kutcher plays Evan, the film's main character, whose childhood has been filled with several traumatic experiences that he has blocked out of his memory. These experiences have shaped him and his childhood friends in different ways, which he tries to remember. He discovers that he is able to revisit the past through self-hypnosis, made possible when he reads his childhood journals; this endeavor backfires for Evan and his friends. While revisiting his past, he relives the moment when his thirteen-year-old self is hypnotized by his psychiatrist in order to recover his hidden memories. The doctor is forced by Evan's mother to bring him out of his hypnotic state when his nose begins bleeding, presumably as a result of the trauma of the memory. In this instance, hypnosis is portrayed as something utilized in a scientific context, but not guaranteed to work. This implies that the process of being hypnotized is in the hands of neither the hypnotizer nor the hypnotized, but rather is an entity on its own. Although this raises the interesting point that in hypnosis the conscious and subconscious are separated, it still does not present hypnotism as a serious and helpful scientific practice.

Hypnotism has a variety of uses: psychiatric hypnotherapy, in which psychiatrists help the hypnotized access memories that are the cause of phobias and mental anguish; forensic hypnotherapy in law enforcement, in which witnesses are hypnotized in order to access memories they have forgotten or blocked out; and medical hypnotherapy, which suggests that people can be cured of illnesses directly as a result of influencing the subconscious to heal the body (5). Forensic hypnotherapy is extremely controversial because hypnotism is a union of memory and imagination, which means that the hypnotizer can influence the witness to form false memories and that the hypnotized can mix reality and imagination together. These doubts suggest that hypnotism used in this sense is highly unreliable and thus should not be used. Medical hypnotherapy is also extremely controversial, since many people believe that the cure for various illnesses should not be left to something as unknown as the subconscious. Two important questions arise out of these concerns about forensic and medical hypnotherapy. First, how can it be possible to separate the conscious and the unconscious? And second, how is it possible to remain in control of your actions and thoughts if you are in an extremely imaginative and suggestible place in your consciousness? Similar to the latent desires and associations revealed in dreams, the subconscious area of the brain is closed off to our conscious mind. Perhaps this is so because the impulses and hidden memories that our brain buries deep in the subconscious are buried there for a reason. The ability to alter possibly disturbing memories makes hypnosis seem unreliable in this medical context. Hypnotism's ability to access memories that the subconscious has buried has become an issue that science cannot explain.

Throughout its history, hypnotism has been thought of as a means of mind control or as pure entertainment. Yet hypnotism is important in its various medical and scientific uses, even though the controversies and questions over its effectiveness and actual use are understandable. The simple idea of hypnotism as merely entertaining detracts from its numerous other uses, of which many people remain unaware.


References

1)ABC News, article titled "Is Hypnotism Science or a Sideshow?" about Deluca on Good Morning America

2)History of Hypnotism, a helpful website detailing the origins of hypnotism

3)History of Hypnotism

4)How Hypnotism Works, informational website on different aspects of hypnotism, how it works, its background, and what it can be used for

5)How Hypnotism Works


Fibromyalgia, Pain and What It Means
Name: Erica Grah
Date: 2004-04-12 20:08:32
Link to this Comment: 9294


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Introduction
Fibromyalgia is a musculoskeletal syndrome whose main symptoms are chronic pain and fatigue. The pain is generally widespread, but diagnosis depends on tenderness at 11 of the 18 specifically designated tender points on the body (1), which are generally located where various muscles and tendons intersect (2). A tender point is generally defined as an area of hyperalgesia, which is to say an area where a painful stimulus is perceived as much more painful than its intensity warrants. In those suffering from fibromyalgia, even slight pressure to a tender point or any surrounding area can cause intense pain. The fatigue experienced in fibromyalgia is quite similar to that suffered by patients with chronic fatigue syndrome; it is therefore the occurrence of excruciating pain in response to moderate stimuli that distinguishes the two. Although widespread body pain is sometimes a symptom of chronic fatigue syndrome, in fibromyalgia it tends to be more intense and localized in origin (i.e., at the tender points). Thus, it is this aspect of the illness on which I will focus.

A Sensory Experience
There are two components of physical pain in humans. The first component is sensory. Our nervous system provides a way in which different parts of the body can recognize and react to pain. Tissue-damaging or potentially damaging inputs are referred to as noxious stimuli. These noxious stimuli are received by pain receptors, called nociceptors, which are located in the somatic or visceral tissues of the body. Nociceptors respond to chemical, mechanical, or thermal stimuli, and they function by transmitting impulses to the brain, notifying it of the existence of pain (3). This, in theory, triggers natural responses that remove the pain over time and permit healing, restoring the body to its natural state of equilibrium, free of pain.

Deficiencies occur when the brain's ability to free the body of pain is lacking or slowed. Intuitively speaking, persistent painful stimuli do not allow the brain sufficient time to respond effectively to the nociceptive impulses: as the impulses are created and travel to the brain, the stimuli build up and consequently amplify the intensity of pain. Neurologically, this idea helps to explain the concept of central sensitization, which occurs when the nociceptors' response to noxious stimuli is greatly amplified. Under normal circumstances, about one-fifth of nociceptors are triggered to regulate everyday pain impulses; when injury or inflammation of tissue occurs, however, the majority of them are activated. Therefore, after injury and even after healing, the amplified state of both nociceptive (i.e., noxious) and non-nociceptive impulses can remain, and continuous pain can sensitize nociceptive-specific neurons in the presence of such input. Another type of neuron, the wide dynamic range (WDR) neuron, responds to both painful and painless stimuli. Because both types of impulses are heightened upon tissue injury, WDR neurons can also be sensitized, thus reducing the individual's threshold for pain, as any stimulus can be treated as a noxious one. This is especially significant with respect to WDR neurons, since their response to noxious stimuli is greater than that of nociceptive-specific neurons (3). This process is one proposed cause of chronic pain in fibromyalgia patients, particularly since the locations designated as tender points and the surrounding areas are more prone to minor, almost insignificant, injury (2).

Neurotransmitters also contribute to the body's (dys)regulation of pain impulses. More specifically, people suffering from fibromyalgia generally have much greater quantities of substance P, a chemical that excites pain responses in the nervous system and further works to sensitize neurons receiving nociceptive information (2),(4). In addition, these individuals tend to have lower amounts of serotonin and norepinephrine, both of which play a partial role in the reduction of pain (3),(5).

Determining the origins of chronic pain is an imprecise process, particularly when injury or other easily noticeable factors are absent. I think the notion of setpoints would be put to good use here. Because our bodies are built to operate at equilibrium, whatever they can do to maintain that equilibrium puts us at ease (4). However, everyone operates at a different equilibrium, and with respect to pain processing in particular, the pain threshold of fibromyalgia sufferers is perhaps even lower than the average person's. Given this reasoning, an individual operating at the fibromyalgia pain equilibrium, versus one at "normal" equilibrium, will have an increased number of nociceptors reporting pain at any given point. This is interesting given the results of a study in which people with the syndrome, upon application of mild pressure to their thumbs, exhibited increased brain activity in twelve areas, whereas control subjects had activity in two (6). Given a specific threshold setpoint for pain, it is rather plausible that the results of this study can be generalized to the wider population of people with fibromyalgia. If our bodies possess a pain setpoint that regulates the minimum number of nociceptors activated at any given point, it follows that given a higher setpoint in a fibromyalgia patient, any noxious stimulus, regardless of severity, would activate a greater number of nociceptors. This would increase the magnitude of the pain signals being sent to the brain, and therefore activate more parts of the brain. As a result, even at rest, there is a heightened state of pain response in comparison to that of an individual without chronic pain.

The same logic may apply to the greater amounts of substance P in the system: if the setpoint for the substance is higher in suffering patients, the internal signals that excite its release will fire more often. Similarly, if the equilibrium amounts of serotonin and norepinephrine are lower, pain impulses will not be inhibited as easily.

A Perceptual Experience
Pain is not only a sensory experience in humans, but a perceptual one as well. Simply put, people feel pain, and this feeling of pain is as important as the existence of the pain itself. Like the sensory impulses, awareness of pain, along with the physical and emotional effects it has on the individual, is contingent upon past experiences with pain, genetic factors and cognitive dispositions. In other words, there are many other factors, both conscious and unconscious, that can affect the intensity and magnitude of the pain impulses and the individual's awareness of them. This idea is captured by the gate-control theory of pain, in which a person's thoughts or emotions at the time of pain processing can either reduce or amplify the perception of that pain (2),(3).

I find it very important to separate the overall pain experience into two components, mainly because of the I-function. Because the I-function is the part of the brain that consciously experiences, it is rather easy to differentiate between pain exclusive of the I-function and pain inclusive of it. Sensory pain being the former, if the same chronic pain were to occur in an individual with a disconnection between the I-function and the affected parts of the body (which remain linked to the rest of the brain), the intensity of the existing pain would not be reduced, but the awareness of that pain and the emotional consequences that frequently accompany it would be. It goes without saying, therefore, that pain perception varies on an individual basis. However, in those suffering from fibromyalgia, because the equilibrium threshold for pain is lowered, the conscious experience of the pain's amplitude is greater, and it is the resulting emotions stemming from this experience that determine whether that conscious state of pain is sustained, lessened or exacerbated. People in general seem to experience a feedback loop of emotion when it comes to pain: the I-function assesses the pain and develops a proportional response, which in turn may or may not benefit the individual and his or her perception of what is occurring. This is even more true in cases of persistent and intense pain.

Observations
I think the concept of chronic pain, and of pain in general, is an interesting one. In gathering information for this topic, I have come to question the true origin of pain. Observing what happens on a neurobiological level, and the impact that it has, raises the question of where one should look to solve the problem. What exactly is pain if it cannot be felt? I think that the I-function plays a major role in identifying any kind of pain, arguably more so with chronic pain; it is, after all, the I-function that recognizes the pain not just as a continuous stream of impulses but as a problem. The gate-control theory is useful here, specifically because it seems to account for those cases in which a person can experience serious injury and not feel any pain. If the I-function is preoccupied, or focused elsewhere, the statement "there is a problem" does not exist in consciousness. I am sure that we all, although maybe not consciously, possess the ability to eliminate the perceptual effects of noxious stimuli, and with respect to chronic pain this capability could possibly be beneficial. However, I do think that our awareness of pain is necessary: the idea "there is a serious problem" needs to be available in consciousness so that we can seek medical attention or otherwise act accordingly before extreme damage is done to our bodies. As a result, I believe we are notified of any pain, chronic or otherwise, as a warning to protect ourselves, our bodies and I-functions included, from existing threats.


References

1) An Overview Of Fibromyalgia , from the Mayo Clinic

2) Understanding Chronic Pain and Fibromyalgia: A Review of Recent Discoveries , from the National Fibromyalgia Association

3)The Neurobiology of Pain , from The National Pain Foundation

4) The Neuroscience and Endocrinology of Fibromyalgia , report from a workshop held at the NIH, from the National Institute of Arthritis and Musculoskeletal and Skin Diseases

5) Fibromyalgia: Not All in Your Head, Newsweek article written on the subject, posted by the National Fibromyalgia Association

6) New Brain Study Finds Fibromyalgia Pain Isn't All in Patients' Heads, from Science Daily


The Effects of Methamphetamine on the Brain
Name: Amy Gao
Date: 2004-04-12 20:38:42
Link to this Comment: 9296

<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

When the word "meth" is mentioned, what image immediately flashes into your mind? Perhaps a picture of individuals huddled together in a dark alley, inhaling substances that give off repugnant odors? Or drug cartels that wage bloody warfare upon each other on the Mexican-American border over control of the drug's supply? These stereotypical impressions may have been accurate years ago, but methamphetamine, whose street names include speed, chalk, ice, crystal, crank, and glass, has long moved beyond being the dominant drug of choice in the San Diego, CA area.(1) Its use has spread to rural and urban areas of the Midwest and the South. Thus, the problem of methamphetamine is no longer confined to a certain geographical area; it has become a nation-wide problem.(3)

Methamphetamine, a powerful synthetically produced stimulant of the central nervous system (CNS), has effects on the human body not unlike those of cocaine. Under federal regulations, it is a Schedule II drug, which means that it has a high potential for abuse with severe liability to cause dependence.(2) The drug, like many illicit substances, may be injected, ingested, snorted, or smoked. Unlike cocaine, however, it has a longer-lasting effect on the human body. In animal models, methamphetamine has been shown to cause the release of high levels of dopamine, a neurotransmitter that stimulates brain cells and in turn enhances mood and body movement. Consumption of this compound also has a neurotoxic effect on the brain cells that store dopamine and serotonin, another neurotransmitter.

Even minute consumption of methamphetamine will induce wakefulness, increased physical activity, decreased appetite, increased respiration, hyperthermia, and euphoria. Effects of methamphetamine on the CNS also include irritability, insomnia, confusion, paranoia, and aggressiveness. Since nerve cells regenerate poorly once damaged, use of this drug, in small or large quantities, can cause lasting damage to the CNS. This was reported in a study by the National Institute on Drug Abuse (NIDA), which also found that individuals with a long history of methamphetamine abuse have reduced levels of dopamine transporters, which are associated with slowed motor skills and weakened memory.(4) Abusers who remained abstinent for at least nine months were found to have recovered from the damage to their dopamine transporters, but their motor skills and memories had not significantly recovered.

"Methamphetamine abuse is a grave problem that can lead to serious health conditions including brain damage, memory loss, psychotic-like behavior, heart damage, hepatitis, and HIV transmission," says Dr. Nora D. Volkow, director of the National Institute on Drug Abuse (NIDA).(5) In another study done by Drs. Ernst and Chang at the Harbor-UCLA Medical Center in Torrance, CA., it was found that methamphetamine users had abnormal chemistry in all parts of their brains. According to Dr. Chang, "In one of the regions, the amount of damage was also related to the history of drug use-those abusers who had the greatest cumulative lifetime methamphetamine use had the strongest indications of cell damage."

More than two decades of research have focused on the effects of methamphetamine upon the body, especially the damage that the compound does to the brain. Even though the substance may bring about extreme pleasure, these "flashes" last only a few minutes. It is well known that users can become addicted very quickly, and that the drug comes to be used with increasing frequency and in increasing doses.(6)

Drug addiction of any kind, methamphetamine addiction included, may be successfully treated. Treatment usually includes counseling, psychotherapy, support groups, and family therapy. Medications prescribed to individuals assist in suppressing the withdrawal syndrome and the craving for the drug, and in blocking the drug's effects upon the body. It has been found that the more treatment given, and the longer its duration, the more likely the addict is to stay abstinent from the source of addiction.(7)

The use of methamphetamine has been shown repeatedly to be associated with lasting damage to the brain. Even though the neurotransmitter systems may partially recover once the individual has abstained from the drug, some of the damage cannot be reversed. With each consumption of the substance, the individual sinks lower into a never-ending spiral of drug abuse. A few moments of pleasure in exchange for permanent damage done to the most important organ in the body (after all, the brain is the only organ that can never be transplanted): is it really worth it?

References

1. National Institute on Drug Abuse

2. Street Drugs, an informational site about drug abuse

3. U.S. Drug Enforcement Administration

4. NIDA, NIDA Research on Withdrawal from Methamphetamine

5. National Institute of Health, NIH News

6. NIDA, NIDA information about methamphetamine

7. NIDA, Information about drug addiction treatment


Tourette's Syndrome and Education
Name: Nicole Woo
Date: 2004-04-12 20:46:13
Link to this Comment: 9297


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Tourette's syndrome, though better known as "the cursing disease," often manifests itself in much less extreme expressions. Though the media has created a sensationalistic portrayal of those individuals with TS who suffer from coprolalia, whose symptoms include excessive swearing and foul language, those who suffer from this symptom are only a small minority of individuals with TS (4). In fact, less than ten percent of people with TS are thought to have coprolalia (3). As those who suffer from Tourette's syndrome are usually diagnosed in childhood, around ages five to eleven, the varying tics and abnormalities which TS encompasses can greatly impact their social development and education. In addition, tics, which often result in alienation, can directly or indirectly cause psychological damage. The educators and parents of today must then address the question of how to teach and socialize their children despite the disorder.


Tourette's syndrome, identified by the French physician Georges Gilles de la Tourette in 1885, is defined generally as a neurological disorder that results in repeated, involuntary body movements (known as tics) and uncontrollable vocal sounds (5). Tourette documented in his research nine individuals who had experienced involuntary movements and compulsive rituals of behavior since childhood. The criteria for diagnosis of Tourette's syndrome, as defined by the Tourette's Syndrome Association (5), are as follows:
1) Both multiple motor and one or more vocal tics are present some time during the illness although not necessarily simultaneously.
2) The occurrence of tics many times a day (usually in bouts) nearly everyday or intermittently throughout a span of more than one year and
3) Periodic changes in the number, frequency, type and location of the tics, and waxing and waning of their severity. Symptoms can sometimes disappear for weeks or months at a time.
4) Onset before the age of 18.


Though the average age at which TS begins is 6-7 years old, and though almost all cases of TS emerge before the age of 18, there are exceptions. The most common tics among those who are diagnosed with TS involve movements of the neck, mouth, and eyes. Tics, particularly in childhood, vary in their severity and frequency, following what is known as a waxing and waning process (2). Often, tics are particularly noticeable for a finite period of time, after which they may subside for weeks or months. As a result of this waxing and waning, parents or educators may either dismiss such actions as a phase or attribute them to physical problems. A child who, for instance, continually sniffs his or her nose may be thought to have a cold or an allergy to something in the environment; when the child is brought to a physician, however, the physician cannot attribute the tics to an illness or allergy. After the tic has waned, parents tend to think that either the phase or the unidentifiable sickness has run its course.


The urge to act out a tic is experienced as irresistible and, similar to the urge to sneeze, eventually must be expressed (3). Both the severity and frequency of tics increase as a result of tension and stress, and decrease during relaxation or when the child is focused on a particularly absorbing task. A continual source of frustration for parents is the fact that their children are sometimes able to remain tic-free, something which would seem to suggest that they have a certain amount of control over the tic. It is not uncommon for children with TS to be "free" of their tics when engrossed in a particular task, such as playing Nintendo. This is generally misinterpreted as the children having more control over their tics than they in fact do (4).


Tourette's syndrome appears to be a genetic, inherited predisposition, although outside factors do appear to have some effect upon the severity of the symptoms (4). Recent research presents a convincing case for the relationship between a parent's own status with TS and that of his or her children. In 2003, researchers compared the onset of TS in children whose parents had TS to its onset in children whose parents did not have the disorder (4). Children who were considered "at-risk," or prone to TS, and "control" children, whose parents did not have TS, were observed between the ages of 3 and 6 years and followed with yearly structured assessments over intervals of 2-5 years. The results of this study, conducted by McMahon, Carter, Fredine, and Pauls (2003), seem to indicate a definite genetic component to the onset of TS:


"Of the 34 at-risk children who were tic-free at baseline, 10 (29%) subsequently developed a tic disorder; 3 of those 10 met criteria for TS. None of the 13 control children developed a tic disorder" (4).


Research also suggests that gender is a factor when considering who is prone to develop TS, as males are affected 3-4 times more often than females. While the transmission of Tourette's syndrome does appear to be genetic, the "basic underlying defect" which causes TS remains unknown (2). A number of researchers speculate that TS results from abnormalities of neurotransmitters, more specifically the activity of dopamine within the basal ganglia. This conclusion has been tentatively drawn from biochemical brain analyses of those diagnosed with TS, and researchers observe that dopamine-blocking agents often suppress tics in patients.


While TS, and the tics that result from it, are serious in and of themselves, often the most serious problems for those with this disorder are not caused by TS itself. Clinical populations of those who suffer from TS also have other behavioral problems, especially obsessive-compulsive behaviors (2). As many as sixty percent of children treated for TS have symptoms associated with attention deficit hyperactivity disorder (ADHD). Other conditions known to occur alongside TS are mood disorders such as depression and bipolar disorder (4). In the previously mentioned study, McMahon, Carter, Fredine, and Pauls (2003) noted that:


"Obsessive-Compulsive Disorder (OCD) or features or OCD emerged in 11 of the at-risk cases, but not in any of the controls, while Attention Deficit Hyperactivity Disorder (ADHD) occurred in 14 at-risk children but not in any of the controls" (4) .


Tourette's syndrome may also result in difficulties in the child's education: learning disabilities that may encompass, but are not limited to, difficulty reading or writing, problems with mathematical computations, or perceptual problems (5). Knowing this, what can teachers do to help and encourage their students with TS?


While there is no cure for TS, and though there are numerous options which attempt to chemically combat its effects, the parents of children who suffer from TS, as well as their teachers, are required to think beyond the scope of chemicals. It is of great importance that TS be diagnosed early on. Because tics can alienate children from their peers, it is just as important for parents to recognize the problem as it is for teachers to nurture understanding in the classroom. Generally speaking, those diagnosed with TS have the same intelligence level as those who are not affected by the disorder; thus, students with TS should be held to the same standards as other students. That being said, additional measures should be taken to lessen stress and anxiety. Untimed exams and/or a separate room for exams help in reducing stress for the student. It is also helpful for the teacher to give directions in stages, as too much information at one time may be overwhelming. As the urge to express a tic may at times become unbearable, teachers should make it clear that the student can leave the class, possibly to go to a "safe place," where the tic can be freely expressed. Perhaps most importantly, teachers and parents alike need to give positive feedback when the child performs well in a social or academic setting. For children whose actions often seem out of place, positive feedback is invaluable. Though there is still hope for a cure for Tourette's syndrome, until then both parents and teachers must realize that Tourette's syndrome, if understood and dealt with lovingly, does not have to be a debilitating disorder.


References




1) Health: Diseases, Database of various illnesses


2) MDVU Library, a good discussion of the causes of TS


3) SCoTENS, discusses special education needs


4) Tourette's Syndrome, a very good website for general as well as more in-depth information on TS


5) Tourette's Syndrome Association, helpful in reference to TS and education


Alcohol and Impulse Control
Name: Elizabeth
Date: 2004-04-13 00:36:04
Link to this Comment: 9311


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

One of the most visible ways alcohol affects an individual is the loss of inhibitions observed in those with blood alcohol levels as low as .01% (1). Most college students have firsthand experience with the behavioral effects of alcohol: friends become more outgoing and appear to lose all inhibitions as they continue to drink; a normally shy individual may be table dancing, or a quiet friend may be the center of attention. This paper will explore the possible causes of this outgoing and sometimes outrageous behavior, as well as why individuals so often continue drinking beyond their limits.

The prefrontal cortex, located at the anterior end of the frontal lobes, is specifically responsible for normal control of impulses. The prefrontal cortex has been linked to impulse control because damage to this region of the brain can lead to loss of inhibitions (2). One particular example of prefrontal cortex damage is the injury suffered by Phineas Gage, who had an iron rod penetrate his brain. He survived the incident but afterward showed poor impulse control that had not been part of his personality before the injury (5).

Individuals who consume alcohol can show impulsive and reckless behavior similar to that of individuals with frontal lobe damage. Since the frontal lobes have been linked to impulse control through studying individuals like Gage, I hypothesize that alcohol may act on these same regions to cause a loss of inhibitions. Additional evidence that alcohol acts on the frontal lobes was discovered when chronic alcoholism was linked to structural and neurophysiologic abnormalities observable on functional magnetic resonance imaging scans (8): ethanol must be working on the frontal lobes in order to inflict this damage over time. Further study of ethanol's effects on the frontal lobes has pointed to alcohol's specific interactions with two neurotransmitters.

Neurotransmitters are released into the synaptic cleft between neurons and can cause an excitatory or inhibitory response. An excitatory response is produced when a neurotransmitter from the pre-synaptic neuron depolarizes the post-synaptic neuron, making it more likely to fire and release its own neurotransmitter (3). An inhibitory response is produced when the pre-synaptic neurotransmitter hyperpolarizes the post-synaptic neuron, inhibiting the release of its neurotransmitter (3).

Two neurotransmitters, gamma-aminobutyric acid (GABA) and dopamine, are responsible for the loss of impulse control in those who consume alcohol. Dopamine causes an excitatory response at dopamine receptors in the frontal lobes (7). Alcohol increases the amount of dopamine acting on receptors and enhances the normal feeling of pleasure associated with the dopamine system (7). Alcohol may function like cigarette smoke to inhibit the action of the enzyme monoamine oxidase, which is responsible for breaking down dopamine in the synaptic cleft (7). Since dopamine is not broken down as efficiently when ethanol is present, it can act on the post-synaptic neuron for a longer period of time. The feeling of pleasure is increased, and the individual will want to keep drinking to maintain the sensation. The response of ordering another drink when one is already visibly intoxicated can thus be explained by the pleasurable effect that an increased alcohol concentration has on the brain.

Alcohol also enhances the effects of the neurotransmitter GABA on GABA receptors in the prefrontal cortex (4). GABA inhibits post-synaptic neurons from releasing other neurotransmitters. Ethanol co-binds with GABA at GABA receptors on chloride ion channels (6), causing the prolonged opening of those channels and a greater influx of chloride ions into the post-synaptic cell. The excess chloride hyperpolarizes the post-synaptic neuron so that it cannot conduct an action potential and initiate a response to a stimulus (6). Since the post-synaptic neuron cannot release a signal, the ability of the neurons in the frontal lobes to inhibit socially unacceptable behavior is reduced; decision-making is also impaired, and the impulsive, uncontrolled behavior of intoxicated individuals results. Dr. Richard Olsen conducted research on specific GABA receptors. GABA receptors with beta-3-delta subunits remain open for an extended period of time when exposed to low levels of alcohol (1); this particular subunit probably has a higher affinity for ethanol binding. The receptors Dr. Olsen studied respond to much lower alcohol levels than GABA receptors with gamma-2 subunits, so nervous system control over behavior can be altered after as little as one drink (1). The varying binding-site shapes of GABA receptors may explain the progressive loss of control that alcohol causes: some receptors respond to lower levels of ethanol, and as alcohol concentrations increase, more GABA receptors are affected. The loss of inhibitions results because the post-synaptic neurons are progressively less able to conduct an action potential and elicit a response.
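The chloride-mediated inhibition described above can be sketched as a toy calculation. The millivolt values below are hypothetical round numbers in the general range of textbook resting and threshold potentials, chosen only to illustrate the logic: extra chloride influx moves the membrane potential further from the firing threshold, so an excitatory input that would normally trigger an action potential no longer does.

```python
# Toy sketch of GABA-mediated inhibition under ethanol. The millivolt
# figures are illustrative assumptions, not measurements.

RESTING_POTENTIAL_MV = -70.0   # typical textbook resting potential
FIRING_THRESHOLD_MV = -55.0    # typical textbook firing threshold

def fires(excitatory_input_mv, chloride_hyperpolarization_mv=0.0):
    """Return True if the neuron depolarizes past its firing threshold."""
    membrane = (RESTING_POTENTIAL_MV
                - chloride_hyperpolarization_mv  # Cl- influx pushes potential down
                + excitatory_input_mv)           # excitatory input pushes it up
    return membrane >= FIRING_THRESHOLD_MV

# Without ethanol, a 16 mV excitatory input crosses threshold and fires.
print(fires(16.0))        # prints: True
# With ethanol prolonging chloride-channel opening (modeled here as an
# extra 8 mV of hyperpolarization), the same input fails to fire.
print(fires(16.0, 8.0))   # prints: False
```

The sketch captures why the behavior is progressive: the larger the chloride-driven hyperpolarization term, the stronger the excitatory input must be before the post-synaptic neuron can conduct an action potential at all.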

The effect of alcohol on the GABA and dopamine systems causes the loss of control that can be observed when individuals drink. Through excitatory and inhibitory synapses, the actions of these neurotransmitters alter the behavior of an intoxicated individual. Further study of the specific binding of ethanol to receptors may lead to treatments for intoxicated individuals. Studying the effects of alcohol will also lead to a greater understanding of the role the GABA and dopamine neurotransmitter systems play in altering observable human behavior.

References

1)Even a Little Alcohol Affects the Brain, This online article by Steven Reinberg summarizes the research done by Dr. Richard Olsen on various GABA receptors.

2)Executive Functions and Frontal Cortex, This is a website containing information on the function of the frontal lobes and specifically the prefrontal cortex.

3)Synapses, This article provides good background information on the structure and function of neurons as well as a description of excitatory and inhibitory responses.

4)Neural Activity and GABA/Glutamate in Prefrontal Cortex: A Combined fMRI/MRS-Study, This site states that GABA does function in the prefrontal cortex and proposes to measure specific amounts of GABA in individuals using fMRI technology.

5)The Story of Phineas Gage, This is the story of Phineas Gage, including his background and specifics about his injury.

6)How Drugs Affect Neurotransmitters, This site provides text and a diagram about the effects of GABA on a post-synaptic neuron.

7) Tobacco, Alcohol and Dopamine, This site discusses the many impacts that alcohol and tobacco have on the brain, including their effects on the dopamine system.

8) FRONTAL LOBE CHANGES IN ALCOHOLISM: A REVIEW OF THE LITERATURE, This site presents research data from those who studied the effects of alcoholism on the frontal lobes.


Would you like fries with that?
Name: Erin Okaza
Date: 2004-04-13 01:25:40
Link to this Comment: 9314


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Exhausted, you return home from work with a bag of McDonald's and flip on the television. Halfway through your juicy Big Mac, crispy fries and your 44oz. Coke, a public service announcement for the American Heart Association comes on at the tail end of the commercial break to tell you how you are currently sucking down enough saturated fat to harden the arteries of an elephant.

Whether or not you realize it, you are probably one of the millions of Americans bombarded by the anti-cholesterol revolution. Most people are aware of the well-publicized relation between high cholesterol and the risk it poses to the heart. However, the vast majority of individuals are unaware of cholesterol's surprising connection to behavior. This paper will investigate this rather interesting connection by first laying out the platform of the current cholesterol movement. Then, it will look at studies supporting cholesterol's impact on behavior. Next, it will examine how these two viewpoints combine to provide a way of looking at "set-points" and the nervous system. Finally, it will consider why most people wouldn't anticipate this connection and the implications such a discovery might have for understanding ourselves.

It is well known that too much cholesterol in our blood is not a good thing – but is that the whole picture? For most people, the scare of coronary artery disease and atherosclerosis – in which the insides of the arteries become hard and narrow due to (cholesterol) plaque buildup – is enough to make anyone shudder at any mention of cholesterol (1),(2). However, that does not mean that all cholesterol is bad. Lipoproteins carry cholesterol through the bloodstream in two forms: LDL (low density lipoprotein), which causes buildup in the arteries, and HDL (high density lipoprotein), which carries cholesterol to the liver. Higher levels of LDL or "bad cholesterol" increase your chance of getting heart disease, whereas higher levels of HDL or "good cholesterol" do the opposite (2). There are healthy levels of both cholesterols in our bodies; however, there are no symptoms of high cholesterol, so its only indicator is a blood test (1). In May 2001, the National Cholesterol Education Program (NCEP) altered the 1993 cholesterol guidelines (1),(2) by lowering the range of acceptable "normal" cholesterol levels. As a result, 13 million more Americans were advised to make dietary changes to lower cholesterol (3). The good news is that this measure is heightening people's awareness and generally increasing overall health. The bad side, however, is that it indirectly projects the mentality that "lower cholesterol is better". With the media and campaigns pushing an "a.s.a.p." lowering of cholesterol, are there consequences? Possibly ones we are not aware of?

We know that physiological deviation from what is considered "normal" can cause drastic results – high levels of bad cholesterol stymie the operation of our heart and cardiovascular system (1). But now let's challenge the completeness of this picture and ask: what about the other way around? How else does deviation from acceptable levels of cholesterol affect our body? Is there a consequence of having cholesterol levels that are too low?

While the negative effects of cholesterol keep us maintaining low-fat diets for the benefit of our physical health, several studies raise suspicions that taking our obsession too far might come at a sacrifice to our mental health. Prompted by a Yale study proposing a cholesterol-serotonin hypothesis of aggression, Dutch researchers revealed consequences of low cholesterol by providing evidence linking low cholesterol levels to increased depression in men (5). Subsequent studies support a connection between low/lowered cholesterol levels and adverse behavioral outcomes (aggressive behavior and depression) (4),(7). It is believed that low cholesterol adversely affects the metabolism and activity of the brain neurotransmitter serotonin, which is known to be involved in the regulation of mood. Other explanations target a certain type of fatty acid, omega-3, found in large quantities in the brain (6). It is speculated that low levels of omega-3 could impact behavior through mechanisms still unknown. The focus of this information is not to undermine current wisdom about, and treatment of, high cholesterol in heart disease, but rather to highlight the possible connection between mental health consequences and low cholesterol. The other significant consideration of such findings is how cholesterol might help us to better understand alterations in mood and behavior. More generally, these findings underline the notion that the nervous system is more interconnected with, and impacted by, known physiological mechanisms than we were previously aware.

It is established that too much cholesterol is not good for you; however, it is incorrect to assume that the lower your cholesterol, the healthier you are. When we put the two pieces together, evidence from both sides of the cholesterol argument suggests that the body operates at maximum efficiency at an optimal level – a certain cholesterol set-point (8). Alteration of cholesterol levels below the set-point disturbs the consistency of serotonin metabolism and other unknown mechanisms that might act as a regulatory loop for behavior. An interruption of this process results in the previously noted behavioral outcomes. Cholesterol is something we cannot sense; there are no symptoms of high or low cholesterol. We can't consciously manage the level of cholesterol in our body – implying that such regulation is not happening in our I-function. As a result, we have no direct control over our arteries clogging up with plaque or the metabolism rate of our serotonin, and the "other part" of our nervous system must account for these mechanisms. In effect, we can extend this notion of the "other part" of our nervous system (the I-functionless nervous system) to account for behavioral phenomena. We can use such reasoning to explain how cholesterol plays a role in behavioral outcomes, such as violence and depression, by way of set-point irregularity without the I-function.
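The set-point idea in the paragraph above can be made concrete with a minimal negative-feedback sketch. This is a hypothetical illustration, not a model of lipid physiology: the regulate function, the gain, and the clamp standing in for aggressive external lowering are all assumptions invented for the example.

```python
# Minimal sketch of a set-point with negative feedback: each step, the
# regulated quantity is nudged back toward its set-point in proportion
# to the current deviation. All numbers are arbitrary illustrations.

def regulate(level, set_point, gain=0.3, steps=50, clamp=None):
    """Run a negative-feedback loop; `clamp` optionally caps the level,
    standing in for an external intervention that holds it down."""
    for _ in range(steps):
        error = set_point - level      # deviation from the set-point
        level += gain * error          # correction proportional to the error
        if clamp is not None:
            level = min(level, clamp)  # intervention overrides the loop
    return level

SET_POINT = 200.0  # hypothetical optimal cholesterol level (arbitrary units)

# Left alone, the loop settles back near the set-point.
normal = regulate(level=240.0, set_point=SET_POINT)
# Forced below the set-point, the loop cannot restore it: a persistent
# error remains, analogous to the disturbed regulation described above.
forced_low = regulate(level=240.0, set_point=SET_POINT, clamp=150.0)
```

The clamped run never returns to the set-point, which is the kind of standing disturbance the paragraph suggests might feed into serotonin metabolism and behavior.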

Why did it not seem that these two sides could be put together to come to the above conclusion? Perhaps it has something to do with the fact that it is hard to bring to consciousness that which one is unaware of. For example, medical professionals might think they have a full explanation of the impact a certain molecule has on the body (in this case cholesterol), but not be aware of other existing pathways, loops or interactions. Cholesterol, studied from a physiological standpoint, offered a very reasonable explanation for one particular set of medical outcomes. However, when approached from the standpoint of the nervous system, a new, previously unknown explanation emerges, offering further information about the linkage between cholesterol and behavioral variation. In turn, we might question the true extent of our knowledge and ask if what we know really stops there.

Though this paper investigated the lesser-known connection between cholesterol and behavior by using set-point variation and aspects of the nervous system, it raises concern over current knowledge of our physiological processes, with emphasis on the completeness of what we think we know for sure. The nervous system offers an additional explanation for the connection between our bodies and behavior. If such connections were previously overlooked due to a lack of awareness about the existence of mechanisms between the nervous system and the molecular workings of our bodies, how might we become "aware" of mechanisms in our body that do not go through the I-function, but nonetheless exist and impact mental and physical outcomes? Another question arising from this discussion of cholesterol's impact on behavior through set-point alteration is whether feedback loops that regulate set-points can be permanently altered without the possibility of long-term negative consequences.

In this discussion, cholesterol is more than a culprit of heart attacks. As it turns out, we can use information from both sides of the cholesterol debate to shed unique light on how cholesterol can influence behavior through set-point alteration without our being conscious of what is happening. Next time, don't be so quick to replace your #4 extra value meal with a soy burger, baked potato and a jug of OJ. Don't just do it...think for a second. Odds are, cholesterol goes to both the heart and the head.


References

1)Medicine Net, Site provides good general information about cholesterol, especially LDL and HDL

2)National Heart, Lung and Blood Institute, good information about anything having to do with the heart, heart conditions, cholesterol and heart disease

3)Dr. Mercola web page, a doctor's commentary about the change in cholesterol guidelines, includes JAMA citations

4)Skali homepage, a good compilation of articles documenting the bad effects of having low cholesterol, with good citations to articles

5)The Brain, This site showcases an article reported by Reuters News about depression linked to low cholesterol, summarizing a study by Dutch researchers in Psychosomatic Medicine.

6)New Century homepage, This site displays an article about the connection between mood and food, with good references.

7)Science Daily, article about the price of low cholesterol among women, from the Center for the Advancement of Health

8)Dr. Mercola web page, a doctor's commentary about the link between low cholesterol, aggressive behavior and depression, includes Journal of Behavioral Medicine citations


Psychological Components of Chronic Pain
Name: Natalie Me
Date: 2004-04-13 01:53:03
Link to this Comment: 9315


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

My sister suffers from a chronic autoimmune disease associated with chronic pain and fatigue. Over the years I have witnessed her struggle with the disease and observed her symptoms fluctuate with her mood. Am I suggesting she is faking her symptoms, forgetting them when happy or exaggerating them when down? Certainly not; I have merely noticed a common trend: mental state affects physical state. What is this unique mind-body connection? And how can one's mental state affect one's physical perceptions?

Upon my online investigation of chronic pain, I discovered the constant distinction being drawn between ACUTE and CHRONIC pain. Acute pain is typically natural and 'healthy' pain. Pain normally serves a very useful function: to warn us of danger and to protect our bodies. "Without pain we would have no way of knowing that something was wrong" and would "be unable to take action to correct the problem or situation that is causing the pain" (1). Acute pain is short-term and involves a physically observable or physiologically provable source. Chronic pain, however, is persistent or recurrent, lasting for "at least three months and most probably for several years" (2). Chronic pain is not considered a healthy bodily response, especially when no stimulus is apparent.

The problem is that this chronic pain has no specific etiology. There is no diagnostic test that can prove a person suffers from chronic pain (though studies comparing the brain states of patients with heightened pain sensitivity to touch against patients with 'normal' sensitivity, given equally painful stimuli, have found that the two experiences produce similar brain states). There is also no proof of the exact psychological processes involved in the experience and management of chronic pain. Does chronic pain cause a bad mood, depression, anger, and anxiety, or do those states cause chronic pain? It seems that no one really knows; "the exact medical causes of the chronic pain condition are unknown or poorly understood" (2). Research seems to suggest that the relationship runs both ways; Thomas A. reports that "pain and psychological illness have reciprocal psychological and behavioral effects," implicating a co-morbidity of depression and pain (3).

Again though, there does not seem to be a discernable cause for this chronic pain, nor its association with depression. Perhaps this is why in all my reading chronic pain is constantly being defended. Consider the following examples:


"Emotional stress and negative thinking can actually increase the intensity of the pain, but the presence of psychological factors does not mean that the pain is imaginary" (1).

"We've all heard it before: 'It's in your head'" (4).


"Sometimes those with chronic pain are blamed for their condition or made to feel like they were making it all up..." (2).

Where does the need come from to defend chronic pain against accusations that it is 'imaginary,' in one's head, or just a lie? Why is it necessary to declare chronic pain real? This question, it seems, is the real one. Dr. Nortin M. Hadler reports that the "escalating discordance between feeling miserable and possessing no demonstrable primary pathophysiology" is a byproduct of a brand of medical science and the real problem with treating chronic pain (5). The western biomedical approach, with its focus on diagnosis and labeling as well as its symptomatic definition of health, has produced a pathological focus in healing that mal-socializes patients and doctors into defining disease in a detrimental way.

Western medicine is based on a specific duality that has pervaded culture since Descartes first separated mind and body. By treating the mind and body as separate, one is forced into having either a physical or mental ailment. "Reductionistic clinical thinking that has enslaved western physicians for generations" induces physicians to diagnose and label a disease along those specific and separate lines – mind or body (1). Patients begin to feel as though their disease must be one or the other, and for chronic pain sufferers, without a specific etiology to blame, western medicine turns to the other source: the mind.

I realize I am quite a distance from where I started. I began wanting to know how mood might affect pain or disease in general and have ended with a critique of our medical culture. The problems with our conceptualization of disease are numerous, and I could spend volumes discussing the issue. Dr. Bennett argues that we should avoid using labels that, once culturally defined, stigmatize the patient. However, this process is engrained in other spheres of our life as well; certainly it is something we cannot avoid without a great deal of social change. "To understand the language of pain, we must learn to listen to how the pain echoes and reverberates between the physical, psychological, and social dimensions of the human condition," and this is not easy for patients and doctors alike (1). As a sociologist currently looking at social movements, I can't help but wonder what sort of collective behavior would be needed to change the way we define health, science and ourselves as both social and biological agents of action.


References


Works Cited:

1. http://www.addiction-free.com/pain_management_&_addiction_psycho_components_of_pain.htm

2. http://www.aboutarachnoiditis.org/website_captures/chronicpainhandbook/

3. http://rockhawk.com/chronic_pain_and_depression.htm

4. http://webhome.idirect.com/~readon/pain.html

5. http://www.rheuma21st.com/archives/cutting_edge_fibromyalgia.html

Works Consulted:

1. http://www.mindpub.com/art203.htm

2. http://www.pearsonassessments.com/resources/painprofile.htm




Monkey See, Monkey Do?
Name: Lindsey Do
Date: 2004-04-13 02:13:41
Link to this Comment: 9317


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In the words of the 18th-century poet Edward Young, "we were all born originals, why is it that so many die copies?" (1). Indeed, if the recent discovery of "mirror neurons" in monkeys suggests a similar pre-existing brain structure for imitative behavior in humans, the question becomes: what does it mean to emulate others to the extent that we adopt observable behavior as our own? How can we define imitation as a conscious or unconscious aspect of human behavior from both a social and a neurological standpoint? Can human behavior ever resemble "true imitation"?

"Mirror neurons" are considered to be one of the most exciting and controversial new developments thought to have potentially widespread implications across the natural and social science fields. Mirror neurons were discovered in the Macaque monkey's ventral pre-motor cortex, which controls hand and mouth movements. Neurons in this area, labeled as F5, were found to fire when the monkey observed an action performed by another (perhaps conspecific) creature (seeing another monkey or human grasping a nut) and when the monkey performed the same or similar action (grasping a nut) (2). The implications of this discovery become even more meaningful because the F5 area is homologous to Broca's region, which is thought to be involved in speech control as well as pre-linguistic analysis of other's behavior in humans (2).

Mirror neurons imply that we may not have to physically execute an action in order to imitate; rather, our motor system becomes active (as observed in neural activity) as if we were executing the very action we are observing (on an unconscious level). Clearly, we should not be so quick to translate the neural activity found in the Macaques to our own neural behavior. Humans have a higher consciousness that implies we have the ability to imagine ourselves acting, or to internally simulate a vision of this action. However, envisioning ourselves imitating others does not necessarily translate into the actual imitation of these actions. Perhaps this suggests that, unlike the Macaques, we can consciously choose to imitate, if and when we do. How do we then distinguish between actions/observations that become internally integrated (conscious processing akin to learning) and echoed (unconscious processing) in imitation?

First, let us define what we mean by imitation. Imitation is defined as "to be, become, or make oneself like; to assume the aspect or semblance of; to simulate: intentionally or consciously; unintentionally or unconsciously" (3). Another description from the psychologist Thorndike, who was possibly the first to provide a clear definition of imitation within a social context, is given as "learning to do an act from seeing it done" (4). In other words, he suggests that we learn new behaviors by copying others.

These definitions suggest that imitation is not merely an unconscious, automatic reflex as suggested by mirror neurons, but a mechanism that involves a certain amount of integration and perception similar to learning. Gallese theorizes that mirror neurons allow us to implicitly perceive an action as equivalent to internally simulating it (2); however, in humans, imitation seems to be inextricably linked to our higher consciousness.

For some, imitation involves an exact copy of behavior that is most commonly found in animals (e.g. the Macaque monkey, bird song; see (5) for examples). If mirror neurons constitute what Vittorio Gallese proposes: a "neural mechanism [that] enables implicit action understanding," then we have the capacity to represent and recreate the mental states of others (Theory of Mind) as part of our behavioral imitation. This notion implies that mirror neurons might also provide us with the ability to distinguish our self from others (6), which is relevant in a social context. If we follow this idea, a dysfunction of mirror neurons would not only interfere with our imitative abilities, but also with our awareness of our relationship to others around us on an observable level.

Patients experiencing anosognosia deny not only their own paralysis but also the paralysis of others (7). This case suggests that an individual's lack of awareness of his or her own physical capability is intrinsically connected to a similar physical ability observed in another. Echopraxia is another example of a possible impairment of mirror neurons and the imitative reflex. This disorder is described as the "impulsive tendency to imitate other's movements. Imitation is performed immediately with the speed of a reflex action" (8). In this case, imitation is involuntary and spontaneous, suggesting that it is a behavior autonomous from the I-function. Unlike those with echopraxia, however, individuals with imitation behavior do not copy the movements of the acting individual, but rather perform an action identical to the observed one: "It is the goal rather than the movement" that is imitated in this pathology (8). Although these actions cannot be simply reduced to a defect in mirror neurons, there is a certain imitative aspect inherent in these behaviors that suggests an unconscious connection between mirror neurons and how we act.

Laughing and yawning are also given as examples of other imitative actions, although they are thought to be "contagious" behaviors resulting from a stimulus. We can suppress these actions voluntarily if we choose to, but we can't deny that the observation of these actions will often generate a similar response in others. Regardless, laughter and yawning are not examples of "true imitation" because they are innate behaviors, not actions that we have learned to execute by observing others.

Clearly, imitation involves a certain degree of intentionality and goal-orientation that is inherent to our I-function. On the unconscious level of our "copycat" behavior, mirror neurons are said to function as a recognition and representation of specific actions/behaviors between others and ourselves. In order to get it "less wrong" then, let me suggest a hypothetical situation: if we isolate ourselves in a vacuum, it is likely that we lose the ability to regulate our mind and body (without input to inhibit the action potentials generated by the brain). Therefore, taking a different stance, it seems logical to me that imitation might, on a larger social scale, act as a regulatory mechanism. Mimicry enables individuals to know that what they are doing is "ok" because they are acting like and along with others, creating a bond (9).

Perhaps mirror neurons evolved in humans in order to inhibit corollary discharges, serving as a reference point for "correct behavior" in a negative feedback loop/homeostasis within a social context. Although my hypothesis may be reductionist, it might be helpful to think of mirror neurons as homeostatic because observation (input) seems to be directly related to performance (output) in neural activity.
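The homeostasis analogy above (observation as input, performance as output, with observed behavior acting as a reference point) can be sketched as a simple error-correcting loop. Everything here, including the imitate function and its rate constant, is a hypothetical illustration of this admittedly reductionist hypothesis, not an established model of mirror-neuron function.

```python
# Sketch of imitation as a negative feedback loop: the mismatch between
# observed behavior (the reference input) and one's own performance (the
# output) acts as an error signal that is gradually corrected away.
# The numeric "actions" are placeholders, not real behavioral measures.

def imitate(own_action, observed_action, rate=0.25, steps=20):
    """Nudge one's own action toward an observed reference action."""
    for _ in range(steps):
        error = observed_action - own_action  # observed behavior as reference
        own_action += rate * error            # correct performance toward it
    return own_action

# Starting far from the observed behavior, the copier converges on it.
result = imitate(own_action=0.0, observed_action=10.0)
```

The loop converges on the observed behavior, mirroring the idea that mimicry keeps an individual's conduct within the range of what others around them are doing.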

Imitation occurs at all ages. It might be interesting to research imitative behavior as age-specific: if children imitate more than adults, this might provide more evidence that mimicry can act on an unconscious level, since children are not endowed with the same cognitive processing as adults. We often see young children imitating their parents, integrating innate behaviors with their observations (walking, talking, etc.). But we also see adults watching others and adopting similar behaviors. For example, I often see people who do not know how to properly lift weights watch and then copy others (although not always correctly). Imitation evidently serves as a quick way to learn a new behavior that might serve us well—by watching others first, we can assess how these actions will be accepted and whether they "work" or not. Whether we choose to imitate, and whether we actually correctly assimilate these behaviors in our understanding and interpretation of them, may be questioned.

Returning to my original, and perhaps unanswerable, question: can humans even truly "imitate" if we are subject to so many internal and social forces? Imitation allows us to both consciously and unconsciously ape others' behavior, although it seems to occur more commonly on a level of self-awareness. Mirror neurons may suggest a neurological explanation for mimicry, but until we can pinpoint their exact function in learning/adopting behaviors observed in others, whether consciously or unconsciously, we must be careful about drawing conclusions about their capabilities. If indeed mirror neurons follow an involuntary "monkey see, monkey do" role, then we must alter our concept of brain=behavior, in that our behavior is more a reflection of our external perceptions of the world and our relation to those around us.

References

1)Quotes

2)What Mirror Neurons Can and Cannot Do, a different take on Mirror Neurons

3)Online Version of the Oxford Classical Dictionary, definition of Imitation

4)Imitation and the Definition of a Meme, Susan Blackmore

5)Animal Imitation

6)The Roots of Empathy: The Shared Manifold Hypothesis and the Neural Basis of Intersubjectivity, Vittorio Gallese

7)Ramachandran, Bio 202 lecture notes link

8)Shared Manifold Hypothesis from Mirror Neurons to Empathy, Gallese

9)Nature Magazine, an interesting link on "copycatting"


Fact--or Fantasy? The Truth Behind Munchausen Synd
Name: Shadia B
Date: 2004-04-13 02:15:32
Link to this Comment: 9318


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

You don your white overcoat and grab a stethoscope, expecting to conduct a routine examination. Your young, female patient enumerates a variety of complaints, including painful swelling over her right breast. You notice multiple scars on her torso, question her about her medical history, and learn that "she has a history of similar recurrent swellings over the abdominal wall, which needed repeated surgical drainage on about 20 occasions". Her problem had started at the age of 17, when she was first diagnosed with immune deficiency. Soon after medication was administered, she developed symptoms suggestive of deep vein thrombosis in one leg. Since medication was given under supervision, she was thought to have developed a resistance to the drug. "She soon complained of bilateral painful swellings associated with weakness of the lower limbs and consistent with bilateral femoral nerve palsy and hematoma. Surgical evacuation was rapidly carried out but recurrent abscesses remained a problem." The list continues, each item more spectacular than the last. And yet, the cause of her illness remains undiagnosed. Baffled and confused, you consult your fellow doctors and order a battery of tests, determined to detect the cause. Would it ever occur to you that your patient is really a pretender? In his case report (2), summarized above, Aamer Aleem, a doctor in the UK, illustrates a typical scenario a Munchausen patient presents.

Munchausen Syndrome is an extremely disturbing medical condition-often going undetected for decades. Not to be confused with hypochondriacs, who experience physical symptoms of illnesses and visit doctors truly believing they are ill (4), those with Munchausen's make a habit of "capitalizing on, exploiting, exaggerating or feigning illness, injury, or personal misfortune" (1) in order to gain the attention they feel cannot be gained by any other means. Named after a German soldier renowned for exaggerated tales, the disease is deemed a factitious disorder and is predominant in females (71% of cases) (5). The disorder is relatively rare and incredibly difficult to treat. Awareness and early detection are crucial. Those afflicted with Munchausen Syndrome rely on the fact that a doctor will trust the history and symptoms reported, in order to fabricate an intricate web of deception.

Aleem's report goes on to illustrate the difficulties faced by the medical community in properly diagnosing and treating the disorder. Although routine questioning reveals that her mother had suffered from breast cancer and that no near relatives were involved in the medical field, "suspicion is raised regarding a possible factitious nature of her problem because of an inability to explain the cause of her abscesses and the growth of multiple organisms from the lesions" (2). A high level of suspicion is required to detect Munchausen, and doctors need to be on the lookout for one of these essential features: "pathologic lying (pseudologia fantastica), peregrination, and recurrent, feigned or simulated illness" (2). Supporting features include borderline and/or antisocial personality traits, deprivation in childhood, knowledge of or experience in the medical field, multiple hospitalizations, and multiple scars coupled with an unusual or dramatic presentation (2).

Ironically, those with Munchausen Syndrome really are sick, yet they rarely seek the right kind of medical advice. When confronted, they vehemently deny any claims and ingenuity is required to catch them. In this student's case, a psychiatric consultation was conducted (without giving the patient any hints about the suspected factitious disorder) during which she was judged very defensive and conflicted when responding. Soon after, when the patient was not in bed, the nurses found a syringe full of fecal material along with needles-the source behind the mysterious swelling and cultures. When the patient returned, she was informed and became very hostile. Finally, against medical advice, she left the hospital and was lost to follow-up(2).

In researching this intriguing disease, I was struck by the realization that Munchausen's highlights many issues of neurobiological importance. It is very much an extension of the mind-body riddle, for within the seemingly physical nature of the victims' symptoms there lies a neurological cause. What could any individual possibly gain by harming themselves? Research suggests that women who have led emotionally deprived childhoods, and who may themselves have been physically abused or even victims of Munchausen's, are the most likely to be afflicted. Presenting oneself as a false victim is very much a Munchausen trait. Often suffering from "narcissistic tendencies, low self-esteem, and a fragile ego" (1), sufferers crave the attention and sympathy that a grave illness or a seriously ill child immediately elicits. Sufferers also relish the status of power and control that accompanies being the only person who "knows" while an intellectual medical community remains baffled. The real question remains-do they knowingly deceive, or are they themselves deceived?

A related disease, Munchausen Syndrome by Proxy (MSBP), is illuminating because in this case the victim is not the MSBP sufferer. In fact, in this more dangerous variation of the disease, it is usually a very young child who will be targeted. Often the MSBP sufferer will assume a caregiver role, working as a nurse, perhaps in a ward for sick children, in a home for the elderly, or with severely handicapped people-"the common thread is a victim who is vulnerable, whose verbal skills or emotional state or mental condition prevents them from explaining what the MSBP person is doing to them and whose hold on life may already be precarious" (1). It has been estimated that one in five cot deaths (SIDS) is really a murder resulting from a mother with MSBP (1). Sufferers become adept at inflicting harm upon others in a manner that leaves little or no forensic evidence. Methods employed include restricting breathing by "placing a hand over the mouth, lying on top of the baby, smothering, placing plastic or cling film over the person's face, withholding food and medicine, over-medicating or medicating when unnecessary, or delaying calling for medical assistance when an emergency arises". Then, "when the victim reacts with a fit, breathing difficulties, collapse, etc the MSBP sufferer can-after ensuring the condition is sufficiently life-threatening-rush to the rescue and later be hailed as a hero for being such a wonderful, kind, caring, compassionate person for having saved this person's life" (1). Sadly, MSBP is rarely suspected because very often the abuser appears to be an ideal caretaker-attentive, knowledgeable about their child's condition, and extremely interested in the medical field.

In closing, the calculating mentality needed to perpetrate a crime on a child in order to elicit sympathy suggests that the perpetrator is "conscious" of their actions. It is clear that premeditation is needed to research medical data and falsify symptoms, all the while outwardly placing oneself in a sorrowful situation. However, many symptoms reveal a psychiatric origin. Those with Munchausen's illustrate how very fine the distinction between pleasure and pain really is. Often they exhibit sadistic/masochistic behaviors, exploiting their victim's pain for their own pleasure. Must the I-function be involved, or is this behavior pathological and uncontrollable? These are questions that remain to be grappled with by the medical and legal communities. The debate over deliberate child abuse vs. psychological disorder remains unresolved. Coupled with the need for early detection and appropriate treatment, these issues remain a priority.

References

1) Bully Online, a detailed report on the two syndromes.

2) Case Report: Munchausen Syndrome, a very comprehensive case report and review of the literature surrounding Munchausen's.

3) The Merck Manual site on Psychiatry in Medicine.

4) Page Wise, gives an overview of the syndrome.

5) WebMD, article by Daniel DeNoon entitled "Some Kids Cry Out in the Language of Illness".

6) Village Voice, Cybersickness: article on Munchausen's and the Internet.

7) Feldman, Marc, MD. Munchausen by Internet, Southern Medical Journal. Vol. 93, No. 7, July 2000.


Genetic basis for Violence
Name: amar patel
Date: 2004-04-13 02:56:43
Link to this Comment: 9319


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Aside from the strict science and reporting in gene-based cases, one of the major points stressed in all studies of genes for behavior is the minimal effect they have compared to environmental factors. Scientists agree that behavior stems from the nervous system, but the real question has been the degree to which the nurturing environment plays a role in initiating certain behaviors. The easiest comparison is between genes for violence and genes for alcoholism. Even if alcoholism has a genetic component, the genes cannot take effect until someone is exposed to alcohol. In the same way, violence must be initiated by a case of abuse before the cycle can be perpetuated. No matter what chemicals or genes are found to be related to violence, all cases start with the impact of a person's surroundings.

Although violence was traditionally thought to be in the realm of sociology or psychology, we are now finding increasing evidence of its biological initiation. Many recent studies support the notion of a genetic "deficiency" causing aggressive behavior. These genes code for certain enzymes that are responsible for the metabolism or synthesis of neurotransmitters. This genetic analysis will show that the genes coding for the monoamine oxidase A (MAOA) and tryptophan hydroxylase (TPH) enzymes (catalyzing proteins) have been linked to specific cases of violent behavior.

Each of these enzymes works on neurotransmitters to control activity in the brain. A neurotransmitter is essentially a chemical that carries a signal across a synapse between neurons. The primary neurotransmitters associated with the onset of aggression or violence are serotonin (5-HT), norepinephrine, and dopamine, three of the most common signaling chemicals found in the brain. Serotonin is involved in mood, appetite, sexual activity, homeostasis, and sleep. Norepinephrine is involved in stress responses and mood, and is also a transmitter of the sympathetic nervous system. (2) Dopamine helps regulate emotion, the "pleasure centers" of the brain, and motivation. (3)

In order to better comprehend the function of neurotransmitters, one must understand the way in which neurons communicate across synapses. A resting nerve cell holds a negative charge relative to its outside environment. Voltage-gated channels in the membrane allow positive sodium ions to flow into the cell; each channel opens when the membrane around it has been depolarized by its neighbors. This "domino effect" of letting positive ions into the cell creates what is known as an action potential. When the action potential reaches the end of the axon, it triggers an influx of calcium ions and the movement of synaptic vesicles, which contain the neurotransmitter chemicals. When these vesicles fuse with the axon terminal's membrane, they release their neurotransmitters into the synaptic cleft, where the neurotransmitters bind to receptors on the dendrites of the next cell.

The MAOA enzyme operates on the molecules left over in the axon. Monoamine oxidase A is an enzyme used to metabolize the neurotransmitters serotonin, norepinephrine, and dopamine; its essential purpose is to limit the continued activity of these neurotransmitters. (4) Any leftover neurotransmitters are broken down by the MAOA enzyme. Since this enzyme is encoded by a gene located on the X-chromosome, of which women have two copies and men only one, males have a greater probability of having a deficiency of the enzyme.

Another interesting aspect of the MAOA research is that a link between violence and mutations eliminating the MAOA gene proved inconclusive across the entire population. (4) The reason these results are inconclusive for the population as a whole relates back to the nature-versus-nurture debate: the majority of the population has not experienced abusive situations. After narrowing the search criteria, the researchers did eventually find links between MAOA activity and aggression. Such results further the notion that genetic predispositions are not expressed without a behavioral initiator.

The most cohesive link was found between MAOA enzyme activity and adolescent conduct disorder in 'maltreated' males. (4) The conclusions drawn from these studies show that although there are instances of the MAOA enzyme being completely deficient, such cases are rare. There is, however, a large portion of the population with low MAOA enzyme activity. (4) When neurotransmitters are released in such individuals, from fear or similar triggers, they linger in the synaptic cleft and promote more aggressive behavior. In previously abused children, this activity bolsters violent behavior by disrupting serotonin activity. (4)

The other enzyme that has been equally promoted as a cause of violence is TPH, which limits the rate of synthesis of the neurotransmitter serotonin. (5) TPH is the only catalyst in the reaction producing serotonin and can therefore limit its production. (1) Many studies have shown altered serotonergic activity in males with suicidal and aggression issues. (6) Any deficiency in the amount of TPH produced creates a dearth of serotonin in areas of the brain that use it to inhibit impulsive behavior. Many published experiments suggest that, to better understand the prevalence of cases of TPH deficiency, one must look at the genetic basis of the enzyme's production.

One polymorphism of the TPH gene is known as A218C. One study of TPH showed that people with a single-nucleotide substitution in the TPH gene, the A779C variant, had more issues with aggression. (5) The presence of A779C is what leads to a deficiency in the amount of TPH present in the brain. (1) The lack of TPH will consequently cause a lower than normal level of serotonin production, and the low serotonin level will lead to difficulties in inhibiting impulsive behaviors.

As with the MAOA enzyme, a lack of the TPH enzyme is not found in the majority of the population. When examining the various scientific studies, one cannot help but conclude that genetics is not the sole factor in violent behavior. The scarcity of cases of violent behavior tied to deficient enzymes shows that not all violence can be accounted for through genetics. This is not to say that there is no genetic basis for behavior, but one can safely maintain that outside influence plays the larger role in a person's behavior.

References

1) Dysfunction in the neural circuitry of emotion regulation - a possible prelude to violence

2) Definition of Norepinephrine

3) Dopamine definition

4) Role of Genotype in the Cycle of Violence in Maltreated Children. Science Magazine, August 2, 2002. Vol. 297

5) TPH synthesis

6) Biology of Violence presentation

7) Serotonin description


Shifting Realities through Vipassana Meditation
Name: Hannah Mes
Date: 2004-04-13 05:06:49
Link to this Comment: 9321


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Shifting Realities through Vipassana Meditation

It has been suggested in class that a disconnect of information exists between the "I-function", an individual's experience of being, and the "unconscious", discussed as the behaviors controlled by the central nervous system of which one is unaware. I found this concept intriguing, as my own experiences with Vipassana, a Buddhist meditation technique, allowed me to make the gap between my conscious and unconscious less sharp. This is an attempt at comparing and contrasting the relationship between these two realms, drawing upon both my own experiences and those of others, such as Pilou Thirakoul and Dr. James Austin, who have already explored this field.

My experience with Vipassana meditation began with a 10-day course that required several serious commitments from all potential students. All meditators accept five noble precepts for the duration of the course, which include abstention from killing, stealing, lying, sexual misconduct, and the use of intoxicants. In order to maintain an environment conducive to intense meditation, all students take a vow of "Noble Silence", abstaining from any type of verbal or gestural communication. All precepts are taken in an effort to preserve a sense of shila, or morality, creating a state of mental purity that aids meditation.

The course began with a 3-day instruction in annapanna, which focuses the mind on becoming increasingly aware of the flow of natural respiration. One observes the subtleties of unadulterated respiration, and the mind becomes calmer and sharper, ready to enter the field of panna, or wisdom. Vipassana, literally translated as "seeing clearly", allows one to change the habit patterns of the mind at the deepest level. (1) Through a process of self-observation and sustained equanimity, an individual is able to change the general flow of sensory information: inputs that were previously only recognized by the unconscious can now be processed through the I-function. I realized that consciousness is a subjective state that exists at different levels of awareness for every individual. There are sensations constantly arising and passing away that are too subtle for our I-function to perceive without intentional observation. This only becomes clear when one observes the chaotic habit patterns within one's own mind.

Vipassana teaches that there is a strong connection between mind and body, and that by focusing on bodily sensations one can understand the concept of constant change or annicha at an experiential level. One observes a variety of sensations on the body while remaining equanimous and detached, observing the sensation without any feelings of craving or aversion. By maintaining the balance of my mind, my old habit patterns of "blind" reaction grew weaker and weaker. I realized that my concepts of pain and pleasure were states that, with practice, I could observe as an outsider. My self-awareness had reached new heights and I felt a deep connection to the ways in which my body responded to sensory inputs.

A similar thought pattern regarding the principles of "mindfulness" is reflected in Pilou Thirakoul's essay titled "Buddhist Meditation and Personal Construct Theory". (2) On my last retreat I delved deeper into the practice of Vipassana. I no longer felt the need to change my posture during meditation periods. Sensations existed everywhere in a constant state of flux and flow. Although I could identify sensations as uncomfortable or pleasant, my mind focused less on a physical reaction. Neurologist and Zen meditator James Austin describes a similar experience of detachment from sensation during a meditation session. He states, "Awareness was steering itself toward a vague layer beyond thought. Here, pain alone could be turned off, pain in and of itself." (4) He concludes with the idea that there exist both opioid and non-opioid mechanisms for changing the way one interprets pain.

I began to observe my body as an objective outsider, examining each individual part of my body. Eventually I experienced a complete dissolution of mind and matter with the experiential realization that all my sensations were just an amalgamation of impermanent vibrations. As Thirakoul explains, "Indeed, an understanding of identity as essentially a flow of psychic processes avoids any notion of a discrete, absolute, metaphysical self. This Buddhist doctrine of the non-existence of the self, or annata, is important to understand; for the self, or rather the illusion of self, is the primary factor which keeps individuals in the cycle of suffering." (2) Thirakoul touches upon the concepts of self-dissolution at a physical level. At this stage, information enters the central nervous system and the conscious mind simultaneously, resulting in a deepened awareness that is partnered with a mental equanimity.

Although the perspectives of other meditators such as Austin and Thirakoul prove helpful in drawing parallels between my own experiences and those of others, the specific mechanisms for achieving increased awareness or a higher level of consciousness still remain unclear. Our neurobiology and behavior class has attempted to explain the connection (and at times disconnection) between our mind and body, our consciousness and unconsciousness, our I-function and our central nervous system. My own perspective on meditation and understanding of my own consciousness have shifted as a result of our class discussions. At an experiential level I felt this shift from a pervasive "unconsciousness" to an awareness that generalizes to many aspects of my life. This happened without any understanding of where information was being processed or through what specific methods new information became available to me.

In future discussions of consciousness and the I-function I would encourage an even more detailed description of how this information is transformed at a chemical level within the brain. When information passes from the unconscious to the I-function, what changes can we observe at a gross, physical level and at a more subtle chemical level? These are questions that would allow for a more dynamic discussion of this exciting topic.

References


1. http://www.spiritual-learning.com/meditate-mind.html

2. http://serendipstudio.org/bb/Pilou.html

3. http://serendipstudio.org/sci_cult/bridges/matspirit.html

4. Austin, James. Zen and the Brain. New York: Yale University Press, 1999.


Parasomnias & the I-function
Name: Jennifer
Date: 2004-04-13 09:23:21
Link to this Comment: 9328


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Dreaming permits each and every one of us to be quietly and safely insane every night of our lives."
William Dement, MD

For centuries people have been fascinated with altered states of consciousness. Through sleep, illness, or chemicals, people are awed by the actions of a person who appears to be not himself. Most sleep occurs separate from the waking world, safely and quietly in one's bed, with the I-function appearing to be turned off. But some sleepers are able to perform complex activities while in the non-rapid eye movement (NREM) stages of sleep and have no recollection of this activity the following day. It appears that the body moves without the I-function, and the I-function therefore has no recall of the previous night's events. (1) This phenomenon is called somnambulism or, more commonly, sleepwalking. Another parasomnia that occurs less frequently is REM behavioral disorder (RBD). During the REM sleep cycle, a person has vivid, lifelike dreams. In people with RBD, the body is not paralyzed during the dream, and they act out their dreams, which are often violent. Unlike true sleepwalkers, those with RBD will remember their dreams clearly the next day, and think they were doing some logical task in their dream while they were actually doing something quite different. For example, one man ran head on into his dresser while dreaming he was tackling an opponent in a football game. (2)

To understand the anomalies in sleep among those who sleepwalk or have REM behavioral disorder, it is useful to examine the five stages of sleep. The first four stages constitute non-REM sleep, which is markedly separate in terms of the level of consciousness from the fifth stage, REM sleep. During the first stage of sleep, brain scans have shown rapid small brain waves, and people have reported fragmented visual images often mixed with visual and auditory input from their surroundings. Between the first and second stages of sleep, people may experience hypnic myoclonia, the rapid contraction of muscles often preceded by the feeling of falling. Sleep paralysis also occurs in the early stages of sleep. (3) This occurs when the I-function appears to wake up, but the body hasn't released the chemicals to counteract the paralysis that is normal while asleep. In stage two, eye movement stops and brain waves become sporadic. Stages three and four are considered deep sleep, and people are often hard to wake while in these stages. During stage three, longer delta waves begin to predominate over the shorter, sporadic waves. Stage four is characterized by the presence of only delta waves and no eye movement. Somnambulism and night terrors occur during the third and fourth stages of sleep; the person will have no memory of events during this time period. About 75% of the night is spent in NREM sleep. The brain wave patterns of REM and NREM sleep display such a dramatic difference that they are thought to be entirely different levels of consciousness, as different from each other as they are from the fully awake conscious state. (4)

In NREM sleep, the I-function appears to be turned off. People have no memories or explanations for events that occur during these sleep stages. The night terrors and somnambulism that occur during the NREM stages are not recalled by the patient. The only way to know that these events are occurring is through the observation of family members or injuries that occur while sleep walking (5).

During REM sleep, the I-function seems to be at a different level of consciousness, but not entirely absent. While the person in the REM stage of sleep is normally paralyzed and appears to be lying silently, their mind is quite active. This is evident from brain scans, and also from the patients' subjective experiences. It appears to be an alternate world for the I-function; a world that is not affected by external stimuli. Yet the I-function is alert, as evidenced by a person's recollection of "events" that seem to be occurring to them in their dreams. It's this sense of consciousness that allows a person in the REM stage of sleep to make more concerted movements. Frequently, the activity of those who experience RBD is much more violent and directed than the behavior of those with somnambulism.

During REM sleep, which is thought to be the most restorative stage of sleep, there are several key physiological changes. (6) The eyes move rapidly, the heart rate, breathing rate, and blood pressure become elevated, and breathing becomes shallow. The body is also unable to adequately regulate temperature while in the REM stage of sleep. REM sleep allows the I-function to temporarily exist in a world without corollary discharge, and in most individuals, without motor pattern generation. It is not well understood why the body seems to let its homeostatic settings shift during REM sleep, nor is it clear why this change appears to be restorative.

When people acquire a sleep debt, they do not cycle through the five stages of sleep normally. In severe sleep debt, they will advance directly from fully awake to REM sleep. This causes several problems. The sleep stages allow a person to transition from awake and conscious to dreaming. Without the gradual change, a person may experience dreams that appear as hallucinatory images while still partially awake. Or the person may not be fully paralyzed before entering REM sleep, which can result in REM behavior disorder. (4)

The sleep disorders mentioned have been extremely useful in understanding the workings of the brain during sleep. By noting the difference between true sleepwalking and REM behavioral disorder, it can be inferred that a person is aware of his brain activity during REM sleep, but not during NREM sleep.

The psychological explanations for sleepwalking and RBD vary. RBD patients almost universally have mild mannered, amiable personalities during their waking hours. These patients report vivid, violent dreams of being chased or attacked, and often injure themselves or their bed partner while acting out such dreams. Previously, psychologists and physicians had suggested that repressed anger caused these nighttime outbursts, but as more has been discovered about the neurochemistry, this idea has faded. Patients who exhibit classic somnambulism frequently lead stressful lives. Depression and anxiety both disrupt a person's natural sleep cycle. Stress management, cognitive behavioral therapy, and other psychotherapeutic treatments for these underlying disorders have proven moderately effective in eliminating somnambulism. (6)

New research has identified a gene that may be partially responsible for somnambulism. Some neurochemicals, such as dopamine and acetylcholine, are present in lower amounts in individuals exhibiting ambulatory parasomnias, but not enough data is present to show a causal relationship. (7) Researchers have postulated that those who are ambulatory during REM or NREM sleep lack a certain neurochemical necessary for inducing paralysis during sleep. This chemical imbalance has not been pinpointed, and it seems unlikely that there is one direct cause of sleepwalking.

Much of the literature attempts to draw a distinction between sleep disorders caused by problems of the brain and those caused by behavioral problems. This distinction does not seem helpful in understanding the nature of the disease, as the separation of brain and behavior is really only indicative of our current perception and knowledge of the human nervous system. What is classified today as a biological disorder is classified as such because we can demonstrate clear biological causes for it. Until the brain is fully understood, the distinctions made to organize it will remain biased toward our current perception and knowledge. The biologically based problems seem to be more socially acceptable. For example, parents are told not to worry about their sleepwalking children, as this is a normal biological process. (8) However, only 15% of children exhibit any sleepwalking, so it is not by simple majority that a behavior is perceived as normal; rather, a biologically based reason seems to justify the sleepwalking. The 6% of adults who sleepwalk are often advised to seek professional help. Adults with RBD or sleepwalking have benefited from cognitive behavioral therapy and the use of medication. (3) Clearly, what we think of as brain and behavior are not separate, but intertwined in a complex relationship. Both the neurochemical approach and the behavioral approach result in a change in behavior. Administering small doses of tranquilizers, such as clonazepam, frequently relieves all RBD symptoms. (9) Learning stress management techniques and other psychodynamic therapies also affects the frequency and severity of the parasomnias. (4) Studies demonstrating the effects of psychological approaches to parasomnias on the neurochemistry could help explain the relationship between brain and behavior in this case.

Sleepwalking, though fascinating, is a benign problem in most of the 15% of children it affects. RBD is more serious, though less common, because of the violent outbursts often seen in these patients. Treatment of RBD with medication has been effective, which has given patients hope and led more patients to seek treatment. It is also worth noting that many patients with RBD will later develop Parkinson's disease; although this relationship is not well understood, it is being studied in depth. (9)

Research considering the chemical changes during puberty in adolescents who stop sleepwalking might help explain the chemical differences responsible for creating ambulation during sleep. Studies of brain activity during REM sleep in those with RBD could compare data from nights with ambulation and nights without, to observe the differences in brain activity between still nights and active nights.

The sleeping and waking mind continue to raise interesting questions about our perceptions of life, reality, and free will. The law has wavered in its consideration of the free will of a sleeping person, sometimes acquitting those who commit crimes while asleep. (10) Science has yet to locate that point of the brain, if such a place exists, where what we think of as free will, or the I-function, is physically housed, but the sleep disorders have demonstrated that consciousness is more variable than it was once believed to be. A vast continuum exists, encompassing the fully awake brain, the deeply sleeping and apparently unaware brain, and many unknown levels in between.

References

1) Sleepwalking Disorder Article

2) Sleep Disorders May be Linked to Faulty Brain Chemistry

3) Sleep Paralysis and Associated Hypnopompic Experiences Article

4) Yahoo Stress Health Center

5) Parasomnias: Sleepwalking, Night Terrors, and Sleep Related Eating Article

6) REM Behavioral Disorder Website

7) ABC Science Website, Article on the genetics of sleepwalking.

8) A to Z Answers for Parents, article on sleepwalking in children.

9) New York Times Article, informative website with a reprinted New York Times article on RBD.

10) Sleepwalking - Insanity or Automatism, an interesting compilation of legal cases involving sleepwalking and RBD.


What is the Function of Dreaming?
Name: Ghazal Zek
Date: 2004-04-13 09:52:34
Link to this Comment: 9329


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Plutarch, a Greek biographer and author (circa 46-125 AD) (1), is credited with having said, "all men whilst they are awake are in one common world: but each of them, when he is asleep, is in a world of his own." (2) Plutarch is essentially speaking of the phenomenon of dreaming. The idea that the mind creates its own world while asleep is quite thought-provoking. What is it about sleep that takes us to another world? Where does this other world come from? What purpose, if any, does dreaming serve? One school of thought suggests that dreaming is a product of random electrical activity that the cortex tries to interpret (3) and that really serves no purpose (4), while another insists that dreaming serves a purpose shaped by evolution (4). Which story is right? Or rather, less wrong?

It will first prove helpful to understand the process of sleep. Sleep is a dynamic activity controlled by neurotransmitters acting on different neurons in the brain. We sleep in cycles of five stages: 1, 2, 3, 4 and Rapid Eye Movement (REM). Light sleep occurs during stage 1, where a person can easily drift in and out of sleep. People waking up from stage 1 sleep often experience flashbacks of fragmented images and/or sudden muscle contractions called "hypnic myoclonia", often preceded by the sensation of just starting to fall. In stage 2 sleep, brain waves slow down and eye movement stops. Stages 3 and 4 are collectively called "deep sleep", as it is usually very difficult to wake someone in either stage. During stage 3, delta waves (very slow brain waves) appear, interspersed with smaller, faster waves, which disappear altogether during stage 4. During the REM stage, we experience shallow, irregular, and more rapid breathing; our eyes move rapidly in various directions; our limb muscles become temporarily paralyzed; our heart rate and blood pressure increase; and males develop penile erections. When someone wakes up during the REM stage, they often describe outlandish, unfounded tales – those which we call dreams. (5)

REM sleep begins with signals sent from the pons to the thalamus, which then relays them to the cerebral cortex. The cerebral cortex is the part of the brain used for learning, thinking, and organizing information, so this is an important point. Infants tend to spend much more time in the REM stage than adults, possibly for this very reason: the REM stage stimulates the brain regions used in learning. (5)

Many scientists believe that the random electrical activity is just that – random. They then assert that the cortex creates stories in order to make sense of the signals being generated. (6) In late 2000, Antti Revonsuo published a paper in "Behavioral and Brain Sciences" asserting that the content of our dreams is not as disorganized as the aforementioned theory claims and that there is an evolutionary explanation for dream content. In essence, Revonsuo is suggesting that dreaming was selected for during our evolution (7), but why would this happen? Noting that waking experiences have a consistent and profound effect on dream content, Revonsuo hypothesizes that there is a biological function to dreaming: to simulate threatening events and rehearse the perception and avoidance of threats. Revonsuo argues that the ancestral human lifespan was short and full of threatening situations; therefore, any mechanism that would simulate these situations and play them over and over in different combinations would be advantageous for improving threat-avoidance skills. Finally, Revonsuo asserts that this ancestral mechanism has left traces in the dream content of the present human population.

Since one cannot be certain of the validity of a hypothesis, it will prove helpful to discern which hypothesis seems "less wrong." Revonsuo's idea about the original purpose of dreams simply provides us with a more complete look at the story behind dreaming. That is to say, it is by no means a complete idea on its own. While it is interesting to think that some of the content of our dreams may have had an evolutionary function, it should be noted that dreams are not predictable. (8) Each person experiences life differently, and through dreaming, can create experiences that will be unique to them, therefore entering a "world of his own" as Plutarch suggested.

As modern-day humans, we are not faced with the same limitations as our ancestors; our survival and chances of reproduction have little to do with our threat-avoidance capabilities. So, if we assume that dreams initially served an evolutionary function, what function, if any, does dreaming serve in humans presently? On the one hand, we could revert to the original theory, with a twist, and suggest that dreaming serves no real function at present. For example, people who have suffered through traumatic ordeals often complain of nightmares. Dreamless nights would in fact be helpful in these situations, as far as mental health is concerned. So while dreams are sometimes a welcome escape from reality, other times reality is a welcome escape from our dreams. On the other hand, dreams perhaps serve a more fundamental purpose nowadays: in recalling our dreams, we are able to learn about ourselves using a broader spectrum of information. Above all, it is important to keep in mind that we are all different. We therefore experience the world differently, react differently, and dream differently.


References

1)E-classics.com background on Plutarch

2)A website containing famous quotes about sleep

3)HowStuffWorks.com : Sleep, A simple explanation of the process of sleep.

4) The reinterpretation of dreams: An evolutionary hypothesis of the function of dreaming., Abstract from Behavioral and Brain Sciences, Dec 2000 v23 i6 p877.

5)Brain Basics: Understanding Sleep, A detailed explanation of sleep and dreaming from the National Institute of Neurological Disorders and Stroke.

6)HowStuffWorks.com: Dreams, A simple explanation of the process of dreaming.

7)Dreaming and Consciousness: Testing the Threat Simulation Theory of the Function of Dreaming, More on the Evolutionary basis of dreaming from Revonsuo, et al. PSYCHE, 6(8), October 2000.

8)From Genomes to Dreams, an essay by Paul Grobstein, Winter 1991, from the Serendip website of Bryn Mawr College.


Not Just the Baby Blues: The Tragedy of Andrea Yates
Name: Elissa Set
Date: 2004-04-13 11:28:39
Link to this Comment: 9331


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Many of us envision motherhood as a joyous time in a woman's life. Holding one's own newborn baby must bring great happiness. However, what happens when those feelings of happiness begin to subside and are replaced with anger, hate, guilt, and loneliness? On June 20, 2001, Andrea Yates was overcome by such feelings and killed all five of her children by drowning them in the bathtub (8). Yet, as disturbing and shocking as the event was, what surprised many people is that there were many mothers who understood or sympathized with Yates.

"As I was changing my son on his changing table, an intrusive thought started running through my head, 'What if I push him off the table?'" (3)

"I would look at the baby and just say, oh, how vulnerable it is. I could put a pillow on top of it. Its neck was so tiny, it could break so easily." (3)

Up to eighty percent of women suffer from the baby blues after they have children (6). Ten percent suffer from postpartum depression (6), and about one in 500 have the most serious condition, postpartum psychosis (6). Andrea Yates suffered from postpartum psychosis, and this led her to kill all of her children. Her illness began after she had her fourth child and tried to commit suicide. After her fifth child, she attempted suicide again and was hospitalized twice (9). However, both times she was released from the hospital while she was still ill. Finally, the postpartum psychosis overtook Yates when she drowned all of her children. While Yates was afterward able to recognize that killing her children was a horrendous thing to do, at the time she was not in a stable state of mind. This event exemplifies how serious postpartum psychosis is. Though it is rare, the baby blues can escalate into postpartum depression, which can in turn become psychosis if left untreated. Psychosis is not a mental illness that can be cured with a few visits to a therapist or a prescription for an antidepressant. More research must be conducted in order to understand the nature of the disease, and how to help the women and families who suffer from it.

The least detrimental of the three illnesses is postpartum blues, also known as the baby blues. The baby blues usually occur in the first few weeks after childbirth and can include mood swings between happiness and sadness. New mothers can feel irritable, stressed, and lonely. These feelings may last only a few hours or persist for multiple weeks (6). It has been shown in many cases that women can overcome the baby blues without professional counseling or medication (5).

Postpartum depression is more serious than the baby blues. The feelings of sadness, anxiety, irritability, and stress are also present, yet far more acute than in the baby blues (5). The woman's ability to function every day is affected, and she may neglect the care of the baby (5). Other symptoms include fatigue, exhaustion, confusion, and changes in appetite (3).

The gravest case of postpartum illness is postpartum psychosis. Though extremely rare, it is the most dangerous, and requires medical attention for recovery (5). In addition to the symptoms of postpartum depression, postpartum psychosis also includes visual and auditory hallucinations (5). Frequent thoughts of hurting the baby may enter the mother's mind, and she may actually carry out those thoughts (3).

The exact cause of depression is still not known, because it may vary with each individual. The term "depression" can be used to describe a variety of moods, from mild feelings of sadness to deep, severe melancholia (4). There are theories attributing it to biological, genetic, and environmental factors. The biological factors are related to hormones such as cortisol. Cortisol is a hormone that controls the body's response to stress, anger, and fear. When people are depressed, cortisol peaks in the morning and does not decrease later in the day, as it does in people who are not depressed (1).

A possible neurobiological factor is that there may be an imbalance of neurotransmitters in the brain (1). Neurotransmitters are chemicals that help the brain cells communicate with each other. Two neurotransmitters linked to depression are serotonin and norepinephrine. When there are deficiencies in neurotransmitters, impulses sent between nerves are decreased (4). Deficiencies in those neurotransmitters cause changes in sleep habits, increase irritability and anxiety, and may make individuals feel sadder and fatigued (1).

Postpartum depression may also have causes beyond those of regular depression. When a woman is pregnant, her hormone levels change dramatically. Estrogen and progesterone increase during the pregnancy, and after childbirth the levels decrease rapidly back down to pre-pregnancy levels (5). These fluctuations are similar to those a woman experiences before menstruation, when she may be more irritable and depressed. With postpartum depression, the levels of estrogen and progesterone may not decrease at a normal rate, causing an imbalance in the system. This may lead to symptoms of the various forms of postpartum illness.

While forms of postpartum depression were recognized in Yates, she never completed any treatment for her depression or psychosis, due to insurance limitations (3). Moreover, her husband and her doctor did not recognize the seriousness of the situation. Her husband, Russell, reportedly said to a friend, "I'm not going to coddle her, I'm not going to hold her hand. She needs to be strong, she needs to help herself." (2). However, when depression is as deep as Andrea Yates' psychosis, the ability to help oneself is drastically diminished. Evidence that Andrea was suffering from postpartum psychosis is that she would hear voices in her head telling her to hurt other people, including the children (3). Still, Russell did not see Andrea as a threat to their children, despite two suicide attempts, including one after the birth of her fifth child (9). Frighteningly, neither did Andrea's doctor, who concluded just two days before she killed the children that Andrea did not need to be hospitalized (3).

Unfortunately, it has taken the deaths of all of one family's children to shed light on the gravity of postpartum depression. Postpartum illnesses can affect any mother, whether she has had one baby or four, and they can recur, as can be seen in the case of Andrea Yates. Since neither her husband nor her physician was able to recognize that Andrea was suffering from a serious illness, more research must be done in order to understand the disease and how to recognize it. These events, though rare, can be prevented. Psychosis is not something that people can simply snap out of; it must be treated with great care, as it is a disease with obviously severe consequences. Although the jury in Andrea Yates' trial did not believe that she was insane at the time of the killings, it is clear that she had suffered from postpartum depression and psychosis. Her illnesses do not excuse the atrocities she committed, but learning more about these illnesses will help people understand why she did it, and how to prevent other situations like this.

References

1) Causes of Depression

2) "I Could Just Kick Him"

3) More Than the Baby Blues

4) The Neurobiology of Depression

5) The Postpartum Depression

6) The Postpartum Depression

7) Postpartum psychosis: a difficult defense

8) Postpartum Psychosis to blame for murdered Houston Children?

9) Russell Yates describes wife as a victim


Music, Emotion and the Brain
Name: Geetanjali
Date: 2004-04-13 12:18:05
Link to this Comment: 9335


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

There is a beautiful passage in a book called "Home of the Gentry", by Ivan Turgenev, where the protagonist of the novel listens to a piece of music being played on the piano that touches him to the very depths of his soul. I will quote part of this passage, since it describes very eloquently the almost mystical power that music wields over the human mind, a power which I find fascinating.

"The sweet, passionate melody captivated his heart from the first note; it was full of radiance, full of the tender throbbing of inspiration and happiness and beauty, continually growing and melting away; it rumoured of everything on earth that is dear and secret and sacred to mankind; it breathed of immortal sadness and it departed from the earth to die in the heavens." (10)

The tremendous ability that music has to affect and manipulate emotions and the brain is undeniable, and yet largely inexplicable. Very little serious research had gone into the mechanism behind music's ability to physically influence the brain until relatively recently, and even now very little is known about the neurological effects of music. The fields of music and biology are generally seen as mutually exclusive, and neurobiologists who are also proficient in music are uncommon. However, some do exist, and partly as a result of their research some questions about the biology of music have been answered. I will attempt to summarize some of the research that has been done on music and the brain in recent years, focusing in particular on music's ability to produce emotional responses in the brain.

One great problem that arises in trying to study music's emotional power is that the emotional content of music is very subjective. A piece of music may be undeniably emotionally powerful, and at the same time be experienced in very different ways by each person who hears it. The emotion created by a piece of music may be affected by memories associated with the piece, by the environment it is being played in, by the mood and personality of the listener, by the culture they were brought up in: by any number of factors both impossible to control and impossible to quantify. Under such circumstances, it is extremely difficult to deduce what intrinsic quality of the music, if any, created a specific emotional response in the listener. Even when such seemingly intrinsic qualities are found, they often turn out to be at least partially culturally dependent.

Several characteristics have been suggested that might influence the emotion of music. For example, according to one study (11)(12), major keys and rapid tempos cause happiness, whereas minor keys and slow tempos cause sadness, and rapid tempos together with dissonance cause fear. There is also a theory that dissonance sounds unpleasant to listeners across all cultures. Dissonance is to a certain degree culture-dependent, but also appears to be partly intrinsic to the music. Studies have shown that infants as young as 4 months old show negative reactions to dissonance. (3)(6)(9)

It is possible to both see and measure the emotional responses created by music in the brain by using imagery techniques such as PET scans. However, as these emotional responses would generally be caused by factors out of the experimenter's control, the data collected would be very difficult to interpret.

A recent experiment dealt with this problem by attempting to minimize subjectivity, by measuring responses to dissonance. (1) Dissonance can consistently create feelings of unpleasantness in a subject, even if the subject has never heard the music before. Music of varying dissonance was played for the subjects, while their cerebral blood flow was measured. Increased blood flow in a specific area of the brain corresponded with increased activity. It was found that the varying degrees of dissonance caused increased activity in the paralimbic regions of the brain, which are associated with emotional processes.

Another recent experiment measured the activity in the brain while subjects were played previously-chosen musical pieces which created feelings of intense pleasure for them. (2) The musical pieces had an intrinsic emotional value for the subjects, and no memories or other associations attached to them. Activity was seen in the reward/motivation, emotion, and arousal areas of the brain. This result was interesting partly because these areas are associated with the pleasure induced by food, sex, and drugs of abuse, which would imply a connection between such pleasure and the pleasure induced by music.

Experiments such as these are not able to answer such questions as how or why the emotional responses were created in the first place. However, their results can still be informative. These two experiments both show that music has the power to produce significant emotional responses, and they localize and quantify these responses within the brain.

Another quantifiable aspect of emotional responses to music is its effect on hormone levels in the body. (5)(7) There is evidence that music can lower levels of cortisol in the body (associated with arousal and stress) and raise levels of melatonin (which can induce sleep). (5) This is outwardly visible in music's ability to relax, to calm, and to give peace. Music is often played in the background in hospitals to relax patients, or in mental hospitals to calm potentially belligerent patients. It can also cause the release of endorphins (7) and can therefore help relieve pain.

Love for and appreciation of music is a universal feature of human culture. It has been theorized that music even predates language.(8) There is no question that music has grown to be an important part of human life, but we can only guess why. It has been theorized that music is important evolutionarily, (8) but all such theories are at this point conjecture. No concrete evidence has been found that music is evolutionarily beneficial. There are many questions one could ask about the powerful link between music and the brain, but very few answers exist. How does music succeed in prompting emotions within us? And why are these emotions often so powerful? The simple answer is that no one knows. We are able to quantify the emotional responses caused by music, but we cannot explain them.


References

1) Blood, A.J., Zatorre, R.J., Bermudez, P., and Evans, A.C. (1999) "Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions." Nature Neuroscience, 2, 382-387.

2) Blood, A.J. & Zatorre, R.J. (2001) "Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion."Proceedings of the National Academy of Sciences, 98, 11818-11823

3) Harvard Gazette Archives. Cromie, William J. (2001) "Music on the brain: Researchers explore biology of music."

4) Harvard Gazette Archives. Cromie, William J. (1997) "How Your Brain Listens to Music."

5) Musica Humana. Heslet, Prof. Dr. Lars. "Our Musical Brain"

6) transcript of episode of Closer to Truth. "What Makes Music So Significant?" Interview with Jeanne Bamberger, Robert Freeman, and Mark Tramo, conducted by Robert Kuhn.

7) Time Reports. Lemonick, Michael. (2000) "Music on the Brain: Biologists and psychologists join forces to investigate how and why humans appreciate music."

8) Levitin, Daniel J. "In Search of the Musical Mind", (2000) Cerebrum, Vol 2, No 4

9) Tramo, Mark Jude. "Biology and music: Enhanced: Music of the Hemispheres." (2001) Science, Vol 291, Issue 5501, 54-56

10) Turgenev, Ivan. Home of the Gentry.

11) "The Biology of Music.", (2000) The Economist

12) "Exploring the Musical Brain", (2001) Scientific American

13) The Power of Music


Cocaine Addiction
Name: Shirley Ra
Date: 2004-04-13 12:54:53
Link to this Comment: 9337


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Addiction to cocaine is among the most severe health problems facing the United States. For example, in 1997 there were approximately 1.5 million Americans twelve years and older who were chronic cocaine users (1). The question, then, is this: given how damaging cocaine is to a user's health and to society as a whole, why do people addicted to cocaine have great difficulty quitting? One possible answer is biological; namely, that cocaine alters the normal state of the brain, making it difficult to quit. Two properties that make cocaine one of the most addictive popularly used drugs are that it is reinforcing when administered acutely and that it produces obsessive use when administered chronically (2). However, arguments can be made that addicts' perpetual abuse of the drug is, at least in part, a result of social factors. In other words, it is not only cocaine's biological effects on the brain that make it difficult for addicts to give up the drug. If it were better understood and more widely accepted that perpetual cocaine abuse results from both biological and social factors, we would be able to better help cocaine addicts quit using the drug.

In order to combat cocaine addiction we must first understand what addiction is. The World Health Organization defines addiction as a "behavioral pattern of compulsive drug use characterized by overwhelming involvement with the use of the drug, the securing of its supply, and a high tendency to relapse after withdrawal" (3). There is an established sequence of events that defines addiction. First, there are the euphoric effects that the drug of abuse produces. Second, tolerance develops, meaning that the addict needs more and more of the drug to produce an effect. Finally, there is physical dependence, in which addicts feel they need the drug to survive; they are addicted. Under this definition a person can theoretically be addicted to almost any substance, for example chocolate. However, while it may be difficult for a person to refrain from eating chocolate, all would concede that it is much more difficult to quit cocaine once addicted. The question is why this is the case.

Like chocolate, cocaine is associated with the nucleus accumbens. The nucleus accumbens is known as the brain's pleasure center, since several studies demonstrate that pleasurable stimuli, such as sex, food, and other drugs of abuse, cause an increase in activity in this area of the brain (4). The mesoaccumbens dopamine (DA) pathway, which extends from the ventral tegmental area (VTA) of the midbrain to the nucleus accumbens (NAc), has been linked to the reinforcing effects of cocaine. This was found through intracranial self-stimulation, a process in which electrodes were implanted into different regions of an animal's brain; it demonstrated that when dopamine is involved, reinforcement of the behavior increases (2). In essence, this shows that pleasurable events, such as sex, chocolate consumption, and cocaine abuse, are accompanied by a large increase in the amount of dopamine released in the nucleus accumbens.

Given the similarities in which pathways are activated, why is it that cocaine is more difficult to quit than the other aforementioned pleasurable events? A person addicted to chocolate and a person addicted to cocaine will both have excess dopamine released in the nucleus accumbens, but each will react to that biological circumstance differently. The initial effect of both the chocolate and the cocaine will be euphoria, but after these pleasurable stimuli are removed, the individual addicted to cocaine will experience very severe physical withdrawal effects, whereas the individual addicted to chocolate will be able to cope with its loss. Is that because the excess dopamine is derived from different pathways? Is the initial euphoric effect stronger with cocaine? It is clear that at least part of the answer lies in the way that cocaine biologically affects the brain.

In the dopamine pathway of individuals not addicted to cocaine, dopamine is released by a transmitting neuron into the synapse, where it binds to receptors in the postsynaptic neuron, propagating a signal. After the binding has occurred, the dopamine reuptake transporters (DAT) of the presynaptic cell reuptake the remaining unused dopamine back into the cell (5).

As mentioned earlier, cocaine's major effects are thought to be due to action on dopaminergic systems. In addicted individuals, cocaine binds to the dopamine reuptake transporters (DAT), blocking them from reuptaking dopamine and consequently causing an accumulation of dopamine in the synapse. This accumulation of dopamine causes continuous stimulation of the post-synaptic neuron, resulting in the euphoria commonly reported by cocaine abusers. Cocaine also affects serotonin and norepinephrine reuptake transporters, enhancing the levels of these neurotransmitters in the synapse (6). The latter is important since researchers speculate that more than one neurotransmitter is responsible for the pleasurable feeling cocaine provides. In addition, cocaine stimulates the "fight or flight" response by increasing activity of the sympathetic nervous system, due to its action on norepinephrine transport (7). Some of this increased activity is illustrated by constricted blood vessels, dilated pupils, and increased heart rate and blood pressure. In other words, cocaine has a great variety of biological effects on the brain, which lead to a very strong addiction.
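The reuptake-blockade mechanism described above can be illustrated with a toy kinetic sketch (my own illustrative model, not taken from any of the cited sources; all rate constants are made-up numbers): dopamine enters the synapse at a constant rate and is cleared by DAT-mediated reuptake, and blocking a fraction of the transporters raises the steady-state dopamine level.

```python
# Toy kinetic sketch (not a physiological model): synaptic dopamine
# concentration under a constant release rate, with clearance by DAT.
# d[DA]/dt = release - k * (1 - blockade) * [DA]
# All rate constants below are made-up illustrative values.

def simulate_dopamine(release_rate, reuptake_rate, dat_blockade,
                      steps=10000, dt=0.01):
    """Euler integration to (approximate) steady-state dopamine level."""
    da = 0.0
    for _ in range(steps):
        da += dt * (release_rate - reuptake_rate * (1.0 - dat_blockade) * da)
    return da

normal = simulate_dopamine(release_rate=1.0, reuptake_rate=0.5, dat_blockade=0.0)
blocked = simulate_dopamine(release_rate=1.0, reuptake_rate=0.5, dat_blockade=0.8)

print(f"steady-state [DA], normal reuptake:  {normal:.2f}")   # ~2.0
print(f"steady-state [DA], 80% DAT blocked: {blocked:.2f}")   # ~10.0
```

The qualitative point is simply that when clearance is slowed while release continues, the transmitter accumulates, which is the sketch-level version of the continuous post-synaptic stimulation described above.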

Cocaine's biological effects on the brain also make it very difficult for an addict to quit abusing it. When an individual becomes addicted to cocaine, the repeated euphoric responses to the drug alter the brain, creating a dependency. The individual will therefore continue to take cocaine to re-experience its extreme euphoric effects. Addicts also continue to take cocaine because, after cocaine administration, dopamine levels decrease significantly compared to normal pre-consumption levels. The addict therefore feels a "low," and the immediate response to ease this low is to administer more cocaine to raise the dopamine levels. It is clear, then, that a significant reason addicts find it difficult to quit cocaine is that their brains are biologically altered. In a sense, it could be said that the brain is no longer biologically whole, in that it no longer regulates dopamine levels the way it once did.

The fact that addicts develop tolerance or sensitization to cocaine also makes it difficult to quit abusing the drug. After chronic administration of cocaine, the brain reduces the number of dopamine receptors on the dendrites of neurons. As a result, there is less stimulation of the nerves in the dopamine pathway. This physical change in the brain alters the way it responds to different doses of cocaine. This is where tolerance develops in many addicts, wherein a larger dose is needed to attain the same euphoric effects initially experienced. Other addicts experience sensitization, in which the user becomes more responsive to cocaine without increasing the dose. Recent research has investigated why some addicts experience sensitization and others tolerance. Is it due to differences in brain make-up, or to the manner in which the drug is administered? In either case, it is clear that both of these phenomena present yet another biological hurdle that a user must overcome when quitting cocaine.

However, the obstacles a cocaine abuser faces when trying to quit are not exclusively biological. The cocaine abuser also faces several psychological and social barriers on the path to becoming drug free. After constant administration of cocaine, the phenomenon known as place conditioning comes into play. The place conditioning theory suggests that the environment in which you administer cocaine becomes associated with the act of cocaine use (8). For example, if a drug addict purchases cocaine at a specific grocery shop and experiences the drug effect shortly thereafter, eventually the grocery shop becomes linked in the mind of the drug addict to the rewarding effects of cocaine (8). This has been demonstrated extensively in animal models, where rats return to the environment in which they administered cocaine. In humans, place conditioning might cause addicts to overdose. This is because addicts are accustomed to administering the drug in a particular environment and begin to associate the rewards of the cocaine with the environment itself. Therefore, in a different environment, not associated with the administration of cocaine, the same dose will produce a larger effect, because the familiar environmental cues are absent. Perhaps more significantly, this demonstrates that the environment can itself deepen addiction to cocaine, since environmental stimuli will constantly remind the user of the drug's pleasurable effects.
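The way a neutral environment gradually acquires the rewarding associations of the drug can be sketched with the Rescorla-Wagner model, a standard textbook model of classical conditioning (offered here only as an illustration, not as the mechanism any cited source proposes; the learning rate and reward value are made-up numbers): each pairing of the context with the drug reward moves the cue's associative strength toward the reward value in proportion to the prediction error.

```python
# Rescorla-Wagner sketch of place conditioning: a context cue repeatedly
# paired with a drug reward acquires associative strength V via the
# prediction-error rule V <- V + alpha * (reward - V).
# alpha and reward are illustrative, made-up values.

def condition(pairings, alpha=0.2, reward=1.0):
    """Return the cue's associative strength after each pairing."""
    v = 0.0
    history = []
    for _ in range(pairings):
        v += alpha * (reward - v)   # learn in proportion to surprise
        history.append(v)
    return history

strengths = condition(20)
print(f"after 1 pairing:   {strengths[0]:.2f}")   # 0.20
print(f"after 5 pairings:  {strengths[4]:.2f}")   # 0.67
print(f"after 20 pairings: {strengths[-1]:.2f}")  # 0.99
```

On this toy account, after enough pairings the context alone predicts the reward almost perfectly, which is the sketch-level version of why re-entering a drug-associated environment can trigger such strong craving and relapse.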

It follows, then, that the difficulty in quitting cocaine cannot be 100 percent biological. If addiction were only biological, then place conditioning would not be an issue. All too often the view that brain = behavior, meaning that the brain elicits behavior, is accepted as complete. However, in this instance an environmental stimulus has the power to elicit brain activity that ends in a craving for cocaine, demonstrating that biological and social/environmental factors are deeply intertwined and each play a role in rendering cocaine incredibly addictive.

Most relapses occur when an individual returns to the environment where he or she used to administer the drug. Exposure to such cues and stimuli reminds the addict of the feeling and taste of cocaine; therefore, the addict begins to crave cocaine. But how exactly do environmental stimuli trigger the drug craving? Recent research suggests that the extended amygdala may play a major role in "in context" craving. The extended amygdala is part of the limbic system, a region of the brain associated with memories and emotions. Researchers at the National Institute on Drug Abuse believe that it is in the extended amygdala that memories relating to drug administration are converted into craving for that specific drug (8). As mentioned earlier, memories give rise to craving when the environment where cocaine is abused becomes a conditioned stimulus. This is the reason so many people relapse and continue to be addicted.

Unfortunately, to date there is no single treatment that will eliminate addiction or all of the characteristics associated with it. Perhaps the search for a treatment for cocaine addiction has not been very successful because researchers do not fully account for both the biological and the social factors that make it extremely difficult to quit using this drug. It follows that a combination of medical treatment (to address the biological factors) and counseling (to address the social factors) would be most beneficial to addicts. Drugs are being developed that aim to block cocaine from binding to the dopamine transporters, allowing for the reuptake of dopamine, which may prove very effective at stabilizing the biological factors. However, such medication alone will not suffice. In terms of combating the social factors, the most effective counseling is "cocaine-specific skills training," which consists of identifying the environments and stimuli that trigger craving in order to control and avoid such stimuli (4).

The problem of cocaine addiction in America will not disappear overnight. However, a greater understanding of why cocaine addiction is so uniquely strong will lead to a better understanding of how to combat it. It is important to understand that biological and social factors work together to form cocaine's powerful addiction; any effective treatment must aim to counteract both. As such, we must first fully understand cocaine addiction and its properties before we can hope to eradicate it.

References:
1)NIDA Home Page,Various information about Cocaine. Specifically Statistics.

2)Article on Addiction, Describes biology of Addiction.

3)Substance Abuse Facts,

4)National Institute of Health., Various information about cocaine addiction and health hazards, treatments etc.

5)Effects of Cocaine Biologically,

6) More information about Cocaine and how it effects your neurotransmission,

7)Research on Cocaine,

8)Amygdala and Memories,


Munchausen By Proxy
Name: Emma Berda
Date: 2004-04-13 17:09:58
Link to this Comment: 9343


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Although Munchausen By Proxy was first described less than thirty years ago by Dr. Roy Meadow (1), it has garnered much press in recent years, with appearances in "The Sixth Sense" and "Law and Order" and high-profile court cases such as that of Kathy Bush. But what exactly is Munchausen By Proxy, and why does it occur? How could something that destroys our image of a mother exist in society?

"MBP is sometimes called Munchausen Syndrome by Proxy, Munchausen by Proxy Syndrome, or Factitious Disorder by Proxy. All of these terms apply to a well-established variant of maltreatment (abuse and/or neglect) in which caregivers deliberately feign or produce ailments in others. The perpetrator deliberately misleads others knowing that there is no reason to believe the victim has an underlying physical and/or psychological-behavioral problem. The signs and symptoms perpetrators falsify or create are usually physical."(2) In 98% of documented cases, the mother is the one with MBP. Munchausen By Proxy is manifested in behavior in several ways. Sometimes the perpetrator will falsely report that the child has an illness; other times the perpetrator will create evidence of a problem but conceal her role in it. The perpetrator may also exaggerate a real medical problem that the child has. Finally, the perpetrator may worsen an already existing ailment or deliberately cause a problem in the child.(2) These final two manifestations are the most intriguing from a behavioral point of view, because they directly contradict our ideas of maternal instinct.

There is no set profile for MBP (2), but there are some basic facts that usually apply to MBP perpetrators. Perpetrators usually seem to be "normal" and have loving relationships with their victims. However, MBP perpetrators are usually good at deceiving and manipulating people and may have a history of feigning problems in themselves.(2) They often have a dramatic flair and sometimes falsely accuse others of wrongdoing; if charged with wrongdoing themselves, they will vehemently deny it. "MBP perpetrators do not necessarily have to have extensive health care knowledge or be particularly intelligent. It does not take special knowledge to engage in many kinds of MBP maltreatment."(2) This fact is especially important to remember, since many people dismiss accusations of MBP by saying that Ms. X couldn't possibly outwit a team of doctors. All of the information in this paragraph is preceded by the word "usually" or "often"; there could easily be an MBP perpetrator who possesses none of these characteristics, or an innocent mother who has all of them.

Munchausen By Proxy is extremely difficult to diagnose. Each case must be taken on its own, and while past information can be useful, it should not by itself determine whether MBP is the cause of a child's illness. There is a very broad range of characteristics attributed to MBP.(3) This can lead to misdiagnosis of MBP in mothers of severely ill children. Even when MBP is suspected, it is hard to get physical data to prove it. Sometimes a hidden camera can catch the perpetrator inducing illness, but most of the time it is just the word of other people against the perpetrator's. Kathy Bush was found guilty of aggravated child abuse and jailed without any definitive evidence.(4) It was her word versus that of her daughter's doctors, nurses, and the police. Because Kathy Bush had previously been named mother of the year by Hillary Rodham Clinton, the case garnered national attention. MBP perpetrators can vary greatly in their behavior. Another suspected MBP perpetrator is Marie Noes, whose ten babies successively died between the 1940s and the 1960s.(5) Unlike Kathy Bush, who was a doting mother, Marie Noes seemed to have little interest in her children. When one child was in the hospital for two months she visited only twice.(5) Without delving deeper, one would never guess that these two women were perhaps committing similar acts; MBP seems to be the only thing they have in common.

Perpetrators of MBP can come from vastly different backgrounds. Some are rich, some poor. Some were abused as children, some were not. There is no set of conditions from which MBP seems to arise, so we cannot know what drives these women to do this. Most of the literature says that MBP perpetrators act so that they can get attention and sympathy from doctors and other medical staff. This is a reasonable conclusion, but it leaves unanswered why these women would need attention so desperately that they would harm their children. It is important to note that MBP perpetrators do not seem to possess any sort of homicidal tendencies. Although MBP can lead to the death of the child, that death is often an accident caused by the perpetrator's miscalculation. (2) These women are not looking to rid themselves of their children; the children are merely a means to an end.

How could somebody's need for attention be so great that they would go so far as to harm their own children? We tend to think of maternal behavior as a natural occurrence, but maybe it is not. Perhaps maternal behavior is instead a product of human society: something we perceive that does not have any neurobiological foundation. This would make MBP much easier to account for, because if there is no behavioral basis, then it is not so strange that something like this could occur. MBP shocks us to our core because we cannot imagine a mother harming her innocent children. But what if this is just what society tells us? What if there is nothing in our genes that tells us to nurture our children? Do perpetrators of MBP have a psychological problem, or do they merely deviate from our society's norm?

Since the perpetrators of MBP are so different, it is likely that the behavior does not derive from a single specific psychological problem. These perpetrators probably have other psychological problems that contribute to MBP, but mostly they simply deviate from what we expect of mothers.

References

1)Basic MBP Information
2)A rich MBP resource
3)General MBP Information
4)News Articles about the Kathy Bush Trial
5)An Article about the Noes Family


Angsty Teenage Depression
Name: Amanda
Date: 2004-04-13 18:38:18
Link to this Comment: 9346


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


Depression wreaks havoc on nearly one in ten people in the United States (1). While it is not a discriminatory disease, affecting all races, ages, and both genders, it is becoming a more common diagnosis among children and teenagers. Despite the barriers, eighty to ninety percent of medically treated patients improve (1). Before treatment though, depression can cause feelings like being "tired, listless, hopeless, helpless, and generally overwhelmed by life. Simple pleasures are no longer enjoyed, and their world can appear dark and uncontrollable" (1). These symptoms can be especially devastating to a teenager.

Depression is diagnosed when a person experiences a number of symptoms. First, the person needs to have been persistently sad or anxious for at least two weeks. Second, the person needs to have several of the following symptoms: appetite changes (not because of a diet), insomnia or oversleeping, fatigue and energy loss, restlessness, guilty or worthless feelings, difficulty concentrating and thinking, or thoughts of death and suicide (1). While this may seem like a long list of symptoms, each one is important for the implications it has on the victim's life.

Depression can be caused by a number of things, including biochemistry, genetics, personality, and the environment. Each of these, or a combination, can form a dark cloud over a person. Biochemistry is implicated because "deficiencies in two chemicals in the brain, serotonin and norepinephrine, are thought to be responsible for certain symptoms of depression, including anxiety, irritability, and fatigue" (1). Depression also runs in certain families, which leads scientists to believe that there is a genetic component that encourages it. For example, in my family, seven of the twelve people among my father's brothers, sisters, their children, and his parents have been depressed at one point or another in their lives. There is a definite genetic link. A person's character can also lead to depression: people who normally have low self-esteem or are pessimistic are more likely to become depressed than those with high self-esteem who look at the world optimistically. Lastly, the environment can lead to depression. "Continuous exposure to violence, neglect, abuse, or poverty may make people who are already susceptible to depression all the more vulnerable to the illness" (1). A stressful environment encourages a person to spiral downward. At http://www.teachhealth.com/#stressscale a person can take a stress test to gauge his or her level of stress.

While all sorts of people suffer from depression, it can be exceptionally difficult for teenagers. Teenagers have every factor (biochemistry, genetics, personality, and environment) in depression's favor, especially the last two. Teenagers are a group who already have poor self-esteem; out of all age groups, teenagers statistically have the lowest. Puberty makes even the most popular of adolescents shy and nervous. "The vast hormonal changes of puberty are severe stressors. A person's body actually changes shape, sexual organs begin to function, new hormones are released in large quantities. Puberty, as we all know, is very stressful," states the Health Education website (2). Girls begin to grow body parts they are unaccustomed to and feel they must hide, while boys' voices deepen and they become hairier. Everyone gets acne, and a lot of people get braces. Most teenagers do not know how to deal with the raging hormones and thus become shyer about themselves. The middle or high school environment does not help: cliques form and exclude people. If a girl does not wear the right outfit, she can be the outcast for the rest of the year; if a boy cannot catch the Frisbee, he can be "out". Socially, in middle and high school, people can be brutal. Teenagers may also be pressured into substance abuse, which becomes more accessible at this age. All these factors can lead to depression.

Teenagers may show specific warning signs of depression that others should notice. They may have academic problems because of skipped classes, poor concentration, lack of interest, or low energy; this can even lead to teens dropping out. The low grades that result add to the self-criticism and further erode self-esteem. This can cause not only anger, depression, or indifference, but also a change of social scene into one that encourages drugs and alcohol. Depression should also be watched for in teenagers who have eating disorders or extreme feelings of ugliness, and in those who cut themselves (3).

Once someone is depressed, it is extremely difficult to break out of it. As with all illnesses, the further along it is, the harder it is to cure. While there are things that a person can do for herself, such as exercising or relaxing, if it is clinical depression the cure will probably take more. There are support groups such as the National Foundation for Depressive Illnesses, Inc. and the Depression and Bipolar Support Alliance (4). While these support groups are beneficial, most people will probably need the guidance of a psychiatrist, who will also do the diagnostic evaluation. If the doctor feels that psychotherapy alone will not be the most effective cure, she can also prescribe antidepressants. Antidepressants, which usually take about three to six weeks to reach full effect, help correct chemical imbalances in the brain (1).

The psychiatrist will also encourage "talk therapy." Because of this, it is important for a patient to choose a doctor he or she is comfortable with. One thing to look at is how much the doctor includes the patient in making decisions. The UK Depression Alliance website encourages asking, "How do you go about deciding which treatment is right for me?" (4). This helps enable a patient to find a comfortable doctor. There are specific doctors for adolescents with depression. These are exceptionally helpful as "grown up" doctors might forget about the extremely tough time that teenagers go through.

Depression is a life-altering illness that affects not only the patient but also her friends and family. If not treated, it can cause problems throughout life, potentially culminating in suicide. While depression is recognized in adults, it has also become a distinct problem in children and teenagers. In some communities, especially middle- to upper-class suburbia, depression in adolescents is being diagnosed and treated, but in other areas it is being neglected. It is just as important to treat depression in teenagers, especially as they are "the future of America."

WWW Sources
1)American Psychiatric Association, Founded in 1844

2)Health Education: Stress, Depression, Anxiety, Drug Use, For Classes

3)Kids' Health, Depression

4)Depression Alliance, UK Alliance


The Implications of Bilinguality and Bilingual Aph
Name: Prachi Dav
Date: 2004-04-14 00:17:10
Link to this Comment: 9354


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The bilingual (and, for that matter, the polyglot) is an individual whose status as a speaker of two or more languages has been widely discussed. Over time, a disdainful view of bilinguals has slowly transformed into a more positive, and currently even glowing, regard for their linguistic flexibility. One point of view concerning language learning holds that babies are born with the innate capacity to learn any language in which they are immersed (1), and that exposure to multiple languages allows an infant to learn as many; the baby is thus a universal language learner, lending support to Chomsky's conception of a "universal grammar" that forms the basis of every existing language. Bilingual ability and status have implications both for the individual's identity and for neurolinguistic study of wider academic interest. In particular, bilingual aphasia is fast becoming, for the study of the bilingual brain, what aphasia has been for the now heavily studied monolingual brain.

Human language has captured both the artistic and the scientific imagination for centuries. Bilingualism appeals equally to that imagination, for both its theoretical and its practical implications. Bilingualism as a cognitive state that supposedly requires the sharing of cognitive resources has been openly frowned upon; a dated and ignorant assertion by Laurie (in Wei, 2000) (3) declared:

"If it were possible for a child to live in two languages at once equally well, so much the worse. His intellectual and spiritual growth would not thereby be doubled, but halved. Unity of mind and character would have great difficulty asserting itself in such circumstances."

This point of view, among other myths (2), was very popular and eventually blended into a purist, monolingual view of bilinguals, which accepted into the bilingual category only those who are absolutely proficient in both their languages, while all other speakers of more than one language were relegated to one of a long list of subordinate categories (alingual, semilingual, covert bilingual) (3), (4). This view has been emphatically refuted by Grosjean (1989) (4), who asserts the need to define bilinguals according to the contexts of their language usage. The former view has to a great extent been abandoned, and bilingualism is now believed to be advantageous in cultural, social, cognitive and even transnational domains. These successive realisations have broadened the scope of the field's study.

Researchers have lately been very interested in understanding the cortical representation of a native and a second language. In particular, curiosity as to whether the two languages converge upon similar brain areas has been piqued by contradictory findings indicating both shared and divergent representation of language in the bilingual brain. Those who want to understand the regions involved in language processing and production have looked both to neuroimaging studies of normal bilinguals and to studies of clinical populations of bilingual aphasics. In support of the view that propounds anatomical overlap between first and second languages, Chee et al. (1999) (5), (6) showed in an fMRI study of word-stem completion among Mandarin-English bilinguals that the task produced similar activation in both languages: in the left prefrontal region, involving the inferior frontal gyrus, in the supplementary motor area, and in the occipital and parietal areas bilaterally. These results argue for shared lexicons between first and second languages. The structural dissimilarity between Mandarin and English provides a rigorous test of the hypothesis of shared cortical representation, for it is surprising that two such divergent languages overlap in terms of lexical representation.

Additionally, Illes et al. (1999) (7) supported the above findings: in another fMRI study, they too showed inferior frontal gyrus activation among Spanish-English bilinguals performing semantic judgment tasks in both languages. These studies bear on a question integral to the study of language: to what extent do overlapping cortical representations for vastly differing languages imply similarity between them in terms of personal identification and comfort with the languages? This is a question to which we will return below. These studies, although they report what seem to be reliable findings, are confounded: while age of language acquisition (a factor thought to affect language lateralisation in the brain) (8) is held roughly constant at about age twelve, level of language proficiency (a further factor bearing on lateralisation) (8) is unreported or reported only as "moderate." Such intervening variables must be controlled if reliable results are to be obtained. Moreover, even a swift scan shows that the frequently cited literature supporting shared anatomical correlates between native and second languages is limited (9), (10), and the studies are often barely comparable because of the variety of linguistic tasks employed to probe language comprehension or production in the bilingual brain. The findings that second language learners may display shared cortical areas between their languages are nevertheless interesting, for they implicitly refute the classic assertion of a critical window for language learning (by implying similar proficiency in first and second languages), and they reiterate the phenomenon of brain plasticity.

The latter statement apropos of plasticity is particularly relevant to Obler's stage hypothesis (11), which asserts that language learning moves from right-hemisphere lateralisation in the early stages to left-hemisphere overlap with the native language as proficiency increases. However, a test of this hypothesis requires some knowledge of findings in which L1 and L2 appear to be separately localised in the brain.

In 1997, Kim et al. (12) used fMRI to examine cortical activation among a range of bilinguals proficient in various languages. The participant pool was divided into two groups: early (L2 acquisition before the age of five) and late (after the age of twelve) bilinguals. The results suggested anatomical variation between the groups: early bilinguals showed similar activation for both languages in Broca's and Wernicke's areas during a silent sentence generation task, while late bilinguals displayed common activation in Wernicke's but not in Broca's area. These results indicate a role for age of acquisition, suggesting that the "critical period" concept cannot be discarded and that, to some extent, language learned after a certain age is differentially represented in the brain. This finding complicates the conclusions that can be drawn from the studies cited earlier, but the study carries confounds of its own. It used sentence generation tasks, which can barely be compared with the earlier single-word generation studies, for the former require additional and more complex linguistic processing. Additionally, a silent sentence generation task is a measure whose accuracy is difficult to verify across participants. However, other reports do support the finding that L1 and L2 may be anatomically separate in the bilingual brain (13), both in scientific terms and in experiential terms: the difficulty of becoming proficient in a second language beyond a certain age intuitively suggests that some corresponding anatomical difference must exist.

Not only do observations from various experimental studies provide information regarding the interaction between different languages in terms of cortical representation, but recovery patterns among bilingual aphasics (14), (15), (16) also allow for the construction of hypotheses regarding the anatomical correlates of language. The patterns of recovery observed in previous cases of bilingual aphasia (selective, parallel, differential, antagonistic, blended and successive) (17), when combined with the knowledge gleaned from neuroimaging studies, should allow a more comprehensive assessment of the processes involved in maintaining two languages in the brain. The scientific examination of bilingual aphasics must be combined with studies of the impact of this impairment on aphasics' identity (18), for identity is often attached to one's language, and damage to this ability may have a devastating effect on the aphasic himself.

The study of bilinguals and of bilingual aphasia holds a great deal of promise, both for the study of identity as attached to language and for the mapping of multiple languages in the brain. Studies of bilingual aphasics and the recovery patterns observed within and among their languages have challenged existing accounts of language representation in the brain. A consolidation and analysis of the various findings is ongoing and will perhaps lead to a growth in knowledge regarding the various aspects of bilingualism.


References

1)Timothy Mason's Site

2)A Note on Myths about Language, Learning, and Minority Children

3) Wei, L. (2000).The Bilingualism Reader. Routledge: London ; New York.

4) Grosjean, F. (1989) Neurolinguists beware! The bilingual is not two monolinguals in one person. Brain and Language, 36, 3-15.

5)Nature: Science Update

6) Chee, M. W. L., Tan, E. W. L., & Thiel, T. (1999). Mandarin and English single word processing studied with functional magnetic resonance imaging. The Journal of Neuroscience, 19, 3050-3056.

7) Illes, J., Francis, W. S., Desmond, J. E., Gabrieli, J. D. E., Glover, G. H., Poldrack, R., Lee, C. J., & Wagner, A. D. (1999). Convergent cortical representation of semantic processing in bilinguals. Brain and Language, 70, 347-363.

8) Obler, L. K., Zatorre, R. J., & Galloway, L. (2000) Cerebral lateralization in bilinguals: methodological issues, pp. 381-394. In Wei, L.The Bilingualism Reader. Routledge: London ; New York.

9) Klein, D., Milner, B., Zatorre, R. J., Zhao, V., & Nikelski, J. (1999). Cerebral organization in bilinguals: A PET study of Chinese-English verb generation. NeuroReport, 10, 2841-2846.

10) Chee, M. W. L., Caplan, D., Soon, C. S., Sriram, N., Tan, E. W. L., Thiel, T., & Weekes, B. (1999). Processing of visually presented sentences in Mandarin and English studied with fMRI. Neuron, 23, 127-137.

11)Acquisition of second languages

12) Kim, K. H. S., Relkin, N. R., Lee, K. M., & Hirsch, J. (1997). Distinct cortical areas associated with native and second languages. Nature, 388, 171-174.

13)Study sheds light on how brain processes languages

14) Junque, C., Vendrell, P., Vendrell-Brucet, J. M., & Tobena, A. (1989). Brain and Language, 36, 16-22.

15) Nilipour, R., & Ashayeri, H. (1989). Alternating antagonism between two languages with successive recovery of a third in a trilingual aphasic patient. Brain and Language, 36, 23-48.

16) Paradis, M., & Goldblum, M. (1989). Selective crossed aphasia in a trilingual aphasic patient followed by reciprocal antagonism. Brain and Language, 36, 62-75.

17)The Neurocognition of Recovery Patterns

18)Bilingualism and Identity


In the Mind of a Serial Killer
Name: Chevon Dep
Date: 2004-04-14 00:57:09
Link to this Comment: 9355


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

The movie "Natural Born Killers" did not simply explore the subject of serial killers; it also dealt with the mentality and personal background that influence many of the real-life serial killers of society. The release of such movies and documentaries shows that there is a fascination with serial killers. But why? Ted Bundy, Charles Manson, David Berkowitz, John Wayne Gacy Jr., Jeffrey Dahmer, and Jack the Ripper are all infamous serial killers whose behavior and personal backgrounds have been 'studied' by the media and by psychiatrists. In many cases of serial killing, the behavior is influenced either by the past experiences and backgrounds or by the psychological processes of the killers. However, the psyche of a serial killer is difficult to understand, which means that only interpretations can be made regarding this topic.

The way in which the term "serial killer" came into existence is interesting. During the mid-1970s, FBI agent Robert K. Ressler coined the phrase after serial movies. As Lippit argues, "Like each episode of a serial movie, the completion of each serial murder lays the foundation for the next act which in turn precipitates future acts, leaving the serial subject always wanting more, always hungry, addicted."(1) Serial killers' 'addiction' to killing does not cease after the first time but instead increases. In fact, the FBI has estimated that at any given time between 200 and 500 serial killers are at large, and that they kill 3,500 people a year. (2) Such numbers show that killing becomes a pattern that is difficult to break.

The inability to break such a pattern can be attributed to the person's brain function. Since the frontal lobe deals with decision-making, it could possibly offer an explanation of what is going on in the mind of a serial killer: if there is frontal lobe damage or abnormal activity in this region of the brain, there is an impaired ability to make rational decisions. This in no way serves as a justification for such behavior. Instead, it serves as a possible distinction between the brain of a serial killer and a 'normal' brain.

The interviews of the serial killer John Wayne Gacy address some important issues that are useful for understanding the relationship between his brain and his behavior. During Gacy's childhood and adolescence, his father expressed contempt both for his illness, psychomotor epilepsy, and for the pampering by Gacy's mother. (2) This form of epilepsy, which arises in the temporal lobe, can cause a clouding of consciousness and amnesia for an event. Along with this, the person's behavior can be altered, with bursts of anger, emotional outbursts, and fear. (3) Symptoms such as these could have been a factor in Gacy's adult behavior. Also, Gacy's father repeatedly said that John was going to be a queer and called him a "he-she". (2) Gacy internalized this verbal abuse from his father and applied it to his victims, referring to them as worthless little queers and punks. (2) Did his father's verbal abuse lead Gacy to develop a homophobia and thus to rape, sodomize, torture, and strangle to death thirty-three young men over the course of more than a decade? This is a strong possibility. However, it could also be a mixture of his psychological makeup with his childhood experiences and background. As Simon says, "Although character has a genetic component, much of it is shaped by the nature and quality of our early relationships and experiences."(2) Both good and bad experiences become embedded in the child's developing personality and influence adult character, as in the case of many serial killers.

Even though the brain could be instrumental in shaping the mind of a serial killer, it is important to point out that most serial killers have not lost their grip on reality and thus have some control over their decisions. For example, when the police interrogate serial killers, many of them are not willing to talk. Instead, they tell you only what they want you to know, and to some of them it is a mind game. (4) The serial killers realize that the police want information that can only come from them. Therefore, many serial killers play games, which increases their 'appetite' to kill more people. These mind games leave the police even more puzzled.

The strategy the serial killer develops can be equated with being a Dr. Jekyll and Mr. Hyde, an interesting concept to explore. Dr. Jekyll represents the 'normal' lives of serial killers, which include working, having a family, and paying taxes. (2) On the other hand there is the extreme, Mr. Hyde, who represents the dark side of humanity that tortures and kills victims. The ability to keep the 'normal' and the 'sinister' life as two separate entities shows that serial killers have control over their decisions to a certain extent. In fact, this ability furthers the yearning to kill more people until the authorities catch them. Simon argues, "Suspension of empathy is necessary for someone to intentionally harm other people, and it is usually accompanied by the psychological mechanism of devaluation and projection."(2) In order to carry out such an act, serial killers not only have to disregard the feelings of their victims but also project their insecurities onto their victims in order to have control. For example, Ted Bundy referred to his victims as "cargo" and "damaged goods." (2) Often, serial killers have to place their victims in sub-human categories to execute the act with little or no remorse.

The pattern of killing is not the same for all serial killers. Believe it or not, they have specific targets. For example, Ted Bundy stalked young women with dark hair. There is no exact explanation for this specificity of victims, but the history and experiences of the serial killer can provide some insight into such profiling. In the Jeffrey Dahmer case, African-Americans made up the majority of the victims. Some psychiatrists have attributed this to Dahmer's job as a chocolate mixer: since he worked at a chocolate factory, Dahmer supposedly combined his hatred of blacks with consuming dark food. (1) This may sound far-fetched, but studies have been done attempting to draw the connection. Another example of profiling occurs with the female serial killer Aileen Wuornos, who killed truck drivers. She targeted them because of her own experiences and background; it was not necessarily a psychological process in her case.

It is difficult to pinpoint what exactly causes serial killers to become serial killers; numerous factors can influence such behavior. This leaves us with the questions: What makes a serial killer? Could anyone become a serial killer? According to Simon, everyone harbors trivial evils involving the same failures of empathy and devaluation of others. (2) Since these are characteristics of serial killers, does that mean everyone is a potential serial killer? If so, are serial killers simply translating these feelings and emotions into killing people?

References


1)Lippit, Akira Mizuta. "The infinite series: fathers, cannibals, chemists..." Criticism. Summer 1996: 1-18, A Good Article

2)Simon, Robert. "Serial Killers, Evil, and Us." National Forum. Fall 2000: 1-12, A Good Article

3)Psychomotor Epilepsy, A Good Web Source

4)Warning Over Mind Games of Serial Killers." European Intelligence Wire. 21 Feb. 2004: 1-2, A Good Article


Forget About It: The Quest to Forget Bad Memories
Name: Millicent
Date: 2004-04-14 21:50:35
Link to this Comment: 9372


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Nightmares, violent flashbacks, and an inability to forget painful memories for even a moment: these are some of the consequences of experiencing a trauma. The haunting nature of the memories is often so horrible that erasing the memory altogether seems desirable. While in the past the idea of erasing memories existed only in movies, scientists are getting closer to methods that could do it. The process is referred to as "therapeutic forgetting" (5). As research advances, so do the debates on the ethics of the process. Therapeutic forgetting has opened a discussion among policy makers, scientists, and those who suffer from horrible memories. If successful, drugs that alter memories could help sufferers of Post Traumatic Stress Disorder. However, if abused, the same medication could change the way we process emotional pain and hinder our ability to work through bad memories.

Post traumatic stress disorder, a disorder which often occurs in people who live through a traumatic experience, can be debilitating at its worst. Individuals with the disorder often relive their traumatic experience through dreams and flashbacks of their violent memories (6). The psychiatric disorder is often associated with veterans of war, but its effects are felt by many survivors of traumatic experiences. In the most severe cases PTSD can be incapacitating because of the frequent flashbacks. Currently those who suffer from the disorder are treated with various types of therapy, sleeping pills, and antidepressants. While these treatments can help ease the pain associated with awful memories, none of them solves the problem fully.

Recent research has shown that the ability to erase, or at least decrease the intensity of, painful memories may soon be possible. One study, led by Roger Pitman of Harvard Medical School, tests the ability of the drug propranolol to affect the hormones involved with painful memories (5). The theory is based on the idea that painful memories become predominant in the mind. Adrenaline and norepinephrine are produced during a stirring experience, and these hormones are believed to increase the brain's ability to grab onto and hold a memory. As a traumatic experience is particularly stirring, memories from such experiences can be particularly haunting. Propranolol, once used to treat heart problems, goes to the brain and interferes with the action of adrenaline and norepinephrine (5). Pitman believes that when used immediately after a trauma the drug could help people deal with negative experiences. If Pitman is right, PTSD could be avoided by many people who live through trauma. The study is not yet complete, but it already raises interesting questions for scientists, policy makers, and PTSD sufferers.

At first the benefits of therapeutic forgetting for victims of PTSD seem overwhelming. With a drug that decreases the ability to remember traumatic experiences, we could prevent the disorder altogether. However, many bioethicists believe that using medication to forget is unethical (5). Life is about overcoming obstacles, and by overcoming or learning to deal with the disorder individuals learn how to adjust to a problem. How can individuals have empathy for and understanding of emotional pain if their painful memories are dulled? Moreover, the effects of propranolol are not limited to negative memories: a stirring memory that is positive could also be faded (5).

As research and the possibility of memory-erasing treatments advance, policy makers have been forced to respond. As recently as October of 2003, the President's Council on Bioethics commented on drugs meant to help people forget trauma, saying, "they could also be used to ease the soul and enhance the mood of nearly anyone". The Council then argues that the use of such drugs would open a new market for helping people avoid unpleasant thoughts, allowing "our pursuit of happiness and our sense of self-satisfaction [to] become increasingly open to direct biotechnical intervention" (1). For these reasons the Council opposes the pursuit of therapeutic forgetting until further regulations are established.

Despite the opposition to therapeutic forgetting, it is difficult to explain why individuals should be forced to relive negative experiences through debilitating flashbacks and dreams. Certainly the risks the President's Council on Bioethics cites are valid, but the benefits for PTSD patients are crucial. In addition, many veterans argue that those with PTSD resulting from combat in a war supported by the government should be supported by politicians. Some of this support can come by encouraging researchers to continue studying therapeutic forgetting. Arguments that memory alteration is unethical and dangerous because it could be used unnecessarily have some validity. However, some memories, such as those that cause PTSD, are so harsh that no person should have to live through them once, let alone multiple times in flashbacks.
Altering memories is a particularly interesting subject because its possibilities are so varied. In the case of therapeutic forgetting, however, it is important to remember that this treatment could not completely remove a memory; it could only soften the intensity of the traumatic experience. Studies of these treatments should continue so that we can one day look forward to preventing PTSD. No one is suggesting that the drugs be used to avoid all emotional pain, but rather that they be used to aid those people who would otherwise be disabled by traumatic memories.

References


1) Government's Report on Bioethics

2) Center For Cognitive Liberty and Ethics, article on memory

3) Exploratorium Memory Site, Interactive Memory Site

4) Infinity Web Site, an Informative Web Source

5) New York Times Web Site, New York Times Magazine article

6) National Center for Post Traumatic Stress Disorder

7) Serendip, "Forgetting to Remember: The Source of your Symptoms?" by Kristine Hoeldtke


The Punch behind the Peck: A Behavioral and Physio
Name: Ginger Kel
Date: 2004-04-15 12:03:02
Link to this Comment: 9383


<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

Robin Hood, reeling in the corner from nearly being skewered, gazes across the room to his Maid Marian. Disheveled from worry about her love, she lets out a sob. Then with joy she rushes forward to embrace her man. Miraculously, a second wind invigorates Robin as he jumps up to..... shake hands with his lady fair. Just doesn't have the momentum without the dramatic lip lock, does it (1)?

The kiss is arguably the most popular franchise of all time. Butterflies, Eskimos, and the French each have their own brand. Hospitals are fitted with equipment to bestow "kisses of life" to their patients (2). Teenagers, taking advantage of the dark, awkwardly embrace on front porch steps. Poets muse about it. Cher sings about it. Movies revere those who die for it. To make a long story short, mankind is batty for this simple act. Yet, there are organisms in nature that reproduce and thrive without smooching. Why is the kiss such a vital part of the human experience? What is the origin of this kissing behavior? Are people the only living creatures that find merit in the deed?

The average human being will spend two full weeks of his or her life kissing (3). For an organism to focus that much energy on any endeavor, there must be some advantage to negate the cost. Kissing is a positive-reinforcement behavior: to promote habit formation, participants in the activity are rewarded with pleasurable sensations. The organs involved in the kiss are well suited to this function. The lips and the area around the mouth have the highest concentration of sensory nerve endings of all the tactile senses (4). As icing on the cake, the lips are also outfitted with a very thin layer of skin, making them the most sensitive part of the body (5). So, could one claim the structure of the mouth was patterned by the kissing function? No; most likely, the lips are ultra-sensitive to make humans more discriminating critics of what they should ingest. The pleasure potential of the mouth is a parallel role. What then causes a co-mingling of two sets of lips to be pleasurable? The warm and tingly feelings associated with pleasure are the outcome of a potent surge of dopamine, norepinephrine, and phenylethylamine in the brain (6). This "cocktail" of neurotransmitters, which is triggered by electrical signals from the lips, is received by the emotional portions of the brain (5). Almost immediately, the brain responds by producing feelings of elation similar to those induced by certain drugs--kisses: the ultimate anti-depressant?

The euphoria experienced from a kiss has a purpose. To repeat the above assertion, the body is not an altruistic entity; there is a catch to every gift. So, why does the body encourage the act of kissing? It can be a risky activity, after all: a single kiss can exchange 278 species of helpful and harmful bacteria in the saliva, not to mention viruses and the diseases they cause (e.g., herpes and mononucleosis) (7). There are health benefits to kissing, too. Studies have shown that kisses assist in the prevention of tooth decay, stress relief, and weight loss, and can raise self-confidence (8). However, it is possible that a few of these results fall more directly under the placebo effect. For example, my previous statement implies that kissing is a direct treatment for stress. This could be true, but it could also be faith in the treatment that really yields the desired result.

Have I answered the question I posed in the previous paragraph? Is dental hygiene reason enough to necessitate the use of satisfaction hormones? Contrary to how my dentist may feel, I'm inclined to say no. Evolutionarily, nature favors organisms that can survive to perpetuate the species. Thus, most resources in the body are devoted to bettering the odds of producing viable offspring. It is logical then to assume kissing would have a reproductive function as well. Kissing is oftentimes a precursor to sexual activity, so the act of kissing could serve as a trigger for the release of sexual hormones. One of the theories behind the development of the kiss builds on this procreation principle. Many philematologists (people who study kissing) feel that the mouth kiss is a derivation of the "Eskimo" kiss. In this genre of kissing, companions rub noses as an act of greeting (9). This meeting of noses creates a proximity that allows olfactory neurons to detect the other person's pheromones (5). Pheromones are an organism's unique scent; they reveal the mood, health, disposition, and recent exploits of the particular individual (9). Thus, pheromones could be used to evaluate compatibility as a mate. It is important to note that the "Eskimo" kiss is not exclusive to human beings. In fact, many animals practice this exchange of information (10). When your cat rubs his face against yours, he's sizing you up.

Is it plausible that mouth kissing could have evolved as a means of further testing genetic fitness? Perhaps; body fluids are a pretty intimate aspect of a person, after all. In addition to bacteria, saliva contains immunoglobulin (a compound that binds to bacteria to signal disposal by the immune system). Stress and anxiety levels can also be measured in saliva by monitoring the breakdown of noradrenaline (11). In other words, a person can make a pretty educated guess about a potential mate's health just by swapping spit.

Kissing is somewhat of an enigma. In comparison to other aspects of life, scientists know relatively little about the embrace. The theory that kissing originated as a means of data collection (as explained above) is only one of many. Some experts feel that the kiss's roots are more superstitious. There was a belief, at one time, that "the human breath carried the power of one's soul (9)." Thus, kissing was a way for loved ones to exchange this power and merge their souls forever. Although it is tempting to toss this theory aside, it has as much credibility as any other; remnants of this faith are still seen today. After all, why do you think the bride and groom kiss at the end of a marriage ceremony? Another theory asserts that kissing descended from a prehistoric feeding practice. Frequently, mothers would do as the birds do—chew up food, then push it into their children's mouths (10). Kissing later developed as a way the mother could convey her love for the child. This theory is interesting because it allows for the association of emotion with kissing (10). What began as a symbol of the mother-child connection may have evolved to become the poster child for fondness in all relationships.

Why do all these theories vary so vastly from one another? Is kissing that challenging a concept to pin down? It appears so, based on what I have presented to you. The real issue in philematology, however, is whether kisses come from a genetic or a cultural origin; the classic "nature" versus "nurture" argument once again rears its ugly head. Scientists cannot formulate an accurate "source" hypothesis without knowing whether to look in science or in anthropology. Most modern research is taking a step backward to try to solve that conundrum. A German researcher, Onur Güntürkün, spent two years documenting trends in how couples kiss. He found that most couples lean to the right when kissing; he interpreted this as evidence of genetic asymmetries of motor and sensory functions (12). However, he also noted that cultural identity affects the way couples kissed (12). Güntürkün's findings give more insight into kissing's mysterious parents, yet we are still left in an uncomfortable place. Most couples lean right, which implies a genetic predisposition for kissing to the right. However, whatever codes for this asymmetry probably codes for all motor functions (e.g. right-handedness); kissing just happens to fall under its jurisdiction. Güntürkün also mentions the effect of cultural patterns on kissing. This is further evidence that kissing probably comes from a mostly "nurture" background. Yet, so as not to exclude the theories of origin previously mentioned, it is possible that the "nurture" act stemmed from a "nature" need. Without conclusive evidence, philematology continues to be a study of near leads and suggestion.

You don't usually walk out into a field and see two horses engaged in a passionate embrace. So are humans the only species that practices the kooky art of kissing? Surprisingly, no, we are not. Although you will never see two horses making out, you will oftentimes see them smelling one another's head—the "Eskimo" kiss. Lawrence Katz, a neurobiology researcher who studied mice, found that pheromones are critical for animals to receive information (13). Mice and other creatures have developed very powerful vomeronasal organs to read pheromones in detail. Humans have this ability as well, but to a much lesser extent; our evolution placed emphasis on the sense of sight over the sense of smell. Katz also deduced that pheromones are nature's prevention against inbreeding: "When mice met their genetic twin, certain neurons fired. When they encountered mice from a different strain, different neurons activated (13)." Thus, in animals, kissing serves a vital reproductive function: finding both a responsive and a genetically different mate. Could this research be further evidence that kissing in humans has a reproductive basis as well?

Ingrid Bergman mused that "a kiss is a lovely trick designed by nature to stop speech when words become superfluous (14)." In other words, a kiss is an act that communicates unmistakably without words. The bulk of this paper has been devoted to the scientific aspects of kissing. Yet there are volumes of emotional and psychological implications behind the practice that I haven't even touched upon. Kissing may have evolved as a way to increase the fitness of a species, but it quickly became intertwined with emotion. It has since become a physical embodiment of intangible qualities like love, camaraderie, and devotion. Because kissing means so much in human culture, we owe it to ourselves to understand it fully. Maybe then we'll understand why marriages that lack kissing usually result in divorce (7).

References

1) Self Written Scene Parody of Robin Hood: Prince of Thieves, prod. by Morgan Creek, dir. by James G. Robinson, 144 min. , Warner Brothers, 1991, videocassette.
2)Kissing, Written by Peta Heskell
3)Fun Facts About Kissing, from HiCards
4)Kissing, from Barbelith Underground Community
5)Can a Kiss be Better than Sex?, Written by John Triggs
6)News: The Science of Kissing, Written by Rob Bhatt
7)Science of a kiss, Written by Raj Kaushik
8)Reasons Why Kissing is Good for You, from CoolNurse
9)First Kisses, Written by David Templeton
10)The Science of Kissing,Written by Edward Willett
11)Kissing—how it all began..., from NZGirl
12)Your Kiss is All Right With Me, Written by Amanda Gardner
13)There's No Mistaking Mouse Lust, Written by Jennifer Thomas
14)Quotations About Kisses, from The Quote Garden


Principles of Neurological Signaling
Name: Jean Yanol
Date: 2004-04-15 20:22:03
Link to this Comment: 9394

<mytitle>

Biology 202
2004 Second Web Paper
On Serendip

In order to understand how the nervous system works, we must study how its parts communicate with each other. The nervous system is among the most essential parts of our anatomy: it influences the organ systems, controls motor function, houses our consciousness, and can influence many cellular processes. Many of this system's actions are carried out through signaling from one part of the nervous system to another. Through an understanding of these signaling methods we can hope to fix problems that arise in neurological signaling, along with gaining a basic grasp of how our bodies function. Without research into signaling in the nervous system, neurological biomedicine would be at a nearly complete standstill. Neurological signaling has a variety of different components and acts in substantially different ways. Here I will discuss some properties of signaling and why they are important.

Probably the most important signaling apparatus in the nervous system's activities is the ion channel: a membrane protein forming a water-filled pore that can be opened or closed chemically or electrically. An ion channel usually lets only one type of ion flow, such as sodium, owing to the ion's charge and the charge of the pore's lining. Chemically, these pores can be opened to permit ion flow by ligands, small molecules that attach to receptor proteins in a membrane and cause a change in pore conformation. In fast synaptic transmission, for example, the ligands glutamate and gamma-aminobutyric acid (GABA) open channels that allow certain ions, but not others, to flow through the pore, and they are involved in many types of ligand-gated channel activity. Other ion channels are voltage-gated: changes in the electrical potential across the membrane open and close the pore to control ion flow. As ions flow through the pore, electrical potentials are changed.
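The direction an ion flows when its channel opens is set by the balance between its concentration gradient and the electrical gradient across the membrane. As an illustration (not from this paper; the concentrations below are standard textbook values, not measured ones), the Nernst equation gives the equilibrium potential for a single ion species:

```python
# Hedged sketch: equilibrium (Nernst) potential for one ion species.
# All concentration values are typical textbook figures, used only to
# illustrate why opening a channel drives ions one way or the other.
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # body temperature, K

def nernst_mV(z, conc_out, conc_in):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical mammalian neuron concentrations (mM)
E_K  = nernst_mV(+1, conc_out=5.0,   conc_in=140.0)  # potassium
E_Na = nernst_mV(+1, conc_out=145.0, conc_in=12.0)   # sodium

print(f"E_K  ~ {E_K:.0f} mV")   # strongly negative
print(f"E_Na ~ {E_Na:.0f} mV")  # strongly positive
```

Because potassium's equilibrium potential is strongly negative and sodium's strongly positive, opening sodium channels depolarizes the membrane while opening potassium channels pulls it back toward rest.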

The signal produced when changing ion permeabilities alter the membrane's electrical potential is called an action potential. Action potentials can send signals very quickly over relatively long distances in the body via the projections of neurons (1). In myelinated projections, the current effectively skips from one node of Ranvier to the next (2). Among other things, action potentials control immediate muscle movements. They are the most immediate form of signaling and are used constantly by the nervous system in somatic control. Action potentials occur through depolarization of the membrane; therefore the speed of action potential propagation depends on the speed of depolarization. Action potentials also travel better when there is less membrane capacitance, the ability of the membrane to store charge (2). When discussing action potentials it is important to discuss resting potential and synaptic potential. The resting potential is the membrane's normal state, from which it can change at any time; without a resting potential, action potentials could not occur. Synaptic potential refers to the change produced when an action potential reaches the end of a neuronal projection and triggers the release of a neurotransmitter across the synapse. The released neurotransmitter travels across the synapse and, if it comes in contact with another neuron, causes changes in that neuron.

While action potentials are clearly a very important signaling method in the nervous system, there are other neurological signaling events that can have a great impact on the body as well. These are generally slower, or have a lesser impact on other body systems, than action potentials, and they rely more heavily on chemicals in their signaling mechanisms. Such signaling tends to act more locally, meaning that it does not travel as far as an action potential might. These methods include more chemically based approaches, such as interactions between proteins associated with neurons. Chemicals can interact with cells to change the concentration, conformation, activation, or formation of proteins, ions, and other components (3). Different receptors associate with different ligands and produce different responses in the cell. Countless cellular interactions occur in this way, and disruptions in these reactions can cause neurological diseases such as Alzheimer's disease and perhaps schizophrenia, among others.

In short, signaling and chemical interactions are important for understanding how parts of the nervous system interact with each other and how the nervous system interacts with other somatic systems. By gaining knowledge about the types of signaling that occur in the nervous system, we can assess problems with the nervous system and possibly develop treatments, among other uses for this information. However, the processes of the nervous system are much more complicated than this, and each reaction and process is unique, which is why the nervous system remains somewhat of a mystery today.

References

1)Neurobiology and Behavior, 2004

2)Nelson Lecture 4

3) Helmreich, Ernst J.M. The Biochemistry of Cell Signaling. New York: Oxford University Press Inc., 2001.


Mind Over Body: Studying the Placebo Effect
Name: Mridula Sh
Date: 2004-04-15 22:38:57
Link to this Comment: 9396


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"The power which a man's imagination has over his body to heal it or make it sick is a force which none of us are born without. The first man had it; the last one will possess it." -Mark Twain, 1903 (4).

You've been experiencing shooting pains down your shoulder blades along the sides of your back for a week, ever since you woke up the morning after that intense rugby game. You decide it's time to see the physical therapist. Twenty minutes into the consultation, he has made note of your history, asked you to perform some maneuvers, and given you instructions to ice and stretch those constricted muscles. "If the pain persists for a couple of days," he says, "take some ibuprofen." On the drive back home, you're already feeling better. Two days later you're back on the field feeling invincible. The answer to this miraculous recovery could lie in the mysterious, highly controversial benefits of the placebo effect.

The placebo effect is "the measurable, observable, or felt improvement in health not attributable to treatment." This effect is caused by the administering of a treatment that has no intrinsic therapeutic value in the healing process (1),(5). The placebo effect first caused waves in the medical community in the 1950s, when Henry K. Beecher of Harvard University published experimental findings suggesting that a significant number of patients (30-40%) suffering from chronic ailments improved after taking a placebo (5). Over the decades, astounding advancements in medical technology and efficacious procedures have enhanced the quality and longevity of people's lives. Concurrent studies investigating the placebo effect have yielded unexpected results, often lessening the apparent legitimacy of drug treatments, with serious implications in the ethical, scientific, and medical worlds. My research in this area stems from an interest in the neurobiological implications of such studies, with a view towards understanding the psychological versus physiological aspects of the human brain-body relationship. This paper will investigate the placebo effect, outline some plausible causes, examine the ethical dilemma surrounding these inert substances, and attempt to gain an understanding of the phenomenon through a study of the complex brain-body relationship.

In a study conducted by Tor D. Wager and colleagues, participants were exposed to a series of electric shocks (7). At some point during the experiment, a placebo skin cream was administered to a proportion of the participants. About 70% of these participants claimed to feel less pain after the application of the skin cream. Analysis of data collected with a functional magnetic resonance scanner gave evidence of reduced activity in parts of the brain associated with pain perception in participants who felt reduced pain (7),(8). So what triggers these results? The answer might lie in the brain=behavior theory. While the placebo has no direct pharmacological effects, its psychological effects generate in the mind of a patient the expectation of a certain consequence (a reduction in pain). This expectation in turn influences the perception of feeling (in this case, analgesia) (7). The brain relies on the sensory input it receives, so it is continually changing in response to (supposedly) altered stimuli. These neurological changes in turn cause behavioral modifications. Thus one might understand the placebo effect as essentially a biological change that results from a change that is largely psychological.

These findings also showed that the 70% who reported a reduction in pain exhibited a pattern of brain activity different from those who did not (9). This result suggests a degree of plasticity in the human brain. Studies conducted by Walter A. Brown confirm that depressed patients who respond to placebos differ in their biochemical pathways from those who do not respond to placebo treatments (5). This could explain why only 70% of those treated with placebos in the shock experiment responded with a reduced feeling of pain whereas the other 30% did not.

Neurologists in British Columbia used PET scans to determine the amount of dopamine activity in the brains of patients with Parkinson's disease when given either a placebo or a drug that mimicked dopamine. Results showed that the brains of patients given the placebo released as much dopamine as those given the active drug (3). If a placebo can induce the same result as an active drug, is there something more powerful than the chemical substance that causes this change? It turns out that the effectiveness of treatment is also a function of the mental approach of the patient receiving the treatment as well as the attitude of the physician administering it (5).

Studies have shown that the biochemical responses to antidepressant medication largely depend on the faith and mental outlook of the patient towards the treatment (1). A patient who has high expectations of improvement is more likely to feel better than a skeptic. A thorough consultation with the doctor and the act of undertaking a therapeutic process boost the confidence of the patient and give one a sense of control over a condition that previously seemed hopeless. The alleviation of anxiety and the generation of positive emotions trigger physical changes, such as the activation of endogenous pain-control centers that release endorphins to reduce symptoms of illness (1). This explanation could give one a better understanding of the high success rate of homeopathic (and other alternative) treatments that use natural remedies to cure ailments. Charismatic practitioners make use of the trust and beliefs of their patients to induce the body's own healing processes to bring about a change (1).

If placebos are seen as broadly effective therapeutic devices, then why is their use so controversial? The mysterious power of the placebo effect is responsible for the ethical dilemma it causes. The placebo essentially makes use of the fact that if the brain can be "tricked" into thinking that the body is being treated for an illness, it will trigger the necessary natural biochemical processes to bring about the change. If this is indeed how the placebo works, then why are patients administered active drugs at all? The answer lies in the fact that the use of therapeutic placebos involves deception on the part of the practitioner (1),(2),(4). It violates the fundamental principles of trust and faith on which the doctor-patient relationship is based. Yet it is ironic that this very violation is responsible for the success of the placebo effect. Studies have shown that patients exhibiting the placebo effect stop doing so once they are told that they are on a placebo. Using the brain=behavior theory to interpret this result, it seems that the brain, in response to new conflicting information, sets off a negative psychological feedback process which in turn induces the biological change seen as resistance to the placebo.

Conflicting research results regarding the legitimacy of the placebo effect have also been a cause for concern. Recent research conducted at the University of Copenhagen produced results that cast doubt on its authenticity (1),(3). While some trials showed results, these successes were not significant enough to prove the powerful clinical effects claimed for these inert "drugs." Thus at present, the dearth of scientific knowledge regarding placebos, the lack of understanding of the manner in which they bring about change, and the ethical issues surrounding their use have left the phenomenon of the placebo effect shrouded in controversy. Yet this need for caution must not negate the genuine findings of clinical research or dismiss entirely the therapeutic placebo procedures used by practitioners.

We have become a pill-popping society that has succumbed to the manipulations of large pharmaceutical companies. Intriguing phenomena such as the placebo effect reveal the healing power of the brain and the environment (4). A comprehensive understanding of this phenomenon requires in-depth knowledge of the complex brain and its neural mechanisms, a pursuit that is still in its infancy. Yet the results of studies are promising, and who knows: sometime in the future we just might be able to make those dummy pills work as well as drugs.

References

1)The Skeptic's Dictionary, the placebo effect.
2)Tamar Nordenberg, The Healing Power of Placebos. FDA Consumer magazine, January February 2000
3)W. Wayt Gibbs, All in the Mind. Fact or Artifact? The Placebo Effect May be a Little of Both . Scientific American, 2001
4)Kenneth E. Legins, Is Prescribing Placebos Ethical? Yes. American Council on Science and Health (1997 & 1998)
5)Walter A. Brown, The Placebo Effect. Scientific American (1997)
6)American Psychological Association press releases, Placebo Effect Accounts for Fifty Percent of improvement in Depressed Patients Taking Antidepressants.
7)Tor D. Wager, James K. Rilling, Edward E. Smith, Alex Sokolik, Kenneth L. Casey, Richard J. Davidson, Stephen M. Kosslyn, Robert M. Rose, Jonathan D. Cohen, Placebo-Induced Changes in fMRI in the Anticipation and Experience of Pain. Science Magazine (February 2004)
8)Dennis O'Brien, Brain response to placebo found: Study says pain reaction differs if patient believes treatment is effective. The Baltimore Sun Company (February 20, 2004)
9)Jerome Burne, Cured by an Imposter. The Times (London) (April 10, 2004)


Creutzfeldt-Jakob's Disease - The Misunderstood Di
Name: Katina Kra
Date: 2004-04-19 10:20:58
Link to this Comment: 9433


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


It comes inconspicuously: through family genes, through random mutation of bad luck, or even from contaminated body masses. Creutzfeldt-Jakob disease (CJD), named after the two men who discovered the pattern of its symptoms in the 1920's, is a disease in which the brain rapidly deteriorates and holes begin to form within its structure. (1,2,3) It is the long incubatory period of CJD, though, that makes it particularly hard to track and treat. The relation CJD has to the European epidemic of Bovine Spongiform Encephalopathy (BSE) is crucial: BSE caused the deaths of nearly 200,000 head of cattle, and a new variant of the disease emerged. (3) Despite new scientific understanding of both diseases, a great deal of misinformation has spread, and the fear of Mad Cow Disease has permeated the culture of Europe and many other countries. There is a difference between these two neurological degenerative disorders - BSE-related vCJD and classic CJD - but this distinction is rarely clear to those without medical training; the relative lack of information, combined with the disease's grim diagnosis, has led many citizens and governments to a skewed perception of what the disease truly is.


In the early 1990's, when scientists and doctors began to notice a pattern of spongiform illness in patients in England, they looked at CJD as a possible cause of this believed "outbreak." However, when former cases of CJD were compared to the new cases, a difference was noticed. The age of the afflicted had dropped drastically: those who had this "new" illness were young, primarily under the age of 40, whereas CJD typically occurred in older populations, predominantly those over fifty years of age. (1, 2, 3, 6) Even the course of the variant was different; it would typically last longer within a patient than CJD, while sharing similar neurological symptoms. In the initial stages, however, CJD patients typically developed neuromuscular coordination problems as well as personality changes; in vCJD, by way of contrast, psychological symptoms such as depression and anxiety appear first, later progressing to neurological degeneration. (2) Both have similar neurological consequences, but researchers determined that this vCJD was not necessarily the same genetic type as classic CJD.


Neurologically, CJD is one of the most devastating diseases of the brain. Photographs of CJD patients' brains at autopsy show a grim picture: a brain that has literally been eaten away, pockmarked with holes. (9) Symptoms begin suddenly after remaining dormant anywhere from 5 to 40 years, and rapidly degenerate the brain and body. The beginning stages involve mild dementia and possible psychological problems, with confusion and memory loss the most common, along with loss of muscular control. Quickly, the disease proceeds to dysesthesia, a condition that causes pain sensations in areas such as the face or limbs, along with severe mental impairment and the muscle spasms known as myoclonus. Often, CJD patients lose their ability to speak, as well as much of their immunity, becoming susceptible to illnesses such as pneumonia. At the peak of the symptoms, those suffering from the disease lose the ability to control themselves physically, suffer severe dementia or mental impairment, and commonly lapse into comas. (2) It is this rapid downward progression of symptoms that makes the disease so lethal: death generally occurs within one year of the onset of CJD, and within approximately two years for vCJD.


Treatment for the disease remains elusive; there is no way to prevent the brain from deteriorating, and all medical professionals can do is lessen the symptoms and the pain associated with them. Even diagnosing CJD or vCJD is still very primitive: a brain biopsy, or physical examination of the brain at autopsy, has been the only proven way to tell. A promising new method of detection using spinal cord fluid has emerged, but it only detects the disease once visible symptoms have developed, meaning any preventative treatment would still come too late. In one recent study, scientists found at autopsy that nearly 13% of patients diagnosed with Alzheimer's were actually suffering from CJD, showing that similar symptoms can arise from many neurodegenerative diseases. (8) Even with modern technology and medicine, CJD and its variants still cannot be cured, nor can they always be detected or diagnosed.


The disease itself, though, is not what one typically associates with transmissible diseases; it is neither a virus nor a bacterium. In early studies, it was believed to be a "slow virus," meaning there existed an extended period of time between infection and the appearance of visible symptoms. More recently, scientists have found that abnormally shaped prions, which are protein structures within the body, are the probable cause of transmissible spongiform encephalopathy (TSE) disease. (4) It is when a prion becomes mutated and misfolded, becoming infectious, that a TSE occurs. It is now believed that these prions, random mutations of proteins, create the degeneration within the brain, forming the holes and gaps that are so indicative of TSE. (1, 5)


CJD has progressed relatively rapidly through human populations. There are three different ways in which one can become infected. In 85 to 90% of all cases, a sporadic and random mutation leads to the disease. There is also a genetic link to CJD: approximately 5 to 10% of reported cases occur in someone with a familial connection to CJD, though the altered prion gene must be present in the gametes for it to be passed along. The last and most uncommon mode of transmission is known as iatrogenic, which accounts for about 1% of CJD cases. (1) This occurs through direct contact with CJD-contaminated instruments or body matter. Formerly, cornea or dura mater grafts, the use of natural growth hormones, and unsterilized surgical tools could have transmitted the disease. Now, because of more controlled and sanitary conditions in medical facilities, and the use of synthetically derived hormones, this route of transmission has become increasingly unlikely. Even with these different possibilities of infection, health associations still estimate that CJD occurs in only approximately one out of every million people. (3)


For variant CJD, however, the prospective methods of transmission are much more frightening and unusual. When an outbreak of the prion-related disease Bovine Spongiform Encephalopathy (BSE) occurred in the mid-1980s in Great Britain, the only response was to remove the cattle diagnosed with the disease. But when a large number of TSE cases appeared in early 1996, scientists and residents became increasingly worried about the health of the population. As the two forms of spongiform disease were compared, a strong similarity between the prions was seen, and a theory developed as to how this variant could be explained: consumption of beef products from cattle tainted with BSE could result in the formation of a spongiform illness within humans. The cattle could develop BSE from eating feed mixtures, unfit for human consumption, containing scraps from the nervous systems of ruminants, or other large grass-eating mammals. (3, 6) Despite no significant or direct proof that this was the cause of the illness, Great Britain and Europe erupted into a frenzy of fear. Imports of British cattle were banned, and people stopped eating beef out of fear that they would get "Mad Cow Disease." Yet according to the statistics, only between 140 and 155 cases of actual vCJD have been confirmed or suspected in the population. (7)


Yet, even with these relatively low numbers, comparable to the infection rate of CJD, Europe, Japan, and the United States have all taken extraordinary measures to "prevent" transmission of the disease. Sanctions on beef imports and blood donations have been put into place to "reduce the number of cases" of the disease. By restricting blood donations from people who have spent three or more months in Europe since 1980, the United States and other countries are putting themselves at risk of a blood shortage, even though CJD and vCJD have not been proven transmissible through the lymphatic system. (2) And because of the long incubation period of both forms of CJD, and the inability of scientists to detect the disease before symptoms appear, these preventative measures may still not protect populations from the minor risks of the disease.


The undetectability and lethality of CJD and vCJD are what have created the uproar within many cultures and countries, leaving behind fear and confusion about what the disease truly is. With much disinformation still circulating freely, public fears have not subsided, prompting drastic actions such as the mass slaughter of cattle, with many restrictions still remaining on the cattle industry and on blood donation because of the risk of CJD or vCJD. By understanding what the diseases are, the differences between the original and variant CJD, and the transmission and symptoms of both, the public's misinformation can be corrected and its worries allayed. However, this goal of the medical community to inform the public has proven a difficult task. Many people are still not ready to accept what CJD and vCJD are and how they occur, and will not change their entrenched beliefs regarding CJD, allowing the disease to become further entangled in global politics, the media, and what the public would like to believe.


References


1)NORD Site; The National Organization for Rare Disorders site, with extremely detailed information.

2)Creutzfeldt Jakob's Voice; The CJD Voice's site, with information and many interesting links.

3)WHO Site; The World Health Organization's site, with information regarding the global impact.

4)Kuru and TSE; A site describing Kuru, another TSE associated with cannibalism.

5)Encyclopedia definition; The encyclopedia definition of what prions are.

6)Massachusetts Health Department; The Massachusetts State information guide to CJD and Mad Cow Disease.

7)Statistics and Numbers of CJD; The statistics of CJD prevalence.

8)Alzheimer's and CJD; A site describing the misdiagnosis of CJD as Alzheimer's.

9)Photograph comparing brains; A photograph of a brain suffering from a TSE compared to a normal brain.


On Heroin
Name: Ariel Sing
Date: 2004-04-19 17:43:00
Link to this Comment: 9443


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Heroin: one of the most potent drugs on Earth, addiction, poverty, tweaking, Mingus, track marks, creativity, dealing, poetry, blow, Warhol, needles, glamour, oblivion, AIDS, music, hallucinations, Coltrane, junkie, anesthesia, lines, joy, smack, the Velvet Underground, rush, brain dead, genius. Death. Life.

Heroin: no matter who you are, how you have been raised or what experiences you have had in life, when someone mentions it, you have a reaction, a brief flood of words and accompanying images. Maybe you are a recovering addict, maybe your best friend does it, maybe you campaign against it, or maybe you listen to the music and watch the movies. Even if you are none of these, or all of them, you cannot have escaped the inevitable stereotype that surrounds this deadly miracle drug: try it once and you become an addict for life.

In case you have not heard people talking about the instant addictive powers of heroin for yourself, here are a few examples perpetuating the myth:
"And once you try heroin, it's almost impossible to get off it without help." (1)
"After only one 'try', a heroin user can become addicted immediately." (2)
"All they have to do is try heroin once and that's all it takes." (3)
"Even trying heroin once can spell addiction"(4)

Before explaining how heroin affects the brain, it seems necessary to describe the symptoms of use: rush, pleasure, euphoria, nausea, comfort, lack of pain, happiness, drowsiness, warmth, heaviness, constipation, floating, blurriness, contentment. (5)

"...the cold wash of anesthesia hit me it swept over me, a wave that started at the tip of my, rushing across my face to my head, running down my neck to my chest, crashing into a warm golden explosion in my stomach, my groin, a blessed sensation beyond the peak of orgasm and relief of nausea, as every muscle in my body relaxed and my head lolled gently my shoulder, every sense unwinding, unburdened of the crushing weight of pain I never even knew that I had: the rush, the wave, death, heaven, completion. For hours and hours. The hit. Sensual ultimatum...." (6)

And the symptoms of withdrawal: goose bumps, watery eyes, runny nose, tremors, hallucinations, panic, chills, nausea, cramps, diarrhea, vomiting, (7) drug craving, kicking spasms, bone pain, insomnia. (8)

"Relinquishing junk. Stage one, preparation. For this you will need one room which you will not leave. Soothing music. Tomato soup, ten tins of. Mushroom soup, eight tins of, for consumption cold. Ice cream, vanilla, one large tub of. Magnesia, milk of, one bottle. Paracetamol, mouthwash, vitamins. Mineral water, Lucozade, pornography. One mattress. One bucket for urine, one for feces and one for vomitus. One television and one bottle of Valium. Which I've already procured from my mother. Who is, in her own domestic and socially acceptable way also a drug addict. And now I'm ready. All I need is one final hit to soothe the pain while the Valium takes effect." (9)

And now for some science: Papaver somniferum is the source of heroin. And, as many people probably know, another of its derivatives is commonly found on morning snacks. Papaver somniferum is the opium poppy plant. (As an interesting note: somniferum derives from the Latin somnus, sleep, and ferre, to bring - the sleep-bringing poppy.) When the secretion from this poppy is dried it becomes opium. The major component of opium is morphine. When this alkaloid is combined with acetic acid it forms heroin, technically diacetylmorphine, by "acetylation of the phenolic and alcoholic OH groups." (10)

Contrary to popular belief, heroin itself has very little effect on the central nervous system. It is primarily the transportation mechanism for the highly potent morphine at its core. Imagine: you have a tourniquet wrapped around your biceps so that your veins will rise. The hypodermic needle has been filled with the heated heroin. You insert the spike into the vein and the heroin rushes through your bloodstream. It hits your blood-brain barrier, already having been converted to 6-mono-acetylmorphine (MAM) through hydrolysis. This compound, unlike pure morphine, is lipid-soluble and races into your brain with almost no delay. Now the MAM rapidly breaks down into morphine, and the rush is over, but the high has just begun. (11)

Once the morphine is in the brain it can go to work. One of the primary ways in which heroin creates its effect is by mimicking the natural opiate-like neurotransmitters (called the endogenous opioids, which include the endorphins) in the brain. There are receptors for these natural opiates (which will accept both the natural and artificial varieties) on neurons containing the neurotransmitter GABA. GABA is involved in inhibiting the release of dopamine.

Normally the GABA neuron receives a signal and releases a large number of neurotransmitter molecules; these bind to receptors on the dopamine neuron and allow the Cl¯ waiting in the synaptic cleft to enter the dopamine neuron. This signals the neuron to release only a small, specific amount of dopamine, which in turn binds to another neuron and leads to "normal" feelings of contentment or pleasure. (12)

The presence of morphine alters this pattern. When morphine binds to the opiate receptor on the GABA neuron, it represses the release of GABA; this in turn reduces the amount of Cl¯ allowed into the dopamine neuron. Without the Cl¯ to inhibit it, the neuron releases a large amount of dopamine, leading to the feeling of euphoria and supreme contentment. (13)

The reason that coming down after taking heroin is so painful is that you have used up a huge quantity of dopamine in one rush; your body has to make more before it can begin to release it normally again.

When a person becomes an addict, this problem only becomes worse, each use of heroin adding to the last. Finally, when the cells that create dopamine are put under a significant amount of stress, they will start to shut down, producing less dopamine. This is one of the reasons that withdrawal from heroin is so extreme. (14)

As with most experiences, once is not enough to make you an addict. The technical definition of an addict is "someone who is physiologically dependent on a substance [and] abrupt deprivation of the substance produces withdrawal symptoms." (15) To be "physiologically dependent" means that your body needs the drug to function; without it you will go through withdrawal. It seems that the chemical changes that cause withdrawal come when heroin has been used so much that the body cannot function when supplied with only the normal level of dopamine. If heroin is taken only once, the user will suffer a "low" after taking the drug, because a large amount of their dopamine has been used up, but their neurons have not become damaged or adjusted to the drug and do not require it to work; thus the person is not addicted.

None of this analysis includes psychological need. It seems quite possible that a person might try heroin just once and then continue to take it, eventually becoming addicted, because they believe that they cannot live without the feeling that it creates for them. However, it is important to realize that just because a person feels a need for the drug, it does not follow that the body has become addicted to, or dependent upon, that drug.

Thus, it is possible to see that while many people will assert that heroin can be instantaneously addictive, they are incorrect. Heroin is highly addictive and can cause serious problems for people who become addicts. This, however, does not justify the spreading of incorrect information. All people should be fully educated, and then allowed to make their own decisions. We cannot protect people from the truth, they will learn it and as adults, they must make their own choices.

While researching the effects of heroin, it seemed that no one was fully able to describe how using heroin feels, except Lou Reed and the Velvet Underground in the song, aptly titled

Heroin:

I don't know just where I'm going
But I'm gonna try for the kingdom, if I can
'Cause it makes me feel like I'm a man
When I put a spike into my vein
And I'll tell ya, things aren't quite the same
When I'm rushing on my run
And I feel just like Jesus' son
And I guess that I just don't know
And I guess that I just don't know

I have made the big decision
I'm gonna try to nullify my life
'Cause when the blood begins to flow
When it shoots up the dropper's neck
When I'm closing in on death
And you can't help me not, you guys
And all you sweet girls with all your sweet talk
You can all go take a walk
And I guess that I just don't know
And I guess that I just don't know

I wish that I was born a thousand years ago
I wish that I'd sail the darkened seas
On a great big clipper ship
Going from this land here to that
In a sailor's suit and cap
Away from the big city


Where a man cannot be free
Of all of the evils of this town
And of himself, and those around
Oh, and I guess that I just don't know
Oh, and I guess that I just don't know

Heroin, be the death of me
Heroin, it's my wife and it's my life
Because a mainer to my vein
Leads to a center in my head
And then I'm better off and dead
Because when the smack begins to flow
I really don't care anymore
About all the Jim-Jim's in this town
And all the politicians makin' crazy sounds
And everybody puttin' everybody else down
And all the dead bodies piled up in mounds

'Cause when the smack begins to flow
Then I really don't care anymore
Ah, when the heroin is in my blood
And that blood is in my head
Then thank God that I'm as good as dead
Then thank your God that I'm not aware
And thank God that I just don't care
And I guess I just don't know
And I guess I just don't know


References

1 ) Heroin: How Big is the Problem? , Channel 6 News: WJAC, 2003.

2 ) Goal One - Education , A Heroin Dealer a Day, February 23, 2004.

3 ) Fighting Drugs: Mother' Fears, Sorrows, Regrets , Chesterton Tribune, March 22, 2004.

4 ) Heroin Reaches the Well-To-Do Adolescent Population , Medscape Special Report, November 12, 2002.

5 ) Heroin , Drug Info Clearinghouse.

6 ) Carnwath, Tom, and Ian Smith, Heroin Century, London and New York: Routledge, 2002, 98.

7 ) Heroin Withdrawal , Narconon Southern California.

8 ) Info Facts: Heroin , National Institute on Drug Abuse, June 25, 2003.

9 ) Memorable Quotes from Trainspotting , International Movie Database.

10 ) Platt, Jerome J., and Christina Labate, Heroin Addiction: Theory, Research and Treatment, New York: Wiley-Interscience Publications, 1976, 48.

11 ) Platt, Jerome J., and Christina Labate, Heroin Addiction: Theory, Research and Treatment, New York: Wiley-Interscience Publications, 1976, 52-53.

12 ) How Drugs Affect Neurotransmitters: Opiates , The Brain from Top to Bottom.

13 ) How Drugs Affect Neurotransmitters: Opiates , The Brain from Top to Bottom.

14 ) The Science of Addiction , Somerset Medical Center, February 2003.

15 ) Addict , Dictionary.com, 1997.


Right Brain, Wrong Body:
The (Trans)Sexual Hy

Name: Emily Haye
Date: 2004-04-20 11:03:16
Link to this Comment: 9479


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Transsexuality, like homosexuality before it, is defined as a disorder by the American Psychiatric Association. The DSM-IV lists various criteria that one must meet to be diagnosed with what it calls "Gender Identity Disorder." (1) But transsexuality, also known as "gender dysphoria," is not a medical condition. It is not a disease, or a malfunction of the body. The body of a transsexual operates normally, just out of sync with itself. The brain operates as one gender, while the body operates as the other. The result is a fully functional individual who feels trapped in the wrong body. This is quite a phenomenon; for some reason, the brain and body do not correlate. Why? What can this disjunction teach us about the brains and bodies of people who are not transsexual? In what ways can transsexuality inform our thinking and understanding of the brain in general?

Gender and sex are often used interchangeably, but their distinct meanings are important in the study of transsexuality. Sex is a label assigned at birth based upon one's genitalia. It is further physically defined by genetics (XX v. XY) and the gonads: male or female. Gender, or gender identity, on the other hand, is a self-assigned condition. One identifies as a man or as a woman. Usually, the imposed sex and experienced gender are one and the same. A person with a penis and testes identifies as a man, whereas a person with a vulva, vagina, and ovaries identifies as a woman. For transsexuals, however, this is not the case. They are one gender trapped in the body of the opposite sex.

What does this mean, "trapped in the body of the opposite sex"? It implies that the self is not the body, because the self feels right and the body feels wrong. Self must be the brain, then, the way the body is experienced. If the brain is the right sex, does this mean that male-to-female transsexuals (MTF), individuals with a male sex who identify as women, have female brains? Was the wrong one grabbed off the shelf during assembly?

If only it were that simple. It's not, of course. The brain is overwhelmingly complex, not just in its function but also in its development. Countless factors affect the adult brain, from the womb to the present moment. In the past two decades, however, neuroscientists have begun to uncover conclusive information about the brain's gender that begins to explain transsexuality.

If brain=behavior, then the brain must be sexually dimorphic. Men and women don't act the same, so naturally their brains aren't the same. Overall, the brains of men and women are similar; the dimorphism is of particular structures. A series of studies published between 1985 and 2001 by scientists of the Netherlands Institute for Brain Research document data on the sexual dimorphism of one of these structures, the hypothalamus, and its implication in transsexuality.

In 1985, D.F. Swaab and E. Fliers published their data on the volume and cell number of a region of the human hypothalamus known as the sexually dimorphic nucleus of the pre-optic area, or SDN-POA. Post-mortem analysis of the brains of 13 men and 18 women, between the ages of 10 and 93, concluded that the male SDN-POA is on average 2.5 times larger by volume than its female counterpart and contains an average of 2.2 times as many cells. No function was attributed to the SDN-POA at the time of publication, but the authors noted that "it is located in an area essential for gonadotropin release and sexual behavior in other mammals." (2)

Conclusive studies were published in 1995 and 2000 correlating the size of another hypothalamic region, the central nucleus of the bed nucleus of the stria terminalis (BSTc), with gender identity. The 1995 study determined that, regardless of sexual orientation, the BSTc was significantly larger in individuals identifying as men than in those identifying as women. For the first time, MTFs were also studied, and their BSTc sizes fell in the female rather than the male range. (3) The small sample size (only six MTF transsexuals) and clear results imply that the trend of female-sized BSTc's in MTFs is a strong one, as a subtler trend would not appear in so small a sample. (4) The 2000 study confirmed the female-ness of MTF BSTc's. Again, hetero- and homosexual men, heterosexual women, and MTFs were studied, this time with attention paid to the quantity of a particular cell type in the BSTc. Again, the MTF data fell within the female range. The study also published the first data on a female-to-male transsexual (FTM) hypothalamus, which fell within the male range. (5) These studies answer, at least in part, the question of what it means to have the right brain but the wrong body: the twelve MTFs and one FTM all had BSTc's that correlated with their gender identity rather than their physical sex.

A myriad of important and useful questions are raised by these findings. There are major implications for two central ways that we, as a class, think about the brain. The first of these is the thus-far supported hypothesis that brain=behavior. (6) The second is the existence of an entity known as the I-function, one of the many interconnected "boxes" within the brain. (6)

At first, my thinking was that because psychotherapy has no effect on transsexuality, it is a case in which brain does not equal behavior, or perhaps one in which the brain is larger than the reaches of behavior. This, however, is not the case. The behavioral practices of psychotherapy act on the cerebral cortex, so of course they would have no effect on transsexuality, which lies, we think, in the hypothalamus. But other behavior, outside the setting and practices of psychotherapy, also has no effect on transsexuality. Oftentimes, transsexual people attempt to assume the gender role, or expected social and cultural practices, associated with their assigned physical sex. Some are able to come to terms with their feeling of gender dysphoria and live in the opposite gender role, while others cannot. I found nothing to suggest that a transsexual person has ever been "cured," or able to change their gender identity; it is always the gender role and/or physical sex that is modified to match the gender identity. Regardless, altered behavior is not changing the brain. Assuming the opposite gender role does not alleviate the gender dysphoria and therefore does not alter the hypothalamus. Why not? Why does brain=behavior not apply here? At this early point in the study of the brain it isn't possible to answer this question.

Another brain/behavior question is one of the chicken and the egg: Did the brain come first, or did the behavior? In other words, does the structure of the brain lead to the experience of transsexuality, or does the experience of transsexuality, created by social and other non-biological factors, influence brain structure? (7) The 1995 and 2000 studies discussed above lend some insight. Brain material from postmenopausal subjects, as well as from castrated and non-castrated MTFs, was used. These samples did not exhibit statistical differences from their groups, implying that adult levels of sex hormones do not influence the structure of the BSTc. (2, 5) The structure must then have been determined developmentally, and it was some case of altered exposure or sensitivity to androgens (male sex hormones) in prenatal and early postnatal development that caused the smaller, female-like BSTc in the MTFs. (8), (9)

Our understanding of the I-function is challenged by these studies as well. We recently concluded that the I-function must reside in the neocortex. Animals with neocortexes seem to display some degree of I-functionality, whereas animals without do not. (6) But we never determined what exactly the I-function is. It is the part of Christopher Reeve that cannot move his leg, although the leg can move. But how much of Christopher Reeve is it? Is it all of Christopher Reeve, as a self? The role of the BSTc in (trans)sexuality says "no." The BSTc is part of the hypothalamus, not the neocortex, and so is not a part of the I-function. And yet it is responsible for a huge part of the self, a part so large that people undergo sex-change operations to bring their bodies in line with what the BSTc says. So self, that elusive entity, must be more than the I-function.

Transsexuality, though incomprehensible to those of us for whom gender and sex are aligned, is telling us a lot about our aligned selves. From recent studies, we know that our behavior can't always change our reality, as many shrinks say it can. We know that our gender identity is established at a very young age, even, to some degree, in the womb. We know that our gender identity, a huge part of who we are, lies at least partly in a tiny bundle of nerves in a structure so deep in our brains that we can't even point to it. We know that our neocortex, which we so prize as a sign of human intelligence and consciousness, the seat of the psyche and the mind, isn't the sole seat of our selves. Our selves are more spread out, residing in part in a region of the brain that we share with rats, gerbils, and guinea pigs. And while we know these things, we don't know everything. In short, those of us for whom the distinction between gender and sex is not immediately important, for whom the two correlate, take a lot for granted. And we are all a lot more complicated than we think.

References

Cited References

1)Diagnostic and Statistical Manual of the American Psychiatric Association, Fourth Edition, 1994.

2)"A Sexually Dimorphic Nucleus in the Human Brain.", Swaab, D.F., and E. Fliers. Originally published in Science, New Series, Vol 228, No 4703 (May 31, 1985), 1112-1115.

3)"A Sex Difference in the Human Brain and Its Relation to Transsexuality.", Zhou, J.N. et al. Originally published in Nature 378: 68-70 (1995).

4)genderpsychology.org

5)"Male-to-Female Transsexuals Have Female Neuron Numbers in a Limbic Nucleus.", Kruijver, Frank P.M. et al. Originally published in The Journal of Clinical Endocrinology and Metabolism, Vol 85, 2034-2041 (2000).

6)Grobstein, Paul. Class Lecture/Discussion, Biology 202: Neurobiology and Behavior. Bryn Mawr College, Spring 2004.

7)"The Chicken-and-Egg Argument as It Applies to the Brains of Transsexuals: Does It Matter.", Breedlove, Marc. At genderpsych.org

8)"A Role for Ovarian Hormones in Sexual Differentiation of the Brain.", Fitch, Roslyn Holly and Victor H Denenberg Originally published in Behavioral and Brain Sciences, 21: 311-352 (1998).

9)"Sexuality: Gender Identity.", Ghosh, Shuvo et al.

Other Interesting Resources

Gender Identity Research and Education Society (gires), Britain.

Sex Differences in the Brain, by Dr. Doreen Kimura.

Transsexuality: An Introduction.

A collection of links to transsexual information, both scientific and personal

Notes on Gender Identity Disorder by Anne Vitale, Ph.D.

"Structural and Functional Sex Differences in the Human Hypothalamus.", Swaab, Dick F. et al. Originally published in Hormones and Behavior 40: 93-98 (2001).


Irritable Bowel Syndrome and Hypnosis
Name: Kimberley
Date: 2004-04-20 12:50:24
Link to this Comment: 9485


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Irritable bowel syndrome (IBS) is a disorder fraught with controversy. Its cause is unclear, and a cure for all those who suffer from the syndrome has yet to be found. Yet it cannot be ignored, given that it affects 10% to 17% of the population (1) (2) and that billions of dollars go to physicians' visits, prescriptions, and lost workdays every year because of IBS. (3) Diagnosis of IBS mostly involves eliminating an organic cause for the symptoms, which include abdominal discomfort relieved after defecation, and diarrhea or constipation. (1) Once enough testing has been done to decrease the likelihood that some other infection or disease is the cause of the symptoms, IBS is diagnosed.

As there is no known outward cause for the disorder, it must involve the body itself. This is apparent from the current modes of treatment, which include avoiding foods that may aggravate symptoms, taking tricyclic antidepressants, and hypnotherapy. (3) Avoiding certain foods may be related to an individual's allergies to those foods, which may have a genetic link. Medications that reduce constipation, such as Tegaserod, or reduce diarrhea, such as Alosetron, have also proven effective in certain trials. (1) (3) The literature shows, however, that hypnotherapy is the most effective form of treatment for those diagnosed with IBS. (1) (2) (3) (4) (5) (6)

The fact that hypnotherapy shows positive results may be related to the high comorbidity of IBS and stress in patients. (1) Stomach discomfort is a common symptom of nervousness and anxiety. The author can recall many instances when she was in a stressful situation, such as before an interview or before presenting a project in class, and feelings of an upset stomach would arise. It is through such examples that one becomes aware of how close the connection between thoughts and physical responses is. The upset stomach was a direct result of the nervousness. She could only get rid of that symptom by reassuring herself and decreasing the number of worrisome thoughts. The reduction of anxiety reduced the discomfort in the bowels.

Those with IBS may have a hypersensitivity to this symptom of stress. (6) Just as some feel hungry when nervous while others lose their appetite, people with IBS may have more severe bowel problems connected to stress. It is unclear whether this is true of all IBS patients, especially those who do not see a doctor for treatment. However, for those with stress-related or stress-induced IBS, hypnotherapy has proven to be an effective means of treatment. (6)

Research on hypnosis has reduced much of the mystery surrounding the process. As Galovski and Blanchard (2) report, all participants in their study accepted hypnosis as a satisfactory treatment option for IBS. This overall acceptance may be generalized to the comfort level of the United States as a whole with regard to hypnosis. Today there are standardized ways to hypnotize an individual, with many free scripts and instructions available both in print and on the internet. (7) The basic theme underlying most of these methods of inducing hypnosis is attaining a relaxed state. Generally the person stares at a fixed point and listens to the hypnotist (either an actual person or a recording) recite a script, which induces relaxation. (7) (9)

The Stanford Hypnotic Susceptibility Scale (8) is a way to determine how deeply a person can be hypnotized. There are 12 items, or tasks, that the hypnotist asks the person under hypnosis to perform. In one example, the hypnotist tells the person that he or she has no sense of smell. A person who was very susceptible to suggestion under hypnosis would not react to a putrid smell, while someone who is not as easily influenced under hypnosis would draw back from the source of the smell. (9) The more times a person under hypnosis reacts in line with the given suggestion, the more hypnotizable the person is. A person who is very hypnotizable would score a 12, meaning he or she acted in accordance with every suggestion; a person who is not at all susceptible to hypnosis would score a zero. The general population scores in the range of 5 to 7. (9)

Hypnotherapy is used as a way to make people more aware of their bodies so that they may have better control over their general functioning. Hypnosis has been studied in acute pain management and has been shown to reduce perceived pain in moderately and highly hypnotizable people. (10) It has also been used to relieve chronic pain such as that experienced by cancer patients by improving distraction techniques such as visualization. (6) (11) The person focuses attention away from pain and on to pleasant images thus reducing the experience of pain.

Hypnotherapy for sufferers of IBS seems to have the least effect on those whose symptoms are related to diarrhea. (2) (5) This may be due to the types of suggestions given to patients under hypnosis. For example, in the study conducted by Galovski and Blanchard, imagery such as easily flowing water, equated with digestion and intestinal function, was used. (2) This sort of imagery would be useful if the patient had symptoms of constipation; for someone suffering from diarrhea, however, the digestive tract already functions too much like this image. Listening to such suggestions would not reduce symptoms and could potentially exacerbate the problem.

One study showed that most of the physiological effects of IBS remained, despite self-reports of improved symptoms during and after hypnotherapy. (5) This result is in line with other studies that looked at perceived distress on the body during hypnosis. (10) (12) Hilgard (10) showed that acute pain thresholds could increase under hypnosis but that the physiological responses to stimuli, such as heart rate, are similar to those not under hypnosis and experiencing the same painful stimuli.

Likewise, Williamson et al. (12) carried out an experiment involving bicyclists and perceived physical effort. They found that hypnotized bicyclists had increased blood pressure and heart rate when they were cycling under the suggestion of going up an incline. Under the suggestion that they were going downhill, however, their blood pressure and heart rate were the same as under the suggestion that they were going on a flat surface. Under all three conditions their speed and work done were the same. The participants reported that their perception of the work done was less while going downhill, but their physiological responses did not reflect that. (12) These are examples of perceived pain or distress being less than physical indicators would suggest.

This might indicate that in IBS, hypnosis may not cure all, most, or any of the actual symptoms but rather reduce or eliminate the perceived discomfort and pain associated with them. This conclusion would correspond with findings that those with IBS are hypersensitive to pain in the bowels. It would also be consistent with the high levels of comorbidity of IBS and anxiety disorders. (2) (3) One is then left wondering whether the hypnotherapy is just treating the anxiety disorder. And if symptoms do actually remit, were they caused by the anxiety alone? If they were, did the person really have IBS, or just an anxiety disorder with physical symptoms?


References

1) Talley, N. J. & Spiller, R. (2002). Irritable bowel syndrome: a little understood organic bowel disease? [Electronic version]. The Lancet, 360, 555-564.

2) Galovski, T. E. & Blanchard, E. B. (1998). Treatment of irritable bowel syndrome with hypnotherapy [Electronic version]. Applied Psychophysiology and Biofeedback, 23, 219-232.

3) Farthing, M. J. G (1995). Irritable bowel, irritable body, or irritable brain? [Electronic version]. British Medical Journal, 310 (6973), 171-176.

4) Houghton, L. A., Calvert, E. L., Jackson, N. A., Cooper, P., & Whorwell, P. J. (2002). Visceral sensation and emotion: a study using hypnosis [Electronic version]. Gut, 51 (5), 701-704.

5) Nash, M. R. (2004). Salient findings: pivotal reviews and research on hypnosis, soma and cognition. The International Journal of Clinical and Experimental Hypnosis, 52 (1), 82-88.

6) Vickers, A. & Zollman, C. (1999). Hypnosis and relaxation therapies [Electronic version]. British Medical Journal, 319, 1346-1349.

7) Hypnosis Script Library - Suite101.com, free scripts to induce hypnosis

8) Weitzenhoffer, A. M., & Hilgard, E. R. (1959). Stanford hypnotic susceptibility scale, forms A and B. Palo Alto, CA: Consulting Psychologists Press.

9) Nash, M. R. (1997). The truth and the hype of hypnosis [Electronic version]. Scientific American. 277, 47-55.

10) Hilgard, E. R. (1967). A quantitative study of pain and its reduction through hypnotic suggestion [Electronic version]. Proceedings of the National Academy of Sciences of the United States of America, 57 (6), 1581-1586.

11) Redd, W. H., Montgomery, G. H., & DuHamel, K. N. (2001). Behavioral intervention for cancer treatment side effects [Electronic version]. Journal of the National Cancer Institute, 93 (11), 810-823.

12) Williamson, J. W., McColl, R., Mathews, D., Mitchell, J. H., Raven, P. B., & Morgan, W. P. (2001). Hypnotic manipulation of effort sense during dynamic exercise: cardiovascular responses and brain activation [Electronic version]. Journal of Applied Physiology, 90, 1392-1399.


Falling Down- Multiple Sclerosis, Proprioception a
Name: Michael Fi
Date: 2004-04-20 17:33:57
Link to this Comment: 9492


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Many patients suffering from Multiple Sclerosis have difficulty maintaining balance and walking and many suffer falls. These falls and other motor skill impairments result not only from the deterioration of motor neurons, but also from consequent decline in proprioceptive capacity. The inability of a Multiple Sclerosis patient to effectively process reafferent feedback amplifies the neuronal impairment found elsewhere in the patient's Central Nervous System.

Multiple Sclerosis (MS) is a demyelinating syndrome affecting roughly half a million Americans. MS degenerates myelin into plaques (known as scleroses) which impair electrical conductivity along axons. The underlying mechanisms of this process are not well understood at present. However, it is known that as MS eats away at a patient's myelin sheaths, muscles may gain in average tension or become weak and difficult to mobilize. One of the basic diagnostic tests for MS involves the electrical measurement of evoked potentials, which estimate the time it takes for the CNS to transmit action potentials. Demyelination slows the conduction of evoked potentials. (1)

While MS erodes the body's ability to transmit sensory information and send motor instructions, the links that enable the body to marshal its muscles in response to a stimulus decay as well. This decreases the precision, and consequently the utility, of concerted motor functions. For example, while driving an automobile, difficulty in properly transmitting the sight of a car veering dangerously in traffic results in an increased braking time. If instead of an oncoming car one were to see one's foot about to step onto a loose rock on a slope, the lag time may prove equally dangerous. This affirmation of body position and its subsequent integration is known as reafferent feedback.

While carrying out complex motor functions such as walking, a person must perceive his position and movement in order to maintain balance, plot a course of action (or inaction) relative to this information and vary muscle movements appropriately. This complex phenomenon is known as proprioception. The sensory organs responsible for detecting changes in muscle movement and orientation are known as proprioceptors. These proprioceptors, containing myelinated neurons known as gamma motor neurons, are susceptible to sclerosis. (2)

Two common symptoms associated with all types and severities of MS are optic neuritis, a term describing plaque interference with optic nerve function, and vertigo, a dizziness often associated with partial or irregular sensory input, inner ear problems or brainstem damage. (3) Motor impairment, or ataxia, is also symptomatic of MS.

Ataxia related to MS comes in several forms and is well documented. MS patients suffering from upper limb ataxia have greater difficulty than healthy individuals in pointing at objects in varying states of motion. (4) Patients with vestibular ataxia have difficulty maintaining a normal gait when they are not deliberately visually monitoring their movements.

While both upper limb and vestibular ataxia are manifestations of direct effects upon topographically specific regions of the CNS and brain pertaining to balance or coordination, proprioceptive ataxia results from a malfunction or deterioration of the gamma motor neurons themselves. (5)

No comprehensive studies have been able to quantify the deleterious effect of sclerosis upon gamma motor neurons and coordination, nor have any studies determined why and how MS makes people lose their balance. But we do know that MS can destroy gamma motor neurons and optic nerves and cause people to fall and injure themselves.

Falling down (especially repeatedly) is a sign of major nervous or muscular system malfunction. Standing upright is perhaps the most significant evolutionarily derived trait that humans possess, on par with the opposable thumb. Thus we can likely assume that sclerosis-induced damage to the nervous system is rather extensive once a patient begins to fall down with regularity. A fall represents either a localized failure of a portion of a balance-related system or a broad failure of an entire system. Either way, a fall represents a general failure of the CNS and other systems to properly maintain an erect state.

In the event of a localized failure within a system, an apt analogy for the resultant general malfunction (a fall) is the propagation of error through a system of equations or a computational model. Consider an MS patient with optic neuritis. Distorted visual imagery of a set of stairs represents an error in perception that is passed along when several other parts of the system attempt to act on the imprecise information. There may or may not be a direct relation between the degree of optic distortion and the degree to which the subject's foot deviates from its normal path down the stairs. Now consider a subject experiencing minor distortion of a proprioceptive signal from the foot combined with a minor distortion of visual input due to optic neuritis. Each individual distortion may be slight enough that alone it could be compensated for; together, however, the two distortions may be enough to send the subject tumbling down the steps, his foot being unable to feel its way onto the step.

Assessment of the cause of a fall can be difficult. The root problem may be ataxia or vertigo. One way to potentially rule out the malfunction of reafferent feedback and the optic nerves is through the Romberg Test for balance. This test simply consists of nudging a free-standing subject whose eyes are closed. (6) Perhaps in the future, cheap and effective scans for sclerosis may be able to reveal the site-specificity of plaque buildup. With this knowledge we could determine the correlations between site-specific buildup and proprioceptive malfunctions.

So what does it mean to fall? What does it mean to lose track of your own body? Falling is not just the result of a loss of proprioception or a malfunction of the reafferent feedback system. Falling down is a fundamental failure of the body either to locomote properly or to protect itself. When one falls, one regresses into a more childish state, where walking cannot be taken for granted. Falling represents a physical (and mental) devolution, an unlearning of one of the most fundamental motor skills. Falling can be discouraging in the same way physical deterioration can be anguishing, and MS patients can become depressed over their inability to properly control their bodies. Scientists at the British MS Research Centre in Bristol have developed a machine to measure gait irregularities and inform patients if they are in danger of falling. (7) This machine will also serve as a collector of vital data on MS patients' falls and balance problems, enabling the development of more comprehensive preventive measures.

References

1) Canadian MS Society, diagnosis. Evoked potentials measure the time it takes the CNS to send and receive signals; demyelination slows this time.

2) Proprioception literature review by Darryn Sargant from Australasian Journal of Podiatric Medicine.

3) National MS Society, MS and vertigo

4) MS patients have more difficulty pointing at objects in varying states of motion than non patients. Influence of visual and proprioceptive afferences on upper limb ataxia in patients with multiple sclerosis.
J. Neurol. Sci. Feb. 1, 1999. 163(1):61-9.
Quintern J, Immisch I, Albrecht H, Pollmann W, Glasauer S, Straube A.
Department of Neurology, Ludwig-Maximilians University, Klinikum Grosshadern, Munich, Germany

5) MS encyclopedia, vestibular ataxia- gait disorder. Result of improper visual proprioception.

6) MS encyclopedia, Romberg test

7) BBC: Bristol study in brief, "people are unaware of how bad their balance is..."


Technology and the Written Word
Name: Maria Scot
Date: 2004-04-22 08:12:36
Link to this Comment: 9540


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The history of the development of written language reflects the parallel relationship between the human capacity for communication and technology. The written word allows populations to develop a shared, cumulative body of knowledge based on the experiences and records of previous generations. The human mind is ill suited to serve as a passive vessel for knowledge or ideas. In the process of acquiring new knowledge we cannot help but twist and revise the information in light of our own story, perceptions, and opinions. In creating a written language, humans found a way for information to exist independently of a human host, allowing it to remain free of the distortion and bastardization it would otherwise inevitably undergo. Our capacity for language is perpetually evolving to allow for better communication, so that we may gain new information more quickly and effectively and modify behaviors accordingly. To this end, the recent development of computers and the creation of the Internet provide human society with an unprecedented forum in which written language, instead of serving merely as a means of preserving knowledge, is an immediately accessible source of information and a method of communication reaching larger and more diverse groups than ever before. Such developments are altering our language structure as it now exists and will make significant new demands on our methods of communication in the future.

Language has always been a fairly fluid entity, evolving and shifting to get the intended point across. As cultures emerged and faded out or merged into different groups, so with them went a variety of different systems of written language (3). The earliest languages tended not to be solely phonetic, ideographic or pictographic, but combinations thereof (3). Our modern languages, at least since the invention of the printing press, have tended towards phonetic alphabet based systems. The invention of computers drastically alters the situation. Computers offer a graphical interface that allows the use of graphical languages in a way that the printing press and typewriters did not.

Computers fundamentally alter the content, timing and manner of written communication. They remove many of the technical difficulties that limited the ways in which text could be created and used effectively. Since the advent of the printing press in the mid-fifteenth century, texts and documents have been more easily created using a phonetic alphabet: it is more practical to rearrange a limited set of phonetic symbols to create all possible words than to maintain thousands of individual ideograms and pictograms. In the Japanese language, for example, there are two distinct systems of writing. One of these systems, katakana, is a phonetic system using 71 graphic symbols; when read, it must be comprehended syllable by syllable (unlike English words, for example, which are easily identifiable just by looking at them). The second system, kanji, is mostly ideographic, represents both sound and meaning, and comprises over 40,000 ideograms (3). Obviously, it is easier to create a typewriter, printing press or even word processor working with 71 phonetic symbols than with 40,000-plus ideograms. The first Japanese typewriter, for example, was produced in 1915 and had a flat bed of 3,000 keys (4). As books became the preferred method for conveying ideas, stories and knowledge, practical concerns continued to encourage the use of alphabets over ideograms. Libraries faced the challenge of cataloging and indexing texts: it is reasonably straightforward to index works written using an alphabet, and significantly less straightforward to index ideograms. Computers remove most of these technical issues. Their graphical interface allows the use of graphical languages, presenting society with a new possibility for creating a hybrid method of communication between ideographic and phonetic writing systems.
Computers can potentially utilize the strengths of both writing systems to address the difficulty that various individuals have with phonetic processing, by using other means to communicate information usually available only in a phonetic writing system. Dyslexic children, for example, can learn to read when words are represented by single characters rather than a series of phonemes (1). Computers have the potential to use this distinction between phonetic and non-phonetic processing to the child's advantage by converting texts. Similar possibilities exist for victims of stroke or other brain damage whose phonetic processing abilities have been injured (1). When speakers of Japanese, for example, suffer certain damage to the left hemisphere, their ability to read kana is profoundly disrupted, while their ability to interpret kanji, or ideographs, is relatively undisturbed (1). Even in individuals without brain injury, the simultaneous integration of phonetic and non-phonetic processing presents new communicative methods.

Beyond the technological possibilities presented by computers, the Internet is a space that has profoundly altered human communication. It provides a new forum in which texts composed of a mixture of pictographs, ideographs and the written word are immediately made available to a broad audience. Unlike books, much of what is written on the Internet is not intended for posterity; rather, the Internet is meant to facilitate immediate communication. This development is a fundamental shift in the intention of written language and, as a result, the way in which the language is used is evolving to best suit its new purpose. To that end it has evolved its own 'shorthand' in an attempt to allow written communication to take place at the same speed as spoken language while retaining many of the subtleties of speech.
In this context ideograms have come back into use, because they communicate a larger idea or sentiment in a single character. 'Emoticons', small graphical depictions of faces making a variety of expressions (smiling, winking, frowning), have come into frequent use in instant messaging programs, because the written word alone does not provide the recipient with enough information about the exact meaning behind a message (was it intended sarcastically, jokingly, seriously, etc.?). In some ways this is much more a form of literally 'written speech' and as such is based in the phonetics of the spoken language: 'want to' on the internet often becomes 'wanna,' and 'going to' becomes 'gonna'. The technology of computers is allowing humans to continue to develop systems of written communication consistent with their ability to process and exchange information.

Humans are moving away from texts that are purely phonetic and evolving a writing system that makes use of a variety of communicative methods. Many of the forces that previously determined the nature of our language no longer apply, and technological advances allow us to make use of the many ways in which humans can receive information and express their knowledge and thoughts. Joseph Brodsky once wrote that "...apart from pure linguistic necessity, what makes one write is...the urge to spare certain things of one's world-of one's personal civilization-one's own non-semantic continuum. Art is not a better, but an alternative existence; it is not an attempt to escape reality but the opposite, an attempt to animate it. It is a spirit seeking flesh but finding words." Perhaps what this more diverse, integrated system of writing moves us toward is not finding words so much as meaning. It again allows us more freedom in using written language to express ideas instead of forcing ideas through the sometimes inhibiting paradigms of language.


References

1)Kandel, Eric R. Principles of Neural Science. Simon & Schuster. 1991.
2) Birth of a Writing Machine, discusses the development of a Japanese typewriter
3) History of Cuneiform. As the title of the page might suggest, this is a history of cuneiform.
4) Early Office Museum, images of early typewriters, etc.
5) Website of the International Dyslexia Foundation


Dreams and the Unconscious
Name: La Toiya L
Date: 2004-04-28 04:15:09
Link to this Comment: 9658


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

The complicated yet interesting connection between dreaming and our brains is one that many scientists, philosophers and psychologists have grappled with. Dreams alone are strange and obscure, and the human mind is intricate and complicated, but their relationship, when examined, is fascinating. A predominant figure in the history of dream research is Sigmund Freud who, although he did not originate the concept of dream interpretation, was integral in developing methodologies for utilizing the dream as a means of deciphering the psyche of the dreamer - particularly in uncovering and analyzing the dreamer's psychological problems. Sigmund Freud's analysis of dreams and their connection to our unconscious was one of a kind compared to other beliefs held during his time. Freud delved into the human mind in ways that many before him hadn't.
In his book "The Interpretation of Dreams," Freud describes five distinct processes that are brought into play during dreamwork. (1)

• Displacement: This is where the dreamer represses an urge, and then redirects
that urge to another person or object.

• Condensation: This is the process whereby the dreamer disguises a particular
urge, emotion or thought by condensing, or contracting, it into a brief dream
image.

• Symbolization: This is where the repressed urge is played out in a symbolic act.
For instance, in Freud's methodology the act of inserting a key into a keyhole
would have sexual meaning.

• Projection: This is the projection of the dreamer's repressed desire onto other
people, but should not be confused with displacement as it does not involve
objects. In projection, instead of dreaming about sleeping with their co-worker,
the individual would dream of their boss in bed with the desired sexual partner,
projecting the urge onto the boss rather than literally dreaming themselves in
the bed.

• Secondary revision: This is the expression Freud uses for the final stage of
dream production. After the individual undergoes one or more of the other four
dreamwork processes, they then undergo the secondary processes of the ego in
which the more bizarre components of the dream are reorganized so the dream has a
comprehensible surface meaning. This surface meaning, once arrived at through
secondary revision, is called the manifest dream.

Freud makes use of psychological techniques to interpret dreams, and in doing so the dreams reveal themselves as psychological structures full of significance. Also, through psychological interpretation, the structure of a dream can be attributed to a specific place in the psychic activities of the waking state. By looking into the unconscious there is plenty to learn about the structure of the human mind. Freud argues that the structure of the human mind largely comes from the unconscious, while others argue that the human mind is based solely on the conscious. To assume that our mental structures are based on that which we experience consciously is to ignore a large portion of that which contributes to our minds: the unconscious. Some argue that certain behaviors are simple, natural, and cannot be explained, but in actuality all human behaviors, complex or not, can be better understood by exploring the convoluted world of the unconscious. Freud believed dreams were a door into the human psyche and a crucial part of understanding the mind.

There are other interesting viewpoints regarding dreams, some of which support Freudian thought and others of which negate it. It is widely accepted that we dream of what we have seen, said, desired, feared, or done. "Experience corroborates our assertion that we dream most frequently of those things toward which our warmest passions are directed." (2) There are also theories on dreams and how they function both in the conscious and the unconscious. Theories like those of Franz Joseph Delboeuf state that the full psychic activity of the waking state continues in our dreams; here the psyche does not sleep. The theory of partial wakefulness, on the contrary, argues that in dreaming there is a diminution of psychic activity, a loosening of connections, and an impoverishment of the available material. The theory of partial wakefulness did not escape criticism even by the earlier writers. The physiologist Burdach wrote in 1830: "If we say that dreaming is a partial waking, then, in the first place, neither the waking nor the sleeping state is explained thereby; secondly, this amounts only to saying that certain powers of the mind are active in dreams while others are at rest. But such irregularities occur throughout life..."

The different concepts of dreamwork explain how dreams are produced and how they function. Dreamwork occurs while we are asleep, at the point when the mind is about to start dreaming. The dreamwork operation condenses the mind's ideas and thoughts into latent and manifest dreams, with the objective of making dreams as unintelligible and incomprehensible as possible. Dreamwork produces dreams by using information from our conscious and unconscious; it is how so many ideas and thoughts get condensed in our dreams into little story-like films. Dreamwork employs operations like condensation, dramatization, displacement, and word play as tools in creating dreams.

Repression occurs in the mind when the conscious feels threatened. Repression is a way the human mind protects itself from anything it cannot handle by blocking it out of the conscious psyche; it is a shielding method by which the conscious blocks something out, which then usually recurs in the unconscious state. Freud believed that even though a person may not be mindful of the repressed information, it is still present and is accessible through psychological techniques. It should be understood that the conscious mind does not always repress information or thoughts purposely; rather, it is the psyche doing so unconsciously. Freud believed that desires and ideas that society deems unacceptable are dumped into the unconscious by means of repression.

Freud argues that the psyche is made up of the conscious and the unconscious mind, and in analyzing the mind it is imperative to know that these two sectors are distinctly different. When people are mentally aware, they are using the conscious mind. The conscious mind is where reality and experience have shaped how we react, learn, and think; when we use logic and learned behavior we are drawing on the conscious mind. Even though the conscious mind is referred to most often, it is minimal in size compared to the unconscious, which makes up the majority of the psyche. The unconscious, though often considered little used, is actually used the most: although our conscious is directly linked to our behaviors, the reasons behind why we do the things we do are derived from our unconscious. The unconscious is storage for repression, emotions, memories, thoughts and feelings that have already happened to a person. Dreams are unique. No other individual can have your background, your emotions, or your experiences. Every dream is connected with your own "reality". Although this much is true, neuropsychology is making rapid strides in helping us to understand the aspects of self and society that affect our dreams.


References


1) The Interpretation of Dreams

2)The Interpretation of Dreams (3rd edition) by Sigmund Freud

"Glossary of Freudian Terms."

"Dreamwork," Dream Library


Nature vs. Nurture: Are We Asking the Right Questi
Name: Natalie Me
Date: 2004-04-29 10:30:59
Link to this Comment: 9682


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


I began this paper with the intention of writing about the possible connections between memory and computers, and mind and technology. Recently, a class discussion on memory spilled over into a lunch-table debate with a friend. I can't even begin to explain how the discussion maneuvered from computer-chip memory, to individual and collective memory and knowledge, to the controversial issues surrounding nature v. nurture. I realized that I was much more interested in investigating the question of nature v. nurture. No one can deny that both one's genes and one's environment impact a personality, so other issues must fuel the debate. This is the conclusion I came to as I read through online materials. Let me outline my discovery:

I was taught growing up that nature played the major role in determining one's behavior. Not that my parents are profoundly positivist scientists, but my beliefs have much to do with theirs. I am one of eight children and the distinct differences in each one of our personalities convinced me that we must have been born with many personality traits already established, or with the groundwork for a predisposition to develop. How could one child be outgoing and social while another is extremely introverted? How could one be so devoted to studies while another is deeply committed to football? For the most part all eight of us have grown up with a similar upbringing and like experiences, but our behaviors, attitudes and intelligence vary wildly. Certainly this is due to some specific set of predispositions.

And then there are issues of personality that seem to manifest themselves at very early ages, before many psychologists will admit that a personality can even be formed. My father, at the age of four, one day after his parents had left for a dinner party, told his babysitter that he wanted to die, convinced that his parents were never returning. Where did this fatalistic disposition come from at such a young age? He has continued to suffer from depression his whole life. Hearing him discuss his condition and looking at the lack of precipitating factors always led me to believe that this, for sure, was caused by his genetic makeup, and because this disease has impacted his life so dramatically, it certainly has shaped much of his personality and subsequent behaviors.

First, I want to review some of the arguments behind each viewpoint. Let me start with the major biological processes involved in the nature argument. I know genes are responsible but what exactly are those? Beyond what I remember from eighth grade health-science, I realize I know embarrassingly little about how a gene might actually impact who I am. The scientists who set out on the Human Genome Project probably felt similarly and though their work has taught us a lot more, we still have much to learn. We do know that humans have about 30,000 genes. It was noted over and over again on the web that this is only twice as many as the simple fruit fly. So what is it that makes humans and human behavior so complex?

Genes encode patterns of proteins, and it is in these combinations that our humanity and individuality really come out. "Proteins are the chemical tool of every cell...the chemical environment inside the cell is controlled by what proteins are present [and] in turn the chemicals in the cell control what genes are active at what time" (1). With an infinite potential for variation in these genes, it seems conceivable to me that we could all be the result of mere fluctuations in a chemical process.

However, some scientists and philosophers disagree. According to Davies (2), Craig Venter, a scientist working on the Human Genome Project, doesn't believe that we have enough genes to justify biological determinism. A twin study done by Swedish scientists and published in the New England Journal of Medicine found that environmental factors were a much stronger predictor of cancer (3). Psychological and philosophical behaviorists (like the notable Skinner and Watson) believe that we are born with a blank slate and that our personality and subsequent behavior are developed only through experience.

Once again, though, I have already conceded that both nature and nurture play a part. Are we trying to crack the puzzle of how much environment and genes exactly contribute? Twin studies (the most common method of studying this issue) have usually resulted in "assigning percentage values [that implicate] both genes and environment" (4). Is this really what we are searching for? I am beginning to feel as though I am missing the point.

Let me return to the natural acknowledgement that nature contributes some unknown amount and environment the rest. But the fact that people continue to make declarations on the side of nature or nurture leads me either to want to throw the whole thing out, or to figure out why it remains such a compelling question. Why haven't people accepted a shared source for behavior? Why the continued discussion and debate? What makes this topic such an omnipresent force in science and society? I determined that cultural or social features make this issue such a salient one.

Let me now consider the social implications of this debate. If we are going to buy completely into the biological determinism argument, what are the possible consequences of doing so? First of all, the legal and ethical consequences are far-reaching. We haven't even begun to contemplate all of the subsequent results if we declare a 'criminal' gene. Genetic discrimination might become rampant. Though some states have passed genetic privacy laws, once testing, identification, and record keeping begin, what is to keep us from becoming unemployable and uninsurable (2)?

Many are concerned that criminal acts could be justified by a 'bad' or 'criminality' gene. This could also render those individuals unemployable and uninsurable. The possibility of a 'gay' gene also concerns many conservative and gay-rights groups. Labeling sexuality as a purely biological (or non-biological) process overrides an individual's autonomous choice of lifestyle. But it also might establish an imperative for equal rights (5). Both sides of all these issues seem to have some moral claim to proving or disproving biological determinism.

If we could link many behaviors and personality traits to particular genes, would that allow us to manipulate those genes and processes on an as-desired basis? There are far-reaching bioethical considerations here. The link between man and machine is even implied in this situation. Even if it is possible, can we allow ourselves to depend on technology to rid the human race of bothersome diseases and conditions?

One article I read linked this issue to memory and collective or individual knowledge. I was somewhat surprised at this, not recognizing the impact this debate has on those arenas of discourse. LeDoux (6) argues that synaptic plasticity plays a large role in mediating between different types of memory that allow us to learn, while at the same time it depends on our inherent traits or processes. He says that explicit memory, the "ability to consciously recall past events" interacts with our programmed implicit memory, or "our ability to see the world the same way other humans do." An example of this would be that "we are born with the ability to act afraid, but we usually have to learn precisely what to fear" (6).

LeDoux believes that "the nature/nurture debate operates around a false dichotomy: the assumption that biology, on the one hand, and lived experience, on the other, affect us in fundamentally different ways" (6). For LeDoux, implicit memory and explicit memory allow us to learn from two sources concurrently. Might this false dichotomy be the assumption upon which we are basing our discussion that is leading us around in circles?


Rothman (3) reports that:
"to label a disease as genetic-only is to propagate the idea that an individual is doomed to live with his or her genetic makeup. Conversely, classifying disease as environmental only does not explain the role of genetic variations that increase susceptibility to environmental factors. These labels serve no purpose and are misleading" (3).

It seems we have stumbled upon either a roadblock or an exit from a never-ending discussion. What might be a better question to ask? Or, what other factors and issues should we be looking at? The conclusion that we have been giving the wrong issues undue concern is the idea I came to, and began my paper with. I found an article on sociobiology that seemed to offer a glimpse into just how narrow the nature v. nurture debate is.

The article analyzes 1993 NORC General Social Survey data in an attempt to discover people's beliefs about what determines how their lives turn out. They looked at five different factors: God, genes, society, individual work and effort, and chance (7). I was most interested in the society, or culture, option. The article argues that "distinguishing nature vs. nurture in terms of determinism vs. free will is probably erroneous when one considers the extent to which enculturation patterns minds, selves and behavior" (7).

What my research has done is to connect many different issues to the one I was originally concerned with. I now have a sort of mental map of controversy in my head. I do agree that perhaps we are asking the wrong question, or maybe we are just attempting to simplify concerns that have far-reaching implications. Whatever the case, I think that we could look to socialization processes to help us identify new questions to ask. Socialization processes taken beyond the realm of 'nurture' may provide an alternative to the old dichotomous debate.

As I finish up, I do have one final thought: through my reading I have been reminded that biology is not a value-free field. Looking back at my experience and many others', I wonder about the extent to which religious values have impacted this ongoing public, political and academic dialogue. Certainly religious concerns are implied here, invoking ideas such as a pre-Earth life or a purely scientific universe. Perhaps this is why many people cling to a science-based answer, or one that invokes a more mystical world... and come to think of it, where does this behavior come from? Nature, nurture or society? Might this be the real question we should be investigating?


References

1)http://environmentalet.hypermart.net/psy111/naturenurture.htm

2)http://www.pbs.org/wgbh/nova/genome/debate.html

3)http://www.cancer.org/docroot/NWS/content/NWS_1_1x_Nature_vs_Nurture_The_Debate

4)http://www.cdc.gov/genomics/info/factshts/nvsn.htm

5)http://members.aol.com/leolighterx/orientation.html

6)http://home.att.net/~xcar/tna/ledoux.htm

7)http://www.trinity.edu/mkearl/socpsy-2.html


Some Thoughts on Smoking
Name: Tegan Geor
Date: 2004-04-30 16:54:14
Link to this Comment: 9708


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

If I were to wake up tomorrow without arms or a mouth, I would light a cigarette with my feet, and smoke through my nose.

This might be what a person could consider a serious problem.

Cigarette smokers, often more than others, can tell you what is bad about smoking cigarettes. Smoking cigarettes puts a person at risk for bronchitis and emphysema. Heart disease. Strokes. Ulcers, cataracts, and osteoporosis. Pretty much every cancer a person can get, highlights including lung, mouth, kidney, uterine, cervical, prostate, and colon cancer (1). Cigarettes make you smell funny, turn your teeth yellow, make it harder to smell things or to taste things, decrease the elasticity of your skin, and cause people to scowl at you as they walk in and out of buildings.

These are things I know.

And yet, I—along with 46 million some-odd other Americans—still smoke cigarettes. A third of us try to quit every year, and only about 3 percent of those will succeed without outside treatment by way of therapy or prescribed drugs (2).
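
The scale implied by those statistics can be made concrete with some back-of-the-envelope arithmetic. The 46 million, one-third, and 3 percent figures are the ones quoted above; the calculation itself is only illustrative:

```python
# Back-of-the-envelope arithmetic using the figures quoted above:
# ~46 million American smokers, about a third attempt to quit each year,
# and roughly 3% of unaided attempts succeed.
smokers = 46_000_000
attempt_rate = 1 / 3          # fraction who try to quit in a given year
unaided_success_rate = 0.03   # fraction of attempts that succeed unaided

attempts = smokers * attempt_rate
unaided_successes = attempts * unaided_success_rate

print(f"Quit attempts per year: ~{attempts:,.0f}")
print(f"Unaided successes per year: ~{unaided_successes:,.0f}")
```

In other words, of roughly fifteen million yearly attempts, only on the order of half a million succeed without help.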

So why is it so hard to quit? The answer to that question may be found in some new research on the matter.

The nucleus accumbens is an area of the brain connecting the ventral tegmental area and the prefrontal cortex. The pathway from the ventral tegmental area through the nucleus accumbens to the prefrontal cortex is often referred to as the reward pathway (3): this is the area of your brain which reinforces rewarding behavior—eating, having sex, and the numerous other things which human beings do, and do often, because they make them feel good. The neurons along this pathway use the neurotransmitter dopamine. When a person lights and begins smoking a cigarette, dopamine production is stimulated in this area of the brain, having a calming effect on the smoker (dopamine is also the neurotransmitter responsible for the high one gets using cocaine, opiates, and alcohol). Elsewhere in the brain, acetylcholine and norepinephrine are released: neurotransmitters that regulate mood, attention, and memory (2). And if a smoker is anxious, aroused, or stressed, smoking can effect an increase in neuromodulators—chemicals which act to counterbalance neurotransmitters, effectively calming the smoker down.

It isn't hard to get cigarettes. Compared to other addictive substances, it isn't expensive. It is far more socially acceptable than many other addictive substances, too: alcohol may be more widely used, but socially it is far more acceptable to go to work having just smoked a cigarette than, say, having had a few beers. And one of the most troublesome things about nicotine—at least so far as being able to quit—is just how easy and effective cigarettes are at administering the drug. Smoking a drug is very nearly as efficient a way to deliver a drug as injecting it. And what might be the most remarkable thing about cigarettes is the precision with which a smoker can regulate her intake. Nicotine content is around .1 to .2 mg per cigarette, depending on the brand (4), and at about 10-12 puffs per cigarette—after each of which a smoker could theoretically decide she had had enough—the ease with which one can administer tiny doses of nicotine is really quite remarkable.
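
Taking the figures above at face value (.1 to .2 mg of nicotine per cigarette, 10 to 12 puffs each), the per-puff dose works out to hundredths of a milligram; a rough sketch of that arithmetic:

```python
# Rough per-puff dosing arithmetic using the figures cited above
# (0.1-0.2 mg nicotine per cigarette, 10-12 puffs per cigarette).
nicotine_mg_low, nicotine_mg_high = 0.1, 0.2
puffs_low, puffs_high = 10, 12

# Smallest and largest plausible dose per puff, in milligrams:
dose_min = nicotine_mg_low / puffs_high   # least nicotine, most puffs
dose_max = nicotine_mg_high / puffs_low   # most nicotine, fewest puffs

print(f"Dose per puff: {dose_min:.4f}-{dose_max:.4f} mg")
```

Each puff, in other words, is a self-administered dose of roughly 0.008 to 0.02 mg, which is the precision the paragraph above is pointing at.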

And at some point, the ease and precision with which one can smoke will permanently alter the smoker's brain structure (5). A tolerance develops for the stimulated neurotransmitter activity. And according to at least one study (2), attention, memory, and reasoning ability decline just four hours after a smoker's last cigarette, and do not recover for days, even with no further use. It seems current evidence may indicate that brain function altered by nicotine may never return to pre-addiction levels.

All of these things are not looking good for my plan to eventually quit someday. There is, of course, always hope: nicotine replacement therapies seem to work well, especially when combined with therapy.

References

1)NIDA, Site on Nicotine Addiction

2)Yahoo Addiction Center

3)Neurobiology and Addiction, Over-simple, but there are some cool pictures.

4)Info on Nicotine Content of Cigarette Brands , From the Vaults of Erowid, a pretty comprehensive if kind of flaky site about the chemical structure and history of tobacco (and other things). Lots of information.

5)Forget the Eggs: Here's Why Your Brain Gets Addicted to Drugs

Also:

The Truth, a flashy anti-tobacco website for kids, I think.


Seasonal Affective Disorder: A Look at How the Win
Name: Sonam Tama
Date: 2004-05-04 00:35:20
Link to this Comment: 9754


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Most people I know here at Bryn Mawr College feel as though they are still experiencing a bit of the "winter blues" even though it is officially spring. Having lived in the Philippines most of my life, I did not have to worry about the "winter blues" but as I experienced my first East coast winter here, I could feel my moods change along with the seasons. Then I heard about Seasonal Affective Disorder (SAD) and although I know a lot of us get the winter blues, SAD is not just about having the "winter blues." It is a severe form of depression affecting possibly as many as 6 out of every 100 people in the United States and the long duration of SAD symptoms distinguishes it from the "winter blues."

SAD is a mood disorder in which people who are diagnosed suffer from symptoms of depression during the winter months, with symptoms subsiding during the spring and summer months (1). The episodes of depression are related to seasonal variations in light (shorter days in fall and winter) (1), and studies showing that Arctic people suffer more from depression (4) give strength to this idea and dispute the notion that SAD is a made-up disease.

The most difficult months for SAD sufferers are January and February. Depending on the person and the geographical location, depression can last for several months with the following symptoms (2):

• mood fluctuations
• excessive eating and sleeping
• weight gain
• loss of interest in sex
• a craving for sugary and/or starchy foods
• fatigue
• social withdrawal
• seasonal episodes substantially outnumber nonseasonal depression episodes.
• full remission from depression occurs in the spring and summer months

These symptoms have a damaging effect on sufferers, as they are unable to function without continuous treatment.

SAD was first noted before 1845, but it was not officially named until the early 1980s, when Norman E. Rosenthal, M.D. observed a correlation between depression and seasonal change after mapping the mood patterns of a group of people for a year. He found that many of the people in the group started to become depressed in the fall, with their depression worsening through the winter and decreasing in the spring (3).

So how does SAD work exactly? Sunlight affects seasonal activities, such as the reproductive cycles and hibernation of animals (1). As the seasons change and sunlight patterns are altered, there are shifts in our "biological internal clocks," or circadian rhythms, which can cause our biological clocks to fall out of step with our daily schedules. Learning about SAD helps us to learn more about the relationship between our body and the environment we are in. More specifically, it is the relationship among sunlight, melatonin (the sleep-related hormone secreted by the pineal gland in the brain (1)), and serotonin (the neurotransmitter associated with wakefulness and elevated mood). As night falls, melatonin levels increase, and as sunlight emerges, melatonin levels decrease. Serotonin levels increase when you're exposed to bright light, which is why moods tend to be elevated during the summer, while the opposite occurs in the winter, since shorter, darker days produce more melatonin (6).

Despite the controversy surrounding studies suggesting that phototherapy, or bright light therapy (BLT), works through a placebo effect, this treatment has been shown to suppress the brain's secretion of melatonin. Additionally, studies have shown that exposure to bright light may help those who suffer from SAD (2). The device most often used today is a bank of white fluorescent lights on a metal reflector and shield with a plastic screen (1).

A 1999 study by Dr. Timo Partonen and his colleagues at the University of Helsinki's National Public Institute in Finland suggests that more exposure to the sun during the summer may help decrease mood problems months later in the winter. Partonen and his team found that blood levels of cholecalciferol naturally peak in the fall months, suggesting that during the fall we use the cholecalciferol we store up during the summer. Light stimulates the production of cholecalciferol, which the body transforms into vitamin D, which in turn helps the body maintain higher levels of serotonin during the winter months (6). So if we get even greater exposure to the sun, we may store enough cholecalciferol to produce more vitamin D, leading to higher winter serotonin levels than we might otherwise have. The study concludes that the amount of serotonin you have in the winter is determined by your exposure to light the previous summer, which may help prevent or reduce depression during winter.

If phototherapy doesn't work, an antidepressant drug may prove effective in reducing or eliminating SAD symptoms, but there may be unwanted side effects to consider. Selective serotonin re-uptake inhibitors (SSRIs) are the most successful antidepressant drugs; they work by helping naturally produced serotonin remain active longer, keeping your mood and energy levels higher (7). However, there is also a condition called serotonin syndrome, wherein the brain contains too much serotonin, generally caused by interactions between serotonergic drugs, for example by concurrent use of SSRIs and MAOIs (a class of antidepressant drugs used less frequently than other classes due to potentially serious dietary and drug interactions) (6).

So, how do SSRIs and light therapy compare to one another? A study led by Dr. Daniel Kripke of the Circadian Pacemaker Laboratory at the University of California, San Diego concluded that light therapy benefits not only SAD patients but also people suffering from other forms of depression (6). The study, which was published in the Journal of Affective Disorders, also concludes that light therapy may help to alleviate SAD symptoms faster than antidepressant drugs and that patients who undergo both light and drug therapy could receive the greatest benefits (6).

Alternative therapies to combat depression include proper nourishment with intake of vitamin B complex, vitamin C, folic acid, calcium, potassium, and magnesium as well as regular exercise – and breaks - under the morning sun (to avoid sunburn) to stimulate circulation and release serotonins in the brain as well as meditations. One study found that an hour's walk in winter sunlight was as effective as two and a half hours under bright artificial light (1).

Finally, the label "seasonal" disorder may give the wrong impression that SAD is purely environmental. In fact, SAD is a great example of the interplay between nature and nurture, biology and the environment. My most pressing question about SAD has been about the biological aspects of the disorder. Are some people biologically more inclined to have SAD? What happens when a person with SAD moves to an area with lots of sunlight? Also, there are statistics showing that younger persons (between 18 and 30) and women are at higher risk (2). But there are some websites suggesting that women may be more ready than men to admit to depression and ask for help (2). I think that the same could be true for younger people. Perhaps people living in areas with less sunlight are more aware of disorders like SAD and are quick to assume that they have it. What I am suggesting is not that SAD is not real, but that there is always a greater chance that more people are being diagnosed with SAD, and report having it, in areas where more people are aware of the disorder.

But going back to the biological aspects of SAD, there are also studies showing that people with SAD often have the disorder in their family history, and are more likely to have alcoholism in their families than people who do not have the disorder (3). The Society for Light Treatment and Biological Rhythms has recently published extensive studies on SAD. More studies, including ones on SAD and puberty as well as SAD and thyroid function, are showing the biological inclination of certain individuals toward the disorder (3). Maybe some people build it up by not getting enough sunlight, while others already lack the serotonin levels needed. All in all, the various forms of treatment and the many conflicting studies available on the internet indicate that there are currently diverging conclusions regarding SAD and that future study is necessary. But they also suggest that perhaps each individual patient may have a different solution to their problem.

References

1) What is Seasonal Affective Disorder?, on the National Mental Health Association website

2) Seasonal Affective Disorder, on the Healing Deva Alternative Therapies website

3) Seasonal Affective Disorder, on the Augsburg College server

4) Seasonal affective disorder in an Arctic community, on the Blackwell-Synergy journals website

5) About light, depression & melatonin, on the New Technology Publishing website

6) Summer Sun for the Winter Blues, news article on the CNN website

7) on Wikipedia, the free encyclopedia


Pain, Pain, Go Away: Complex Regional Pain Syndrom
Name: Amy Gao
Date: 2004-05-04 17:31:22
Link to this Comment: 9757

<mytitle>

Biology 202
2004 Final Paper
On Serendip

It is two in the morning. The paper is still four pages from being completed, the presentation patiently waits upon the printer to be rehearsed just once more, and you blink your sleepy eyes awake—or as functionally awake as you could be with one and a half pots of coffee—to realize that you still have eight hours left to work on all these demonstrations of your intelligence. Eight hours sans sleeping, of course, because by now that activity sounds as familiar as last Spring Break.

And then you feel the throbbing. The pulsation of your heartbeat grows stronger by the minute on the sides of your temples, and you start to realize that this is the beginning of something so well-known to yourself and many others. It has been given many names throughout the ages in different languages, but you think that none of them sounds as fitting as its present incarnation, because by the mere pronunciation of the syllables you feel its effects. You call it....

Pain. We have all experienced this unpleasant sensation at some point in our lives, some more than others, some stronger than others, some lasting longer than others, whether it is in the form of headaches, muscle strains, stomachaches, or back pains. We have all developed creative ways to make ourselves comfortable when in pain: with cold or hot pads wrapped around the affected area, with essential oil massages, or just simply sleeping the pain off. Suffice to say, where there are nervous system extensions, there is a possibility for the sense of pain to occur. For most of us, a Tylenol or an Advil is the most medication we would take for pain; for others, however, it may take a lot more than just a simple pill to alleviate the pain.

Complex Regional Pain Syndrome, or CRPS, which is also called Reflex Sympathetic Dystrophy Syndrome (RSDS), is a condition characterized by severe burning pain, pathological changes in bone and skin, excessive sweating, tissue swelling, and extreme sensitivity to touch (1). It is a disorder that occurs at the site of injury, often after high-velocity impacts such as those from bullets or shrapnel; however, it may also occur without apparent injury to the individual. The symptoms of CRPS are divided into two types: CRPS I is the clinical term used to describe patients who suffer the symptoms of CRPS but have no nerve injury, and CRPS II describes patients who experience the same symptoms with nerve damage.

CRPS is indicated by the gradual change of the warm, shiny, red skin characteristic of wounded flesh into skin that is cool and bluish. The pain that is experienced is out of proportion with the injury sustained and progressively becomes worse. In the more severe cases, when the symptoms fail to subside after treatment, the joints eventually become stiff from disuse, and skin, muscles and bones atrophy. There may also be periods of remission and exacerbation that last for weeks, months, or years. The cause of CRPS is unknown, and the various symptoms attributed to the onset of CRPS vary in their severity and duration.

What sets this disorder apart from other conditions collectively characterized as "pain" is that it concurrently affects the skin, muscles, nerves, blood vessels, and bones. It is most often observed in individuals between the ages of 40 and 60; however, it can strike individuals in any age category. The diagnosis of CRPS is to be made in the following context, according to the guidelines of the Reflex Sympathetic Dystrophy Syndrome Association of America: there is a history of trauma to the affected area associated with pain that is disproportionate to the inciting event, plus one or more of the following: abnormal function of the sympathetic nervous system, swelling, movement disorder, and changes in tissue growth (dystrophy and atrophy) (4).

There is no single, stand-alone test that can be used to detect CRPS; however, there are laboratory diagnostic instruments that may be used in conjunction to detect its presence. The two main techniques used to detect CRPS are thermography and X-ray. Thermography is employed to detect the changes in body temperature that are common in those afflicted with CRPS, and X-ray is used to observe any damage or changes in the bone structure (4). In many cases physicians may order an EMG, nerve conduction studies, a CAT scan, and an MRI along with the two aforementioned tests, and the results of these tests may be normal for CRPS patients. These studies are often done to identify any other possible sources of pain.

The disorder is believed to affect millions of people in this country, and compiled statistics suggest that it occurs after 1% to 2% of various fractures, after 2% to 5% of peripheral nerve injuries, and in 7% to 35% of cases in prospective studies of Colles fracture (2). Due to the nature of this disorder, the diagnosis is often not made early; mild cases may resolve with no treatment, while others may progress through the stages, and the condition may become chronic or even debilitating. It has been noted to spread in three major patterns: in the continuity type, the symptoms migrate from the initial site of the pain to another part of the body; in the mirror-image type, the symptoms spread from one limb to the opposing limb; and in the independent type, the symptoms may jump to a distant part of the body (5).
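
To put those incidence ranges in concrete terms, here is an illustrative calculation of expected CRPS cases per 1,000 injuries of each type, using only the percentages quoted above:

```python
# Expected CRPS cases per 1,000 injuries, using the incidence
# ranges quoted above, expressed as (low, high) fractions.
incidence = {
    "various fractures": (0.01, 0.02),
    "peripheral nerve injuries": (0.02, 0.05),
    "Colles fracture (prospective studies)": (0.07, 0.35),
}

for injury, (low, high) in incidence.items():
    print(f"{injury}: {low * 1000:.0f}-{high * 1000:.0f} cases per 1,000")
```

Even at the low end, ten expected cases per thousand fractures is why the sources above describe the disorder as affecting millions nationwide.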

CRPS is described by some experts as progressing in three stages involving physiological changes in the affected area; however, this claim has yet to be independently verified (3). During the first stage, the affected area suffers from severe pain along with muscle spasm, joint stiffness, rapid hair growth, and changes in the blood vessels that result in changes in the color and temperature of the skin. This is believed to occur over a period of about three months. The symptoms that mark the second stage of CRPS are intensifying pain, swelling, and decreased hair growth, among others; it is expected that the symptoms that characterize stage one will be worse during stage two. The final phase of CRPS is when irreversible changes occur in the skin and bone; the limbs have very limited movement and may even become contorted, and there is noted atrophy of the muscle.

Even though current research has not yet demonstrated a definite mechanism relating injury and CRPS, scientists have formed a plausible explanation for its occurrence. The fight-or-flight response mechanism is very important for survival. Once it is initiated, the sympathetic nervous system is activated as a response to injury. The firing of the sympathetic nerves causes the contraction of blood vessels in the skin, which forces blood deep into the muscle and consequently enables the victim to use his or her muscles after an injury to escape from danger; the decreased supply of blood to the skin also reduces blood loss. As this is an "emergency" response activated when the individual is in danger, it typically shuts down fairly soon after an injury. In individuals who develop CRPS, however, the sympathetic nervous system does not shut down even after an extended period of time. As a consequence of the unregulated sympathetic activity at the site of injury, there is an inflammatory response that causes blood vessels to spasm, which leads to more pain and swelling, which in turn leads to still more pain and more sympathetic response (5).

There is no single magic pill for CRPS; therefore early detection and treatment are very important. Treatments for this syndrome involve pain relief and rehabilitation of the affected limbs (or other body parts). Initially, if the symptoms are mild, non-steroidal anti-inflammatory drugs such as naproxen sodium may be the suggested medication; prescription painkillers may be required for more serious cases. Physical therapy, when employed at an early stage, may improve the mobility and strength of the affected body parts; the earlier the detection of CRPS, the more effective the therapy may be. Psychotherapy may also be considered as part of the patient's treatment, as individuals afflicted with this syndrome may suffer from depression or anxiety, which can make rehabilitation more difficult. Sympathetic nerve blocks are another consideration; they may do wonders for relieving the pain of CRPS, and they may be administered in two ways: the direct blocking of sympathetic receptors, or the placement of an anesthetic next to the spine to block the sympathetic nerves.(3)

CRPS patients have a higher chance of rehabilitation if the syndrome is diagnosed early. Many treatments are often used in conjunction to alleviate the pain caused by the symptoms and to restore mobility to the body parts that have been rendered dystrophic or atrophic by the syndrome. As there is no defined biochemical pathway linking injury and the onset of CRPS, there are no precautions (aside from being extremely careful and avoiding injury) that one could take to fully lower the possibility of CRPS occurring.

Further investigation into the cause of CRPS and the association between injury and CRPS (as well as the spontaneous onset of CRPS without evident injury) is necessary. Drugs also need to be developed to meet the needs of patients with CRPS, as they may be more prone to painkiller addiction than others because of the unclear nature of their disease and their constant need of painkillers to alleviate the pain. Many patients and their families suffer from this debilitating syndrome, and with more research and clinical trials we may develop techniques that can be employed for those in need.

References

(1)National Institute of Neurological Disorders and Stroke on RSDS

(2)Reflex Sympathetic Dystrophy Syndrome

(3)NIND RSDS fact sheet

(4)Reflex Sympathetic Dystrophy Syndrome Association of America

(5)The Mayo Clinic on CRPS


Hypnosis
Name: Laura Silv
Date: 2004-05-04 22:40:20
Link to this Comment: 9759


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip


In this class, we've talked a lot about the blurred line between reality and what lies beneath. We've talked about dreams and daydreams and fantasy and memories as vivid as if they were being relived. As I thought about all these truths and the manipulations thereof, I started thinking about hypnosis as yet another way to confound the mind and further blur the line between reality and not-reality. Most people think of hypnosis as the media presents it: the dark, stuffy living room of an eccentric professor waving a watch in front of your face, or the fringed tent of a gypsy at a carnival. But like most things in the media, this is probably a farce, an extreme exaggeration of a misunderstood practice. People use hypnosis to stop smoking, lose weight, get over their phobias and recover repressed memories, yet so little is known about the process. So I resolved to try to find out the truth about hypnosis – if it works, why it works, and how it works.


The common view is that a hypnotized subject is in a half-asleep state – even the words "hypnotism" and "hypnosis" were taken from the Greek word for sleep, hypnos.(1) Many people think that the subject is completely dominated and controlled by the hypnotist, like Laurence Harvey's character in The Manchurian Candidate. In reality, the Skeptic's Dictionary uses three descriptive qualities to convey the state of hypnosis: "(a) intense concentration, (b) extreme relaxation, and (c) high suggestibility"(2), as well as heightened imagination.(3) HowStuffWorks.com relates the hypnotized state to driving a car, reading a book or watching a movie: "You focus intently on the subject at hand, to the near exclusion of any other thought."(4) Such activities, and the ability to immerse one's self so completely in them, are called by some experts a form of "self-hypnosis".


Subjects are alert and have free will throughout the hypnotic process, but their brains work differently: the brain waves which operate at high levels when one is fully conscious are less active, and those which are active in dreams are more active than in normal consciousness.(5) From this evidence, a school of thought has arisen which believes hypnosis to be an "altered state" of consciousness. Objectors, who hold the "unconscious reservoir" theory of hypnosis, say that a change in brain chemistry and behavior is not enough evidence to suggest an "altered state"; sneezing causes such changes from normal consciousness, yet it is not considered an alternative state. Rather, "reservoir" followers believe that hypnosis crosses the divide between the conscious and the subconscious, where one has an entire lifetime's (or more) worth of memories at one's disposal, memories which are not available to the conscious mind.(6) People under hypnosis can recall past lives or repressed and forgotten memories, even ones that occurred before one had the faculties for conscious long-term memory.


Most commonly, one will hear about people going to hypnotists to be cured of certain minor psychiatric diseases – phobias, cigarette cravings, over-eating, et cetera. This is called hypnotherapy.(7) In this practice, the hypnotist also acts as a therapist, helping you find the subconscious root of your fears or cravings and work through them, as one would in a normal therapy session, so that when you "wake up", you will no longer be afraid of snakes or want to smoke.


The power of the mind is very important to hypnotism. One must be able to access the information "reservoir" yet still cooperate with the hypnotist. A vivid imagination helps with one's ability to be hypnotized and with the results it yields, and one who does not believe in hypnosis cannot be hypnotized at all.(8) The hypnotist's suggestions can guide the subject but not control him. However, when imagination and suggestibility are so central to the process, one's imagination can sometimes stray outside the scope of the subconscious into falsehood. A subject can recall false memories or past dreams which never really happened. Because such cases of false recollection are impossible to distinguish completely from cases where true memories are recalled, hypnotism is still regarded as a non-science and viewed with skepticism by many.(9) Also, the recovery of repressed traumatic events can be detrimental to the subject. For these and other reasons, few companies and establishments use hypnotism, which is why it remains a primarily private enterprise, yet one relatively easy to find. There are billboards, radio commercials, over one million websites – even a touring program where you can become a certified hypnotist.(10) Clearly it's a process which remains in demand, no matter what the skeptics have to say about it.

References


1) How Hypnosis Works


2) The Skeptic's Dictionary


3) How Hypnosis Works


4) How Hypnosis Works


5) How Hypnosis Works

6) The Skeptic's Dictionary


7) How Hypnosis Works


8) The Skeptic's Dictionary


9) The Skeptic's Dictionary


10) Hypnosis.com


Army of Barbies: The New Culture of Narcissism
Name: Michelle S
Date: 2004-05-05 09:18:21
Link to this Comment: 9794


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

"Beauty has no obvious use; nor is there any cultural necessity for it. Yet civilization could not do without it."
-Sigmund Freud (Civilization and Its Discontents)

As Freud expressed so eloquently, beauty is integral to the functioning of our society. Beauty is a factor on which we base our preference for mates. Recently, there has been a deluge of products, programs, surgeries, and machines to help people enhance their beauty. Mainstream culture has become obsessed with conforming to a "beauty ideal" perpetuated by the media's preoccupation with aesthetic perfection. Plastic surgery is one of the major architects of this beauty ideal. The escalation of beauty ideals through media and culture, driven by plastic surgery, may produce an unrealistic beauty standard, shrinking the pool of possible mates. This in turn will affect mate selection and thus reproduction and the overall population.

In recent years, the human assessment of attractiveness, and specifically human beauty standards, has been studied extensively. However, few academics have analyzed the importance of sexual selection in the determination of beauty. Sexual selection arises from the process of sexual competition, in which individuals compete with one another for mates. Antlers on stags, peacock tails, and frog air sacs are among the traits that have evolved through sexual competition (2). These traits do not enhance the survival prospects of individuals, but improve the likelihood of obtaining a mate. They support the idea that aesthetically pleasing characteristics may be directly correlated with the endurance and physical health of animals (2). Therefore, reproductive success and parasite resistance could be related to the relative attractiveness of many living organisms (3).

Current empirical and theoretical studies reveal that mate preference is founded upon visual, vocal, and chemical cues (2). Many secondary sexual characteristics, such as facial and body features, develop under the influence of sex hormones. In analyses of female beauty, it appears that youth, fertility, and health symbolize ideal beauty (2). Characteristics that influence attraction assessments include the form of the face and body, structure, beauty ideals, voice, age, decoration, cosmetics, body scent, hair color, hairstyle, and both cultural and temporal dynamics (2). Neotenous facial features in females, such as a small lower face, lower jaw, and nose, and full, large lips, are among the most highly admired traits (2).

Other significant physical traits associated with reproduction and the capacity for survival include estrogen levels, amount of body fat, and the measurements of hips, waist, and breasts. High estrogen levels are beneficial because they indicate an ability to cope with toxic metabolites, which suggests stronger general immunity to illness (3). Waist-to-hip ratios are also indications of health because they are known to be a measure of a woman's ability to produce male offspring. The size of the female breasts, a positive pelvic tilt, and the waist-to-hip ratio are so favorable because they strongly indicate a woman's reproductive success. A fat to body mass ratio of 1:4 was found to best maintain stable female sex steroids. To strengthen this signal, fat must be distributed over specific areas, such as the buttocks and breasts. Overall, the optimal female phenotype is a symmetric one, since this characteristic reinforces the idea of reproductive performance. Bilateral body symmetry represents quality of development (3). A lack of balance and evenness in proportions is considered less than ideal, since it may signal health and performance problems in the future. The combination of these characteristics establishes the physical side of the ideal beauty standard.

The chemical perspective of beauty is much more subtle. Pheromones are chemical signals emitted by an individual that influence the physiology or behavior of other animals of the same species. Pheromones have been shown to influence the perception of beauty. Specifically, human body odor has been shown to influence mate choices because of its association with secondary immune systems (3). Chemical signals carried by odor travel through brain pathways and can directly affect emotional awareness. Odors have the ability to produce a positive or negative mood or feeling, thereby directly modifying the social perception of others (3). The possible effect of pheromones on mood makes body odor a likely mechanism through which attraction assessments can be made (3). A study of pheromones showed a positive correlation between women's attraction to a male's body scent and the symmetry of his features: women preferred the body scent of males who possessed more symmetrical features (3). Because secondary sexual characteristics, such as facial and bodily attractiveness, are certifications of health, an individual's pheromone signals signify phenotypic and genetic quality.

Clearly, beauty is an important feature of human life on many levels. It is embedded in culture and society. Beauty, and the pursuit of ideal beauty as defined by what is deemed attractive, has become an obsession in popular culture. Mainstream media has determined what traits and physical features are most attractive, many of which are valued for only a brief moment before the public tires of them and a new feature is publicized.

In order to increase beauty, humans have incorporated the use of human decoration, which can manipulate or alter the perception of beauty. An extreme form of human decoration is cosmetic surgery. The fascination with drastic feature alteration, combined with advancing technology, has triggered an increase in the popularity of cosmetic surgery. These surgeries restore function and/or improve the appearance of tissue (4). Cosmetic surgery provides individuals a method to alter their characteristics: to obtain larger breasts, better waist-to-hip ratios, fuller lips, or improved facial symmetry. These procedures effectively produce the illusion that an individual possesses the reproductive and immunity attributes desired by males. This "created" beauty gives the appearance of health to members of the opposite sex.

Outward appearances are increasingly important in society. This emphasis on physical allure creates an elaborate quest for beauty. Beauty magazines spotlight famous actors and models who embody ideal beauty. Reality television shows have begun to capitalize on the popularity of plastic surgery. Music Television's "I Want a Famous Face" is a show that documents a new and disturbing phenomenon: the desire of Generation X to look and behave like favorite celebrities who are adored by the public. Fox Television's show "The Swan" invites 40 average-looking women to undergo severe physical alterations, including a combination of plastic surgery, severe dieting, and personal training monitored by doctors, dieticians, and physical trainers. The show establishes a Darwinian perspective on beauty, as contestants are eliminated each week and the finalists compete together in a beauty pageant. "Extreme Makeover" from American Broadcasting Television is a similar program which interviews average individuals and changes their unflattering features with the help of cosmetic dentistry, plastic surgery, and exercise.

The problem with these shows is that they make plastic surgery seem commonplace. Viewers come to feel the changes are successful, quick, uncomplicated, and can be accomplished with little pain. The American Society of Plastic Surgeons is concerned that these shows are creating unrealistic expectations about plastic surgery. Television creates a dangerous audience bias because the individuals shown appear more secure, contented, and satisfied with their lives because of these cosmetic changes. In 2003, cosmetic plastic surgery increased by 32% over 2002, with more than 8.7 million procedures conducted within the United States alone (4).

As a result of the media's emphasis on women's beauty and enhanced features, men and their sexual selection will also be manipulated (2). The "Farrah effect" is the phenomenon in which men raise their personal beauty standard after observing the media's portrayal of beautiful women (2). If beauty does symbolize health, and humans can artificially create beauty (through plastic surgery, extreme dieting, and personal training), how will this affect sexual selection? The artificial construction of beauty will influence it directly. As more individuals use synthetic processes to improve their appearance, the beauty standard will adapt by absorbing enhanced feature upon enhanced feature. The media will raise the beauty standard until it becomes impracticable to achieve. As the beauty standard increases at an increasing rate, the pool of possible mate choices diminishes, which decreases the chance of finding a possible mate.

Society is constantly adjusting its beauty standards. Modern media's emphasis on beauty and its relationship with status, success, and quality of life creates today's beauty standards. The media has saturated the public with the processes and effects of plastic surgery. As beauty enhancements generate status and become more popular and commonplace, a particular enhancement will lose its advantage once too many people use it. A negatively reinforced cycle is thus created: once too many people enhance a specific feature, the original advantage is eventually lost through overuse, and another enhancement is celebrated. This will continue until individuals have utilized so much cosmetic surgery that they no longer retain any of their natural features and are composed primarily of synthetic materials.

These plastic people will contribute to ever-rising beauty standards. Attractiveness standards will be raised by idealizing beauty enhancements, and consequently these synthetic individuals will regulate beauty. Ultimately, this will generate unreal expectations of mates. If the beauty standard, which is largely manufactured by the media, is perceived as more beautiful than naturally produced traits, mates will no longer base selection upon realistic standards. This will eventually create a higher proportion of single individuals, possibly resulting in a significant reduction in offspring production. As the media continues to generate an unrealistic ideal, and a portion of the public continues to dedicate themselves to this ideal, mate selection will eventually incorporate impractical preconceived notions and prejudices.

It is important to remember that these expectations are based only upon the visual element of beauty. A multitude of other dimensions are involved in mate selection. Pheromones, movements, and vocalization all play a part. On a less scientific level, personality, similar interests, hobbies, and social status can also influence the selection process considerably. Consequently, although the beauty standard may evolve drastically as a result of a collective change, the desire for perfect physical beauty will be balanced by other, less scientific dimensions of mate selection, such as personality, disposition, and a shared outlook on the future. In addition, the discrepancy between plastic beauty and pheromones may also lower the elevated beauty standard on a more subconscious level. A recent study found that the symmetry of men's faces was directly proportional to how attractive females judged their respective body odors to be. This supports the idea that non-visual, fixed characteristics such as pheromones and body movement counter the rising assessment of beauty. The inability of humans to change certain aspects of themselves helps keep the beauty standard more constant; it prevents mating standards from rising to a level that would negatively affect procreation.

It seems that, in theory, the media's push for perfect people will lead to a saturation point. There will come a point at which no higher level of beauty exists and no further enhancement can be created. By the cycling theory mentioned previously, beauty enhancement is based on acquiring desired features, each of which is replaced by the next. As enhanced features become commonplace, natural beauty is left as the only option for the next fad. The media has inadvertently created a pattern encompassing the original cycle theory: once synthetic beauty is too widespread, rare natural features will be desired. Natural beauty will begin to regain popularity, characteristic by characteristic, thereby lowering the beauty standard. This lowering will progress to a low saturation point, and the media's portrayal of these new standards will eventually revive the desire for enhanced features once again. An equilibrium between these opposing states will be established. Sexual selection and the desire for procreation will remain intact. The media's initial role in promoting synthetically enhanced features will eventually morph into one promoting natural beauty. Although society will continue to be exposed to the extremities of plastic surgery, the combination of chemical and personality characteristics will counteract the influence of media and society, and effectively stabilize mate selection and procreation.

References

1) The Culture of Beauty , General Site about Plastic Surgery Rise

2) Darwinian Aesthetics: sexual selection and the biology of beauty. , Journal article concerning beauty (General qualitative discussion)

3) (Homo Sapien) Facial Attractiveness and Sexual Selection: The Role of Symmetry and Averageness. , More quantitative discussion of beauty

4) Cosmetic Surgery on the Rise , Updated information about plastic surgery

5) Physical Beauty Involves More than Good Looks ,

6) The Psychoanalytical Construction of Beauty ,

7) 10 Cosmetic Plastic Surgery Predictions for 2004-From the American Society for Aesthetic Plastic Surgery ,

8) 2003 Cosmetic Surgery Statistics Show Strong Increases , Statistics about plastic surgery


Latah
Name: Amanda Gle
Date: 2004-05-05 18:09:22
Link to this Comment: 9796


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Culture-bound syndromes are found in numerous societies around the world, from America to Africa to the Pacific. These syndromes are recognized in the social and cultural patterns surrounding their various episodes. "The political nature of deviance designations [are] relative to whose interests are being served by such labels." Also, "the conspicuous gender aspects of those defined as deviant" are important in determining culture-bound syndromes (1).


In Malaysia, a culture-bound syndrome called latah is found. Latah is a "behavior complex built around hyperstartling" (2): the globally experienced phenomenon of hyperstartling taken to an extreme. The word latah means "ticklish" in Malay (3) (pg. 13). Latah is a culture-bound syndrome because there are "particular cultural conditions that are necessary for the occurrence of that syndrome" (4). While it is a culture-bound syndrome, I feel that the evidence shows it is also a neurological illness.


Latah occurs in performance-oriented Malaysian society as a result of repeated scares and poking. Pawang Lumun, an indigenous healer, stated, "If we don't startle them with pokes in the ribs, they don't become latah. If we keep poking a normal person like that, he'll become a latah. It doesn't take long: five days of poking over and over, little by little a person gets quite flustered." (5) (pg. 1) While anyone can become latah through this series of irritations, more women than men are latah. Specifically, these are lonely older women whose husbands work often or have passed away. Perhaps the women who become latah are pushed to the edge of society and are subconsciously looking for attention and a place of importance. That more women are latahs could be because people are more afraid to poke men than women, for a strong man who lashed out could hurt someone badly. Thus it is safer to startle women.


The Malaysian villages in which latah occurs are rural, though there are a few unexpected cases of latah in the city. These villages' societies are based on manual labor, specifically boat life and farming, and the villagers look for entertainment in performance, including latah. There are three types of latah: immediate response (when one is startled), attention capture latah (performing others' actions and obeying others), and role latah (a combination of immediate response and attention capture latah) (5) (pg. 5). The three types demonstrate various forms of subconscious and conscious attention-seeking. Latah allows people to be the center of attention in the world of entertainment. "Its manifestations take two main forms, a startle reaction often accompanied by coprolalia, and a compulsive mimicry, persisting despite the victim's conscious desire to stop" (3) (pg. 3). Despite the desire to stop, there might be an unconscious desire to perform and gain attention.


To understand why latah is a culturally bound illness, one must understand a few key points. One is that everyone startles. If a loud alarm goes off suddenly or a gun is fired, one's heart will race. Sir Hugh Clifford stated, "...anyone who desires to really account for this affliction must, I am convinced, begin by analyzing and examining and explaining the pathology of the common start or 'jump' to which we are all in a lesser or greater degree subject" (5) (pg. 195). The startle is a natural reflex that can lead one to protect oneself. Startle reactions happen to everyone; they can include the "swearing or the repetition of a purposeless phrase [that] can occur in startled normal persons" (3) (pg. 5). Not only can startle reactions be triggered by a noise such as a loud bang or a shot, but "the startle stimulus can be a sudden realization of a social situation," such as one's fly being undone or something green in one's teeth (3) (pg. 5). Such social settings can cause people to swear or make repetitive startled motions.


These startled motions can be found in other neurological illnesses, such as catatonic schizophrenia and Gilles de la Tourette's syndrome, also known as maladie des tics. People with brain damage also have startle reactions; theirs are the largest, which might be "due to loss of a sense of wholeness" of the self (3) (pg. 5). Schizophrenics are similar. Tourette's syndrome is the most similar to latah and was once thought to be the same disorder. They are different in that Tourette's begins to manifest itself in childhood with tics and verbal outbursts, whereas latah usually starts later in life. Also, Tourette's syndrome does not require a startle for an outburst. Despite these similar conditions, latah is not found elsewhere in the world.


A second point is that, in comparison to the Western world, Malaysian culture allows one to become latah. The continuous poking and startling makes a person jumpy. If other cultures encouraged the same aggravation, including the torment of poking, and almost any person were startled repeatedly, he or she would in the end share traits with those who are latah. For example, an American will swear or jump if surprised, as this is one of the automatic responses to an external stimulus. But those who are not startled repeatedly will not perform when startled, or even continue to swear. Despite this, there is "no obvious connection between its manifestations and the beliefs of the people among whom it mainly occurs" (3) (pg. 3). The beliefs do not cause the disease, but they can influence the culture that allows it. Doctors call it a "psychiatric syndrome that psychoanalytic teaching would view (like most other syndromes) as a form of regression to an earlier developmental phase" (3) (pg. 6). Does Malaysian culture encourage children to play games that prime them to become latah?


Hyper-suggestibility may occur in places that encourage hypnotism and games relating to it. One game played by Malaysian children is this:
In the game known as main hantu musang [the polecat spirit game] the principal player goes on hands and knees, is covered by a white sheet, and is said to be hypnotized into unconsciousness by the others who march round and round him, stroking and patting him and repeating the following words [words omitted]. After, the player is said to be possessed and is quite unconscious of his humanity. He chases the others, climbs up trees, leaps from branch to branch and so far forgets himself as to run the risk of injury by venturing on boughs too frail to bear his weight. In the end he is called to his senses by being addressed repeatedly by his name (3) (pp. 6-7).

This form of hyper-suggestibility probably has implications later in life, such as latah.


One must also examine performance latah to see the evidence that latah is culture-bound. There is reason to believe that in role latah, people self-stimulate with an "ick" sound when they are not fulfilling expectations and feel devalued; they use latah to perform to others' standards. There are many incidents of latah behavior that show no signs of startling, yet because the person is a latah, she will get away with whatever is done. Western societies have other ways of accomplishing this, such as being the class clown in school or the person everyone goes to for a joke. In America the people who fulfill the "latah role" are of every age, from misbehaving children to crazy old ladies.


Though there are cases of those under latah becoming completely obedient, performing such acts as undressing or doing work for another, these cases are extreme and rare. They are also completely understandable; in America, such episodes of obedience might be explained as temporary insanity. For example, a latah woman in Malaysia who was startled while holding a knife was told to kill, and her immediate reaction was to kill the woman next to her. In America, a good lawyer would have her plead temporary insanity. In the Malaysian court, the judge had a plank with sharp nails sticking out of it.
The judge said, "Now we'll test whether you're a real latah." A policeman came up behind the latah and poked her in the ribs, and he shouted, "Slap those nails!" Right away the old lady slapped down on those nails, and blood began to gush from her hand. The judge had to agree. "Truly, this woman is a real latah. This old woman is not guilty; the guilty one is the person who poked her." So the woman who poked the latah was the one who was sentenced to be hanged" (1).

In Malaysia, her defense was that she was a latah and that the hyperstartling syndrome caused her to commit the murder.


Latah is a neurological illness. Doctors have identified seven factors that contribute to the development of latah, especially in women.
1. Repressed wishes, probably of an infantile sexual character, adequately cathected and seeking an outlet.
2. Stimulus generalization leading to nonsexual stimuli being misinterpreted as sexual.
3. A masochistic tendency resulting in a failure to defend against the provocative stimuli and perhaps provoking such stimuli instead.
4. Dissociative child-rearing practices conducing to hyper-suggestibility.
5. The rewarding of hyper-suggestibility in adults, as by the introduction of beneficial but little-understood knowledge that could most rapidly be mastered by rote learning.
6. Suppression of lengthier dissociations or trance states through which the repressed wishes could obtain fuller expression.
7. An inflexibility of impulse control that leads to exaggerated startle reactions (such as occur also in catatonia) and thence to temporary suspension of inhibitions when startle occurs (3) (pp. 16-17).

Some of these factors would be expected to produce neurological disorders or neuroses in the West and elsewhere; others would not, which is what makes this disorder culture-bound. Latah also has a list of symptoms, which frames it as medical. Symptoms include "coprolalia and coprophrasia [that] describe verbal obscenity, mimesis (mimicry), echolalia for verbal mimicry, echomimia and hyperimitation (mimicking the general behavior of another), and echopraxia, echopraxis, or echokinesis (body mimicry)" (1). These symptoms, as well as fatigue, are all medical symptoms found in other disorders. This leads me to conclude that latah is a neurological disorder.


Each case of hyperstartling around the world is different, especially because of the culture it is found in. "Who may startle who, how it happens, where and when depend on the culture" (2). In this sense, latah itself is culture-bound: it is the Malay way of dealing with hyperstartling persons.


We, as Westerners, need to ask: should we classify latah, a culture-bound syndrome, with Western medical terms? The clinical evidence for calling it an illness is weak for most Western doctors. While it is a mental abnormality, there must be a reason that those in Malaysian culture do not classify it as such, but instead as an illness that is socially acceptable.

References

1) Bartholomew, Dr. Robert E. Exotic Deviance: Medicalizing Cultural Idioms—From Strangeness to Illness. 2000.


2) Simons, Ronald C., prod. & direct. Latah: A Culture-Specific Elaboration of the Startle Reflex. Indiana University Audiovisual Center, Bloomington, Ind: 1983.


3) Lebra, William P., ed. Culture-Bound Syndromes, Ethnopsychiatry, and Alternative Therapies: Volume IV of Mental Health Research in Asia and the Pacific. 1976.


4) Pashigian, Melissa, PhD. Notes. Medical Anthropology. Bryn Mawr College. 11/5/02.


5) Simons, Ronald C. Boo! Culture, Experience, and the Startle Reflex. 1996.


Behavioral Response to Smell: A closer look at the
Name: Sarah Cald
Date: 2004-05-05 19:30:26
Link to this Comment: 9797


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Humans utilize several senses to gather information about the world around us: sight, hearing, smell, taste and touch. Of the five, smell is often viewed as the least important. In fact, people spend much of their lives trying to cover up and hide smells; witness the rampant success of the deodorant business. Yet smell cannot be avoided; it is ubiquitous. Although research has furthered our knowledge of the mechanism of olfaction, very little is known about how smell and behavior are linked. This paper will investigate two questions: first, how is olfaction linked to behavior, and second, what is responsible for the behavioral responses to smell?

One would think that behavior and smell are closely related; what else could explain the varied responses to certain smells? Some people like the smell of gasoline, while others are repulsed by it. Is there any scientific evidence that supports this idea? Indeed, research has shown that smell and behavior are closely linked. It is well documented that airborne chemicals influence our behavior without our being aware of smelling anything at all. Researchers have recently obtained brain-scan images showing that certain parts of our brain, including structures that control emotion and memory, become activated in response to an airborne compound at a concentration so low that we have no conscious awareness of it (1). Growing evidence supporting the existence of an unconscious, sixth sense is found in the study of pheromones.

Pheromones are airborne chemicals produced by one animal and detected by another of the same species, and are a common form of communication between animals. Pheromones influence the behavior of the animal that senses them. For example, male pigs release the hormone androsterone when they breathe. This hormone makes female pigs eager to mate (1). Although research suggests that humans have lost the ability to receive such odor molecules through ordinary olfaction, there is evidence for another organ, one specialized in other animals for the detection of pheromones, that may play this role in humans. This organ, the vomeronasal organ (VNO), is connected to the nasal passage by a small opening about an inch behind the nostrils. Recent evidence shows that the VNO in humans serves to send pheromone-carried signals to the brain. Women rate androsterone, the male hormone, as more "pleasant-smelling" near ovulation than at the beginning or end of their cycle (1).

A study conducted at Brown University also yields evidence that odor and behavior are closely linked. More specifically, this study suggests that emotions can become conditioned to odors and subsequently influence behavior. In this study, 63 female undergraduates were asked to play a computer game that, unbeknownst to them, was designed so that they could not win. During that time, the students were exposed to a novel odor. They were then given a 20-minute break and sent to a different room to take a set of word tests. There were three rooms: one with the same scent as the room where they played the game, a room with a novel scent, and a room with no scent. Participants who performed the word tests in a room with the same scent as the computer game room exhibited increased frustration compared to the other groups of participants exposed to different smells (2).

Perhaps the greatest evidence that smell affects behavior is found in menstrual cycle synchrony. First reported in a college dormitory, this phenomenon involves the involuntary synchronization of menstrual cycles among women who live together as a result of pheromone signals (3). While it is highly likely that humans exude pheromones, no one has ever isolated and identified one.

All odorants in humans are processed through the same olfactory pathway. Odorants are collected in the sensory epithelium, located in the upper regions of the nasal cavity (4). Odorant molecules are absorbed in the mucus layer of the sensory epithelium, where they travel to receptor cells whose cilia line the nasal cavity. These cilia carry receptor proteins that are specific to certain odorant molecules (4). Binding of an odorant to a receptor protein activates a second messenger pathway. There are two known pathways to date. The more common involves the activation of the enzyme adenylyl cyclase upon the binding of an odorant molecule (5). This enzyme catalyzes the production of cyclic AMP (cAMP). The increase in cAMP levels causes ligand-gated sodium channels to open, depolarizing the membrane. This depolarization results in an action potential (5). These electrical signals are carried by olfactory receptor neurons through the olfactory bulb. The olfactory bulb then relays information to the cerebral cortex, resulting in the sensory perception of smell (5). On average humans can recognize up to 10,000 separate odors (6), yet have only about 1,000 different olfactory receptor proteins (7). It is within the olfactory bulb that combinations of odorant signals can be organized to signal the brain for specific smells (7).

Can knowledge of the mechanism of olfaction help identify what accounts for behavioral response to smells? Research involving the olfactory bulb and the memory of smells suggests that this organ may have some small role in the relationship between behavior and smell. It is well known that sensory cells in the nose die and replace themselves with new nerve cells every 30-60 days (8). This fact led investigators to question how smells, like an apple pie baking in the oven, are remembered by animals. Scientists concluded that new nerve cells send out long extensions that find their way to the same spots in the olfactory bulb where the preceding nerve cells connected. In this way, the "road map" of odors remains constant throughout life (8). Although this finding is fundamental in understanding how odor information is encoded in the brain, it remains unclear how this information is decoded.

As I learn more about the role of the olfactory bulb, I become more convinced that it plays some part in the behavioral response to smell. So far, the olfactory bulb has been shown to function as the center where olfactory combinations of molecules are formed prior to being sent to the brain; it has also been correlated with the memory of smells, serving as the "endpoint" of the road map of olfaction. Further research has shown that signals from the olfactory bulb are sent not only to the cerebral cortex, which is responsible for conscious thought processes, but also to the limbic system, which generates emotional feelings (9). These findings, as I interpret them, suggest that the role of the olfactory bulb in behavior may be to organize input signals from odorants and re-format them into new output signals that can be interpreted by the human brain. If this is the case, then the olfactory bulb would have to have some consistent method of analyzing odors; otherwise, how could the smell of an orange continually be perceived in the same manner? For example, octanol, an ingredient in natural gas and petroleum, exudes an orange and rose-like smell. By changing one atom in the molecule's structure, it becomes octanoic acid, which is characterized as a rancid and sweaty smell. The olfactory bulb must be highly specific to each of these odorant combinations in order to elicit such drastically different smell descriptions. To do this, the olfactory bulb must contain machinery that can form the same combinations of odors from various signals. Richard Axel, M.D., an investigator at Columbia University College of Physicians and Surgeons and a pioneer in the field of olfactory research, explains this concept best:

"The brain is essentially saying... 'I'm seeing activity in positions 1, 15, and 54 of the olfactory bulb, which correspond to odorant receptors 1, 15, and 54, so that must be jasmine'" (7).

The olfactory bulb must be able to consistently process signals 1, 15 and 54 as those of jasmine.

A finding by one of Axel's colleagues, Linda Buck, suggests that the concentration of an odorant may also play a role in the type of behavior. When indole, a substance found in both coal tar and perfumes, becomes concentrated, it smells horrible. However, when diluted, indole gives off a fragrance similar to that of jasmine (8). This finding suggests to me that the olfactory bulb may be overloaded with signals when odors become highly concentrated. The excessive flood of input signals to the olfactory bulb may prevent accurate olfaction from occurring. Just think of how you feel after you have walked down the perfume aisle of a department store: you are unable to smell accurately. Along these lines, the olfactory bulb may serve as the first step in behavioral response. Perhaps different people have different thresholds of olfactory signal reception, and people respond differently to the same level of an odor depending on how overloaded their olfactory bulbs are. Additionally, it would seem reasonable to propose that differences in the olfactory bulbs of people result in different interpretations of smell. This seems plausible because far less is known about the olfactory bulb than about the similarities between human brains. In fact, not all components of the olfactory bulb are known; perhaps they differ among individuals. While these are just hypotheses formulated from what I have learned thus far about olfaction, they are worth exploring further.

Our sense of smell is nothing to sniff at; it serves many functions in our lives and affects our behavior, emotions and memory. While investigation into the mechanism of olfaction has proved beneficial in understanding how signals are sent to the brain, there is still very little known about how those signals are interpreted and received by the brain. I am certain that by understanding how olfactory signals are interpreted in the brain, we can learn more about why certain odors elicit different behaviors in people. For now, it seems as though the olfactory bulb may have a larger role in behavior than initially suspected. Not just an organization center, the olfactory bulb may also function as the beginning phase of behavioral response to smell.

References

1) A Secret Sense in the Human Nose: Pheromones and Mammals
2) Odors Summon Emotion and Influence Behavior, New Study Says
3) Pheromones: The Smell of Beauty
4) Monell Chemical Senses Center – An Overview of Olfaction
5) Lancet, Doron. "Vertebrate Olfactory Reception." Ann. Rev. Neurosci. 9 (1986): 329-355.
6) The Mystery of Smell: The Vivid World of Odors
7) The Mystery of Smell: How Rats and Mice – and Probably Humans – Recognize Odors
8) Researchers Sniff Out Secrets of Smell
9) Sensing Smell


Night Shift Effects
Name: Elizabeth
Date: 2004-05-05 22:19:57
Link to this Comment: 9798


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

In emergency services, the night shift from 19:00 to 06:00 hours must be staffed. Those who work these shifts are familiar with the consequences of staying up all night and then having to go to class or to another job during the day. EMTs who work all night experience drowsiness, which is believed to coincide with poorer reaction times (1). This paper will examine the cause of the slowed reaction times and sleepiness found in those who work the night shift.

Circadian rhythms are cycles in physiological functions such as sleep (2). Most circadian rhythms are controlled by the suprachiasmatic nucleus, or SCN (2). The SCN is a pair of brain structures that together contain about 20,000 neurons, located in the hypothalamus just above where the optic nerves cross (2). Light that reaches photoreceptors in the retina creates signals that travel along the optic nerve to the SCN (2). The clock mechanism controls sleeping patterns in response to light cues through cyclical changes in the concentration of the hormone melatonin (3). Neurons from the SCN connect to the pineal gland, where melatonin is produced (4). When signaled, melatonin is released into the blood and acts on receptors throughout the brain and body to alter functions associated with the sleep/wake cycle. The body's level of melatonin normally increases after the sun sets and acts to inhibit systems in order to promote sleep (3). Melatonin can affect multiple body systems, including the reticular activating system (RAS), which is responsible for the alertness of an individual.

The RAS is composed of parts of the medulla oblongata, the pons and the midbrain (5). This region receives sensory signals from the environment and from other regions of the brain and coordinates them to produce an output (5). In the reticular activating system, gamma-aminobutyric acid (GABA) receptors are inhibitory (6). Molecules binding to these receptors cause chloride ion channels to open. The influx of chloride ions hyperpolarizes the postsynaptic neuron and prevents a signal from being passed on (7). When regions of the RAS are active, nerve impulses travel to other areas of the brain, increasing activity associated with consciousness, such as faster reaction times. Among other areas, the RAS can send signals to the motor cortex (8). The motor cortex coordinates the movement of muscles in response to the sensory inputs the brain receives (9). If signaling from the RAS to the motor cortex is slowed by molecules that stimulate GABA receptors, the central nervous system will take longer to send action potentials to motor neurons to cause a movement.
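
The hyperpolarizing effect of opening chloride channels can be illustrated with a toy steady-state calculation, in which the membrane potential settles at the conductance-weighted average of the open channels' reversal potentials. This is a standard textbook simplification, not a model from the sources cited here, and the conductance and potential values are purely illustrative.

```python
def steady_state_vm(channels):
    """Steady-state membrane potential (mV) as the conductance-weighted
    average of reversal potentials over all open channels."""
    total_g = sum(g for g, _ in channels)
    return sum(g * e for g, e in channels) / total_g

# Illustrative values: a leak conductance holding the cell near rest, and a
# GABA-gated chloride conductance whose reversal potential lies below rest.
leak = (1.0, -65.0)   # (conductance, reversal potential in mV)
gaba = (0.5, -75.0)

rest = steady_state_vm([leak])
inhibited = steady_state_vm([leak, gaba])
# Opening the chloride channels pulls the membrane potential below rest,
# i.e. hyperpolarizes the neuron, making it harder to fire an action potential.
print(rest, inhibited)
```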

Melatonin may allosterically bind to GABA receptors, producing an inhibitory effect in the RAS that causes the increased drowsiness and poor reaction times observed in night shift EMTs (10). At night, the SCN signals increased production of melatonin because no sunlight is present. Because of this greater concentration, enough melatonin is present to bind to GABA receptors in the RAS (10). During the night shift, the individual remains awake, but the SCN continues to signal sleep through the presence of melatonin. This continued presence of melatonin inhibits neurons and leads to a drowsy feeling and slowed response times.

To test the theory that slowed reaction times correspond to drowsiness, I collected data over one of my night shifts. I slept approximately eight hours and tested my reaction times every two hours from the time I woke up through the end of my shift. The day I chose to collect data was sunny, so that my retinas would be able to send the sensory information of sunlight to my SCN. I stayed up all night through my eleven-hour shift (19:00-06:00) and tested my reaction times on an internet test site (11). The test involved hitting a stop button with a mouse cursor upon seeing the screen color change from white to dark pink. The screen color changed at varying times to reduce the possibility of conditioning to a specific interval. Results are included in Table 1. I believe that if a sensory signal, for example a changing screen color, is received by the neurons in the retina at night, the signal is processed through the RAS more slowly due to the presence of melatonin. The post-synaptic neuron may not send a signal as quickly to the motor cortex. The motor cortex must then send an action potential to muscle neurons in order to cause the observed output of clicking a mouse button to indicate that the screen has changed color.

Table 1,

As this test shows, poorer reaction times are associated with working a night shift. The RAS receives the visual stimulus of the changing screen from the optic nerve, but its neurons take longer to relay this sensory stimulus to the motor cortex due to the presence of melatonin. Reaction times appear to remain constant until an increase around 02:00. The increased reaction times that occurred at 02:00 were also associated with a feeling of sleepiness. This observation is consistent with the hypothesis that the RAS is responsible for both the control of alertness and the relaying of sensory signals to the motor cortex: if the RAS underlies both, increased melatonin at GABA receptors in this region should affect both at approximately the same time. Once the sun has set, reaction times and alertness may not be immediately affected because my rhythms have been set to stay up later by my normal sleep pattern of falling asleep between 01:00 and 02:00. Melatonin release may be delayed until this time. Also, there may be a several-hour delay before sufficient melatonin concentrations exist in the blood stream to affect the GABA receptors. The improvement in reaction times between 06:00 and 08:00 coincides with sunrise and an increased feeling of alertness. This may be because the SCN signals the repression of melatonin production when the optic nerve signals the presence of sunlight, so that less melatonin is present in the RAS. In the absence of melatonin, the neurons of the RAS can send signals to the motor cortex more efficiently. Overall, this data set provides some evidence to suggest that night shift workers' reaction times are slowed by the interaction of the SCN, melatonin, and the RAS.
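
A comparison of this sort can be summarized as percent slowdown relative to an early-shift baseline. The readings below are placeholder values chosen only to show the calculation; they are not the measurements from Table 1.

```python
# Hypothetical reaction-time readings (time of day -> mean reaction time, ms).
# Placeholder numbers for illustration only -- NOT the data in Table 1.
readings = {
    "20:00": 280, "22:00": 285, "00:00": 290,
    "02:00": 350, "04:00": 345, "06:00": 330, "08:00": 300,
}

def percent_slowdown(baseline_ms, sample_ms):
    """Slowdown of a sample relative to a baseline, as a percentage."""
    return 100.0 * (sample_ms - baseline_ms) / baseline_ms

baseline = readings["20:00"]
for clock_time, ms in readings.items():
    print(clock_time, round(percent_slowdown(baseline, ms), 1))
```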

The data set collected seems to support the hypothesis that emergency medical technicians have slowed reaction times when working at night, and that these correspond with drowsiness. In order to have more reliable results, my study would need to be repeated multiple times. Further fMRI data could also be used to study the specific brain regions involved in drowsiness and slowed reaction times. Through my informal study and the observations made by others, the cyclic nature of melatonin concentration controlled by the SCN appears to affect neurons in the RAS, causing the feeling of drowsiness and increased reaction times.


References

1) A Literature Review on Reaction Time, This is a good site for resources on the study of reaction times.

2) Sleep and Circadian Rhythms, Background information on the SCN and circadian rhythms.

3) Learn about fireflies, biological clocks, and using VCR codes, A USA TODAY online article that discusses the role of melatonin and the effect of light on the SCN.

4) Third Eye - Pineal Gland, Diagrams and lots of information on the Pineal Gland.

5) Sleeping Disorders, This site provides a discussion of the reticular activating system.

6) CNS Depressants: Sedative-Hypnotics, The role of GABA receptors in sleepiness.

7) Synapses , A discussion of the different types of synapses.

8) ADD ADHD: Reticular Activating System, Evidence that the RAS signals the motor cortex through studying ADD.

9) Probe the Brain, Pictures and description of the motor cortex.

10) The optic tectum of the salmon: site of interaction of neurohormonal photoperiodic and neural visual signals, The GABAergic neuronal system and melatonin receptors; interaction of GABA and melatonin.

11) Test your reaction time, This is the test I used when determining my reaction times.


Understanding Meditation Through the Central Nervo
Name: Hannah Mes
Date: 2004-05-06 18:22:39
Link to this Comment: 9803


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


It has been suggested in class that a disconnect of information exists between the I-function and the central nervous system, a concept I found intriguing because my experience with Vipassana, a Buddhist meditation technique, had made the gap between my conscious and unconscious less sharp. The class discussions challenged me to think about meditation in terms of a series of physiological responses that could be observed, documented, and analyzed. The broader implication of this explanation was that the mental state achieved through meditation did not require discipline, but could feasibly be induced by chemicals. While attractive for a variety of reasons, I resisted this methodology as overly simplistic. Part of my experience with meditation has been mental transformations that were only achieved through determination, persistence, and patience. Individual commitment to these mental states is not factored into an explanation that draws primarily from transformations in the central nervous system. Ultimately, the idea that the central nervous system could describe a meditative state was incomplete because it could not quantify the mental changes I experienced.

Before this course, I had only considered meditation in terms of mental transformations that had occurred in my conscious mind, or I-function. This explanation made sense because the most noticeable changes were in my general attitude, my interactions with other people, and a pervasive feeling of balance. I knew that meditation had impacted my mental state, as I felt extremely relaxed, but what occurred in my central nervous system and how much this influenced my altered state of consciousness remained unclear. I researched these changes because they challenged my understanding of Vipassana as a purely mental practice. After learning about the physical changes in my central nervous system, I stopped thinking of my I-function and central nervous system as acting independently of each other. Instead of working autonomously, each shares information with and is deeply influenced by the other.

In a preliminary discussion of this topic, I explored the similarities and differences between my experiences with meditation and the experiences of other meditators, such as Pilou Thirakoul and Dr. James Austin. (See "Shifting Realities Through Vipassana Meditation" (1).) I ended with the following question: "When information passes from the central nervous system to the I-function, what changes can be observed at a gross, physical level as well as at a subtle, chemical level?"
This paper will delve deeper into a discussion that addresses the specific transformations that are occurring in my central nervous system and how this shifts my understanding of meditation in a broader sense. These physiological changes, which have been knowingly induced by the meditator, have real consequences on the individual's mental state. By examining these changes from the physical level where information is quantified and concrete one can begin to theorize about the abstract changes at a psychological/mental level.

Through research I discovered that these sensations of relaxation include a "generalized reduction in multiple physiological and biochemical markers, such as decreased heart rate, decreased respiration rate, decreased plasma cortisol (a major stress hormone), decreased pulse rate, and increased EEG (electroencephalogram) alpha, a brain wave associated with relaxation."

I realized that there was a change in my brain waves when I was meditating, but I could not conclude whether or not this had an effect on my I-function, or conscious mind. Alpha brain waves, oscillating in the range of 7.5-13 cycles per second, occur in meditation and hypnosis (2). Although some scientists have argued that the presence of alpha brain waves suggests deeper thinking and a propensity for creativity, others have argued that these waves simply occur when there is little visual or sensory input. I argue that although these waves may not themselves induce an altered state of consciousness, they have a direct relationship with the mental states of a meditator. I know from my own experiences with Vipassana that the combination of these physiological markers is linked to states of deep relaxation.
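
The frequency figure mentioned here can be placed among the conventional EEG bands with a small classifier. Band boundaries vary between sources; the alpha range below follows the paper's 7.5-13 cycles-per-second figure, and the other cutoffs are common approximations, not values from the sources cited.

```python
def eeg_band(freq_hz):
    """Classify an oscillation frequency (Hz) into a conventional EEG band.
    Boundaries are approximate and differ across sources."""
    if freq_hz < 4.0:
        return "delta"
    if freq_hz < 7.5:
        return "theta"
    if freq_hz <= 13.0:
        return "alpha"   # the range associated with meditation and hypnosis
    if freq_hz <= 30.0:
        return "beta"
    return "gamma"

print(eeg_band(10.0))  # alpha
```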

For example, on my 10 day Vipassana course all students take a vow of "Noble Silence" in which one abstains from any type of verbal or gestural communication so as to maintain an environment that is conducive to intense meditation. By limiting the amount of sensory input that the I-function receives, an individual can focus on internal changes with less distraction. By closing one's eyes and "self" off from the rest of the external world, the workings of one's "internal" world become more apparent.

In an experiment done by Benson and Wallace at the Harvard Medical School in 1963, the researchers found that certain physiological changes occurred during meditation. The meditators all demonstrated a fall in metabolic rate, demanding approximately 20% less oxygen after a few minutes of meditation. General activity in the central nervous system slowed down, illustrated by the predominance of the parasympathetic branch of the autonomic nervous system, which is responsible for the sensation of relaxation. They concluded that the state experienced by the meditating subjects could be described as "wakeful and hypometabolic" and that meditation produces a "complex of responses that marks a highly relaxed state" (4).

I began with the premise that an altered state of consciousness experienced during meditation could be understood as a series of physiological transformations in the central nervous system. With that said, I began to wonder whether this state could be simply a series of chemical responses that could be systematically induced by other chemicals, or whether the repetition and work that are part of the traditional method of meditation were just as important. If my mental state could be accounted for in terms of physical changes, then did any of my mental changes occur independently? What was my role in proactively changing my state of consciousness? Was my experience actually the effect of chemicals in my parasympathetic nervous system that were the by-product of my mental states, or was my mental state the by-product of these chemical transformations?

Although I know that there is a relationship between these physical and mental changes, I make no claims as to the order in which they occur or how deeply each influences the other. I can confidently assert that although meditation can be understood in terms of the central nervous system at a physical level, this provides only a superficial understanding of meditation. The physiological answers do not describe my experience of changes in my mental state.

Meditation has produced mental states of deep relaxation and increased awareness. Situations that were previously confusing or hard to analyze became clearer. As a result, I felt more emotionally, mentally, and physically balanced. After working through a particularly difficult meditation session I felt a sense of achievement and mental strength. This could not have been achieved through chemical means, but only by the experience of long meditation sittings over a substantial period of time. I argue that even if this desirable state could be reached in an easier way, (by chemicals), it would not be recommended.

When one has practiced this method of meditation, the response becomes learned over a period of time. One can choose to experience that same sensation of relaxation and balance without any chemical prompt. The discipline involved in meditation is great, but the result is a mental transformation with far-reaching consequences. The physiological changes that occur with meditation should not be mistaken for meditation itself or for the experience of enlightenment. Instead, the central nervous system should be seen as changing due to the will of the I-function. Enlightenment does not occur spontaneously; it is a final goal that must be worked towards.

The relationship between my central nervous system and I-function has always existed, but it is only recently that I have become more aware of it. I was surprised to find that there was so much activity in my central nervous system that I had been "asleep" to. This has led me to believe that my CNS is another part of my mind that is already awake unto itself, and that through an "awakening" of my bodily sensations, I can "awaken" to the sensations (or states) of my mind as well. While these physical changes help describe the physical sensations associated with meditation, they fail to describe the mental sensations that occur simultaneously. Inducing a physical state can create an environment in which a specific mental state can be achieved. Other mental transformations must occur for this to happen otherwise all beings could achieve enlightenment through a series of different chemical reactions.


References


1) First Biowebpapers,

2) Alpha Waves,

3) Austin, James. Zen and the Brain. New York: Yale University Press, 1999

4) Holistic Meditation,

5) Organization of the Central Nervous System,


Health: Mind and Society III
Name: Aiham Korb
Date: 2004-05-06 20:55:33
Link to this Comment: 9804


<mytitle>

Biology 202
2004 First Web Paper
On Serendip


In the previous papers, Health: Mind and Society I and II, we established that the interactions of many different variables are responsible for health and disease outcomes. We also saw how psychosocial factors influence physiological homeostasis by affecting the neuro-endocrine and immune systems. These connections, noted in the biopsychosocial model, have helped us understand the importance of environmental influences on the body. Drawing from the links noted in the last paper between stress and socio-economic status, we will continue to focus on the biological effects of stress in disease progression. Some of the studies we will mention further suggest how societal forces and structures are most often the source of stress. For now, let us take a step back and recall the experiment about stress and socio-economic class.

In the last paper, we looked at a study by Brydon et al., in which the experimenters found significant differences in heart-rate and interleukin-6 (IL-6) recovery between two SES groups (high and low) after exposure to a stress-provoking task in the laboratory. The conclusion was that people of low SES have a "dysfunctional adaptive response" to psychological stress due to chronic stress-related increases in IL-6 and HPA activity (1). This is to say that, as a result of chronic stress, those of low SES have problems maintaining and recovering bodily homeostasis. The article tells us that "IL-6 is sensitive to psychological stress" (1). This is a key point, as we have already spoken of some of the harmful effects of increased IL-6 levels. Yet, in addition to these negative effects, stress-related increases in IL-6 may also have considerable implications for general morbidity and mortality, particularly in old age (1). Indeed, besides the risk of cardiovascular disease, elevated IL-6 levels are associated with a spectrum of other age-related conditions including osteoporosis, cancer, stroke, arthritis, dementia, type 2 diabetes, frailty and functional decline (2). Such findings are of great relevance to our study of stress. For example, they may imply that chronically stress-inducing environments, to which people of low SES are usually exposed, actually increase the likelihood of future morbidity and mortality. These chronic stress-related increases in levels of IL-6 may in fact be accelerating the aging process.

Another phenomenon pointing in the direction of these claims is the overworking of the HPA axis in the elderly. Aging is associated with changes in the function and regulation of the HPA axis, including higher cortisol levels and slower neuroendocrine recovery from stress (2). These patterns are highly similar to those we observed in the low-SES group in the Brydon experiment. This should make sense, because IL-6 stimulates the HPA axis. And as we have seen in the first two papers, the overworking of the stress and neuro-endocrine responses causes immunomodulation (a suppression of the immune system). HPA hyperactivity also corresponds to other negative health outcomes, such as central obesity, hypertension, insulin resistance, and dyslipidaemia (all risk factors for coronary artery disease) (1). These parallels in IL-6 levels and HPA hyperactivity between chronically stressed individuals and those experiencing old age suggest a valid relationship between stress and the aging process. It is highly possible that chronic exposure to stressful environments not only increases susceptibility to disease, but also speeds up the physiological aging mechanisms. Thus far, we have seen some of the effects that environmental stress has on the onset of disease; we will now turn to its role in the presence of malady.

Of the possible causal pathways by which stress influences health, we have so far been mainly concerned with the "direct effects". In this pathway, we showed how stress leads to disease via physiological responses that may severely disturb homeostasis, such as high blood pressure, HPA hyperactivity, and high cortisol and IL-6 levels. There are also the "indirect effects" of stress, which work by producing unhealthy behavior changes, such as sleep deprivation, substance abuse, etc. However, it is the "interactive effects" which will draw our attention in this paper.

In the presence of disease, the "direct effects" largely shift to "interactive effects," as the negative physiological responses to stress interact with those caused by the disease. This intensifies the progression of the disease, and may even lead to its exacerbation. In the case of AIDS, for example, there is strong evidence linking hyperactivation of the physiological stress response with the progression of the disease. In fact, two pathways have been investigated as potential mediators of the effects of psychosocial factors on HIV progression: the HPA axis and the sympathetic nervous system (SNS). In vivo and in vitro studies have helped develop a detailed pathway linking activation of the SNS to HIV progression. "Individuals who demonstrated higher levels of sympathetic nervous system (SNS) activity to a variety of challenging laboratory based tasks show poorer suppression of viral replication following initiation of highly active anti-retroviral therapy" (3). The malfunctioning of the neuroendocrine and immune systems, which worsens the state of the disease, is largely influenced by psychosocial factors, as we have demonstrated before. Kemeny also found that the psychological (or mental) states of HIV-positive individuals predicted the progression of the virus. Her studies found that two cognitive appraisals were highly correlated with AIDS onset and mortality: negative expectancies about future health, and negative appraisal of the self. These psychological factors (pessimism, negative affect, etc.) were associated with more active physiological stress responses, and therefore with negative health outcomes as well.

In order to account for psychological states, we must consider them in the larger context of the psychosocial environment. Indeed, Kemeny asserts that the cognitions mentioned above were shown to be highly associated with a trait termed "rejection sensitivity" (3). While this may be understood as a personality trait that varies among individuals, rejection sensitivity (and personality in general) is strongly shaped by experiences and the environment. "One context that can enhance the likelihood of chronic negative views of the self is a family history of rejection" (3). An example Kemeny cites is a study of HIV-positive men which found that rejection sensitivity around one's homosexuality predicts a more rapid onset of AIDS and accelerated mortality (4). In this case, it is certainly factors of the social environment (prejudice, negative attitudes, homophobia, lack of acceptance, etc.) that are the major influences responsible for inducing low self-appraisal and rejection sensitivity. Indeed, environmental and contextual factors largely affect health outcomes. As pointed out before, this is not restricted to humans. Animal studies using a primate model of HIV found that social stressors, such as separation and housing changes, predict accelerated disease progression in infected animals (5). Thus, changes within the immune system have been demonstrated to correlate with social stress, support (or lack thereof), negative affect, and other such psychosocial factors. These experiments constitute a small sample of the accumulating evidence supporting the significant role that environmental circumstances (generally) and psychosocial factors (specifically) play in health.

Social isolation and the lack of social support predict morbidity and mortality from cancer, cardiovascular disease, and several other causes (6). Since stress is not only perceived personally, but also through the prism of social interactions, social environments may lessen or exacerbate the physiological responses to stress. Social relationships, and societal structures on a larger level, may act as "buffers" against, or catalysts of, infection and the progression of disease. Studies have shown that people exposed to such chronic social stresses for more than two months have an increased susceptibility to the common cold (7). Loneliness, in particular, is a relevant and important factor in pre-disease pathways, and is a major factor in the mental health of cancer survivors (6). The diagnosis of cancer has indeed been associated with increased dysphoria (a state of profound unease or depression), family problems, and feelings of loneliness and isolation. Social isolation is also found to correlate with increased risk of death from cancer as well as stroke (6). The physiological effects associated with chronic (social) stress unfold over long periods of time, increasing vulnerability to a host of different diseases, including cancer. The biopsychosocial model supports this argument: within the "psychosocial processes," resources of social support influence (directly, indirectly, and through complex interactions) "health behaviors" and "life stress," and therefore impact the functioning of the neuroendocrine and immune mechanisms. This in turn may affect vulnerability, disease onset and progression, and finally survival and the quality of life (2).

Considering the fact that heart disease and cancer constitute the two leading causes of death in the U.S., we should by now begin to question more seriously the environment that fosters such causes of mortality. In the search for cures and medications, which may only reduce a disease's symptoms, we should also be looking to prevent these pathologies by identifying and eliminating their sources. For example, the SES experiment discussed in the previous paper found significant associations between the environment to which people of low SES are exposed (usually characterized by chronic stress and low social support) and their negative (physiological) health outcomes (1). This implied link between socioeconomic inequalities and poor health consequences is extremely important. It forces us to reconsider our limited and rather biased perception of, and approach to, health. Psychoneuroimmunology invites us to do the same thing. Just as the word "neuro" lies at the center of the term, psychoneuroimmunology considers the nervous system the central link between the psychological state and the functioning of the immune system (the body's main defense system). Rather than focusing only on the physiological aspect of health and malady, psychoneuroimmunology proposes a more comprehensive and interdisciplinary approach to health. Such an integrative model may help to explain, for example, why a healthy economy does not necessarily mean a healthy population (which is especially the case in the U.S., where social and economic structures are based on competition and inequality rather than on cooperation and social support). Finally, the integration of these and other interacting variables into the way we define and approach well-being will be a necessary step towards "getting it less wrong".


Sources:


1) Socioeconomic status and stress-induced increases in interleukin-6, By Brydon, Edwards, Mohamed-Ali et al. Brain, Behavior and Immunity 18. 2004. p. 281-290.

2) Psychoneuroimmunology and health psychology: An integrative model, By Erin Costanzo and Susan Lutgendorf. Brain, Behavior and Immunity 17. 2003. p. 225-232.

3) An interdisciplinary research model to investigate psychosocial cofactors in disease: Application to HIV-1 pathogenesis, By Margaret Kemeny. Brain, Behavior, and Immunity 17. 2003. p. 62-72.

4) Social Identity and Physical Health: Accelerated HIV Progression in Rejection-Sensitive Gay Men. By Cole, S., Kemeny, M., and Taylor, S. Journal of Personality and Social Psychology 17. 1997. p. 320-335.

5) Social separation, housing relocation, and survival in simian AIDS: a retrospective analysis, By Capitanio, J. and Lerche, N. Psychosomatic Medicine 60. 1998. p. 235-244.

6) Loneliness and pathways to disease, By Louise Hawkley and John Cacioppo. Brain, Behavior, and Immunity 17. 2003. p. 98-105.

7) The Mind-Body Interaction in Disease. By Esther Sternberg and Philip Gold. Scientific American. 2002.


Motherless Brooklyn: Living with Tourette's Syndrome
Name: Chevon Dep
Date: 2004-05-07 10:10:05
Link to this Comment: 9808


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip

The label freak has taken on several meanings over the years. It is no longer limited to those in the circus such as the bearded lady, the hairiest person, or the lobster-claw people. It has spread to include a variety of people who are often considered outcasts by society. For example, people with Tourette's Syndrome have been categorized as freaks because of their uncontrollable actions, such as cursing and twitching in public. The lack of knowledge the public has of Tourette's Syndrome leads people to call those living with this disorder horrible names such as retard, idiot, and freak. In Motherless Brooklyn, Jonathan Lethem explores the abuse that someone living with Tourette's Syndrome encounters. The reader is able to observe how the main character, Lionel, portrays his disorder and how others in the novel portray him. Although Motherless Brooklyn is a novel, it does address what Tourette's Syndrome is and how one lives with it on a daily basis.

Tourette's Syndrome is a neurological disorder that causes uncontrollable physical or verbal outbursts. Nina Burleigh writes that the uncontrollable outbursts are due to the brain's receptors for dopamine not working properly. (1) Dopamine is a neurotransmitter that helps control inappropriate impulses to move or speak. One noticeable feature of Tourette's Syndrome is tics, the involuntary motor and vocal movements and sounds. (2) Tics are commonly noticed in early elementary-age children. Although tics are common at an early age, there are several cases in which the diagnosis of Tourette's Syndrome does not occur until years later. For example, Nancy Dreher writes of a Canadian surgeon who began having tics at age seven but was not diagnosed until he was thirty-seven. (3) This shows that there is a lack of knowledge about the disorder and that more research needs to be conducted. Since many patients do not know how to classify the tics, they attempt to hide them from the public. Lionel explains, "Of course I was vibrating too, vibrating before Minna rounded us up, vibrating inside always and straining to keep it from showing." (4) Since Lionel did not know what was wrong with him, he felt it was best to conceal it for as long as he could.

According to Kelly Prestia, "As a child develops and matures, tics may become more complex and may appear in facial gestures or movements that imitate others, or completely different tics may occur." (2) This is the case for Lionel in Motherless Brooklyn, where his nervousness sets off his tics. Lionel says, "I felt myself knitting my brow exaggeratedly, a tic, and wanted to tell him to wipe the grin off his face: Everything he was seeing was not to his credit." (4) He could not control what the doorman was observing, because he could barely control his own actions. Common tics include eye blinking, shoulder shrugging, grimacing, head jerking, yelping, and sniffing. Tics can also pose a danger to the body. Prestia writes, "Motor and vocal tics can cause excess wear and tear on the individual's body, causing damage to organs, muscles, and joints." (2) In some instances, medication can be used to reduce the symptoms.

The more complex tics involve several muscle groups, which leads to more involuntary actions. (3) Dreher writes, "A small number of people with Tourette's Syndrome may also have a compulsion to shout obscenities, something called coprolalia, or to constantly repeat the words of other people, called echolalia." (3) Throughout the novel, Lionel has these same compulsions. For example, Lionel replies to Minna's question by saying, "Scott Out of the Canyon! I don't know why, I just –fuckitup—I just can't stop." (4) Instead of repeating the exact words of Minna, Lionel mixes the letters up and also screams out obscenities. Due to the progression of the disorder, Lionel is unable to stop the outbursts. However, he does attempt to suppress them on several occasions. For example, Lionel recalls, "Language bubbled inside me now, the frozen sea melting, but it felt too dangerous to let out." (4) Like many Tourette's Syndrome patients, Lionel experiences embarrassment and anxiety because others do not understand or accept his tics, and he therefore tries to suppress them. Prestia argues, "Although tics are involuntary due to neurological basis, some individuals can "hold in" their need to release tics until the tics can be released at an appropriate time or place. This is extremely difficult, however, and may cause the tics to intensify when they are released." (2) Since it is a neurological disorder that involves the control of movements, there is not much a patient can do to contain the actions and the outbursts. In his attempt to suppress the tics, Lionel undergoes some changes, which eventually lead to a violent release of the tics. He says, "So I kept my tongue wound in my teeth, ignored the pulsing in my cheek, the throbbing in my gullet, persistently swallowed language back like vomit." (4) Lionel even recognizes that it is a rare occasion when he can actually get through a moment without ticcing.

Tics are not the only characteristic of Tourette's Syndrome. Prestia points out, "Many individuals with Tourette's Syndrome have comorbid diagnoses, such as learning disabilities, obsessive-compulsive disorder, and attention deficit/hyperactive disorder." (2) But what is the connection between Tourette's Syndrome and these other diagnoses? Since Tourette's Syndrome involves rapid involuntary movements, traces of the other diagnoses can develop. For example, a Tourette's Syndrome patient may have a habit of imitating the actions and repeating the words of others, which can also be categorized as a symptom of obsessive-compulsive disorder. In the novel, Lethem refers to the tics as compulsions, which suggests their connection to obsessive-compulsive disorder. Lionel says, "For me, counting and touching things and repeating words were all the same activity." (4) Since such activities are part of both Tourette's and obsessive-compulsive disorder, it is difficult to make a distinction between the two. This lack of distinction may be one reason Tourette's Syndrome goes undiagnosed. Although there are several disorders associated with Tourette's Syndrome, it is believed that Tourette's Syndrome does not directly affect intelligence, and many students with Tourette's Syndrome have average or above-average IQs. (2) It is the combination of Tourette's Syndrome and the other disorders, rather than Tourette's alone, that can influence the overall intelligence of someone who has them.

On several occasions, Lionel attempts to separate his Tourette's self from his actual self. For instance, he says, "Bailey was a name embedded in my Tourette's brain, though I couldn't say why. I'd never known a Bailey." (4) Lionel is making it clear that his Tourette's brain has its own set of characteristics, and should remain independent. This distinction also shows that Lionel cannot control his Tourette's brain. When Lionel talks about the compulsions, he states that his brain is trying to create new tics. (4) The creation of tics by the Tourette's brain and Lionel's suppression of them turns into a battle that Lionel cannot seem to win. Lionel comments, "The freak show was now the whole show, and my earlier, ticless self impossible anymore to recall clearly." (4) His life before Tourette's Syndrome slowly fades away as the disorder progresses. Later in the novel, he refers to Tourette's as his other name. (4) He begins to accept the fact that he has Tourette's Syndrome and that it is part of his life. Lionel decides to learn more about the disorder so that he can live a 'normal' life. He says, "I read books about the drugs that might help me, Haldol, Klonopin, and Orap, and laboriously insisted on the Home's once-weekly visiting nurse helping me achieve a diagnosis and prescription, only to discover an absolute intolerance: The chemicals slowed my brain to a morose crawl, were a boot on my wheel of self." (4) Although there are drugs available to reduce the frequency and severity of the symptoms, they also come with side effects that can make living with Tourette's Syndrome worse.

People with Tourette's Syndrome have to deal not only with the inability to control their own movements but also with constant misconceptions about their disorder, such as that they are lazy, weird, or badly behaved. These misconceptions are particularly detrimental to school-aged children, who have to deal with being teased and viewed negatively by their teachers and peers. They may begin to internalize the misconceptions and believe them to be true. According to Prestia, "Many students with Tourette's Syndrome are at risk for developing poor self-esteem and self-confidence, in some cases, leading to depression." (2) Since he is called names such as half fag and freak by Minna, Lionel begins to develop low self-esteem. Lionel describes himself as "undersold goods, a twitcher, and a regrettable, inferior offering." (4) These negative labels imposed on Lionel show that there is a lack of knowledge of the disorder. As long as there is a lack of knowledge about Tourette's Syndrome, the misconceptions will remain.

In order to increase awareness of Tourette's Syndrome, it is necessary to have informational sessions in schools. This will allow both students and teachers to become more familiar with the disorder. Once teachers understand that it is not a matter of students being lazy or behaving badly, they can develop strategies that build the children's esteem and lead to better academic performance. Since Tourette's Syndrome is often accompanied by other disorders, breaking down assignments and giving students work in smaller sections can limit the number of incomplete assignments. It is equally important for peers to understand the characteristics of Tourette's Syndrome so that the cycle of misconceptions can be broken. Living with Tourette's Syndrome is not a choice, and for members of society to attach a negative label does not make Tourette's Syndrome patients' lives better in any way; instead, it makes it even more difficult to combat the disorder.

References

1) Burleigh, Nina. "Why She Couldn't Stop Cursing." Redbook. Nov. 1998: 1-6.
2) Prestia, Kelly. "Tourette's Syndrome: Characteristics and Interventions." Intervention in School & Clinic. Nov. 2003: 1-10.
3) Dreher, Nancy. "What is Tourette Syndrome?" Current Health. Oct. 1996: 1-5.
4) Lethem, Jonathan. Motherless Brooklyn. New York: Vintage Books, 1999.


The Credibility of Rational Emotive Behavior Therapy
Name: Michelle S
Date: 2004-05-07 12:12:43
Link to this Comment: 9809


<mytitle>

Biology 202
2004 First Web Paper
On Serendip

Rational Emotive Behavior Therapy (REBT) is a cognitive-behavioral treatment developed by Dr. Albert Ellis, and falls under the behaviorist school of psychology. Cognitive-behavioral therapy focuses upon the individual's ability to make significant changes in their life without understanding why the change is occurring. Ellis is a clinical psychologist who trained in psychoanalysis. The foundation of REBT was Ellis's desire to improve the outcomes of his patients. He became discouraged by his clients' lack of progress, and attempted to create a program that would alter the perceptions and restrictions individuals impose upon themselves, which prevent them from obtaining self-confidence and success in their lives (1). However, can REBT be considered an effective method of treatment, specifically when the therapy opposes any effort to find the roots and causes of patient problems, and wholly emphasizes ethical egoism during the progression towards recovery?

REBT is based upon the principle that human emotions and behaviors are the direct product of what individuals believe, presume, and think. These beliefs influence how people perceive themselves, others, and the surrounding environment. Ellis also identifies an individual's biology as a partial influence upon their thoughts. He felt that many of his patients' beliefs were distorted and incorrect, and consequently caused them to make false assumptions. Such beliefs are categorized as irrational thinking, and cause unfounded guilt, depression, and feelings of worthlessness. Ellis established two specific requirements that define an irrational belief. The first is that it obstructs an individual from pursuing their objectives and creates consistent negative emotions, which cause stress and confusion; this emotional strain leads the individual to damage themselves, others, and their life. The second is that it alters reality, and is a false impression of what actually occurred. Ellis provides examples of irrational beliefs, and specifically outlines three main ideas that appear to dominate patient thoughts. The first is the idea that individuals must be exceptionally competent, or they are worthless. The second belief is that other individuals must treat them well, or they are awful, and the last is that the world should provide people with contentment, or they will die (1).

Irrational thinking generally occurs in the subconscious, and is therefore difficult to control. To combat these illogical beliefs, Ellis created a Rational Self-Analysis diagram, which patients can use to think through their behavior. Yet many psychologists, psychiatrists, and academics criticize Ellis's theory. One of the most significant arguments against REBT is that the therapy requires individuals to alter their irrational beliefs, but pays no attention to how and why these irrational beliefs were first acquired. Therefore, the client has no point of reference when attempting to modify belief systems that are deeply embedded in their identity. Another considerable problem with REBT is that the framework relies upon self-interest. The patient analyzes their own thoughts, considerations, and actions, and is taught that the perception of others is of no consequence. Although this may help patients become more autonomous in decision-making, patients must understand that there is a social-interest component in their belief system. Patients must be aware of the sentiments and principles of others, without allowing them to oppress their own choices. However, Ellis supports self-interest above social interest, which becomes more of a secondary aspect of REBT (2).

A number of case studies report the progress of clients utilizing the techniques of REBT. One example is a college student who is having trouble living with his roommate. The student could be fearful of confronting his roommate on issues because of irrational beliefs: he is fearful of not being well liked. This ultimately leads to avoidance behavior regarding his living conditions. According to the theories of REBT, the student must correct his irrational beliefs by realizing that individuals can, and are willing to, change their behavior. It is also acceptable for people to voice their opinions and share their feelings about situations. The student must realize that there is no need to be unhappy if he is disliked by his roommate. Everyone seeks admiration and esteem, but they are not necessary for contentment. Another example of a patient utilizing Ellis's teachings is an overachiever. The patient only feels competent and worthy of love when he performs well in school, excels at sports, and is praised at his job. If the patient has a discouraging experience on an exam, he feels insignificant and worthless. The associated irrational belief in this situation is that the patient must do well to deserve affection and love. The patient must ultimately realize that love is unconditional, and is not dependent upon his abilities, or lack thereof. The patient may do poorly on an exam, and recognize his feelings of unhappiness over it, but not allow himself to become insecure about relationships.

The purpose of Rational Self-Analysis, and of REBT in general, is based upon Ellis's realization that patients can contribute to their own progress by examining the validity of their beliefs. Although the theory is criticized for not exploring the sources of discontent among patients, this appears to be the advantage of REBT. Ellis focuses upon an individual's actions and the associated belief that is causing them. Discovering historical causes for behavior can be abstract and overly conceptual; ultimately, understanding the reasons behind a behavior is not a prerequisite for recovering from it. In addition, irrational beliefs are often created because individuals are apprehensive of how others will perceive them. Ellis's decision to make social interest a conditional aspect of recovery, and not the end aspect, is sensible and insightful. Patients who suffer from excessive irrational beliefs must learn to put their own self-interests above others', and make others' approval a desired, but not necessary, part of their own contentment. Therefore, the structure and models behind REBT can be considered both effective and perceptive methods of curing individuals of their self-destructive habits, and the most substantial arguments against Ellis can be identified as necessary conditions for the full recuperation of patients.

References

1) A Brief Introduction to Rational Emotive Behavior Therapy. New Zealand Centre for Rational Emotive Behavior Therapy

2) REBT, Philosophy and Philosophical Counselling

3) American Psychology American

4) The Prince of Reason, Interview with Ellis


Color Blindness and its Neural Implications
Name: Allison Ga
Date: 2004-05-07 13:09:57
Link to this Comment: 9810


<mytitle>

Biology 202
2004 Third Web Paper
On Serendip

A person who is color blind does not see the world the same way as someone with regular color perception. The existence of color blindness illustrates the relationship between how we perceive the world through our eyes and how that information is then processed by the brain.

Color perception depends on how the information communicated from the eyes to the brain is interpreted. In order to illustrate the important factor of sight, the mechanics of seeing will be discussed. Outside images are projected through the cornea and the