Serendip is an independent site partnering with faculty at multiple colleges and universities around the world. Happy exploring!



Bio 202, Spring 2003 Forum



Gaucher Disease: A Rarity in Three Types
Name: Rachel Sin
Date: 2003-02-21 16:24:07
Link to this Comment: 4740



Biology 202
2003 First Web Paper
On Serendip

Ethnicity can provide individuals with wonderful traditions and celebrations of one's heritage. However, for some Ashkenazi Jews, ethnicity brings them much more than they bargained for: a rare condition causing a wide array of liver, lung, spleen, blood, and bone problems. Ethnicity brings them Type I Gaucher Disease. Type II and Type III are the two other forms of this rare genetic condition, and they occur at equal frequencies in all ethnic groups. Gaucher disease was first described in 1882 by the French physician Philippe Charles Ernest Gaucher (2). Type I, the most frequently seen form of the disease, can affect people of many ethnic backgrounds. However, its prevalence is by far the greatest in the Ashkenazi Jewish population, making it the most common genetic disease within this ethnic group.

While Type I Gaucher Disease is non-neuronopathic (it does not affect the nervous system), the other two types are neuronopathic. Yet even though the three types of Gaucher produce different symptoms, all three result from the same cause: a lack of the enzyme glucocerebrosidase. Glucocerebrosidase functions to break down glucocerebroside, a fatty compound normally stored in all cells of the body in very small amounts. In Gaucher patients, excess glucocerebroside builds up in the body and is stored abnormally in lysosomes, the cell's storage compartments (3). Typically, macrophages aid in the degradation of glucocerebroside. However, because Gaucher patients lack glucocerebrosidase, glucocerebroside stays in the lysosome, preventing macrophages from breaking it down. Enlarged macrophages containing this abnormal buildup of glucocerebroside are known as Gaucher cells (1). In affected patients, Gaucher cells can be found in the bone marrow, liver, and spleen (2).

Each of the three types of Gaucher Disease affects many systems of the body. Type I, the mildest and most frequently seen form, is the only form of Gaucher that does not affect the nervous system. The average age of onset for Type I Gaucher is 21 years (6). Approximately 1 in 10 Ashkenazi Jews is heterozygous for Type I. Although the condition is non-neuronopathic, patients can exhibit a wide array of symptoms, ranging from increased spleen and liver volume and lung compression to a variety of bone problems (including lesions, bone tissue death, and pain), anemia, and easy bruising. Individuals with Type I Gaucher Disease typically have a life span of 6 to 80 years (5). Within families, the severity of Type I varies immensely, making it impossible to predict which family members will suffer the most severe symptoms. Because the disease is autosomal recessive, an affected patient passes one of the nonfunctional glucocerebrosidase genes on to each of his or her offspring, making them all carriers. It is estimated that around 1 in 450 Ashkenazi Jews has two mutated copies of the glucocerebrosidase gene (4).
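
The two frequencies quoted above (about 1 in 10 carriers and about 1 in 450 affected) can be checked against each other with the Hardy-Weinberg relation. A minimal sketch, assuming random mating; the allele frequency q below is a derived quantity, not a figure from the sources:

```python
from math import sqrt

# Hardy-Weinberg consistency check for the Type I Gaucher figures.
affected_freq = 1 / 450          # two mutated copies: q^2
q = sqrt(affected_freq)          # frequency of the mutated allele (derived)
p = 1 - q                        # frequency of the normal allele
carrier_freq = 2 * p * q         # heterozygous carriers: 2pq

print(f"allele frequency q ~ {q:.3f}")          # ~0.047
print(f"carriers ~ 1 in {1 / carrier_freq:.0f}")  # ~1 in 11
```

The derived carrier rate of roughly 1 in 11 agrees well with the cited "1 in 10" figure, suggesting the two numbers come from a consistent underlying allele frequency.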

While Type I Gaucher is by far the most common form of the disease, Type II is exceedingly rare: fewer than 1 in 100,000 newborns have Type II Gaucher. Its onset occurs in infancy, and unlike Type I, Type II Gaucher is highly neuronopathic, causing drastic neurological problems, usually by age one. Type II typically leads to death by the age of two because of the severity of its effects on the nervous system (3). In addition to severe neurological defects, Type II also causes the spleen and liver enlargement seen in Type I. Unlike Type I, however, it is not concentrated in the Ashkenazi Jewish population, or in any other particular ethnic group (1).

Like Type II, Type III Gaucher, also known as the Sub-Acute Neuronopathic Form, is exceptionally rare. It usually strikes during childhood and accounts for less than 5% of all Gaucher Disease cases. It is estimated that fewer than 1 in 100,000 live births worldwide (with the exception of Scandinavia) result in this type of the disease. While some patients afflicted with Type III die during childhood, most live to middle age and some even beyond (6). This second neuronopathic form, once called juvenile Gaucher disease, progresses the most slowly of the three types. As with Type II, Type III is not associated with a specific ethnicity, yet many Type III sufferers have been found to be concentrated in Sweden (1). Nervous-system symptoms in patients with this form include myoclonic seizures, a lack of coordination, and mental degeneration. The form is more severe than Type I; as in Type I, symptoms also include an enlarged liver and spleen and bone disease. However, although Type III is neuronopathic, its effects on the nervous system are not nearly as drastic as those of Type II, rendering it the less severe of the two neuronopathic forms (4).

The three forms of Gaucher Disease all result from a lack of the glucocerebrosidase enzyme, and all have drastic effects on numerous body systems. Type I Gaucher is far more common than the other two, is non-neuronopathic, and is especially prevalent among Eastern European (Ashkenazi) Jews. Conversely, the other two types of Gaucher are much rarer, are neuronopathic and panethnic, and produce symptoms far more severe than those exhibited in Type I.

References

WWW Sources

1) Living With Gaucher Disease, from Massachusetts General Hospital

2) Gaucher Disease in Ashkenazic Jews

3) The NTSAD Diseases Family: Gaucher Disease, from NTSAD

4) Gaucher Disease - Rare Disorders, from Medstudents.com

5) Scientific American: Ask the Experts: Medicine - What is Gaucher Disease? Are there treatments?, from Scientific American

6) Health Topics A-Z: Gaucher Disease, from Ahealthyme.com


How can observed behavior differences in individua
Name: Luz Martin
Date: 2003-02-23 18:46:47
Link to this Comment: 4766



Biology 202
2003 First Web Paper
On Serendip

In the effort to understand the human brain by looking at behavior as an indicator of brain function, researchers have come to focus on individuals with Williams syndrome (1). The limitations, as well as the highly developed abilities, found in people with this syndrome have attracted investigators. This unique behavioral profile has led many to believe that it may be possible to unravel brain organization by locating the damaged areas in individuals with Williams syndrome and connecting them to the behavior observed in these individuals (1).

Individuals with Williams syndrome are all unique in their behavior, but most share common features. When given IQ tests, they tend to receive scores below average (1). The most common feature is mental retardation, with about 95% of affected individuals having IQs in the mild to moderate range of retardation (3). Along with low IQ scores, these individuals tend to have limited writing and arithmetic skills. Yet some display well-developed abilities to compose stories: they seem comfortable narrating and manipulating their voices to express a story (3). They are known for their musical talents and spoken language abilities, as well as their sociable tendencies (1).

Research on individuals with Williams syndrome has led to many discoveries. One of the biggest puzzle pieces uncovered is the genetic cause of the disorder (4). A small piece missing from one of the two copies of chromosome 7 was found to cause Williams syndrome (4); individuals lose the roughly 15 or more genes contained in this missing piece. About 95% of individuals with Williams syndrome display this deletion (1). Other features of the disorder include problems with the heart, blood vessels, and kidneys, as well as dental problems (4). Connecting genes to physical brain abnormalities, and ultimately to the observed behavior, could produce an explanation of how the developing brain is organized.

When comparing "normal" children with children with Williams syndrome, behavioral differences can be observed (2). Along with the behavioral differences, brain structure differences have also been observed (2). Physically, the brain of an individual with the syndrome is about 80% of the volume of the "normal" brain (2). The Williams syndrome brain is also characterized by a reduction of cerebral gray matter and lower cell density (2).

Knowing that there are physical differences in brain structure development, how can we assume that the brain of a child with the disorder is simply a damaged version of the normal brain, when their genetic information is different (2)?

The difference in genetic information leads to a different development (2). There must be a difference in the growth of the brain in the womb as well as in childhood. With a different set of instructions for development, is it really surprising that the brains of individuals with Williams syndrome are different?

Williams syndrome is a disease marked by the deletion of a section of chromosome 7 that may involve many genes (4). Individual genes do not tell the whole story behind the behavioral differences. Genes code for proteins, not necessarily for the behavior displayed by a person (2). Recognizing that multiple genes are involved in one function, and that one gene may take part in many functions, makes it harder to claim that the deletion of a small piece of chromosome is entirely to blame for the condition (2). Keeping the contribution of genetic information in mind, a person with Williams syndrome follows a different developmental path than a person with a normal brain.

When comparing the brains of adults with Williams syndrome with those of "normal" adults, there is an assumption that the relationship between behavior and brain structure can be compared between the two. If we suppose that behavior is a product of the brain as a whole, not of single divisions of the brain, then it could be that the brain of a person with Williams syndrome produces behaviors similar to those of the normal brain, yet those behaviors may not be the product of the same regions as in the normal brain, because we have observed that their brains are different (2).

"In other words, researchers cannot use the end state of development to make claims about the start state (5)."

This might mean that we are dealing with a brain that is not necessarily damaged, but has simply developed differently. We cannot simply compare a "normal" brain with an abnormal brain when they are defined by different genetic information. The brain is much more complex than that: the same behavior can be produced without the same brain.

References


1) Lenhoff, Howard M., Paul P. Wang, Frank Greenberg, and Ursula Bellugi. (1997, December). Williams Syndrome and the Brain. Scientific American, 68-73. From http://www.sciamarchive.org

2)Institute of Child Health, Crucial Differences Between Developmental Cognitive Neuroscience and Adult Neuropsychology, by Annette Karmiloff-Smith.

3)Center for Research in Language, Williams Syndrome: An Unusual Neuropsychological Profile by Ursula Bellugi, Paul P. Wang, and Terry L. Jernigan.

4)National Center for Biotechnology Information, Genes and Disease information

5)Guardian Unlimited newspaper, Summary of lecture by Annette Karmiloff-Smith.


Guillain-Barre Syndrome
Name: Amelia Tur
Date: 2003-02-23 21:05:08
Link to this Comment: 4771



Biology 202
2003 First Web Paper
On Serendip

Most people do not expect to become paralyzed during the course of their lives. Barring injury to the nervous system or debilitating disease, one does not expect to lose motor function. In spite of these expectations, people of all races, sexes, ages, and classes can be afflicted with a debilitating syndrome that can lead to difficulty in walking or even to temporary paralysis in the most severe cases. This syndrome is known commonly as Guillain-Barre Syndrome, or GBS.

GBS is an inflammatory disorder of the peripheral nerves: when the syndrome occurs, the body's peripheral nerves become inflamed and cease to work, for reasons that remain unknown (1) (3). Around 50% of cases of GBS appear after a bacterial or viral infection (1). The syndrome can also appear after surgery or vaccination, emerging hours or days after these incidents or taking up to three or four weeks to appear (4). Some theories propose that GBS is caused by an autoimmune mechanism that prompts antibodies and white blood cells to attack the covering and insulation of the nerve cells, leading to abnormal sensation. GBS is considered a syndrome rather than a disease because its description is based on a set of symptoms reported by the patient to her doctor (5).

GBS is also known as acute inflammatory demyelinating polyneuropathy, and as Landry's ascending paralysis after Jean B. O. Landry, a French physician who described a disorder that "paralyzed the legs, arms, neck, and breathing muscles of the chest" (4) (1). GBS itself was named after the French physicians Georges Guillain and Jean Alexandre Barre who, along with fellow physician Andre Strohl, described the characteristic changes in the spinal fluid of those who suffered from the syndrome (5). The syndrome affects one to two people per 100,000 in the United States, making it the most common cause of rapidly acquired paralysis in this country (1). Some patients initially diagnosed with GBS are later diagnosed with chronic inflammatory demyelinating poly[radiculo]neuropathy, or CIDP. (Sometimes "radiculo" is left out of the name, hence the brackets.) CIDP was initially known as "chronic GBS," but is now widely considered a related condition (3).

Although patients can be preliminarily diagnosed with the syndrome based on an analysis of their physical symptoms, two tests can be used to confirm a diagnosis of GBS. The first is a lumbar puncture, or spinal tap, to obtain a small amount of spinal fluid for analysis; the spinal fluid of those with GBS often contains more protein than usual. The second is an electromyogram (EMG), an electrical measure of nerve conduction and muscle activity (3) (4). The symptoms of GBS begin with numbness and tingling in the fingers and toes, leading to weakness in the arms, legs, face, and breathing muscles. The weakness begins in the lower portion of the body and rapidly moves upward, eventually leading to loss of sensation in the affected areas; although a number of cases are mild, temporary limb paralysis is not uncommon. In milder cases, the numbness may merely cause difficulty in walking, "requiring sticks, crutches, or a walking frame" (3). Pain is not uncommon, and abnormal sensations, such as the feeling of "pins and needles," can affect both sides of the body equally. Loss of reflexes, such as the knee jerk, is common (1) (3). Most patients are at their weakest point about two weeks after the onset of symptoms, and around 90% will have reached their weakest state three weeks after onset (4).

The progression of the syndrome is unpredictable, especially in the early stages, and patients may preemptively be put in the hospital to be cared for (1). In more severe cases, patients require hospitalization and a stay in the intensive care unit in the early stages of the disease, especially if they need the assistance of a respirator, or ventilator, to breathe. In about a quarter of GBS cases, the paralysis moves up to the patient's chest, requiring such assistance from a respirator. The face and throat may also be affected by the paralysis, necessitating a feeding tube through the nose or directly into the stomach (3). The period during which the patient is afflicted with the syndrome can be extremely long, and long-term hospital stays are not uncommon. Care is generally confined to supportive measures as the condition improves spontaneously, though efforts are made to speed the recovery of nerve function, including physical therapy and hydrotherapy. Most patients eventually recover and go on to lead normal, or very nearly normal, lives. Some patients, however, may remain partially paralyzed and require the use of a wheelchair for a long period of time (1) (3). Around 30% of GBS patients still feel residual weakness three years after the onset of symptoms (4). Death due to GBS is highly unlikely with modern medical practices, but does occur in about 5% of cases (3). For those with CIDP, the course of the disease is generally longer than for those with GBS and recovery can follow a relapse/recovery cycle, but these patients are less likely to suffer respiratory failure (3).

The syndrome came to brief public attention in 1976, when a number of people vaccinated against Swine Flu were stricken with it (1). A Minnesota doctor had reported to his local health board that, after being inoculated against Swine Flu, a man had developed GBS. The health board reported this to the Centers for Disease Control (CDC), who were running the Swine Flu vaccination program, and the CDC began to ask doctors to report newly diagnosed cases of GBS. It did not take long for doctors to begin to associate GBS with the vaccine, even though the CDC tried not to convey this impression. Because GBS is difficult to diagnose and shares many symptoms with other neurological diseases, doctors at the time may have been more likely to diagnose GBS in patients who presented with muscle weakness and had received the vaccine than in those with muscle weakness who had not. As the CDC could not say with certainty that the flu vaccine was not causing the high number of GBS cases, it announced the cessation of the vaccination program on December 16, 1976. To this day, some people avoid flu vaccines because of this one incidence of extreme side effects (2).

In the end, a person with GBS may eventually fully recover her motor function to the level it was before her affliction with the syndrome. While she may be left with no physical reminders of the disorder, she and her family and friends will always remember her sudden incapacitation. The emotional scars from such an incident can last a lifetime.


References

1) Guillain-Barre Syndrome International - GBS: An Overview, An overview of GBS on the website of the Guillain-Barre Syndrome Foundation International, based in Wynnewood, PA.

2) Kolata, Gina. Flu: The Story of the Great Influenza Pandemic of 1918 and the Search for the Virus That Caused It. New York: Simon & Schuster. Pgs. 167-185.

3) Guillain-Barré Support Group, The homepage for the Guillain-Barre Syndrome Support Group based in the United Kingdom. The organization disseminates information to sufferers of the syndrome and their family and friends.

4) NINDS Guillain-Barre Information Page, National Institute of Neurological Disorders and Stroke information page on GBS.

5) GBS - An Overview For The Layperson, An overview of GBS written by Dr. Joel S. Steinberg, a neurologist who once suffered from GBS.


A Critical investigation of the etiology of Devel
Name: Michelle C
Date: 2003-02-23 22:17:25
Link to this Comment: 4776



Biology 202
2003 First Web Paper
On Serendip

The long-disputed debate about the primary cause of dyslexia is still very much alive in the field of psychology. Dyslexia is commonly characterized as a reading and writing impairment that affects around 5% of the global population. The disorder has frequently been hypothesized to be the result of various sensory malfunctions. For over a decade, studies have made major contributions to the disorder's etiology; however, scientists are still unclear about its specific cause. Initially, dyslexia was thought to be a reading disorder in children and adults (1). Later it was suggested to comprise both a visual and a writing component, characterizing it as more of a learning disability, one that prevents people of normal intelligence from performing to their fullest potential (5). In the current research, cognitive and biological perspectives have often been developed independently of one another, failing to recognize their respective positions within the disorder's etiology.
The Phonological Deficit and Magnocellular theories are two of the most dominant theories in dyslexia research. Various other theories have been suggested to explain the nature and origin of dyslexia; however, they often serve as additional support for either the phonological or the magnocellular theory. The Double Deficit theory suggested that dyslexic symptoms are the result of deficits in processing speed (7). The Genomic theory posited that dyslexia is a highly heritable disorder that can be localized to a specific genetic component. Finally, the Cerebellar Deficit theory suggested that dyslexia is the result of an abnormal cerebellum (2). With the constant debate between the biological and the cognitive nature of the disorder's cause, scientists have had major problems explaining the disorder's etiology on the basis of any one theory. While it is important to locate the specific cause of the disorder's manifestations, it seems that the most effective results would be achieved by a collective approach, one that encompasses ideas put forth by both the cognitive and the neurological theories in the current research on the etiology of dyslexia.
The phonological deficit hypothesis of dyslexia is one of the longest-standing explanations in psychological research. The theory originates with Pringle-Morgan, known as the father of dyslexia, in 1896. Morgan viewed reading as a process that critically involves the segmentation of text into graphemes (1). These graphemes serve as precursors to phonemes (the smallest units of language), such that grapheme-to-phoneme conversion yields the whole sound of a word (1), (5). The process is said to require that a reader both assemble and address a word's phonology. Dyslexics are said to have phonological deficits that cause difficulties with phonemic representation: they often fare worse than others when it comes to mapping sounds onto letters in the brain, and with phonemic recall (1,5). Deficits in dyslexics' short-term word memory and problems with the segmentation of words into phonemes (e.g., auditory discrimination) lend support to these ideas (5).
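
The grapheme-to-phoneme conversion described above can be sketched as a toy program. This is only an illustration of "assembled phonology"; the grapheme inventory and phoneme symbols below are simplified assumptions for the example, not a real phonological model:

```python
# Toy illustration of assembled phonology: segment a word into
# graphemes, map each grapheme to a phoneme, and concatenate the
# phonemes into the whole sound of the word.
GRAPHEME_TO_PHONEME = {
    "sh": "SH", "ch": "CH", "th": "TH",  # multi-letter graphemes
    "a": "AE", "e": "EH", "i": "IH", "o": "AA", "u": "AH",
    "b": "B", "c": "K", "d": "D", "f": "F", "g": "G", "h": "HH",
    "k": "K", "l": "L", "m": "M", "n": "N", "p": "P", "r": "R",
    "s": "S", "t": "T", "v": "V", "w": "W",
}

def assemble_phonology(word):
    """Greedily segment a word into graphemes (longest match first),
    then map each grapheme to its phoneme."""
    phonemes, i = [], 0
    while i < len(word):
        # prefer two-letter graphemes like "sh" over single letters
        if word[i:i + 2] in GRAPHEME_TO_PHONEME:
            phonemes.append(GRAPHEME_TO_PHONEME[word[i:i + 2]])
            i += 2
        else:
            phonemes.append(GRAPHEME_TO_PHONEME.get(word[i], "?"))
            i += 1
    return phonemes

print(assemble_phonology("ship"))   # ['SH', 'IH', 'P']
print(assemble_phonology("chat"))   # ['CH', 'AE', 'T']
```

The phonological deficit hypothesis, in these terms, locates the dyslexic reader's difficulty in the segmentation and mapping steps rather than in the visual intake of the letters.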

A study by Petra Georgiewa and colleagues investigated the phonological theory using advanced imaging techniques. The subjects were 17 adolescents: 9 dyslexics and 8 controls. Dyslexic subjects were diagnosed based on the discrepancy between non-verbal IQ and reading/spelling performance (1). All subjects were matched for age and intelligence level (IQ > 85), and the mean age was 13 years. During the study, each child was shown a series of word and non-word displays on a computer screen to read silently. The experimental condition contained words and non-words, and the control condition contained words and non-words marked with an asterisk. All words were 1- or 3-syllable nouns drawn from a basic 10-year-old's vocabulary. Each word and non-word was presented once per 2000 milliseconds and remained on the screen for 1800 milliseconds. A blocked fMRI design was used, so the tasks consisted of 8 blocked segments (four blocks of experimental reading and four blocks of control-condition reading) alternated over the 20-minute duration of the experiment. fMRI (functional Magnetic Resonance Imaging) and ERPs (Event-Related Potentials) were used to compare the dyslexic and control groups' performance. Additionally, observations of a behavioral task with similar word blocks were collected before the experiment; here subjects read aloud, allowing both groups to become familiar with the task and providing a baseline.

Subjects silently read similar linguistic stimuli for both the fMRI and the ERP data collection. The ERPs, however, were measured in a separate session, in which the stimuli and control words were displayed in a pseudorandomized order within a single session (1). Five images were taken from every 2.5-minute session, for a total of 40 images for fMRI analysis. The ERP data were collected by simultaneous electrodes at time periods critical for linguistic processing (100-180 ms, 180-250 ms, 250-350 ms, 350-500 ms, and 500-600 ms after word presentation), and all ERP data were analyzed over a 2000 ms timeframe.
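
As a quick sanity check, the scan-timing figures reported for the study are mutually consistent. A minimal sketch, using only the numbers stated above:

```python
# Consistency check of the reported fMRI figures: 5 images per
# 2.5-minute session, 40 images total, over a 20-minute experiment.
images_per_session = 5
session_minutes = 2.5
total_images = 40

sessions = total_images // images_per_session   # 8 sessions
total_minutes = sessions * session_minutes      # 20 minutes, as stated

# Stimulus timing: a new item every 2000 ms, visible for 1800 ms,
# leaving a 200 ms inter-stimulus gap.
gap_ms = 2000 - 1800

print(sessions, total_minutes, gap_ms)          # 8 20.0 200
```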

Results from the fMRI data revealed significant activation in the left inferior frontal gyrus (IFG) for control subjects during the task (1). The dyslexic subjects, however, displayed three areas of activation: the left IFG, the posterior left thalamus, and part of the left caudate nucleus (1). In addition, dyslexics displayed significant hyperactivation, in comparison to control subjects, in Broca's area, the anterior insula, and the lingual gyrus (right temporo-occipital region) (1).

The data collected in Georgiewa's study provided significant support for the phonological hypothesis. Data from the behavioral study demonstrated that dyslexics had increased problems with phonological decoding (most decoded at a slower rate, requiring greater effort), which was seen even during reading-aloud tasks. Group differences in the activation of Broca's area are said to reflect increased effort in phonological coding, given Broca's area's involvement in phonological decoding and lexical identification (piecemeal/assembled phonology) (1). Lastly, the activation of three separate brain areas during phonological decoding is suggestive of insufficient sensory function and of possible compensatory efforts to perform more efficiently a task that is generally localized to one area of the brain.

While the phonological theory provided a sufficient explanation of the etiology of dyslexia within a cognitive framework, the magnocellular theory provided the field with a grounded biological origin for the cognitive manifestations observed in dyslexia. This theory did not refute the findings of the phonological deficit theory; rather, it tried to validate them by providing neurological evidence linking cognitive symptomologies to brain abnormalities. Moreover, this theory suggested that dyslexics suffer from an auditory deficit, which can also be seen in the manifestations of the disorder. The magnocellular theory was coined by Stein in 1997 and is based on the division of the visual system into two neural pathways: the magnocellular and the parvocellular (6). It suggests that dyslexics suffer from abnormalities in the magnocellular pathway that cause visual and binocular difficulties (6). The impairments of these neural pathways are believed to cause both auditory and visual deficits, and with them, the magnocellular theory suggests, difficulty in processing the rapid temporal properties of sounds, which leads to phonological deficits (5).

A 2002 study of temporal processing by Rey, De Martino, Espesser, and Habib was done in light of both the phonological and magnocellular theories, and it lends support to the neurological temporal deficits in dyslexics described by the magnocellular theory. The study assessed 13 developmentally dyslexic children ranging from 9.8 to 13 years old and 10 normal readers ranging from 11.5 to 13 years old. The dyslexic subjects attended a school specializing in dyslexia, while the controls attended a regular junior high school. Subjects were matched for age and IQ. During the experiment, subjects completed a Temporal Order Judgement (TOJ) task based on the succession of two consonants (p/s) within a cluster; in addition, there were two sections in which the stimuli were artificially shortened or lengthened (4).

Results from the experiment supported a possible deficit in temporal functioning as a component of dyslexia. Dyslexics did significantly worse on the consonant brevity trials (shortening), performed somewhat better on consonant lengthening, and displayed normal performance (for their age and intelligence) when given 200% lengthening of consonants (4). The absence of significant effects of consonant ordering suggested that temporal-processing difficulty impairs dyslexic students' ability to perform to the best of their ability. Similar findings from Bradlow's 1999 study support these deficits. Bradlow's electrophysiological manipulations (slowing or lengthening) of the presentation of each phoneme along the da-ga continuum revealed marked differences between dyslexic and normal subjects: the common deficits of slowed temporal processing were minimized when phonemic presentation was lengthened (increases from 40 milliseconds to 80 milliseconds) (4). From both the Rey et al. and Bradlow findings, it may be inferred that there is clinical evidence for a magnocellular deficit in dyslexics, one that is inclusive of phonological deficiencies but also recognizes an additional temporal-processing deficit.

Though both theories provide acceptable rationales for possible causes of dyslexia, it is difficult to give preference to either. While the phonological deficit theory addresses important cognitive impairments detected in the majority of adults and children who suffer from dyslexia, it fails to account for the many other legitimate symptomologies displayed in the greater dyslexic population (7). To address this shortcoming, in 2002 Maryanne Wolf and Patricia Bowers proposed the double deficit theory. This theory proposes two subtypes of dyslexic readers: those with a single deficit (naming-speed or phonological) and those with a double deficit (both naming-speed and phonological) (7). Through their research they discovered that phonological deficits are just one source of reading dysfunction, and that naming-speed deficits are also a major source of problems in dyslexic readers (7). Further, they suggested that dyslexics with double deficits are among the worst readers, owing to their limited compensatory routes for reading efficiently. This research grew out of the increasing number of dyslexic children who had not been diagnosed because their symptomologies were not "just" phonological in nature. Though the phonological deficit theory provides a firm idea of the core manifestations of dyslexia, it should be revisited to account for the various subtypes that occur throughout the dyslexic spectrum. Accounting for these variations would make clinical diagnosis a more flexible process, one that acknowledges a wider variety of co-occurring symptomologies.
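
The double-deficit taxonomy described above amounts to a simple two-factor classification. A minimal sketch; the function name and labels are illustrative assumptions, not terminology from Wolf and Bowers:

```python
# Classify a reader under the double deficit theory: two binary
# factors (phonological deficit, naming-speed deficit) yield four
# possible subtypes.
def double_deficit_subtype(phonological_deficit, naming_speed_deficit):
    if phonological_deficit and naming_speed_deficit:
        return "double deficit"       # predicted to be the worst readers
    if phonological_deficit:
        return "single deficit (phonological)"
    if naming_speed_deficit:
        return "single deficit (naming speed)"
    return "no deficit"

print(double_deficit_subtype(True, True))    # double deficit
print(double_deficit_subtype(False, True))   # single deficit (naming speed)
```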

Similarly, the magnocellular theory, though useful in localizing cognitive symptoms to core neurological abnormalities, neglects to address common symptoms displayed in dyslexics which are not directly the result of visual or auditory pathway abnormality. Difficulties in handwriting, clumsiness, and trouble automating skills are among a few commonly found symptoms which are present in over 90% of dyslexics (2). Many of these symptoms have been proposed to be the result of a cerebellar abnormality which affects dyslexics' fine motor skills, balance, and ability to automate skills. From this idea, researchers Nicolson, Fawcett, and Dean proposed the cerebellar deficit theory as an additional cause of dyslexia and found promising results from a 2001 experiment which suggested that the core deficits (reading, writing and spelling) could all be linked to abnormal functioning of the cerebellum (2). They further proposed that cerebellar deficits encompassed and addressed phonological deficits while paralleling magnocellular abnormalities, suggesting that some dyslexic children display either or both magnocellular and cerebellar abnormalities, which serve as a core source of the manifestations of their phonological deficits.

In light of the current research on the double-deficit and cerebellar deficit theories, it is necessary that the traditional theories of dyslexia be revisited. Though both the phonological and magnocellular deficit theories provide key components which are helpful in the long-standing investigation of dyslexia, both are lacking in major ways. Using both the double-deficit and cerebellar theories in conjunction with these dominant theoretical models, however, may be useful in obtaining a more holistic understanding of the true nature of the disorder. Accounting for the major phonological deficits with the phonological theory, for the disorder's neurological origin through both the magnocellular and cerebellar theories, and for the various subtypes of dyslexia through the double-deficit theory would be the ideal strategy for efficient investigation of the etiology of dyslexia. In conclusion, a model which incorporates these four facets of investigation could potentially advance the research and treatment of dyslexia by broadening the current diagnostic spectrum and evoking varying styles of intervention which target the multitude of dyslexic symptoms and subtypes.

References

1) Sciencedirect, Great article on brain imaging in subjects who suffer from dyslexia.


2) Sciencedirect, Great article which explains the cerebellar deficit hypothesis of dyslexia.


3) Infotrac, Article about cerebellar deficits in dyslexics which lends support to the hypothesis of cerebellar deficit in dyslexia.

4) Sciencedirect, Article which gives an in-depth explanation of the temporal processing and phonological deficit theories of dyslexia.


5) Nature.com, Article discussing the two most prevalent theories of dyslexia: the magnocellular and phonological theories.


7) Infotrac, Article describing the double deficit hypothesis of dyslexia.

NON-WEB REFERENCES

6) Stein, J. & Walsh, V. "To see but not to read; the magnocellular theory of dyslexia." TINS, v20, 1997, pages 147-152.


Asperger's Syndrome: What is it and What Happens N
Name: Marissa Li
Date: 2003-02-24 02:45:40
Link to this Comment: 4783

Typically in society today, the overdiagnosis of disorders such as Attention Deficit Disorder (ADD) and other behavior disorders is very common. One disorder that has only recently been brought to the table is Asperger's Syndrome. Only added to the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1994, Asperger's is a neurological difference that seems to be turning up everywhere among candidates assumed to be either socially senseless or mildly autistic (3). It is a disorder characterized by different extremes in behavior, and it does not surface until around ages five to nine, if it is diagnosed at all; many times it is not. Though it is not a widely studied disorder and still new in the "autism spectrum," it is becoming more and more commonly diagnosed or looked into, especially among engineers and computer programmers (1). Asperger's was first recognized by the Viennese scientist Hans Asperger and the child psychiatrist Leo Kanner in 1943 and 1944. Each published papers describing patterns of behavior among children with normal intelligence and vocabulary who were somewhat awkward and unaware socially and in communication. Though the idea was dropped soon after, the study of autism and the like continued with a consideration of "high functioning" autistics who may function in yet another realm (later to be diagnosed with Asperger's) (1). Those with Asperger's tend to exhibit signs of social detachment; they are sensitive to their surroundings, yet cannot interpret their own "proper body space" (1). They tend to preoccupy themselves with distinct and specific entities within an object or an idea, and they are obsessively organized, functioning within a world of obsessive uniformity. For example, the child or adult will insist on performing the same tasks daily in order for their life to run smoothly. 
So far, Asperger's is more commonly traced in males, and there are a variety of cases where children actually appear to be "little professors" or mini-geniuses because they are so internalized, focused and brilliant; as with an idiot savant, the brilliance tends to show up in diminutive and specific areas of intellect (1). People with Asperger's typically have difficulty expressing empathy, and though they appear to live normal lives, they have severe social interaction issues and cannot stay within the realm of others for long periods of time without floating into another thought process. There is no clinical indication of cognitive defects or slowing, and yet those with Asperger's function in another world internally, within their own conscience, and perhaps beyond what is even within their own conscience. Since Asperger's is somewhat new within the medical field, actual physical brain and nervous system research is not very lengthy or proven. There are studies that suppose that the right hemisphere of the brain is naturally less functional in Asperger's cases, and that part of that is a decrease in motor skills and an all-around physical awkwardness (5). Asperger's cases have extensive vocabularies at times, but no real grasp of or reaction to language, caused by a malfunction in the ability to empathize with anything or grapple with another's speech patterns, leading to many interpretations of such behavior. Many Asperger's cases have "behavioral problems" for this very reason (5). Their actions are not accountable because they are not necessarily aware of their surroundings. According to Asperger himself, "in the course of development, certain features predominate or recede, so that the problems presented change considerably. Nevertheless, the essential aspects of the problem remain unchanged. In early childhood there are the difficulties in learning simple practical skills and in social adaptation. 
These difficulties arise out of the same disturbance which at school age cause learning and conduct problems, in adolescence job and performance problems, and in adulthood social and marital conflicts" (3). Though the research on the brain and nervous system with Asperger's is new and insufficient, it does show signs of a mix-up in consciousness as well as an actual alteration within the brain and nervous system itself. There are several theories out now which speculate that Asperger's is linked to many persons who are very bright and dominant in the field of computers and engineering. "It's a familiar joke in the industry that many of the hardcore programmers in IT strongholds like Intel, Adobe, and Silicon Graphics - coming to work early, leaving late, sucking down Big Gulps in their cubicles while they code for hours - are residing somewhere in Asperger's domain. Bill Gates is regularly diagnosed in the press: His single-minded focus on technical minutiae, rocking motions, and flat tone of voice are all suggestive of an adult with some trace of the disorder." "Replacing the hubbub of the traditional office with a screen and an email address inserts a controllable interface between a programmer and the chaos of everyday life. Flattened workplace hierarchies are more comfortable for those who find it hard to read social cues. A WYSIWYG world, where respect and rewards are based strictly on merit, is an Asperger's dream" (3). So, is everyone who exhibits loner behavior and an inclination towards computer programming a candidate for Asperger's? In today's society we have so many children on Ritalin for ADD, ADHD, and various other social and behavioral problems. Perhaps they are all Asperger's, but what is wrong with that? According to UCSF neurologist Kirk Wilhelmson, "If we could eliminate the genes for things like autism, I think it would be disastrous. The healthiest state for a gene pool is maximum diversity of things that might be good" (3). 
The rise of autistic qualities in certain areas has been attributed to certain mating patterns. When a computer programmer marries an engineer and both exhibit Asperger's-like tendencies, it is no surprise that their offspring would do the same. Whether or not this is a terrible outcome is questionable, because the people who display these qualities are extremely beneficial to today's technological world. Perhaps the way of the world is for some to be social creatures and others to be the techies. The thoughts of genetic predetermination and upbringing are somewhat scary, but who is to say that one is not satisfied if they work at a computer all day? Most likely Asperger's has always been in existence, from monks to Bill Gates. They have all made their contributions to society, so can that really be classified as a "disorder?" Research must be done to clarify the actual physical brain and nervous system of Asperger's, but on a general level, in society, it seems to be affecting many, though in ways that are not necessarily detrimental.

WWW Resources

1) Online Asperger Syndrome Information and Support, Basics on Asperger's

2) Worcester Polytechnic

3) The Geek Syndrome, Article re: Silicon Valley and the abundance of Asperger's

4) The AQ Test, Test for Asperger's

5) Maryland Asperger Advocacy and Support Group, Basic information and medical information on Asperger's


Stress, Sports and Performance
Name: Arunjot Si
Date: 2003-02-24 18:22:57
Link to this Comment: 4788


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Actors, athletes and students all have something in common. They all perform their tasks with varying stress levels. What is this stress that we all talk about? Stress can be defined as a physical, mental or emotional demand, which tends to disturb the homeostasis of the body. Used rather loosely, the term may relate to any kind of pressure, be it due to one's job, schoolwork, marriage, illness or death of a loved one. The common denominator in all of these is change. Loss of familiarity breeds this anxiety with any change being viewed as a "threat".

The issue of anxiety is an important aspect of performance. Whether it is during the tense moments of a championship game or amidst that dreaded history exam, anxiety affects our performance via changes in the body, which can be identified by certain indicators. One misconception about performing under pressure, though, is that stress always has a negative connotation. Many times, "the stress of competition may cause a negative anxiety in one performer but positive excitement in another" (3). That is why one frequently hears how elite players thrive under pressure, when most others would crumble.

PHYSIOLOGY OF STRESS
Stress is an integral part of our lives. "It is a natural byproduct of all our activities" (4). Life is a dynamic process and thus forever changing and stressful. Our body responds to acute stress by a liberation of chemicals. This is known as the fight-or-flight response of the body, which is mediated by adrenaline and other stress hormones, and is comprised of such physiologic changes as increased heart rate and blood pressure, faster breathing, muscle tension, dilated pupils, dry mouth and increased blood sugar. In other words, stress is the state of increased arousal necessary for an organism to defend itself at a time of danger. Alterations of hormones in the body include not only adrenaline, but also substances like testosterone and human growth hormone. Up to a certain point stress is beneficial. We perform with greater energy and increased awareness with the influx of excitatory hormones that release immediate energy (11).

STRESS OVERLOAD
As we all probably know, there is only so much tension one can take. Whether it is constant episodes or chronic stress, either can transform what was beneficial stress into "distress". The stress hormones which are protective initially and liberated for self-preservation may cause damage due to overproduction. This has an effect on the entire metabolism, including the rate at which our cells grow and are repaired as well as the production of the cells in the immune system (1).

The hormonal surge of glucocorticoids released to promote utilization of glucose as well as the conversion of protein and lipids to usable glucose can become detrimental in the long run. One clinical symptom that arises from this hormonal imbalance is an increase in appetite, which in a chronic situation may lead to obesity (2). Catecholamines also increase blood pressure, and repeated spikes of hypertension may promote the formation of atherosclerotic plaques, which, particularly in combination with high cholesterol and lipids, can ultimately lead to heart disease and stroke.

The brain of course is a crucial target and neurons exposed to elevated glucocorticoids for long periods of time are known to be adversely affected. Tests have shown that brain cells in rats may shrivel and the dendrite branches, which are used to communicate with other neurons, wither away (2). In particular, the hippocampus, which is implicated in memory and mood, may be damaged by stress and the adrenal steroids.

Advances in medical technology like magnetic resonance imaging (MRI) enable us to develop clear images of specific parts of the brain, which in turn allow us to see where exactly stress is affecting the brain. The hippocampus, which has glucocorticoid receptors, is actually noted to be twelve percent smaller in volume in people with stress disorders. Yet is stress-linked brain damage permanent? Short-term stress damage appears to be reversible (as per rat experiments), though chronic stress may lead to neuronal loss.

DEFINING AN ATHLETE WHO PERFORMS AT HIS BEST
"Sports performance is not simply a product of physiological (for example, stress and fitness) and biomechanical (for example, technique) factors; psychological factors also play a crucial role in determining performance" (3). However, every athlete has a certain stress level that is needed to optimize his or her game. That bar depends on factors such as past experiences, coping responses and genetics (7). Although psychological preparation is a component that has often been neglected by athletes and coaches alike, studies have shown that mental readiness was felt to have the most significant statistical link with Olympic ranking. Athletes have frequently been quoted stating that the mental aspect is the most important part of one's performance. Arnold Palmer, a professional golfer, suggested that the game is 90% psychological. "The total time spent by the golfer actually swinging and striking the ball during those 72 holes is approximately seven minutes and 30 seconds, leaving 15 hours, 52 minutes and 30 seconds of 'thinking time'" (3). Stress during sports, as in anything else in life, may be acute, episodic or chronic. For the most part in sports it is episodic, whether during a competitive match between friends or a championship game. While acute stress may actually act as a challenge, if not harnessed it can evolve into not only an episodic stressor that affects one in the long term, but also something that hampers one's play.

RESEARCH ON SPORTS PSYCHOLOGY
Interest in researching sports psychology has skyrocketed over the past few years. A myriad of hypotheses have been developed to attempt to clarify the relationship between stress and performance. In 1943, the drive theory was introduced, claiming that an appropriately skilled athlete will perform better if their drive to compete is aroused, that is, if they are "psyched up". In 1962, the inverted-U hypothesis was formed on the notion that there is an optimal level of arousal at which an athlete will perform best. However, "if that level of arousal is passed then the level of performance will decrease. The same thing happens when the level of arousal is lower than the optimal level" (9). Though this hypothesis had much support for many years, it too has fallen out of favor due to its oversimplification of a subject as complex as brain and behavior. Other theories that have been proposed, like the multidimensional anxiety theory and the catastrophe theory, all make their predictions on how anxiety plays a role in one's performance level, but the results remain inconclusive (9).
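The inverted-U relationship can be made concrete with a toy model. The quadratic form and every parameter value below are assumptions chosen for illustration; the hypothesis itself specifies only the qualitative shape, not an equation.

```python
# Toy model of the inverted-U hypothesis: performance rises with arousal
# up to an optimum, then falls. The quadratic form and the parameter
# values are illustrative assumptions, not a fitted or published model.

def predicted_performance(arousal: float,
                          optimum: float = 5.0,
                          peak: float = 100.0,
                          sensitivity: float = 3.0) -> float:
    """Performance as an inverted parabola centered on the optimal arousal."""
    return peak - sensitivity * (arousal - optimum) ** 2

# Performance peaks at the optimum and falls off symmetrically on either
# side: under-arousal (boredom) and over-arousal (anxiety) both predict
# a drop relative to the optimal level.
for arousal in (2.0, 5.0, 8.0):
    print(arousal, predicted_performance(arousal))
```

A symmetric parabola is the simplest curve with the required shape; the later theories mentioned above (multidimensional anxiety, catastrophe) exist precisely because real performance data rarely follow such a tidy curve.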

In recent research, the factor of competitive anxiety has been dissected into two segments -- somatic and cognitive anxiety. Cognitive anxiety is characterized by negative expectations, lack of concentration, and images of failure. Somatic anxiety refers to physiological symptoms such as sweaty hands and tension and other physiologic changes (4). In order to chalk out optimal performance, the precursors of anxiety need to be sought out. The temporal patterning of anxiety, before, during and after competition has been receiving a lot of attention in research.

COPING WITH STRESS IN SPORTS
Developing coping techniques is the most crucial element in balancing stress levels so that they optimize instead of inhibit performance level. Relaxation, visualization/imagery, self-talk, goal setting, motivation, and video review are all examples of systems that can be used by athletes (7). Self-regulation training cultivates one's self-confidence and attention control levels. Goal setting is another important system as making realistic short-term goals prevents one from getting overwhelmed, which can result in loss of focus. Having these realistic expectations is the only way one can eventually reach one's long-term goals.

Anxiety control is another technique that is obtained through muscle relaxation exercises as well as mental relaxation through modalities such as meditation or listening to music. Practice for perfection is necessary but as we talk about anxiety control, too much practice can actually lead to overpressure which obviously leads to anxiety beyond the optimal level necessary for the given task. In addition, self-doubts regarding one's performance and a desire to impress others will create a high level of anxiety which leads to "choking" as the athletes' focus on the game is lost as is his/her physical control (6). Athletes that maintain a proper combination of honing their physical skills and developing their mental game are able to adapt to any unfamiliar situation/circumstance that they encounter. As an Olympic champion stated: "My fingers and feet were damp and freezing cold. I felt weak, my breath was short and I felt a slight constriction in my throat... I just wanted to get the whole thing done with. The waiting was agony, but my mind conditioned through long training and experience warned 'wait to warm up! Wait! Wait!'" (11). According to research, elite athletes use these "equalizing" techniques in some combination before, during, and after competition, and this gives them the greatest chance to thrive, even when the game is on the line (5).

CONCLUSION
A certain level of stress is needed for optimal performance. Too little stress expresses itself in feelings of boredom and not being challenged. "What is becoming increasingly clear... that competitive stress does not necessarily impair performance and can in certain circumstances enhance it" (3). At an optimum level of stress one gets the benefits of alertness and activation that improves performance. Even while making such statements, it is important to realize that there is currently no conclusive evidence except for the fact that stress and anxiety do have an influence in performance.

The surge of interest devoted to this specialized arena reflects how our society breeds competition, not just in sports, but in every aspect of life and at every age. How can I get that edge over the person next to me? That is the mantra by which we are motivated to improve ourselves. Though much progress has been made in this region of neurobiological science, we cannot understand enough to come up with more than an idea of how the brain deals with stress when related to performance. We still have so many unanswered questions regarding the brain and its behavior that we are unable to unravel many of these mysteries though we continue to make educated guesses.


References

1)McEwen, Bruce. "The Neurobiology and Neuroendocrinology of Stress"
Psychiatric Clinics of North America. 2002 June.

2) New Studies of Human Brains Show Stress May Shrink Neurons , by Robert Sapolsky

3)Jones, Graham. Stress and Performance in Sport. New York: John Wiley and Sons, 1990.

4)Herbert, John. "Stress, the Brain and Mental Illness." BMJ. 30 August 1997: 530-535.

5) Performing Your Best When it Counts the Most, by Kyle Kepler

6) Choking in Big Competitions , by Kaori Araki

7) The Online Journal of Sports Psychology ,

8) The Mental Edge , by Sandy Dupcak

9) Competitive Anxiety , by Brian Mackenzie

10)Nelson, Charles. The Effects of Early Adversity on Neurobehavioral Development. London: Lawrence Erlbaum Associates, 2000.

11)Neufeld, Richard. Advances in the Investigation of Psychological Stress. New York:Wiley and Sons, 1989.


"Listening to Prozac" : The dangers behind the sir
Name: Neela Thir
Date: 2003-02-24 19:30:12
Link to this Comment: 4790


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

"If the human brain were simple enough for us to understand, we would be too simple to understand it" (1).

In his book Listening to Prozac, Dr. Peter Kramer thoroughly examines how Prozac has revolutionized the power of psychopharmacological medication and what it teaches us about the human self. Prozac has demonstrated the ability to transform a person's behavior, outlook, and conception of self through a neurological change of biology, thus providing more evidence that brain does indeed equal behavior. Perhaps more fascinating than the answers it provides about human neurobiology are the difficult questions, ironies, and problems its usage raises. The administration of Prozac challenges the model of healing through cognitive powers due to its purely biologic effectiveness. This success has widened the gap between the un-medicated and medicated human self. Which is the "true" reflection of a person? Do Prozac's transformations emulate an unnatural idealized social norm or release a healthy individual trapped in an unnatural state? How does this reflect or change our definitions of "illness" and "wellness"?

Dr. Kramer's discussions hinge upon the idea that the nervous system controls behavior. The case studies he provides show people who, after taking Prozac, have remarkable "transformations" of multiple facets of behavior including perceptions, motivation, emotions, sense of choice, values, and personality (defined by given temperament as well as developed character). Prozac's ability to change a person so drastically on a biological level causes much apprehension because the change does not need to be processed cognitively or even consciously. Dr. Kramer asserts that this change need not coincide with any self-knowledge because it is "evidently not necessary"(32). His comment points to a desire among many that the conscious self (I-function) has a stronger influence on behavior than biology does because we intimately connect behavior with self-identity. Relying on a foreign substance to change biology (and self) without apprising and receiving sanction from the conscious-self first seems unnatural. The utter reliance on biology without utilizing our human gift of cognition seems to be a violation of how humanity has separated itself from our own inner animal. Dr. Kramer dismisses claims that Prozac compromises our vision of humanity through changing behavior in psychobiological terms by saying, "biological models are not reductionistic but humanizing, in the sense that they restore scale and perspective and take into account the vast part of us that is not intellect" (143). Further compromising psychoanalysis in favor of pharmacology, Dr. Kramer writes, "we are symbol-centered creatures, and some of our worst stresses are symbolic. But to my mind these final causes were like the pebbles that finally turn a tenuously balanced collection of rocks into an avalanche" (141). He goes on to write that rearranging the pebbles will not stop the avalanche; only changing the structure of the rocks, our brain's biologic functions, is totally effective. 
The symbolic pebbles (cognition, the I-function, conscious self, memory etc.) he describes may be visible and therefore more focused upon, but as seen with Prozac patients, they are ultimately forgotten in the bigger picture of biologic determination.

After seeing the dramatic difference Prozac makes, Dr. Kramer poses a critical question: "had medication somehow removed a false self and replaced it with a true one?" (19). His patients praised the effects of the drug saying "I feel like myself", and when they were weaned off of Prozac, they complained "I'm not myself again" (10). For people suffering from acute depression or anxiety, the medicated self ironically becomes the "true" self despite being in an altered state. One of his patients said jokingly that she had changed her name: "I call myself Ms. Prozac" (11). Here, self is not only dependent on biology but on medication. Her cognitive I-function has literally begun referring to itself as "Prozac" rather than her realized or given name. By naming herself Ms. Prozac, the patient demonstrates how this medication has the potential of becoming and defining a new self based entirely on the pill's neural actions while eliminating the old self as false and somehow untrue. Dr. Kramer later comments that Prozac differs from other mood-brightener medications because it has "characteristics" and a "personality" which affect the human brain (257). It is interesting to note that Prozac is given a "personality". It becomes personified because of its own, almost human, ability to create a sense of self in the patient. The irony is that the concept of one's true personality becomes vague because of the medicine's effectiveness in expressing its own "personality". The long-acting nature of Prozac may cause "a very steady cushion against decompensation [that] may translate into a new continuous sense of self" (182). Therefore, the true sense of self that Prozac provides may be an effective physiological personality-high which effectively mimics a continuous and realistic sense of self.

However, Dr. Kramer has reservations that Prozac carries its own set of prescribed alterations. Rather, he says that medication merely clears the way for patients to discover their true selves, which have been masked by traumatic events and psycho-environmental occurrences. He cites that "repeated stress sensitizes rats to produce higher levels of CRF" which "can lead to the changes in the cell's genetic material, the DNA and RNA". Outside environment can change biology, which in turn, changes the behavior and the self. Dr. Kramer parallels physical stress, like electric shock, with psychosocial stress, such as sexual abuse, because they both can "cause cell death in the nerves affected by the cortisol system" (117). He states that "medication is like a revolution overthrowing a totalitarian editor and allowing the news to emerge in perspective" (221). However, this begs the question: whose "news" is emerging after the evil editor of psychosocial stress is overthrown? Is the resulting change in self the result of the medication's design, society's expectations, or is it indeed the "true self"?

Dr. Kramer freely admits that "Prozac supports social stasis by allowing people to move toward a cultural ideal," thereby indicating that Prozac not only hints at future "cosmetic psychopharmacology" but in fact represents its arrival (271). Prozac "induces pleasure in part by freeing people to enjoy activities that are social and productive...enjoyed by other normal people in their ordinary social pursuits" (265). In people with minimal depression, Prozac serves as a social un-inhibitor or the equivalent of having a few drinks and becoming an ideal social mover. Merely because social forwardness is sanctioned by our society as "productive" does not license science to change people's biology so they derive pleasure in fulfilling the cultural ideal. We have the potential of creating a homogeneous society with modified and standardized behaviors despite a hereditary and experiential biology of difference. In what Michael McGuire defined as a "trait distribution phenomenon," a normal human temperament may be changed medicinally because the culture does not reward it, therefore leading to unhappiness, unfulfillment, and depression. One might argue, however, that if we can biologically cure people's minor depression caused by society's narrow-minded ideals and their own personal trauma, what is the harm? By medicating such people with Prozac and then diagnosing their problem, we may create a more and more narrow definition of "normal" and "well" by expanding what we think of as deviant. By over-prescribing Prozac, we are eliminating more types of biologic temperaments by labeling them as "illnesses". Society may not accommodate biology, but should we limit biology to fit society?

Listening to Prozac proves difficult because it tells us so much of what we do not want to hear. It challenges ingrained ideas about distinctions between biology, cognition, medications, and our sense of self. What we value, conscious human control, turns out to be secondary to that which supposedly demeans us: our biologic animalistic determination. Yet Dr. Kramer frames this realization as beneficial; by accepting this, we are accepting our greater innate complexity rather than simplicity. However, it is important to recognize the dangers of falling under Prozac's spell. Happiness void of repercussions is no more contained within a Prozac capsule than within any other drug. Society's influence on the brain, behavior, and the medication of "illnesses" should not be ignored. Prozac may call to us as the savior of those with minor depression, but its overuse can drown difference in a sea of medicated sameness.

References

1)Kramer, Peter M.D. Listening to Prozac. New York: Viking, 1993.


Cocaine and the Nervous System
Name: Elizabeth
Date: 2003-02-24 20:30:05
Link to this Comment: 4795


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

All drugs have a negative effect on the nervous system, but few can match the
dramatic impact of cocaine. Cocaine is one of the most potent, addictive, and
unpredictable recreational drugs, and thus can cause the most profound and irreversible damage to the nervous system. The high risk associated with cocaine remains the same regardless of whether the drug is snorted, smoked, or injected into the user's bloodstream. In addition to the intense damage cocaine can cause to the liver, intestines, heart, and lungs, even casual use of the drug will impair the brain and cause serious damage to the central nervous system. Although cocaine use affects many components of the body, including vision and appetite, the most significant damage caused by cocaine takes place in the brain and central nervous system.

Spanish explorers first observed South American natives chewing the coca leaf, from which cocaine is derived, when they arrived on the continent in the 16th century. The South Americans chewed coca leaves in order to stay awake for longer periods of time. Centuries after this initial discovery, Albert Niemann isolated cocaine from the coca leaf in 1860, and the extraction was used as an anesthetic. Over the ensuing years, cocaine use became increasingly common and was even sanctioned by doctors, who prescribed the drug to aid recovering alcoholics. Cocaine was even a key ingredient in such popular beverages as Coca-Cola. It was not until the long-term health problems associated with cocaine use emerged that the public realized that the drug was harmful and highly addictive (2).

Cocaine is a versatile drug which can be ingested in a variety of ways. In its purest form, cocaine is a white powder extracted directly from the leaves of the coca plant. However, in the modern drug market, pure cocaine is often cut with a variety of substances in order to make it more profitable for drug dealers (5). The most common way to ingest powdered cocaine is to inhale the drug through the nasal passage, where it is absorbed into the bloodstream by way of the nasal tissues. Cocaine can also be injected directly into a vein with a syringe. Finally, cocaine smoke can be inhaled into the lungs, where it flows into the bloodstream as quickly as when injected into a vein. In 1985, crack cocaine was invented, which is the optimal form of cocaine for smoking (2). While most cocaine is created through a complex process requiring ether and other unstable and expensive substances, crack cocaine is processed with ammonia or baking soda. Crack cocaine has gained popularity because it is cheaper and provides a more potent immediate high than snorting cocaine (6). However, those who smoke cocaine run a higher risk of becoming addicted to the drug, as more cocaine is absorbed into the bloodstream through this method of ingestion (1).

Cocaine produces its pleasurable high by interfering with the brain's "pleasure centers," where such chemicals as dopamine are produced. The drug traps an excess amount of dopamine in the brain, causing an elevated sense of well-being. Cocaine acts as a stimulant to the body: it causes blood vessels to constrict, increases the body's temperature, heart rate, blood pressure, and breathing rate, and causes the pupils to dilate (4). Cocaine produces such pleasurable effects as reduced fatigue, increased mental clarity, and a rush of energy. However, the more one takes cocaine, the less one feels its pleasurable effects, which causes the addict to take higher and higher doses in an attempt to recapture the intensity of that initial high (1). In any case, a cocaine high does not last very long. The average high from snorting cocaine lasts only 15-30 minutes; these highs are less intense, as it takes longer for the drug to be absorbed into the bloodstream when snorted. A smoking high, although more intense due to the rapidity with which the drug is absorbed into the bloodstream, lasts for an even shorter period of only about five to ten minutes (5). After the euphoric high comes the crashing low, in which the addict craves more of the drug and in larger doses (2).

Cocaine can cause serious long-term effects to the central nervous system, including an increased chance of heart attack, stroke, and convulsions, combined with a higher likelihood of brain seizures, respiratory failure, and, ultimately, death (2). An overdose of cocaine raises blood pressure to unsafe heights, often resulting in permanent brain damage or even death. Coming down off of cocaine is highly unpleasant, as the user may feel nauseous, irritable, and paranoid. Also, in some cases, sudden death may occur, although it is impossible to predict who could be killed suddenly by cocaine ingestion. Crack cocaine in particular heightens paranoia in its users, who have more difficulty quitting the drug than other cocaine users (6).

Many studies have analyzed the impact of cocaine on the brain itself. By inhibiting the brain's reuptake of dopamine and other neurochemicals, cocaine can cause serious and often irreversible damage to neurons within the brain. In autopsies, cocaine users had a reduced number of dopamine neurons (7). When flooded with the excess of dopamine created during a cocaine high, the brain reacts by making less dopamine, getting rid of the excess, and shutting down the dopamine neurotransmitters, sometimes permanently. In turn, many cocaine users feel depressed once they go off of the drug, which makes cocaine highly addictive. Many addicts report that they crave the drug more than food, and laboratory animals will endure starvation and electroshocks if they can still have the drug (3).

Cocaine is one of the most dangerous drugs for the central nervous system. As a powerful stimulant, cocaine increases the likelihood of many fatal nervous system malfunctions, including stroke. However, the initial high keeps addicts looking for more, as this highly addictive drug can be difficult to quit. Also, as the neurotransmitters shut down and disappear, the user needs cocaine to create an artificial high. Cocaine can cause serious damage to the nervous system, destroying brain tissue and elevating blood pressure, heart rate, and body temperature, often for the rest of the addict's life.

References

1)Drug information: Cocaine

2)Cocaine

3)The Effects of Cocaine on the Developing Nervous System

4)The Physical Effects of Cocaine

5)As a Matter of Fact

6)Crack and Cocaine

7)Cocaine Brain Damage may be Permanent


Avian Song Control
Name: Nicole Jac
Date: 2003-02-24 21:23:25
Link to this Comment: 4796


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Bird songs continue to fascinate neurobiologists and neuroethologists because the development of song has been a popular model used to examine the role of environment on behavior. In most species, only male birds sing complex songs. Their vocalizations are the result of sexual dimorphism in the brain regions responsible for the production of song. However, this behavior is not genetically hardwired into the avian brain. Certain conditions must exist in order for male birds to successfully produce their species-specific song. Additionally, the neuronal circuitry and structure of the avian song system shows high levels of plasticity.

If the brain and behavior are indistinguishable, then the structural differences in the avian brain are responsible for behavioral differences across the sexes. Nottebohm and colleagues identified six anatomically distinct regions of the forebrain involved in the production of song, which are arranged into two independent pathways, the posterior pathway, which controls song production, and the anterior pathway, which controls song learning. The collective unit is typically referred to as the vocal control region (VCR) (1) (2).

Female birds sing rarely and this behavioral difference is reflective of the anatomy of the female avian brain. There are significant differences in the size of three neural areas involved in the production of song across the sexes, and a specific area, Area X, is present in the male and absent in the female. Additionally, the incorporation of radiolabeled testosterone in certain locations is different in males and females (3) (4).

Scientists have been particularly interested in the origin of the structural differences in male and female songbirds. Research has suggested the importance of gonadal hormones, specifically testosterone, in the production of song. It was observed that castration eliminated all song production (5). Additionally, when testosterone levels are low, there is not only a decrease in the production of song, but also a decrease in the size of some nuclei involved in song production (6). Further support for the necessity of testosterone for song production was demonstrated by Nottebohm (1980), who injected female birds with testosterone, leading to the production of song (7). This research has interesting implications regarding anatomical changes that may occur when an organism is chemically imbalanced. Disruptions in chemical equilibrium may alter brain structure and subsequently influence behavior.

Nevertheless, not all research has supported the claim that testosterone is responsible for anatomical and behavioral differences between male and female songbirds. Contradictory results have been obtained in different species of birds, and no one has successfully induced feminine bird song behavior in male birds with the introduction of hormones. The role of hormones remains unclear at this time; however, the structures and circuitry involved in song production exhibit neuronal plasticity in many species, and there is a relationship between gonadal hormones and sex-specific behavior.

Assuming functional circuitry, bird vocalizations come in many varieties, and range from several seconds to several minutes (8). The complexity of the avian vocal repertoire has allowed scientists to study the development of this behavior. Bird songs are often species specific, but there are also differences in the vocalizations within a species. If bird vocalizations were merely the result of genetics there would be little variation in the complexity or variety of songs. The individuality present across males of the species raised questions about the origins of the songs and factors responsible for creativity and individuality in song.

Thorpe and colleagues (1961) removed young birds from the nest immediately after hatching and raised them in captivity. When the males began to vocalize, their songs were significantly degraded compared to those of males in the wild. The songs were similar to those produced by wild males; however, they were simpler in form, and there was little variation within or between individual birds (9). This experiment demonstrated that the variation in song observed in wild birds was not genetically programmed, but rather something that developed when an animal was in the wild and could hear the songs of other males of the species.

Marler and Tamura (1964) further examined the development of bird songs by studying white-crowned sparrows. These sparrows have species-specific songs; however, the songs differ slightly depending on geographical niche. In essence, the sparrows have dialects that are regionally specific. Marler and Tamura removed young birds from the nest after hatching and found that all birds, regardless of their geographical location, developed simplified versions of their regional song. The birds were unable to learn their region-specific songs because they were not able to hear males from their region sing them. It was determined that the critical period for acquiring song occurs during the first 3 months of life, which is before a bird can produce song. In actuality, a template of the song forms in as little as 20 days, song acquisition is often completed in approximately 35 days, and after 60 days of rehearsal the song becomes consistent. Even males reared in isolation can be trained to sing by playing a recording of their song, as long as they are under 3 months of age. After 4 months of age, birds become unreceptive to training. This shows that there is a simple song that is processed at an early age and subject to modification during the earliest months of life. The young birds, once exposed to the song of adult males, store it until they begin to sing (10).

Further examination by Konishi (1965) demonstrated that a young bird that is deafened immediately after hatching will sing, but is only capable of forming disjointed notes. The song of a deafened bird is even more severely degraded than that of a bird raised in isolation and would not be identified as the song of a particular species (11). This demonstrates that not only is auditory exposure necessary for the successful production of song, but auditory feedback is also necessary.

Song production is a complex behavior that suggests that the interaction of genetics and the environment is necessary for birds to successfully vocalize. Further study is necessary to fully understand the factors that are responsible for sexual differentiation of behavior and the neuroplasticity and sexual dimorphism seen in the avian brain. This model of animal behavior may contribute to understanding the differences between male and female brains in humans, and demonstrate how anatomical differences may be the cause of behavior differences across the sexes.

References

1) Song circuit diagram, a basic diagram of the song control circuit in the avian brain.

2) Songbird brain circuitry , created by Heather Williams at Williams College, this is a more detailed figure of brain circuitry.

3) Sex and the Central Nervous System , the web site to accompany a developmental biology text by Scott F. Gilbert.

4) Arnold, A.P. (1980). Sexual differences in the brain. American Scientist. 68. 165-173.

5) Thorpe, W.H. (1958). The Learning of Song Patterns by Birds, with Special Reference to the Song of the Chaffinch. Ibis, 100. 535-70.

6) Nottebohm, F. (1981). A brain for all seasons: Cyclical anatomical changes in song control nuclei of the canary brain. Science. 214. 1368-1370.

7) Nottebohm, F. (1980). Testosterone triggers growth of brain vocal control nuclei in adult female canaries. Brain Research. 189. 429-436.

8) Zebra finch song , created by Heather Williams at Williams College , includes audio clips of zebra finch songs.

9) Thorpe, W.H. (1961). Bird song. New York: Cambridge University Press.

10) Marler, P. & Tamura, M. (1964). Culturally transmitted patterns of vocal behavior in sparrows. Science, 146. 1483-1486.

11) Konishi, M. (1965). The role of auditory feedback on the control of vocalization in the white-crowned sparrow. Zeitschrift fur Tierpsychologie. 22. 770-783.

12) Hinde, R.A. (1969). New York: Cambridge University Press.

13) An Introduction to Birdsong and the Avian Song System. ,(HTML format) General information about birdsong and the avian song system from a special issue of the Journal of Neurobiology.


Terrorists: How different are they?
Name: Stephanie
Date: 2003-02-24 23:29:34
Link to this Comment: 4800


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Ever since September 11th, terrorism has been on virtually all of our minds. And now, some eighteen months later, as the nation perches on the brink of war with Iraq, our fears remain. The frustration that most people experience in the aftermath of extreme violence is largely the result of the question why. Why would anyone want to commit so heinous a crime? How could they live with themselves? Terrorism is a widely researched topic, but it seems to be particularly salient now, as it hits closer to home. Are terrorists different than the rest of us? Are they different than serial killers? If brain equals behavior, then yes, they are. But perhaps that equation is only true in some cases.

Because the acts that terrorists execute are so disturbing, many people think they must be crazy - that there must be something fundamentally wrong with them or with their brains. There is an ongoing debate on this matter - especially since different research shows variations in the extent to which terrorists are perceived as "crazy." Clark McCauley, Professor of Psychology at Bryn Mawr maintains that terrorists are not crazy. In fact, they are quite normal and their psychology is normal. According to Professor McCauley, research has found "psychopathology and personality disorder no more likely among terrorists than among non-terrorists from the same background" (1). For most, this is an unfavorable result, for not only does it mean that anyone is capable of committing acts of terror, but it also means that there is little distinction between "us" and "them" - in fact, the "us and them" distinction may not really exist, at least not on a biological or psychological level (if we are truly essentially similar, with the most obvious difference being distinctly behavioral).

It is the discovery that terrorists and non-terrorists are not as different as originally thought that is so unnerving to non-terrorists (the "us"). It is difficult for most people to accept that they could, that anyone could, hypothetically, commit mass genocide or use commercial jets as missiles. Professor McCauley admits, "terrorism would be a trivial problem if only those with some kind of psychopathology could be terrorists. Rather we have to face the fact that normal people can be terrorists, that we ourselves are capable of terrorist acts under some circumstances" (1). Of course it is upsetting to think that normal people ("normal" being defined as people who refrain from terrorist behaviors or other violent and socially unacceptable acts) could become terrorists, but this does not imply that since we are capable of terrorist behavior we will go ahead with it. If terrorists are biologically and psychologically similar to non-terrorists, then why do they kill?

The path that future terrorists follow is a gradual one, for it is almost impossible for someone unaccustomed to killing to suddenly be able to do so. The ability to kill must be nurtured over time, usually through the group dynamics of terror networks: "The terrorist has a fixation on systemic value. This means that they emotionally crave membership in the organization, group, or order to which they belong" (2). Terror groups are like families to terrorists, each with their role, and each providing support for their fellow terrorists. Terrorists live for the survival of their group and for the group's ideas and rules. The meaning of terrorists' lives "comes from being a dutiful soldier and member of the group" (2). In order for this type of blind loyalty to occur, subjects must lack any personality, individuality, or sense of self - this is the mechanism that allows terrorists to kill, for without any solid conviction of who they themselves are, they cannot understand the value of individual or collective human life: "a terrorist does not see the infinite, unique, singular value of people and therefore does not see what is wrong with killing or maiming another person" (2). In this way, terrorists are able to almost alienate themselves from social norms and completely immerse themselves in the ideology of the group.

Terrorists' motivation to kill stems from a combination of factors. They may kill for political or economic reasons, but they may also kill for the sense of power it generates. Terrorists, according to Stephen J. Morgan, "crave the ultimate power, that of power over life and death of innocent people. They believe themselves to be omnipotent, messengers and agents of God, without feeling guilt or shame for anything they do" (3). This seems logical, but if terrorists see themselves as being divine or immortal, then they should have no reason to believe that other nations would be able to affect, manipulate, or disturb them - unless, of course, they kill because they see the other nations and civilizations that populate the world as threats.

Although many terrorists may not be significantly different from non-terrorists, research shows that some traits are more common in terrorists than in non-terrorists. These shared traits include low self-esteem and predilection for risk-taking. Research attributes low self-esteem to terrorist mentality because terrorists "tend to place unrealistically high demands on themselves and, when confronted with failure, [tend] to raise rather than lower their aspirations" (4). A penchant for taking risks and engaging in fast-paced activities are other qualities generally ascribed to terrorists because they tend to be "stimulus hunters who are attracted to situations involving stress and who quickly become bored with inactivity" (4). Of course, these characteristics should not be thought of as warning signs - they are merely the result of research that tries to explain aspects of terrorist behavior.

There is a general debate over the nature of terrorist psychology and behavior - some researchers regard terrorists as "neurotics" or "psychopaths", while others emphasize the importance of examining "the social, cultural, political, and economic environment in which they operate" (3), (4). Akin to this is Khachig Tololyan's argument that, as a result of terrorists' humanness, "their behavior cannot be understood by the crude - or even by the careful - application of pseudo-scientific laws of general behavior" (5). Rather, he claims, "we need to examine the specific mediating factors that lead some societies under pressure, among many, to produce the kinds of violent acts that we call terrorism" (5). This seems a logical argument, for environment can certainly influence one's behavior. Although the acts of terrorists are hardly forgivable, they do not necessarily immediately qualify terrorists as clinically insane, for there may be some external forces or circumstances (political, social, and/or economic constructs) that provoke terrorists to respond violently.

Interestingly, there seems to be a discrepancy between the psychology of terrorists and that of other criminals, like serial killers. Perhaps this is obvious, but is the fact that serial killers tend to stalk specific (though usually previously unknown) individuals and kill them one by one, whereas terrorists focus on governments and nations, with an end goal of having "a lot of people watching, not a lot of people dead," related to their psychological dispositions (6), (7)? Studies show that serial killers tend to share similar experiences such as "childhood abuse, genetics, chemical imbalances, brain injuries, exposure to traumatic events, and perceived social injustices" (6). However, as the same article explains, "a huge population has been exposed to one or more of these traumas. Is there some sort of lethal concoction that sets serial killers apart from the rest of the population?" (6) At this point, it is not clear, but it is clear that there is something, whether a biologically dysfunctional brain or psychological delusions, that makes serial killers act the way they do - but what about terrorists?

It is difficult to compare terrorists to other criminals because so few terrorists have been caught and those who have been caught are rarely willing to talk about their experiences. Therefore, it is hard to know whether terrorist motivations stem from abuse, trauma, or biological problems. However, although terrorists and serial killers are both thought of as "rational," meaning that their mental states are stable enough to allow them to organize, plan, and execute their attacks, they are certainly not sane - at least not by Western standards. In their own minds, and to their groups, they are sane - and perhaps in the future we will be able to more fully understand their rationale.

Until the nation is able to more efficiently curb terrorism, the government has developed some precautions designed to discourage terrorists. Recently, a new system known as "brain fingerprinting" has been developed for installation at airport security checkpoints. Suspects seized for interrogation may be screened this way: "A subject's head is strapped with electrodes that pick up electrical activity. He sits in front of a computer monitor as words and images flash on the screen. When he recognizes the visual stimuli, a waveform called the P300 reacts and the signal is fed into a computer where it is analyzed using a proprietary algorithm" (8). In cases involving suspected terrorists, the images used would be those "only known to a terrorist group, such as the word 'al-Qaida' written in Arabic or the instrument panel of a 757" (8). This is an interesting development; however, it is important to note that just because a suspect shows brain activity when viewing certain images does not immediately qualify him or her as a terrorist. "Brain waves can't hand down a guilty sentence," says one researcher; another concurs: "It's [the technique of brain fingerprinting] like saying you can measure brain activity from someone's scalp and read their mind. You can see differential electrical activity, but you can't read the electrical activity as if it were words. You can say it's different, but you can't interpret it" (8). Interpretation is a loaded term - even in psychological research, we interpret the behaviors of others. In doing so, we do not always arrive at a clear conclusion. Thus, our relation to terrorists may not be as obvious as we think, or would like to think - they may be fundamentally similar to us, but are just conditioned to perform different behaviors.
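The screening procedure described above - event-locked EEG, a P300 deflection for recognized stimuli, an analysis algorithm - can be sketched in miniature. The actual algorithm is proprietary and not described in the source, so the following is a hypothetical toy example with simulated data and made-up amplitudes, illustrating only the standard textbook approach: average stimulus-locked epochs across trials and measure the positive deflection in the roughly 250-500 ms window.

```python
import numpy as np

def p300_score(epochs, fs):
    """Average stimulus-locked EEG epochs (trials x samples) and
    return the peak amplitude in the ~250-500 ms P300 window."""
    erp = epochs.mean(axis=0)            # average across trials to suppress noise
    t = np.arange(erp.size) / fs         # time axis in seconds
    window = (t >= 0.25) & (t <= 0.50)   # P300 latency window
    return erp[window].max()

# Toy data: 40 one-second trials sampled at 250 Hz. "Recognized"
# stimuli carry a positive deflection near 300 ms; "unrecognized"
# stimuli are noise only. All values here are invented for illustration.
rng = np.random.default_rng(0)
fs, n = 250, 250
t = np.arange(n) / fs
p300_bump = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
recognized = rng.normal(0.0, 1.0, (40, n)) + p300_bump
unrecognized = rng.normal(0.0, 1.0, (40, n))

print(p300_score(recognized, fs) > p300_score(unrecognized, fs))
```

Note that the sketch supports the researchers' caveat quoted above: averaging only yields a score that is *different* between conditions; nothing in the signal can be read as words or as guilt, and a real system would still need a calibrated threshold and artifact controls.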


References

1) The Psychology of Terrorism , from Social Science Research Council website.

2) How a Terrorist Thinks , from Clear Direction, Inc. website.

3) The Mind of the Terrorist Fundamentalist , site is kind of scary looking.

4) Long, David E. The Anatomy of Terrorism. New York: The Free Press, 1990.

5) Rapoport, David C. Inside Terrorist Organizations. London: Frank Cass Publishers, 2001.

6) What Makes Serial Killers Tick? , from Court TV's Crime Library website.

7) Freedman, Lawrence. Superterrorism Policy Responses. Massachusetts: Blackwell Publishing, 2002.

8) Thought Police Peek Into Brains , from Wired News website.


PANDAS: A link between strep throat and OCD
Name: Cordelia S
Date: 2003-02-24 23:50:20
Link to this Comment: 4803


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Can an ordinary streptococcal infection (strep throat) lead to obsessive-compulsive disorder (OCD)? In a small subgroup of children, a seemingly normal bacterial strep infection can turn into a severe neuropsychiatric disorder. The disorder affecting this group is known as PANDAS (Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal infections), and was identified by Dr. Susan Swedo just twelve years ago (1). Though research on PANDAS is still very much a work in progress, it has already generated excitement that this disorder may lead to answers about the cause and nature of OCD (2). Similarities and differences between PANDAS patients and the majority of OCD patients, experimental treatments for PANDAS infections, and comorbidity of PANDAS with a variety of other psychiatric and neurological disorders are slowly leading to an understanding of exactly what OCD does to the brain (3).

It is not the streptococci themselves that cause OCD symptoms. Rather, strep infections seem to cause the body's immune system to build up antibodies that, for an unknown reason, begin to attack the basal ganglia in rare cases (1). The link between streptococcal infections and neurological disorders has been known for half a century. Rheumatic fever was identified in the 1950s as an autoimmune disorder correlated with strep; Sydenham chorea, a disorder of the central nervous system involving hyperactivity, loss of motor control, and occasionally psychosis, was recognized as another strep-linked disorder that could be a symptom of rheumatic fever or could stand on its own. PANDAS seems to be a milder form of Sydenham chorea (4).

Dr. Swedo observed, tested, and interviewed fifty children with a sudden onset of OCD or tic disorders who had recently (within the past several months) been diagnosed with a group A beta-hemolytic streptococcal (GABHS) infection. These children tested negative for Sydenham chorea. Swedo discovered that the children had episodic patterns of OCD and tic symptoms. She tested the presence of antistreptococcal antibodies in their blood and found that symptom exacerbations were twice as likely to occur with the presence of antistreptococcal antibodies (1). Brain imaging studies found that the caudate nucleus, frequently linked with OCD, became inflamed in PANDAS patients when antibody presence was high (2).

OCD symptoms are generally very similar between children with PANDAS and other OCD patients (5). However, the onset of symptoms can be quite different. While OCD is usually first identified in adolescence, PANDAS patients are always prepubescent. This is likely to be because of the rarity of GABHS infections in teens and adults. Also, though OCD usually manifests itself gradually, in PANDAS patients it can set in overnight. Swedo and colleagues report frequently seeing children whose parents could recall the day their child became obsessive-compulsive (2). Though it is not known why, PANDAS patients overwhelmingly obsess about urination, which is not an especially dominant obsession in other OCD cases (5). The episodic pattern of symptoms is unique to PANDAS patients. While other OCD patients can go through periods where symptoms are slightly more or less exacerbated, PANDAS patients often experience complete disappearance of symptoms between episodes (1). It is unknown whether a genetic marker on B cells of the immune system known as D8/17 is specific to PANDAS patients, or common in all OCD patients (6). The structure and function of this marker is currently being identified, and may provide some clues about the heredity of PANDAS or OCD in general (2).

Thus far, studies in which penicillin was given to PANDAS-affected children as a preventative measure against strep and OCD have been inconclusive (3). However, many PANDAS patients have shown significant reduction of OCD symptoms when given plasmapheresis, a type of plasma exchange, to remove the antibodies (2). Current studies are further investigating prophylactic antibiotics, plasma exchange, and steroids as possible treatments to go along with SSRIs in treating both PANDAS and ordinary OCD.



As in most cases of OCD, other neuropsychiatric disorders are often present in PANDAS patients. Swedo and colleagues found that 40% of PANDAS patients suffered from ADHD, 42% from affective disorders, and 32% from anxiety disorders (1). There are several points of interest in discussing the comorbidity of these illnesses with PANDAS. It was found that non-OCD psychiatric symptoms in most cases followed the same cycles as OCD symptoms, and set in suddenly when antibody levels were high (1). This brings up the question of whether any additional psychiatric disorders can be triggered by strep throat or other bacterial infections. Though there is no evidence to date linking post-strep autoimmune dysfunction with any illnesses other than tic disorders, OCD, and possibly late-onset ADHD, researchers are looking into possible ties with disorders like autism, anorexia, and depression (2). The comorbidity statistics also suggest that particular areas of the brain which we know are involved in other psychiatric disorders are attacked by the post-strep antibodies, and could help lead to identifying the exact cells or proteins that are targeted. Interestingly, the putamen and globus pallidus, neighbors of the caudate nucleus, are linked to tic disorders and hyperactivity (2). This could explain the frequency of occurrence of these symptoms alongside OCD in PANDAS.

The frequency of PANDAS in the general population is unknown, but it is definitely a rare disorder. By contrast, OCD is present in one to two percent of the population (7). This may make PANDAS research appear useless in relation to research on "normal" OCD. On the contrary, the small size of the subgroup of PANDAS sufferers and the link to a disease as widely studied as strep throat could provide the key to discovering the cause of OCD and identifying exactly what genes and brain structures are involved (2). For example, if the nature of the antibody attack on the basal ganglia in PANDAS were identified, researchers could possibly target similar degradation in the basal ganglia of other OCD patients and potentially begin to look at ways to prevent this degradation. Also, research and public knowledge about PANDAS might make more people aware of the medical aspects and biological causes of mental illnesses. Perhaps this would lessen societal discrimination against the mentally ill and lead more people to understand why pharmaceuticals are often helpful or necessary in treating mental illnesses (7).

There is strong evidence of a link between streptococcal infections and obsessive-compulsive disorder in some children. Though it is not known exactly how the immune system turns against itself and causes behavioral symptoms, there is hope within the scientific community that answering questions about PANDAS will in turn lead to answers about OCD and mental illness in general. This disorder provides evidence for medical models of psychiatric illnesses, and for the idea that the brain = behavior. It is amazing and frightening that an illness that seems like a mere nuisance can lead to a severe behavioral change almost overnight. However, research and possible treatments appear promising, and this rare disorder may contribute more to the body of neuropsychiatric knowledge than any illness studied before it.


References

1) American Journal of Psychiatry Website, first Susan Swedo article about PANDAS, defining its symptoms and diagnostic criteria

2) The Scientist Website, Harvey Black article discussing research and several points of view on PANDAS

3) Science Direct Website, pilot study on use of prophylactic penicillin in treating PANDAS

4) Medscape Website, register for Medscape, then go to Richard Barthel article "Pandas in Children - Current Approaches", overview of knowledge on PANDAS

5) JAMA Website, Joan Stephenson article discussing antibiotic treatment

6) Psychiatric News Website, article discussing biological marker associated with OCD

7) University of Florida News, current research being done on PANDAS and OCD


The Neurophysiology of Sleep and Dreams
Name: Alexandra
Date: 2003-02-25 00:17:42
Link to this Comment: 4804


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The ancient Babylonians thought dreams were messages from supernatural beings: good dreams came from gods, and bad dreams came from demons. (1) Since then, people have sought many different explanations for the occurrence and importance of dreams. Before trying to understand the function or significance of sleep and dreams, it is important to look at when, what, where, and how dreaming and sleeping occur.

Adult humans sleep, or should sleep, for about eight hours a day, though the amount of sleep a person needs changes over the lifespan; newborns spend about twice as long sleeping. (2) Circadian rhythms [the term comes from the Latin "circa diem," meaning "about a day" (3)] determine when people fall asleep. The circadian rhythm, actually about twenty-five hours long, is reset by light. (4) In 1953, scientists discovered REM (rapid-eye-movement) sleep. The physiological state of REM alternates with NREM (non-rapid-eye-movement) sleep about every ninety minutes (5), and each REM episode lasts for about 20-30 minutes (6).
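
The timing just described can be sketched in a few lines of code. This is only an illustration of the rough averages quoted above (ninety-minute cycles, REM episodes of about 25 minutes, eight hours of sleep); real sleep architecture varies across the night, with REM episodes lengthening toward morning.

```python
# Toy model of the sleep-cycle timing described above.
# All numbers are the rough averages from the text, not clinical values.

def rem_windows(total_sleep_min=480, cycle_min=90, rem_min=25):
    """Return (start, end) minutes of each approximate REM episode
    during total_sleep_min minutes of sleep, assuming each 90-minute
    cycle ends with a REM episode of rem_min minutes."""
    windows = []
    t = cycle_min
    while t <= total_sleep_min:
        windows.append((t - rem_min, t))
        t += cycle_min
    return windows

print(rem_windows())
# five REM episodes across an eight-hour night
```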

Differences between NREM and REM can be measured using an electroencephalogram (EEG), electrooculogram (EOG), and electromyogram (EMG), which measure brain waves, eye movement, and muscle tone, respectively. REM is characterized by high-frequency, low-amplitude, more irregular waves on the EEG, rapid, coordinated movement on the EOG, and a weak EMG signal. During this type of sleep, brain activation heightens, breathing and heart rates increase, and body movement is paralyzed. Because the person is highly aroused, as in waking, but also deeply asleep, REM sleep is also called paradoxical sleep (6).

Although dreams and REM are not synonymous, most dreaming occurs during REM sleep. Dreaming during REM sleep can be seen as a "state of delirium possessing qualities of hallucination, disorientation, confabulation, and amnesia." (7) While as many as 70-95% of people awakened during REM remember their dreams, only 5-10% of those awakened during NREM report dreams (5).

The release of different neurotransmitters in different areas seems to determine which type of sleep is activated. At the onset of sleep, serotonin is secreted and seems to trigger NREM. (4) NREM switches to REM sleep with the release of acetylcholine in the pons, located at the base of the brain; the later re-release of noradrenaline and serotonin seems to switch off REM (5) (7) and reactivate NREM sleep. This action of neurotransmitters as triggers of NREM and REM sleep is referred to as the reciprocal interaction/activation-synthesis model. (5) With the release of acetylcholine, signals from the pons are sent to the thalamus, which relays them to the cortex, and are also sent to shut off neurons in the spinal cord, causing the temporary paralysis of the body.

Although the pons is responsible for REM sleep, dreams originate in areas in the frontal lobe and at the back of the brain. In twenty-six cases in the neurological literature of damage to the pons, REM sleep was lost in all of them, but loss of dreaming was reported in only one. (5) Conversely, while damage to frontal areas of the cortex makes dreaming impossible, the REM cycle of an individual with such damage remains unaffected. (5)

Because most dreaming does occur during REM sleep, however, the REM state can be thought of as one of the triggers for dreaming. (5) The paralysis of the muscles by the pons is an important precondition for dreaming, since without it dreamers might act out their dreams (the effects of this problem can be seen in individuals suffering from REM Behavior Disorder). Thus, without muscle atonia, dreaming could be a very dangerous activity!

Understanding the nature of the areas of the cortex that are responsible for dreaming may help explain the reasons behind the content of dreams. The area of the frontal lobes responsible for dreaming contains the "large fiber pathway, which transmits the neurotransmitter dopamine from the middle part of the brain to the higher parts of the brain." (5) The use of dopamine stimulants, such as L-dopa, massively increases the intensity and frequency of dreams. (5) Since the frontal and limbic areas of the brain are concerned with arousal, emotion, memory, and motivation (5), dreaming might be tied to these aspects of behavior based on its location in the brain. The gray cortex at the back of the brain, also responsible for dreaming, is connected to abstract thinking and is where the highest level of perceptual information is processed. (5) The activation of this location seems to be related to the great importance of visual perception in dreams.

Although the functions of REM sleep and dreams are not fully agreed upon or understood, the evidence for the necessity of sleep is incontrovertible: rats deprived of sleep die after a few days. REM sleep and dreaming also seem to be quite important, since an individual deprived of REM sleep will respond with REM rebound, meaning that the amount of time spent in the REM state increases following REM deprivation. (6) Attempts at entering REM also increase, and individuals show more anxiety and irritability and have trouble concentrating during the day. (8) Animals deprived of REM likewise show an increase in attempts at entering REM and display other behavioral disturbances, such as increased aggression. (8) Thus, while in a future web paper I intend to investigate the functions of NREM and REM sleep further, a knowledge of the processes at hand seemed necessary before such a discussion could occur.

References

1)The Association for the Study of Dreams
2)Harvard Undergraduate Society for Neuroscience
3)Silent Partners Organization
4)Talk About Sleep
5)The American Psychoanalytic Association
6)California State University student essay
7)"Neurophysiology of dreams"
8)Slides on Physiology of Sleep and Dreaming


Neurobiofeedback: The Latest Treatment for Migrai
Name: Clare Smig
Date: 2003-02-25 00:38:53
Link to this Comment: 4805


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Headaches are among the most common health complaints today. According to the National Headache Foundation in Chicago, 45 million Americans suffer from recurring headaches, 16 to 18 million of whom have migraines (1). Migraines are vascular headaches because they involve the swelling of the brain's blood vessels (2). Migraine, contrary to popular belief, is a disease. If you suffer from migraines, you may be used to people comparing your migraine to an ordinary headache or blaming these "headaches" on you and your lifestyle. However, migraines are caused by the expansion of blood vessels, whereas regular headaches are caused by the constriction of blood vessels. Although certain things such as harsh lighting, movement, or chocolate may trigger a migraine, the actual cause of this vessel swelling is unknown and may vary from person to person. Currently, there is no cure for migraine (3).

One theory as to the cause of migraines lies in excitement of the nervous system caused by stress, anxiety, or some unknown factor (4). A more recent form of treatment, known as neurobiofeedback, works by allowing patients to train their brains to function in a more relaxed mental state. The success of this treatment may indicate that increased neuron activity is one of the more common causes of migraines. Neurobiofeedback has been identified as successful for migraines precipitated by PMS, food allergies, or stress. It is not clear exactly how food allergies are related to increased nerve activity. Stress, however, regardless of its type, seems to be strongly correlated with migraines and with the severity of the headache. Neurobiofeedback goes to the root of this problem and, as a result, is one of the preferred methods of treatment (5).

Biofeedback, in general, is a technique in which the body's responses to specific stimuli are measured in order to give patients knowledge about how they physically react to various events. In the case of headaches, patients can condition their mind or body to react differently to pre-headache symptoms and prevent a headache from occurring (1). Neurobio or electroencephalogram (EEG) feedback, specifically, measures brain wave activity and feeds back to a patient their own brain wave patterns so that they can modify these patterns through game-like computer simulations (6).

Why does this work? Brain waves are recordings of electrical changes in the brain. They originate in the cerebral cortex and are detected in the extracellular fluid of the brain as changes in potential among large groups of neurons. These changes are directly related to the degree of neuron activity and are classified into different wave types. Alpha and theta waves occur during more relaxed, meditative states; beta waves dominate during more active, stressful mental states (7). Previous studies have shown that migraines may result when neurons in the brain's cortex become overexcited (8), which would suggest the domination of beta waves. Therefore, neurobiofeedback enables migraine sufferers to counteract these waves and teach their brains to function at more relaxed wavelengths. According to one neurobiofeedback specialist, each person has his or her own psychophysiological stress profile showing how much of the sympathetic and how much of the parasympathetic nervous system that person uses. Some people, even when they are idle or doing next to nothing, have their sympathetic system activated all of the time. This is the active, energy-expending branch of the nervous system, as opposed to the parasympathetic, the more relaxed, recovering, energy-storing branch (6). It is therefore possible that migraine sufferers have their sympathetic nervous system "on" more than others, and that this leads to the increase in beta waves observed; this would require further research. Furthermore, does nerve excitement lead to the swelling of the blood vessels seen in migraine sufferers? Continued research would be needed to show the correlation between these two aspects of migraines. Finally, there is still some question as to whether the things that trigger migraines (bright lights, harsh sounds, lack of sleep) increase neuron activity.
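
The band-power idea behind EEG feedback can be made concrete with a short sketch. This is a minimal illustration, not a clinical tool: the band edges (theta 4-8 Hz, alpha 8-12 Hz, beta 13-30 Hz) are the commonly quoted approximations, and the two test signals are synthetic sine waves standing in for "relaxed" and "stressful" EEG.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` (sampled at fs Hz) in [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].mean()

fs = 256                                # samples per second
t = np.arange(fs * 4) / fs              # four seconds of synthetic "EEG"
relaxed = np.sin(2 * np.pi * 10 * t)    # dominant 10 Hz alpha rhythm
stressed = np.sin(2 * np.pi * 20 * t)   # dominant 20 Hz beta rhythm

for name, sig in [("relaxed", relaxed), ("stressed", stressed)]:
    alpha = band_power(sig, fs, 8, 12)
    beta = band_power(sig, fs, 13, 30)
    print(name, "alpha dominant:", alpha > beta)
```

A feedback system would compute something like this ratio in real time and reward the patient (via a game-like display) whenever alpha or theta power rises relative to beta.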

What is clear, however, is that neurobiofeedback has been successful in treating migraines for many people (9). Many sites and articles report success with this treatment, yet it is never 100%. Therefore, although the symptoms may be the same, something else involved in the disease must be causing the severe headaches in the remaining patients. It is possible that their migraines are explained by other proposed theories, including instability of the vascular system, magnesium deficiency, estrogen level fluctuations, and a blood platelet disorder (based on the notion that the platelets of migraine sufferers aggregate more readily than normal platelets in response to certain neurotransmitters). Hopefully, a continued understanding of this disease, and of the great variety of factors that seem to cause it, will help those who treat it provide further relief to those who suffer from it.

References

1) Howe, Maggy. "Headaches." Country Living. March 1996, 82-84.

2)Nervous System Disorders

3)Migraines: Myth and Reality.

4)Causes of Migraine

5)Michigan Institute for Neurobiofeedback.

6)Biofeedback/Neurobiofeedback Therapy.

7)Brain Waves

8) Ferrari, Michel D. "Migraine." The Lancet. April 1998, 1043-1051.

9) EEG Spectrum International


Locked-In Syndrome: Giving Thought The Power Of Ac
Name: Priya Ragh
Date: 2003-02-25 00:53:25
Link to this Comment: 4806


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Imagine a world in which human communication is executed through the simplicity of thought. No muscle action: no nodding, smiling, slapping, pointing, speaking, or feeling...just the immobile and inconspicuous medium of thought. This is the world of a locked-in patient. In the locked-in condition, the patient's ability to move his or her limbs, neck, and even facial muscles comes to an abrupt halt. Messages ordered by the brain do not reach the muscles that would carry out its physical commands. Locked-in syndrome leaves the victim completely paralyzed, sparing only the eye muscles in most cases.

This disability is most commonly due to lesions in the nerve centers that control muscle contraction, or to a blood clot that blocks the circulation of oxygen to the brain stem. Brain-stem strokes, accidents, severe spinal-cord injuries, and neurological diseases are the other main causes of the syndrome (5). Axons carrying brain signals leave the larger motor areas on the surface of the brain and direct their signals toward the brain stem. It is here that they converge, linking with one another to form a tightly packed bundle called the motor tract. The brain-stem motor tract is extremely sensitive; thus even the slightest impact of a stroke can destroy the axon bundles, resulting in total paralysis (1). For a locked-in patient, depending on the severity of the stroke, the sensory tracts may or may not be affected. These tracts also form axon bundles and carry the perceptions of feel, touch, and pressure.

What is interesting is that while total paralysis of the external body is a likely possibility, the eye muscles and brain functioning remain intact and undamaged in most cases. It is rather remarkable to see the wonders of human life: a person in a locked-in state is often mistaken for a vegetable while in reality, barring very serious stroke effects, these are regular people who hear, understand, and think as coherently as, if not better than, the next person.

The story of Jean-Dominique Bauby is not only inspirational but has motivated scientists around the world to develop technological means of broadening brain communication. Bauby, the 42-year-old editor-in-chief of a French magazine, suffered a severe brain-stem stroke in 1995. His body was instantly paralyzed, leaving him with only one functioning eye muscle. Bauby is praised to this day for his ability to invent life for himself in the most appalling of circumstances. In fact, he astounded the international community when he came out with his own book, "The Diving Bell and the Butterfly" (2).

The book describes the many medical staff and nurses who ignore or overlook the feelings and views of a patient whose only fault is a lack of physical or verbal communication. A strong prognosis needs to be recognized and respected in acknowledging the rich mind trapped in a locked-in body. As Bauby asked in his book, "How many people in this state in enclosed half-lit hospital beds are never stimulated, never recognised as peoples with intact minds who can still contribute if only their talents are acknowledged?" Roger Rees of Flinders University in Adelaide agrees: "Can we make the quantum leap from befuddled locked-in thinking, to behaviour which acknowledges that while their physical world may be closed forever, their inner world is full of rich memories and imaginings whose journey can still be nourished and enhanced (3)?"

Doctors like Niels Birbaumer are present in this world in somewhat angelic disguise. In 1995, Birbaumer, a German scientist, on receiving a $1.5 million Leibniz prize, focused on training paralyzed people to write out their thoughts with their brains. How this is possible is indeed a testament to the knowledge and dedication of scholars in science and technology. To establish a tool for communication between the brain and the scientist, one must first fill in the missing pieces of the puzzle. Under normal circumstances, a series of electrical impulses transmitted by brain cells flows along the nerves and triggers the release of chemicals, which signal the muscles to flex and contract. Locked-in muscles do not receive any of these chemical messages (2).

Today's science, or rather tomorrow's cure, has shown the significance of computer technology. By implanting glass electrodes into the brains of locked-in patients, it is now possible to coax brain cells to grow into an electrode that contains growth-accelerating molecules. The glass electrode in turn detects and transmits the brain's signals to a computer, which translates the signals into screen movements that produce speech reflecting the thoughts of the patient. All this is done by the powers of brain and mind alone; none of it involves the body (4).
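
The principle of turning a detected brain signal into communication can be illustrated with a toy sketch. This is not Birbaumer's actual device: it simply assumes each "trial" yields one signal amplitude, and that crossing a threshold selects one half of a candidate alphabet, halving the set until a single letter remains (a binary-choice speller).

```python
# Toy binary speller: one thresholded amplitude per step steers a
# binary search over the alphabet. Amplitudes and threshold are
# hypothetical values, not recorded brain data.

def select_letter(amplitudes, alphabet="ABCDEFGH", threshold=0.0):
    """Amplitude above threshold keeps the first half of the remaining
    letters; below keeps the second half. Returns the surviving letter."""
    letters = list(alphabet)
    for amp in amplitudes:
        half = len(letters) // 2
        letters = letters[:half] if amp > threshold else letters[half:]
        if len(letters) == 1:
            break
    return letters[0]

# Three simulated trials: high, low, high.
print(select_letter([1.2, -0.8, 0.9]))  # prints "C"
```

With an 8-letter alphabet, three reliable binary choices suffice per letter; the real challenge, of course, is making each choice reliable from noisy brain signals.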

The above experiment is indeed fascinating but at the same time a little frightening. Looking into the future of genetic engineering and biotechnology, we must be certain to honor our ethical responsibilities. Abusing science to satisfy one's political or personal appetite is disrespectful and dangerous to the stability of peace. Today it is about reading people's minds for them; what might tomorrow hold? As long as research is done with the intention of bringing hope and light into a person's life, I will continue to admire in awe the achievements of science today.

References


1) National Health Institute Page, excellent for science and health information.

2) Ian Parker, "Reading Minds": The New Yorker, January 20, 2003.

3) abc home page, good website.

4) Cleveland Clinic Website, vast research material.

5) National Institute of Health, one of my favorite websites for the field of health.


The "Mozart Effect"- Real or just a hoax?
Name: Grace Shin
Date: 2003-02-25 02:37:07
Link to this Comment: 4810


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In 1998, Zell Miller, the governor of the state of Georgia, started a new program that distributed free CDs of classical music to the parents of every newborn baby in Georgia. Why did he do this? He certainly was not just trying to be nice or to make a political statement; his idea came from a new line of research showing a link between listening to classical music and enhanced brain development in infants. (1) So, what evidence was there for this governor to make a $105,000 proposal to give classical music to newborn babies? I considered how my sister and I had taken music lessons, by the Suzuki method, since we were 7 or 8 years old, and how my mother always had classical music playing in the house. My mother was convinced that musical ability would not only help us to be more well-rounded people, but also help us to be smarter individuals. Did all those years of piano lessons really pay off? Did all that money spent on buying classical music and attending classical concerts really play a role in determining where my sister and I are in our education?

Previous to this initiative by Gov. Miller, many researchers had probed the question of whether classical music enhances intelligence. One of the initial experiments used musical selections by Wolfgang Amadeus Mozart to test for improvements in memory, and the idea thus became known as the "Mozart Effect". The original experiment was published in 1993 by scientists at the University of California at Irvine. College students listened to ten minutes of Mozart's Sonata for Two Pianos in D major, a relaxation tape, or silence. Immediately afterward, they took a spatial reasoning test (from the Stanford-Binet intelligence scale). The results showed that the students' scores improved after listening to the Mozart selection. (2) Soon enough, however, it was found that these results lasted only 10-15 minutes, and more research was quickly undertaken by various groups all over the world.
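
The design of that comparison can be sketched as follows. The scores below are invented stand-ins, not the study's data; the point is only the shape of the analysis: one spatial-test score per student, grouped by listening condition, with group means compared.

```python
# Hypothetical spatial-reasoning scores per listening condition,
# illustrating the three-group design of the 1993 experiment.
from statistics import mean, stdev

scores = {
    "mozart":     [119, 121, 118, 123, 120],
    "relaxation": [111, 113, 110, 112, 114],
    "silence":    [110, 112, 109, 111, 113],
}

for condition, xs in scores.items():
    print(f"{condition:10s} mean={mean(xs):.1f} sd={stdev(xs):.1f}")
```

The original claim was that the "mozart" group's mean exceeds the other two; the follow-up finding was that this advantage faded within 10-15 minutes of retesting.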

However, before continuing this search into the validity of the Mozart effect, I feel there needs to be a definition of "intelligence". Since we learned in class how tricky our notions of words can be, I took the liberty of searching for how intelligence is measured and decided to adopt the definition used by Wilfried Gruhn: intelligence is associated with cognitive and intellectual capacity, the psychobiological potential to solve problems. It can be correlated with higher speed in neuronal signal transmission and signal processing, stronger and more efficient neuronal interconnectivity, and a high correlation between IQ and neuronal activity (r = 0.50 - 0.70). (3)

Now, if brain = behavior, as we are learning in class (4), then there should indeed be a measurable correlation between musically trained minds and their intelligence. In addition, there must be one or more parts of the brain responsible for both of these "behaviors": being "musically inclined" and being "intelligent". And since the measure of intelligence is defined mainly in terms of brain activity, I delved into the "effects of music on the brain" idea that I had always accepted in my childhood.

Assuming that brain = behavior, I wanted to find out if music really does affect the brain itself. If musically trained people were studied, what differences would appear in their brains? One line of research found evidence that music training "beefs up brain circuitry". In a study reported by the Society for Neuroscience, it was found that several brain areas, such as the primary motor cortex and the cerebellum, which are involved in movement and coordination, are larger in adult musicians than in non-musicians. Another example is the corpus callosum, which connects the two sides of the brain and was larger in adult musicians, as was the auditory cortex, which is responsible for bringing music and speech into conscious experience. (5)

A few other studies suggest that "music alone does have a modest brain effect." (5) One study showed that listening to the intricacies of Mozart pieces did raise college students' spatial skills. Rats, too, were able to complete a maze more rapidly and with fewer errors when exposed to Mozart tunes. As the research continued, even the earlier studies seemed to show that the brain changes associated with musicians enhance mental functions such as the ability to read and write music, keep tempo, and memorize pieces, as well as functions not associated with music, like word memory tests. Of the adults tested in these memory tests, musicians scored much higher than non-musicians.

The research continued with preschoolers, who were given piano lessons for about six months and then tested on their puzzle-solving abilities. The experiment found that the preschoolers who had lessons performed better than those who did not. This was further supported when a group looked at the effects of music lessons on spatial reasoning: after 8 months of piano lessons, children tested on their ability to put puzzles together (spatial-temporal reasoning) and to recognize shapes (spatial-recognition reasoning) showed improvements on the spatial-temporal test. Even when the children were tested one day after their last keyboard lesson, they still showed this improvement. (6)

Another study with children tested second-graders who took piano lessons on their ability to play special math games. Again, it was found that they scored higher than other second-graders who took other lessons, such as English. The study followed their performance into fourth grade and found that they were better able to understand concepts such as fractions, ratios, symmetry, and graphs. (5)

In contrast, many laboratories have tried to use the music of Mozart to improve memory but have been unable to find similar supporting results. One group of scientists used a test in which students listened to a list of numbers and then repeated them backwards, known as a backwards digit span test; listening to Mozart beforehand had no effect on the students. Other researchers have said that the original work on the Mozart Effect was flawed because only a few students were tested, so that listening to Mozart may not really have improved memory; they suggested instead that the relaxation tape and silence may have impaired it. In 1999, Dr. Kenneth Steele reported that when his group used the exact procedures from the original experiment, the results did not give conclusive evidence of the Mozart effect. (7) Because there is much supporting as well as contradicting evidence on the Mozart effect, I soon came to realize that it will be fascinating to see where this all leads. Because so many people are caught up with the idea of wanting to be intelligent, by the time I have children there will definitely be more information available to parents who want "intelligent" children!

Continuing my search for the correlation between music and intelligence, I found an interesting direction these studies are taking: the chance of musical training becoming a possible treatment for brain damage. (5) Music therapies are being used in many different clinics for various behavioral and neurological problems such as depression, autism, and aphasia.

An example of music affecting brain damage is the report by Belin et al., who studied seven patients with aphasia using Melodic Intonation Therapy (MIT), which consists of speaking in a musical manner characterized by strong melodic (two notes, high and low) and temporal (two durations, long and short) components. Evaluating the effects of MIT on the brain, measured by relative cerebral blood flow with PET scanning during listening, they saw that "MIT-loaded" words recovered speech capabilities that were thought to be lost. Going back to the brain = behavior idea, further examination found that critical regions of the brain, like Broca's Area in the left hemisphere, which is involved in language and speech, were activated by "MIT-loaded" words and not by regular words. (8)

Another example involves a study reported in Lost & Found, an online journal on the autism web site, in 1996/7, where procedures like AIT (Auditory Integration Training) showed great potential for benefiting the growth and development of various special children. In this report, the possible effects of AIT were modeled in newly hatched domestic chicks. After exposure to an AIT-type program for ten days, the neurochemical changes in the brain were studied. Repeatable data were collected, and the results showed that music had a striking effect on the chicks. Four groups of chicks were exposed to the following auditory conditions: 1 - musical treatment with AIT-type modulated music, 2 - musical treatment without modulated music, 3 - verbal treatment with the human voice, and 4 - control with no treatment. After 10 days of these conditions, and two days following the termination of the treatment, chemicals in the chicks' brains were measured. Concentrations of norepinephrine (NE) and its principal metabolite MHPG were elevated in the groups with the music treatment, and more so in the group with the AIT-type modulated music. Though the relationship of these observations to humans is not yet fully known, the results are quite promising. NE is known to be related to attention processes, and these observations resemble the type of brain change thought to mediate the antidepressant effects of tricyclic drugs like desipramine. (9)

In conclusion, I do not believe that there is conclusive evidence for the Mozart effect. However, even though we cannot reach a conclusion on these experiments alone, there is a vast amount of data on the web for the public to probe further for themselves. Thus far, we are torn between conflicting results, and many parents are still being misled by advertisers who say that classical music will enhance children's learning capabilities. I think it is safe to say, though, that there is some evidence that the brain is indeed affected somehow by music and that music lessons cannot hurt the growing stages of a child. It will be very exciting to see where the research on treatments of brain damage is headed, despite my unwillingness to conclude whether or not these observations on the brain necessarily enhance intelligence.


References

1)CNN News web site, Article under Entertainment.

2)Entrez-PubMed, National Library of Medicine's search engine for literature.

3)Wilfried Gruhn - Brain Research - Forschung und Lehre, Article published in European Music Journal.

4)Biology 202 Lecture Notes , on the Serendip web site.

5)Music Training and the Brain, in the Society for Neuroscience web site.

6) M.I.N.D. Institute's site for research papers.

7)Music and Memory and Intelligence.

8)The Effects of Music and the Brain, web site on the value of music education.

9)Therapeutic Options: Effects of Music on the Brain, Lost & Found, Journal on Autism.


Predestined Serial Killers
Name: Annabella
Date: 2003-02-25 02:58:00
Link to this Comment: 4811


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

serial killer: a person who commits a series of murders, often with no apparent motive and usually following a similar, characteristic pattern of behavior. (8)

As participants in today's information-obsessed society, we are constantly bombarded with the brutal actions that mankind is capable of. One watches the news and hears about a murder, or reads a book about a mysterious killer. The only time the New York Daily News ever outsold the New York Times was when its headline claimed to have the letters that proved the 'real' identity of Jack the Ripper. As one wades through these bits and pieces of reality, one can't help but be struck by the thought: what causes a person to actively commit such horrendous acts? Many different studies have been done in hopes of finding an answer. For a crime such as serial killing there are two main schools of thought. The first is that serial killing is caused by an abnormality in the frontal lobe region of the brain. The second is that serial killers are bred by circumstance. However, I believe that with some analysis the evidence for both theories can serve to show that serial killers are genetically different, demonstrating that serial killing can find its origins in genetics.

A startling number of criminals on death row have been clinically diagnosed with brain disorders. A recent study found that 20 out of 31 confessed killers were diagnosed as mentally ill; of those 20, 64% had frontal lobe abnormalities. (1) A thorough study of the profiles of many serial killers shows that many of them suffered severe head injuries (to the frontal lobe) when they were children. To understand why damage to the frontal lobe could be a cause of serial killing, one must look at the function of the frontal lobe of the brain.
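The study figures quoted above can be turned into rough head counts with a quick back-of-envelope calculation. The rounding below is ours, not the study's:

```python
# Implied counts from the study figures quoted above:
# 20 of 31 confessed killers were diagnosed mentally ill,
# and 64% of those 20 had frontal lobe abnormalities.
total_killers = 31
mentally_ill = 20
frontal_rate = 0.64  # among the mentally ill subgroup only

frontal_cases = round(mentally_ill * frontal_rate)  # about 13 individuals
share_of_sample = frontal_cases / total_killers

print(frontal_cases)
print(f"{share_of_sample:.0%} of the full sample")
```

So the 64% figure, which applies only to the mentally ill subgroup, corresponds to roughly 13 of the 31 killers, or about 42% of the whole sample.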

The frontal lobe is located in the most anterior part of the brain hemispheres. It is considered responsible for much of the behavior that makes stable and adequate social relations possible. Self-control, planning, judgment, the balance of individual versus social needs, and many other essential functions underlying effective social intercourse are mediated by the frontal structures of the brain. (3) Antonio and Hanna Damasio, two noted Portuguese-born neurologists working at the University of Iowa, have spent the last decade investigating the neurological basis of psychopathy. They have shown that individuals who suffered damage to the ventromedial frontal cortex (and who had normal personalities before the damage) developed abnormal social conduct, leading to negative personal consequences. Among other things, they showed inadequate decision-making and planning abilities, which are known to be processed by the frontal lobe of the brain. (5) Neuroscientists have long known that lesions to this part of the brain lead to severe deficits in all of these behaviors. The inordinate use of prefrontal lobotomy as a therapeutic tool for many mental diseases in the 1940s and 1950s provided researchers with enough data to implicate the frontal brain in the genesis of dissocial and antisocial personalities. (6) From this information one can posit that the frontal lobe acts as a conscience.

An example of a serial killer who suffered severe injury to his frontal lobe is Albert Fish, better known as the Brooklyn Vampire. At the age of seven he had a severe fall from a cherry tree, a head injury that left him with permanent problems such as headaches and dizzy spells. (2) After his fall he began to display many violent tendencies, including an interest in sadomasochistic activities. At the age of twenty he killed his first victim, a twelve-year-old neighbor by the name of Gracie, whom he cannibalized. (2) It would appear that a damaged frontal lobe is a prime suspect in the causation of serial killing. Looked at more closely, however, this argument can be turned to support the genetic account. Could it be that we are all genetically encoded to become serial killers, and the frontal lobe is the proverbial muzzle that stops us from acting on our tendencies? All of us have at one point or another imagined killing a person who displayed tendencies we did not like. None of us went out and killed them in a ritualized manner.

The idea that the frontal lobe is a stop button, or acts as a conscience that keeps a 'normal' human from acting on their more violent tendencies, is not a new one. Evolutionary biologists point out that the frontal lobe evolved in tandem with the evolution of man from beast to purveyor of civilization. (3) The trademark of all social primates is a highly developed frontal brain, and human beings have the largest one of all. It is a simple leap of reasoning to see that, like a dog without its muzzle, the human animal without a frontal lobe is more likely to 'bite.'

Brain damage cannot be the motivator for all serial killing. 46% of all confessed serial killers have no frontal or general brain damage. The majority admit that they were perfectly aware of what they were doing before, during, and after the crime. Some even confess that they knew what they were doing was wrong, and contemplated 'giving up' after the first time. The thrill derived from murder is a temporary fix. Like any other powerful narcotic, homicidal violence satisfies the senses for a time, but the effect soon fades. And when it does, a predator goes hunting. (1) Carroll Edward Cole commented that after his first kill, "I thought my life was going to improve, I was sadly mistaken. Neither at home or at school. I was getting meaner and meaner, fighting all the time in a way to hurt or maim, and my thoughts were not the ideas of an innocent child, believe me." By the end of his career Cole had killed so many people that he could not even remember the names of his victims. At one point during his testimony he remembered a new victim: "This one is almost a complete blank," Cole said. He didn't know the woman's name, but he remembered finding pieces of her body scattered from the bathroom to the kitchen of his small apartment. "Evidently I had done some cooking the night before," he testified. "There was some meat on the stove in a frying pan and part that I hadn't eaten on a plate, on the table." (1) Carroll Edward Cole had no mental illnesses, and was aware that he was committing morally abhorrent acts.

Circumstance is another proposed cause of serial killing. As the study of serial killers' profiles progresses, similarities in their pasts and in their recurring actions become eerily apparent. As children, many suffered through traumatic childhoods, usually being physically or mentally abused; strikingly, in the reported cases the abuse was committed by their mothers. Most serial killers kill either for personal gain (revenge, money, etc.) or pleasure (usually sexual). Many have above-average intelligence and are well employed in high-wage jobs (engineers, doctors, lawyers, etc.). The majority came from middle- to high-income families. They are usually handsome or beautiful, very articulate, and sexually promiscuous. (1) As killers they look for a specific trait in the victims they ritually kill. In interviews, many serial killers admitted that the people they killed fulfilled some aspect of a person who abused or taunted them. Carroll Edward Cole killed little boys who reminded him of a schoolyard bully who taunted him for having a girl's name. He later expanded his modus operandi to include young brown-haired females who looked like his abusive mother. Under oath, he told a story of childhood abuse inflicted by his sadistic, adulterous mother, giving rise to a morbid obsession with women who betrayed their husbands or lovers. "I think," he told the jury, "I've been killing her through them." (1)

Do these disquieting threads of commonality among serial killers make a strong case for the idea that serial killers are produced by circumstance? It would appear so, until one takes the distribution curve into consideration. If serial killers give the impression of being linked by similar circumstance, it may be because the genetic traits of serial killing are all found at one end of the distribution curve (which by chance could be the same end at which certain circumstances occur with higher probability). The normal distribution curve is one of the most commonly observed and is the starting point for modeling many natural processes. (4) There are also serial killers who break the thread of commonality. For example, one pair of killers murdered purely for income: in 1828, William Burke and William Hare murdered sixteen people in Edinburgh and sold the corpses to an anatomy school so that medical students could learn from the cadavers. (1) If there is no commonality of experience between serial killers, then serial killing must be genetic.
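The distribution-curve reasoning above can be made concrete. If a trait is normally distributed, the fraction of the population lying beyond a given cutoff shrinks very quickly. A minimal sketch using only Python's standard library; the cutoffs are illustrative, not taken from any study:

```python
from math import erf, sqrt

def tail_beyond(z):
    """Fraction of a standard normal population lying more than
    z standard deviations above the mean."""
    return 0.5 * (1 - erf(z / sqrt(2)))

# The further toward one end of the curve a cluster of traits lies,
# the rarer it is in the population:
for z in (1, 2, 3):
    print(f"beyond {z} SD: {tail_beyond(z):.4%}")
```

At two standard deviations only about 2% of the population remains, and at three only about 0.1%, which is why traits clustered at one extreme of the curve can appear linked by "common circumstance" while simply being rare.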

An interesting idea comes forward when discussing the commonality of serial killers. Many genetic conditions are carried by the mother and become active in her male offspring. For example, the most common form of color blindness affects about 7 percent of men and less than 1 percent of women. It is a sex-linked hereditary characteristic, passed from mother to son. (7) Can the same be said for serial killing? As observed above, the majority of serial killers were abused by their mothers. Combined with the thought that serial killing is caused by genetics, could the mother be passing the trait down to her son? This would account for serial killers being predominantly male.
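The color-blindness figures quoted above follow directly from the arithmetic of X-linked recessive inheritance: if the allele frequency on the X chromosome is q, males (one X) are affected with probability q, while females (two Xs) need two copies, with probability q squared. A minimal sketch; the allele frequency is our assumption, chosen to match the cited 7 percent male rate:

```python
# X-linked recessive inheritance: males have one X, females two.
q = 0.07  # assumed allele frequency, matching the ~7% of men cited

male_rate = q         # one copy is enough in males
female_rate = q ** 2  # females must inherit two copies

print(f"males affected:   {male_rate:.1%}")   # 7.0%
print(f"females affected: {female_rate:.2%}") # 0.49%, i.e. under 1%
```

Squaring the allele frequency is exactly why the female rate falls below 1 percent while the male rate stays at 7 percent.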

In this paper the question has been raised: are serial killers genetically different? This question spawned the idea that serial killing is genetic. To support it, the two main competing arguments were examined and challenged. The first proposed that serial killing is caused by mental illness, or more specifically by trauma to the frontal lobe. Looked at closely, this argument actually lends support to a genetic account: it generates the idea that we are all genetically encoded to become serial killers, and that the frontal lobe (acting as a conscience) stops us from doing so. The argument does not refute the idea that serial killing is genetic. The second major argument supposes that a serial killer is created by circumstance. To oppose this, one can simply say that the apparent commonality of experience among serial killers only demonstrates that, since serial killers are genetically different, they generally fall at one end of the distribution curve. And where there is no commonality of experience, serial killing must be genetically caused. Maybe our founding fathers' claims of manifest destiny and predestination were accurate after all.

Web References

1)Crime Library, a web site that houses the profiles of many serial killers; it also gives detailed case histories and modus operandi. It specializes in hard-to-find serial killer cases, especially older ones.
2)AtoZ Serial Killers, psychological profiles of serial killers, with photos and sketches; includes profiles of modern and still-operating killers.
3)Frontal Lobe Abnormalities, a site that makes a strong case for the role of frontal lobe abnormalities; it specializes in brain abnormalities and how they affect behavior.
4)Distribution Curve, a simple description of the distribution curve.
5)A Study of Brain Physiology, a site that focuses on the physiology of the brain.
6)History of Psychosurgery, a historical look at psychosurgery and its effects on the brain, covering different treatments for brain diseases.
7)A History of Color Blindness, a basic history of color blindness and its genetic origins.
8)Oxford English Dictionary, the site that hosts the complete Oxford English Dictionary.


Ecstasy, the Brain, and the Media
Name: Kathleen F
Date: 2003-02-25 03:26:38
Link to this Comment: 4812


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Ecstasy has been glorified by countless Brit-pop drug anthems, condemned by staunch anti-drug foundations, and even sparked a controversial media debate when the post-mortem picture of eighteen-year-old Lorna Spinks was splashed across every newspaper in the United Kingdom, her Ecstasy-related death rendered in full gruesome color. The long-term effects and temporary consequences of Ecstasy have been a subject of heated debate over the past ten years as the pill has surged in popularity. What exactly does Ecstasy do to the brain? What creates the euphoric effects? Why has it been used in therapy? And does the media's portrayal of Ecstasy rely on the facts of the drug, or skew the information to instill a sense of fear in citizens, parents, and teenagers?

Ecstasy (methylenedioxymethamphetamine, MDMA for short) is a synthetic, psychoactive drug with amphetamine-like and hallucinogenic properties. It is chemically related to methamphetamine, mescaline, and methylenedioxyamphetamine (MDA), drugs known to cause brain damage (1). MDMA, in simple terms, works by interfering with the communication system between neurons. Serotonin is one of a group of neurotransmitters that carry out communication between the body and the brain. The message molecules travel from neuron to neuron, attaching to receptor sites; this communication activates signals that either allow a message to be passed on or prevent it from being sent to other cells. When MDMA enters the nervous system, it interferes with this process. Normally, after serotonin is released, the neurotransmitter molecules are retrieved into the nerve terminal, where they are recycled. MDMA hinders this reuptake so that the serotonin is not drawn back in, allowing serotonin to accumulate in the synapse (2). This surge of serotonin creates an emotional openness in the Ecstasy user. A sense of euphoria and ecstatic delight envelops the user. Some users report thinking clearly and objectively, and often claim to come to terms with personal problems or various other skeletons in the closet (3).
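The reuptake-blockade mechanism described above can be caricatured with a toy accumulation model: transmitter is released at a steady rate, and reuptake clears a fixed fraction of what is in the synapse each step. All the numbers here are hypothetical, chosen only to show the qualitative effect of blocking reuptake:

```python
def synaptic_level(steps, release=1.0, reuptake_fraction=0.5):
    """Toy model: each step adds `release` units of transmitter to the
    synapse, then reuptake clears a fixed fraction of what is present.
    The level approaches release * (1 - f) / f, so a smaller reuptake
    fraction f (i.e. blocked reuptake) means a much higher plateau."""
    level = 0.0
    for _ in range(steps):
        level = (level + release) * (1 - reuptake_fraction)
    return level

normal = synaptic_level(100, reuptake_fraction=0.5)    # plateau near 1.0
blocked = synaptic_level(100, reuptake_fraction=0.05)  # plateau near 19.0
print(normal, blocked)
```

Cutting the cleared fraction from one half to one twentieth raises the steady-state level almost twentyfold, which is the qualitative picture behind the serotonin "surge" the paragraph describes.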

This is the reason Ecstasy resurfaced in the 1980s (after being developed in Germany in 1912, reputedly as a diet drug, since amphetamines are appetite suppressants) as a tool in experimental psychotherapy, particularly for relationship and marital problems (4). In 1984 the drug was declared illegal in the United States after it began being used recreationally. However, in June of 1999, Swiss courts ruled that dealing Ecstasy is not a serious offence. Ecstasy remains illegal in Switzerland but is now regarded as a "soft" rather than a "hard" drug. The Swiss courts stated that Ecstasy "cannot be said to pose a serious risk to physical and mental health and does not generally lead to criminal behavior." Switzerland already had quite a liberal background regarding the use of Ecstasy: between 1988 and 1993, long after psychotherapeutic Ecstasy use had been curtailed in the US, therapists in private practice were permitted to use Ecstasy within psychotherapy (5). Surprisingly, this past November the Food and Drug Administration approved the first-ever U.S. study of Ecstasy as helpful medicine, noting that previous testing had focused not on the drug's benefits but on issues of toxicity (6).

However, the use of Ecstasy in psychotherapy remains hazy. If MDMA merely raises serotonin levels, and serotonin is responsible for making the individual feel just plain good, is this really solving marital problems? If brain=behavior, and the brain is regulating behavior through an artificial source, is this genuinely therapeutic? Or is it a trick of the brain, flooding itself with neurotransmitters only for the euphoric sensation to fade as levels return to normal over the next few days?

The idea of serotonin levels returning to normal in "a few days" seems idealistic as well. What are the long-term effects of Ecstasy use? Serotonin in the blood is replenished within 24 hours, but serotonin tissue levels are found to decline for weeks after Ecstasy usage. The receptors and neurons within the system are also found to degenerate over time. Animal studies done by Dr. Karl Jansen have concluded that MDMA is far more toxic in primates: the loss of serotonin in rats was four times smaller than in monkeys. Jansen declares the drug potentially hazardous for human use and also draws comparisons between serotonergic axons damaged by Ecstasy and those seen in Alzheimer's disease, where the most common receptor modification is a loss of presynaptic serotonergic receptors (7).

Finally, what to make of the highly publicized deaths claimed by the media to be "Ecstasy-caused"? In 1995 the English media championed the tragic story of teenager Leah Betts, plastering the country with billboards claiming that "Just One E Tablet Killed Leah Betts." The country went ballistic over the story, but the coverage failed to mention that Betts had not died from the Ecstasy itself, but from brain swelling caused by drinking too much water. The media opted to keep the message straightforward: Leah died from taking a single E pill. To say she died from excessive water consumption is not a good story (8).

I spent last year living in London. One day I looked out my window to see a huge billboard covered in deviant-looking pills with the caption "Which one is the killer? 1 in 100 E tabs can kill you." The figure seemed strikingly high to me, especially because earlier British studies had reported well over a million regular Ecstasy users in England alone, while the number of Ecstasy-related deaths in the UK is about seven per year. The numbers didn't match up. Assuming a million users, the risk works out to about 1 in 143,000, which qualifies as "minimal" under the British Government Risk Classification developed by Sir Kenneth Calman, Government Chief Medical Officer. This means a citizen has a greater chance of dying in a fishing accident than from taking an E pill (9).
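The arithmetic behind that 1-in-143,000 figure is simple enough to check; a sketch using the numbers quoted above (both are estimates, not exact counts):

```python
# Back-of-envelope check of the risk figure quoted above.
estimated_users = 1_000_000  # regular Ecstasy users in the UK (estimate)
deaths_per_year = 7          # reported Ecstasy-related deaths per year

odds = estimated_users / deaths_per_year
print(f"roughly 1 in {odds:,.0f}")  # roughly 1 in 142,857
```

One million divided by seven gives about 143,000, far from the billboard's "1 in 100".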

Perhaps one of the biggest problems with Ecstasy is the aura of mystery surrounding it and the media hype of "the killer pill." One ray of light amid the tragic E-related deaths of young people in Britain (and other countries around the globe) is the November decision by the FDA to delve into the effects of Ecstasy. The only way these deaths will not be in vain is if more education and knowledge about MDMA and Ecstasy are brought to light.

References

1) National Institute on Drug Abuse

2) Serotonin and the Brain

3) Erowid.org, the Sensory Effects of MDMA

4) Therapeutic Uses of Ecstasy

5) Switzerland and Ecstasy

6) The Return of using Ecstasy for Therapy?

7) Adverse Psychological Effects of Ecstasy use and Their Treatment

8) The Leah Betts story

9) British Government Risk Classification


Attention Deficit Hyperactivity Disorder
Name: Nupur Chau
Date: 2003-02-25 04:05:45
Link to this Comment: 4813


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


According to the National Institute of Mental Health, Attention Deficit Hyperactivity Disorder, or ADHD, is the most commonly diagnosed disorder among children (1). The disorder affects approximately 3-5 percent of school-age children (1), with each classroom in the United States having at least one child with this disorder (1). Despite the frequency of this disorder in the United States, there still remain many discrepancies about it, ranging from the diagnosis and frequent misdiagnosis of ADHD to the question of whether ADHD is an actual medical condition or just a "cultural disease" (3).
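The prevalence figures make the "at least one child per classroom" claim easy to verify: at 3-5 percent prevalence, a class of around 30 students is expected to contain about one affected child. The class size of 30 is our assumption for illustration:

```python
# Expected ADHD cases per classroom at the cited 3-5% prevalence.
class_size = 30  # assumed typical U.S. class size (illustrative)

for prevalence in (0.03, 0.05):
    expected = class_size * prevalence
    print(f"{prevalence:.0%} prevalence -> {expected:.1f} expected cases")
```

The expected count falls between roughly 0.9 and 1.5 children per class, consistent with the NIMH claim quoted above.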

According to the NIMH, frequent symptoms of ADHD include inattention, hyperactivity, and impulsivity (1). Examples of these three patterns of behavior can be found in the Diagnostic and Statistical Manual of Mental Disorders, which, summarized by the National Institute of Mental Health, states that signs of inattention include

* Distraction by "irrelevant sights and sounds" (2)
* Failure of attention towards details, resulting in "careless mistakes" (2)
* Frequently failing to follow instructions
* Losing or forgetting things used for tasks, such as toys or pencils (2).

Similarly, signs of hyperactivity and impulsivity include

* A restless feeling, shown through frequent fidgeting or squirming
* Failure to sit or maintain quiet behavior
* "Blurting out answers before hearing the whole question" (2)
* Difficulty in waiting in line or for their turn (2).

The Diagnostic and Statistical Manual of Mental Disorders also mentions that these behaviors must appear before the age of seven (2), and continue for at least six months (2). Most importantly, these patterns of inattention, hyperactivity, and impulsivity must affect at least two areas in the person's life, whether it be school, work, home, or social settings (2).
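The diagnostic thresholds summarized above (onset before age seven, symptoms lasting at least six months, impairment in at least two settings) can be sketched as a simple screening check. This is a schematic illustration of the criteria as summarized here, not a clinical tool:

```python
def meets_screening_criteria(onset_age, duration_months, affected_settings):
    """Schematic version of the DSM thresholds summarized above:
    behaviors appearing before age 7, lasting at least six months,
    and affecting at least two areas of life (school, work, home,
    social). Illustrative only; real diagnosis is far more involved."""
    return (
        onset_age < 7
        and duration_months >= 6
        and len(set(affected_settings)) >= 2
    )

print(meets_screening_criteria(5, 8, {"school", "home"}))   # True
print(meets_screening_criteria(9, 12, {"school", "home"}))  # False (onset too late)
```

Framing the criteria this way also makes the paper's later point concrete: every input to this check is an observed behavior or a reported history, not a lab measurement.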

However, the National Institute of Mental Health also states that during certain stages of a child's development, children are generally inattentive, hyperactive, and impulsive (2). In other words, at certain stages, children may have the symptoms of ADHD, without actually having ADHD. Similarly, many sleep disorders among children create symptoms that are similar to symptoms associated with ADHD (3).

These discrepancies result in a high risk of misdiagnosis for ADHD: "ADHD is, after all, merely a collection of behaviors. There's no virus to look for, no blood test to administer" (3). The diagnosis of ADHD is based purely on observations and anecdotes, both of which have a higher chance of being wrong than something like a blood test or CAT scan. Additionally, the interpretation of the behaviors can differ between "experts". Since there is no "real" scientific process, many of the observations are open to interpretation, and thus to different diagnoses.

It is because there is no "real" scientific process for diagnosing ADHD that many people think of it as a "cultural disease" (3) rather than a medical condition. Others view it as a way to excuse rowdy kids whose parents fail to discipline them. Regardless of one's beliefs, the American Academy of Pediatrics states that between 4 and 13 percent of children aged 6 to 12 are affected by the disorder, whatever the disorder may be.


References

1)ADHD-Questions and Answers

2)ADHD

3)Parenting-Hyperactivity Hype?


The Controversy over Stem Cells and Parkinson's Disease
Name: Melissa Os
Date: 2003-02-25 04:24:45
Link to this Comment: 4814


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


Without any thought, without even noticing it happens, when one has an itch, one scratches it. The arm moves up to the face, the fingers reach down and move across the skin. This series of actions, which many of us perform every day, is something individuals with Parkinson's disease struggle with every moment of their lives. Simple movements are replaced by frozen limbs that they and their nervous system cannot move. Described by many as a kind of momentary paralysis, the disease causes gradual degeneration until patients are no longer able to perform the most basic bodily functions, such as swallowing or blinking.

Parkinson's disease is a neurological disorder named after "the English physician who first described it fully in 1817" (4). The disease causes disturbances in motor function, leaving patients with trouble moving. Tremors and stiffening of the limbs are other characteristics, though not present in every patient. All of these characteristics are caused by "degeneration of a group of nerve cells deep within the center of the brain in an area called the substantia nigra" (5). Dopamine is the neurotransmitter these cells use to signal other nerve cells. As this cluster of nerve cells fails to operate, dopamine cannot reach the areas of the brain that control motor function (5). On average, Parkinson's patients have "less than half as much dopamine in their systems as healthy people do" (8). The problem and controversy that arise from this disease lie in the cure. Researchers have for years been attempting to unravel what causes Parkinson's disease and how it can be treated or cured. While many medications have been developed, the most controversial treatment is the use of stem cells, which has raised debate not only in the scientific realm but in political and religious circles as well.

In the past twenty years, many drugs have been developed to treat the disease. Although the cause of Parkinson's disease is still unknown, scientists have been developing methods of treatment and therapy. The idea is to replace dopamine in the brain, which is accomplished, to some extent, with the administration of L-dopa. Given in conjunction with other drugs that "inhibit the enzymes that break down L-dopa in the liver, thus making a greater part of it available to the brain" (5), this treatment is very successful, but it only holds the disease back for a time and is by no means a cure. That leaves us with stem cells and the role they play in the treatment of Parkinson's disease.

There are many different types of stem cells, which can be implanted in patients to regenerate or replace damaged or abnormal cells caused not only by diseases like Parkinson's but also by Alzheimer's and spinal cord injuries (2). A specific example relevant to Parkinson's is the harvesting of embryonic stem cells, which can be transplanted into the brain to replace and create dopamine neurons. The controversy lies in how one obtains these stem cells. A few days after fertilization, the human embryo is a hollow ball containing cells that eventually develop into a fetus (1). Researchers discovered, as recently as 1998, that these embryonic cells can give rise to all tissue types, and can therefore become any cell in the body. Thus stem cells can be transplanted into patients with diseases involving cell abnormality (1).

Many religious groups argue that stem cell research should be discontinued by the federal government, because the killing of the embryos is just as heinous as abortion. Stem cells are extracted from a few different sources: surplus embryos created for infertile couples, umbilical cords, and elective abortions. For pro-lifers, the embryos used to extract the stem cells are equal to human lives being destroyed (6). The other side argues that embryonic cells are not living, that they do not have a soul. Much of this debate is rooted in, and overlaps with, the debate over whether abortion of any form should be lawful. Scientists have also argued that many of the embryos used would have been destroyed at some point anyway, so why not use them to further the good of the human race. Consequently, political debate over whether federal funds should be used to support further research into stem cell transplantation continues to this day. An example of the relevance and immediacy of this issue is the speech President Bush gave in the summer of 2001, in which he approved government funding for research on 64 existing human embryonic stem cell lines (7) (1).

These arguments are all valid and much debated; however, there are other, less explored reasons why some feel stem cell research should not be pursued. Much of the time, when stem cells are inserted into the brain, "they grow into every cell type and form tumor-like masses called teratomas, eventually killing their hosts" (1). The problem is not that stem cells fail to become dopamine neurons; the failure is in controlling the growth of these cells. While some research has achieved successful control of stem cell growth, the success rate is small and inconsistent (1). Therefore, at this stage it is still too early to attempt these transplantations of stem cells in human patients. These studies nonetheless represent a ray of light and hope for Parkinson's patients, who could benefit greatly from the transplantation of stem cells into the brain and the resulting production of dopamine neurons.

There are no easy answers when it comes to a cure or treatment for Parkinson's patients. The most promising avenue is stem cells. The research and implantation of these cells raises debate not only with religious groups but in the political realm as well. These debates all hold merit, but the bigger, more important issue that is rarely talked about is whether stem cells can actually be effective in humans. Can researchers learn to harness the growth of these cells, and if they can, what does that mean for the future of these types of stem cell therapies? What moral and ethical implications does this process pose? While these questions are unanswerable at this point, the only way to determine whether these cells will be effective is to research them, so that maybe one day Parkinson's disease can be an ailment of the past.


References

WWW Sources

1) Articles from the National Academy of Sciences, particularly an article about embryonic cells by Curt Freed

2) Encyclopedia of biological terms and articles; search under the key words "stem cells".

3) A Parkinson's disease society page that gives basic information.

4)

5) Encyclopedia of biological terms and articles; search under the key words "Parkinson's disease".

6) Site that deals with religious views on stem cell research.

7)

8) Science news archive with articles relating to current science issues.




Sexual Differentiation and Its Effects on Sexual Orientation
Name: Tiffany Li
Date: 2003-02-25 05:13:05
Link to this Comment: 4815


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

What controls a human's sexual orientation? The long-standing debate of nature versus nurture extends to explaining human sexual orientation. Is it biological or environmental? The biological explanation has been gaining popularity in the scientific community, although it is still based only on speculation. It is argued that sexual orientation is linked to factors that occur during sexual differentiation: prenatal exposure to androgens and its effect on the development of the human brain play a pivotal role in sexual orientation (2). Heredity is also part of the debate. Does biology merely provide the slate of neural circuitry upon which sexual orientation is inscribed? Do biological factors directly wire the brain so that it will support a particular orientation? Or do biological factors influence sexual orientation only indirectly?

Gender is determined by the sex chromosomes: XX produces a female, and XY produces a male. Males are produced by the action of the SRY gene on the Y chromosome, which contains the code necessary to cause the indifferent gonads to develop as testes (1). In turn the testes secrete two kinds of hormones, anti-Mullerian hormone and testosterone, which instruct the body to develop in a masculine fashion (1). The presence of androgens during the development of the embryo results in a male, while their absence results by default in a female; hence the dictum "Nature's impulse is to create a female" (1). The genetic sex (whether the individual is XX or XY) determines the gonadal sex (whether there are ovaries or testes), which through hormonal secretions determines the phenotypic sex. Sexual differentiation is thus driven not by the sex chromosomes directly but by the hormones secreted by the gonads (3).
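The cascade described above (genetic sex determines gonadal sex, which through hormonal secretions determines phenotypic sex) can be sketched as a simple chain of defaults. This is a schematic of the typical pathway only; the atypical cases this paper goes on to discuss are exactly the points where these steps decouple:

```python
def phenotypic_sex(chromosomes, sry_functional=True, androgens_effective=True):
    """Schematic of the typical differentiation cascade: SRY on a Y
    chromosome drives the indifferent gonads to become testes; the
    testes secrete anti-Mullerian hormone and testosterone, which
    masculinize the body. Absent those signals, development defaults
    to female ("Nature's impulse is to create a female")."""
    gonads = "testes" if ("Y" in chromosomes and sry_functional) else "ovaries"
    masculinized = gonads == "testes" and androgens_effective
    return "male" if masculinized else "female"

print(phenotypic_sex("XY"))                             # male
print(phenotypic_sex("XX"))                             # female
print(phenotypic_sex("XY", androgens_effective=False))  # female (cf. androgen insensitivity)
```

The two boolean flags are hypothetical switches added for illustration; they mark the two links in the chain (SRY function and androgen signaling) whose failure produces the atypical outcomes discussed later.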

Hormones are responsible for sexual dimorphism in the structure of the body and its organs. They have organizational and activational effects on the internal sex organs, genitals, and secondary sex characteristics (1). Naturally, these effects influence a person's behavior, not only by producing masculine or feminine bodies but also by causing subtle differences in brain structure. Evidence suggests that prenatal exposure to androgens can affect human social behavior, anatomy, and sexual orientation.

Androgens cause masculinization and defeminization of developing embryos. Masculinization is the organizational effect of androgens that enables animals and humans to engage in male sexual behavior in adulthood; it is accomplished by stimulating the development of neural circuits controlling male sexual behavior (1). Defeminization is the organizational effect of androgens that prevents animals and humans from displaying female sexual behavior in adulthood; it is accomplished by suppressing the development of neural circuits controlling female sexual behavior (1). For example, if a female rodent is ovariectomized and given an injection of testosterone immediately after birth, she will not respond to a male rat in adulthood even when she is given estradiol and progesterone, demonstrating that she has been defeminized. If the same female rodent is instead given testosterone in adulthood, she will mount and attempt to copulate with receptive females (3), an example of masculinization.

In congenital adrenal hyperplasia (CAH), for example, the adrenal glands secrete abnormal amounts of androgens. The secretion begins prenatally and causes prenatal masculinization and defeminization (1). Boys born with CAH develop normally. Girls with CAH, however, are born with an enlarged clitoris and fused labia. Sometimes surgical intervention is needed, and the person is given a synthetic hormone that suppresses the abnormal secretion of androgens (1). Money and his colleagues performed a study on 30 women with a history of CAH. When asked to report their sexual orientation, 48% described themselves as homosexual or bisexual; they also exhibited more masculinized behavior (4). These results suggest that abnormally high exposure to prenatal androgens affects sexual orientation.

Androgen insensitivity syndrome affects genetic males whose tissues cannot respond to androgens. These individuals develop as females with female external genitalia, but with testes and no uterus or Fallopian tubes (1). If they are raised as girls, they seem to do fine and function sexually as women in adulthood, leading normal sex lives with no indication of sexual orientation toward women. Thus, the lack of androgen receptors appears to prevent both the masculinizing and defeminizing effects of androgens on a person's sexual interest (1).

There are two additional cases which show quite a strong correlation between sexual orientation and sexual differentiation. The first is a genetically transmitted condition, fairly common in one area of the Dominican Republic, that causes abnormal sexual differentiation in XY fetuses, which produce inadequate amounts of the enzyme required to convert testosterone to DHT in the external genitalia (3). The child is born with ambiguous genitalia, labia (with testes inside) and an enlarged clitoris, reflecting partial masculinization. These individuals are usually raised as females, but at puberty the rise in testicular androgen secretion causes the phallus and the scrotum to grow and the body to develop in a male fashion. At this age the individuals start assuming male tasks and having girlfriends (3). These cases suggest that early testosterone masculinizes the human brain and influences sexual orientation and gender identity. The second example involves a set of identical twin boys. They were raised normally until seven months of age, when the penis of one of the boys was accidentally burnt off during circumcision. The parents decided to perform a sex change on the child, removing the testes and raising him as a girl. When the child reached adolescence, she was unhappy and felt she was really a boy. The family admitted to her what had happened, and she underwent a sex change to become a boy again (1). This is another case suggesting that people's sexual identity and orientation are under the control of biological factors rather than environmental ones.

The brain is a sexually dimorphic organ. Neurologists discovered that the two hemispheres of a woman's brain appear to share functions more than those of a man's brain. Men's brains are also larger on average than women's (1). Most researchers believe that these differences arise from different levels of exposure to prenatal androgens. Several studies have examined the brains of deceased heterosexual and homosexual men and heterosexual women. The studies have found differences in the size of three different subregions of the brain: the suprachiasmatic nucleus, the sexually dimorphic nucleus of the preoptic area (SDN-POA), and the anterior commissure (2). The suprachiasmatic nucleus was found to be larger in homosexual men and smaller in heterosexual men and women. The SDN-POA was found to be larger in heterosexual men and smaller in homosexual men and heterosexual women. The anterior commissure was found to be larger in homosexual men and heterosexual women and smaller in heterosexual men (2). It cannot be concluded that any of these brain regions are directly involved in people's sexual orientation. The results do suggest, however, that the brains of heterosexual women, heterosexual men, and homosexual men may have been exposed to different patterns of hormones prenatally and that differences do exist.

If sexual orientation is affected by differences in exposure of the developing brain to androgens, there must be factors that cause exposure to vary. A study performed on rats showed that maternal stress decreased the secretion of androgens, causing an increased incidence of homosexual behavior and female-like play behavior in male rats (1). Prenatal stress has also been shown to reduce the size of the SDN-POA, which is normally larger in males (1).

The last biological factor that may play a role in sexual orientation is heredity. A study was performed using identical and fraternal twins. When both twins were homosexual, they were said to be concordant for that trait; if only one was homosexual, they were said to be discordant (1). There was 52% concordance of homosexuality in identical male twins versus 22% in fraternal male twins, and 48% concordance in identical female twins versus 16% in fraternal female twins (1). This study suggests that two biological factors may affect a person's sexual orientation: prenatal hormonal exposure and heredity.
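The concordance rates above amount to simple ratios of concordant pairs to total pairs studied. A minimal sketch in Python illustrates the arithmetic; the pair counts below are hypothetical, chosen only to approximate the quoted percentages, and are not drawn from the cited study:

```python
# Toy illustration of twin concordance rates.
# A pair is "concordant" if both twins share the trait, "discordant" otherwise.

def concordance(concordant_pairs, total_pairs):
    """Fraction of twin pairs in which both twins share the trait."""
    return concordant_pairs / total_pairs

# Hypothetical pair counts, chosen only to reproduce the quoted rates:
identical_male = concordance(29, 56)   # ~52%
fraternal_male = concordance(12, 54)   # ~22%

print(f"identical male twins: {identical_male:.0%}")
print(f"fraternal male twins: {fraternal_male:.0%}")
```

The comparison matters because identical twins share all their genes while fraternal twins share about half, so a higher concordance in identical twins is what points toward a hereditary contribution.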

Sexual orientation may be influenced by prenatal exposure to androgens, as the studies strongly imply. So far, researchers have obtained evidence suggesting that the sizes of three brain regions are related to a man's sexual orientation. The case in the Dominican Republic and the twin whose penis was accidentally damaged show that the effects of early androgenization are not easily reversed by the way a child is reared. The twin studies suggest that heredity may play a role in sexual orientation. Despite all this evidence, scientists are still unable to fully assert that biological factors are responsible for sexual orientation. There are so many variations within society that could affect a person's sexuality that it is impossible to make any firm assertions at this point. Therefore the nature-versus-nurture debate is still open.


Sources

1) Carlson, Neil R. Physiology of Behavior. 7th ed. Massachusetts: Allyn and Bacon, 2001.

2) Swaab, D.F., and Hofman, M. Sexual Differentiation of the Human Hypothalamus in Relation to Gender and Sexual Orientation. Trends in Neuroscience, 18, 264-270.

3) Breedlove, S.M. Sexual Differentiation of the Brain and Behavior. In Becker, J.B., Breedlove, S.M., and Crews, D. (Eds.), Behavioral Endocrinology, pp. 39-70.

4) Money, J., Schwartz, M., and Lewis, V.G. Adult Erotosexual Status and Fetal Hormonal Masculinization and Demasculinization: 46,XX. Psychoneuroendocrinology, 1984, 9, 903-908.


Helen Keller: A Medical Marvel or Evidence of the
Name: Ellinor Wa
Date: 2003-02-25 06:37:21
Link to this Comment: 4816


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


Everyone cried a little inside when Helen Keller, history's most famous deaf and blind figure, uttered that magic word 'wa' at the end of her scientifically baffling true story. Her ability to overcome the limitations caused by her sensory disabilities not only brought hope for many like cases, but also raised radical scientific questions as to the depth of the brain's abilities.


For those who are not familiar with the story of Helen Keller or the play 'The Miracle Worker', it recalls the life of a girl born in 1880 who fell tragically ill at the age of two, consequently losing her ability to hear, speak, and see. Helen's frustration grew with age; the older she got, the more apparent it became to her parents that she was living in an invisible box rather than in the real world. Her impairments trapped her in a life that seemed unlivable. Watching, hearing, and feeling the angst that Helen projected by throwing plates and screaming was enough to make her parents regret being blessed with their own senses. In hopes of a solution, the Kellers hired Anne Sullivan, an educated, visually impaired woman experienced in teaching the sensory disabled, who arrived at their Alabama home in 1887. There she worked with Helen for only a little over a month, attempting to teach her to spell and to understand the meaning of words versus the feeling of objects, before she guided Helen to the water pump and a miracle unfolded. Helen understood the juxtaposition of the touch of water and the actual word 'water' that Anne spelled out on her hand. Helen suddenly began to formulate the word 'wa' and repeated it, signaling her comprehension. This incident was, without a doubt, a great revolution in educating those with similar impairments, and was labeled a medical miracle.


I cannot help but think of the incredible capacity not only of Helen Keller's mind, but of the human mind in general. What enables the human brain to function with minimal inputs? How is the human sensory system able to remain dynamic and adaptive?


The sensory function of the nervous system is a complicated one, which still raises unanswered questions. While most sensory-disabled people experience a malfunction in only one area, Helen Keller experienced it in two: the loss of sight and the loss of hearing, which initially rendered her unable to speak at all. The loss of sight occurs when the retina, a layer of neurons formed at the rear of the eye and the only part of the brain that can be seen from outside the skull, is no longer able to perform properly. The natural ability to hear is orchestrated by receptor neurons in the ear, called 'hair cells', that transmit sound through vibrations. When these two senses no longer remain intact, the possibility of learning to communicate appears doubtful.


Helen Keller's transformation from a victim of her own condition to a college graduate, who published several books and traveled the world to help others with disabilities similar to her own, raises many questions as to how a human being can transcend such impairments. As previously noted, Helen fell ill at the age of two, long before a child learns to read, and often before one learns to speak, especially in the late 19th century. Had she been blind, deaf, and mute from birth, would she have been able to combat such ailments? Or was her previous natural sensory experience irrelevant, on the grounds that she would have been too young to register its functions?


In order to answer these questions properly, a variety of facts and analyses must be considered. Primarily, we must consider the extent of the miracle at hand. Before the age of two, Helen Keller not only did not know how to read or write, but did not know what reading and writing were in even the simplest form. The fact that she could negotiate, in her inexperienced mind, the relationship between fingers drawing lines on her hand, pronunciation, and the actual object (water) itself seems implausible. Whether or not some sort of innate notion of language was instilled in her brain from birth depends primarily on the amount of awareness of the English language, or of the senses, that she acquired before she became impaired. Furthermore, even had she acquired such a concept, it would not have been enough to surpass the boundaries of her inability to communicate and understand.


Scientists Bohm and Peat have theorized on Helen Keller's aided ability to communicate. They postulated that her natural instinct and previous knowledge of the senses led her to combine the notion of 'A' – water in its natural form (encountered in a variety of ways, including from the rain, tap, and pail) – and 'B' – the letters Anne would write on her hand when she experienced A. Essentially, she associated one form of learning with another. However, if this theory holds true, where does the final output lie? Helen's pronunciation of the word 'wa', meaning water, has no connection to either the word written on her hand or the feeling of the water in all its forms.


This brings me to the conclusion that the I-function must have played a key role in facilitating this process of compound understanding. Whether or not the I-function exists in the literal sense, its presence began to make its way into scientific rationality long before Christopher Reeve.

References

1)jstor home page, Scientific Monthly Vol.15 No.3

2)originresearch home page

3)The Life of Helen Keller

4), 5) Sensory Perceptions Homepage

6)More of the Life of Helen Keller


Brain Plasticity
Name: Kat McCorm
Date: 2003-02-25 06:47:04
Link to this Comment: 4817


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Throughout the line of questioning we have been following in our efforts to get "progressively less wrong" in our class-wide model of the brain, a constant debate has been sparked on the issue of whether brain equals behavior. If we agree that brain truly equals behavior, then we can surmise that vastly differing human behavior must also translate to differing nuances in the brain. It is widely conceded that experience also affects behavior, and therefore experience must also affect the brain. On this point, I have been intrigued: are these differences in the brain mysterious, things as readily theorized on by a philosopher as researched by a biologist? Or can an experience actually change the physical structure of the brain? In my web research, I found a partial answer in the concept of plasticity.

According to source (1), "Plasticity refers to how circuits in the brain change--organize and reorganize--in response to experience, or sensory stimulation." There appear to be four types of stimuli to which a brain responds with change: developmental, as in the newly formed and ever-evolving brain of a child; activity-dependent, as in cases of lost senses; learning and memory, in which the brain changes in response to a particular experience; and finally injury-induced, resulting from damage to the brain, as occurs in a stroke or in the well-known case of Phineas Gage. Although the particular change in the brain depends on the type of stimulus, brain plasticity can be broadly described as an adjustment in the strength of synaptic connections between brain cells. (1)

The developmental function of brain plasticity is important not only in the world of early childhood, but also has implications for the function of an aging brain. As we age, synaptic plasticity decreases due to the increased expression of neurotoxins in astrocytes, which are responsible for cell-cell communication (2). Similarly, in youth, increased synaptic plasticity accounts for the inordinate amount of growth and learning that must occur in this stage of development.

Much research has been done on injury-induced plasticity, and continues to be done with the hopes of minimizing the effects of an injury on the brain. One such case is brain injury due to stroke, wherein particular functions of the brain such as motor control, memory, or language may be affected. According to source (3), "a reorganization of brain functions may occur through 'uninjured' brain areas, allowing then-altered functions to be performed differently". If this function of brain plasticity can be enhanced and emphasized, it is perhaps possible, with further research and experimentation, to minimize the effects of brain injury such that many or all symptoms are eliminated.

In the area of activity-dependent plasticity, a study has been done comparing patients who had gone through a period of deafness and recently received cochlear implants to a control group made up of individuals with normal hearing (4). A positron emission tomography, or PET scan, was done on both groups to compare which parts of the cerebral network were engaged in each. The results showed that the two groups, when confronted with the same stimuli, would in some cases use entirely different parts of their brains to receive and interpret the same sounds. According to an interpretation of the study, "These data provide evidence for altered functional specificity of the superior temporal cortex [and] flexible recruitment of brain regions located within and outside the classical language areas." (4) This study indicates to me the adaptivity of the brain to outside stimuli in such a light that it becomes irrefutable.

In reflecting on this newfound knowledge, it becomes clear that experience does indeed account for differences within the brain, modulating synapses such that a certain response is favored as a result of a certain stimulus. Coming from a family of psychologists, I imagine applying this phenomenon to a situation often encountered by my parents: a person with a phobia. Typically, psychologists attribute phobias to a long-ago encounter, sometimes remembered and sometimes long forgotten, which caused the person always to have the protective response of fear when faced with a certain stimulus. Although an explanation such as this is referred to by the general public as psychobabble, the concept of brain plasticity may lend credence to this psychological theory. Say a child has a fear-inspiring encounter with a spider. This single experience has the power to modify the strength of the synapses within his brain such that every time the arachnophobe is faced with spiders, his fear response is much more potent than the average person's. In this way, I can see the plasticity of learning and memory relating to long-held theories about differing human behavior.
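The synaptic-strengthening idea in the paragraph above can be caricatured with a basic Hebbian learning rule, a standard textbook model rather than anything specific to the studies cited here. The rule says a synapse's weight grows when pre- and postsynaptic activity coincide, so a single intense pairing of stimulus (spider) and response (fear) leaves behind a stronger connection. All numbers below are illustrative:

```python
# Toy Hebbian model of synaptic strengthening: delta_w = eta * pre * post.
# Purely a caricature; values are arbitrary, not a model of any real circuit.

def hebbian_update(weight, pre, post, eta=0.5):
    """Strengthen the synapse in proportion to coincident activity."""
    return weight + eta * pre * post

weight = 0.1  # weak initial stimulus -> fear connection
# one intense pairing of "spider" input (pre=1.0) and fear output (post=1.0)
weight = hebbian_update(weight, pre=1.0, post=1.0)
print(f"synaptic weight after the encounter: {weight:.2f}")  # 0.60
# with no coincident activity, the weight is unchanged
print(f"after a spider-free day: {hebbian_update(weight, 0.0, 1.0):.2f}")
```

The point of the sketch is simply that one coincidence of input and output is enough to change the weight, which is the sense in which a single childhood encounter could bias all later responses.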

The concept of synaptic plasticity is not merely limited to humans, however. In one study done by Sharen McKay, et al., of Yale University, the well-known vertebrate mediators of synaptic plasticity (neurotrophins) were artificially placed in an invertebrate setting and again influenced neuronal growth and plasticity (5). Although invertebrates do not produce neurotrophins, substances with similar effects have been isolated in several invertebrate species. These results suggest that synaptic plasticity is a function widely found across many species, not merely in a developmental role, but also in the adaptation and modification of adult brains. This article again inspired my theorizing: does the suggestion that "adult plasticity [is] highly conserved across diverse phyla" (5) indicate that the ability of the brain to adapt to learning, memory, and specific experience is an evolutionary advantage?

In researching and learning about the types of brain plasticity, I found more evidence for the idea that brain equals behavior, and that the experiences which are input into the brain directly cause changes in synaptic activity and strength. Although I have found the ideas inherent in brain plasticity intriguing, I'm also afraid that they have raised more questions for me than they have answered. I find myself wondering about the evolutionary advantages of plasticity, the amount of time it takes to effect a physical change in the brain, and why, even in the face of plasticity, certain functions of the brain never seem to adapt to the demands of the modern-day world.


References

1) Brain Plasticity, on the John F. Kennedy Center for Research on Human Development web site.
2) Glia-neuron intercommunications and synaptic plasticity, on the PubMed web site.
3) A Comprehensive Functional Approach to Brain Injury Rehabilitation, on the Brain Injury Source web site.
4) Functional plasticity of language-related brain areas after cochlear implantation, on the PubMed web site.
5) Regulation of Synaptic Function by Neurotrophic Factors in Vertebrates and Invertebrates: Implications for Development and Learning, on the Learning & Memory web site.


Pathways of Pain and Possibility: The Dysfunction
Name: Danielle M
Date: 2003-02-25 07:26:15
Link to this Comment: 4818


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

As a disorder reaching nearly every culture, historic and contemporary, headache has been experienced in some form by the majority of the human population. Despite its relative age and prevalence, we have yet to fully ascertain its cause, its organization, or its cure, and continue to suffer everything from quotidian tension-type headache to cluster or "suicide"-type headache with little substantial relief. So just what is it we know about headache? Broadly, headache is largely understood in terms of its external characteristics, that is, its symptoms and their effects on the sufferer. Headache is described as "a throbbing, pulsating or dull ache, often worsened by movement and varying in intensity [that] can be a disorder unto itself, such as migraine, or a symptom of another disorder ranging from a head injury to a brain tumor" (1). The term headache comprises a wide range of subtypes, the most common and studied of which are tension-type, migraine, and cluster headache (1). Of focused attention in this paper is migraine headache, a disorder now afflicting between 23 and 28 million Americans, about 60% of whom go undiagnosed (1).

Historically, migraine and its symptoms have appeared in popular as well as medical literature for more than 1000 years (1). Arguments addressing the nature of migraine pain have thus long been debated, particularly as to whether it is the result of "a disorder of blood vessels (vascular) or of the brain itself (neural)" (2). As early as 1873, Edward Liveing published a theory of the link between migraine and other neurological disorders, including insomnia, epilepsy, vertigo, and vasovagal faints. He asserted with surprising accuracy what contemporary researchers are still discovering: that migraine and other neuroses are a dysfunction of the nervous system originating in the cerebral cortex of the brain (2). Now, more than a century later, researchers are engaged in the continuing search for proof of the true pathogenesis of migraine and its cure. What has thus far been accepted is the rather vague definition of migraine headache as "a primary episodic headache disorder characterized by various combinations of neurological, gastrointestinal and autonomic changes" (1). Yet for its pervasiveness and deleterious effects, what do we truly understand about migraine headache? How is migraine related to other neurological dysfunction? Is there a cure? If so, how can it be found?

That which is most understood about migraine is of course that which is most readily observed and documented--its manifest symptoms. Previously lacking the objective means of discovering the underlying cause, researchers were constrained to understanding headache "by analyzing its symptoms" (3). Migraine headache has thus been divided into two types according to the accompanying symptoms: common migraine, which affects the majority of sufferers, and classical migraine, which affects about 15% of migraine patients (3). Both common and classical migraine are associated with extreme throbbing pain that is often unilateral (one-sided), nausea, and "exquisite" sensitivity to light (photophobia) and sound (sonophobia) (3). Those with classical forms of migraine, however, experience an additional symptom, "a distortion of vision that can be hallucinatory in nature" known as a migraine "aura" (3). Both classical and common migraine follow a common symptomatic pattern once triggered, lasting a total of between 12 and 24 hours. The pattern of migraine behavior is divided into four stages: in classical migraine, symptom progression begins with the prodrome, leading to the aura, the headache, and then the postdrome; common migraine follows the same path without the experience of an aura (1).

The migraine prodrome occurs in the 24 hours preceding the headache stage and consists of feelings of extremes of euphoria and unbridled energy or lethargy and depression, with the possibility of "vague yawning," or distinct food cravings or aversions (2).

In classical migraine, the prodrome is followed by the aura, which arises before or during the headache, usually lasting about 30 minutes. The aura is characterized by visual sensations of "flashing lights, zigzag castellations, balls or filaments of light," or metamorphopsia, where objects appear distorted and out of proportion (2). The sufferer's hands may feel swollen, or her tongue too large for her mouth (2). These disturbances are thought to result from dysfunction in the parietal and occipital lobes of the brain; they have also been credited as the possible inspiration for "the grotesque experiences of Lewis Caroll's Alice in Wonderland," as Caroll was known to suffer from headache (2).

The next stage in both classical and common migraine is the headache itself, sometimes centered on the eye or temple, or beginning as a tightness at the back of the head and spreading slowly into a one-sided pain. The pain is described as throbbing or pulsating, but may also occur as "a non-specific dull ache" or a band of pressure. Movement or exertion worsens the discomfort, and nausea and vomiting may occur at the height of pain (2).

Finally, the headache is followed by the resolution phase, the postdrome, marked by "diminished headache intensity" in company with "a sense of relief--almost euphoria in some," though there may be "a vague bruised aching in the head and general tiredness" (2).

Triggers of migraine are much less well understood than the symptoms they cause--researchers are unable to reach a consensus on which factors consistently trigger migraines, primarily because there is in fact little such consistency. The factors most frequently blamed include hormones, insomnia, hunger, or changes in barometric pressure, though recurrent triggers vary across individual sufferers (4). Of the identified migraine triggers, two in particular could lead to new observations in the theory of migraine: hormone fluctuation and food craving/aversion. The implication of cyclical drops in estrogen levels in migraine has led to an increasing body of research and speculation about women and migraines (4). In the effort to explain the pronounced prevalence of migraine among females, researchers have noted the development of significantly higher rates of migraine in girls than boys during puberty, the increasing difficulty of managing migraine during the premenopausal period, and the worsening effect of oral contraceptives and estrogen replacement therapy on migraine occurrence (4). That migraine can result from falling estrogen levels may be indicative of the close correlation between hormones and the neurotransmitters involved in migraine dysfunction. Because estrogen is involved in priming blood vessels to receive serotonin, when estrogen drops, the receptiveness of blood vessels to serotonin may drop as well, causing both migraine and depression-like symptoms.

Certain foods such as chocolate, coffee, red wine, cheese, or fish have also been identified as triggers in the past, though some researchers now believe dietary factors to be related to "suggestion and psychological conditioning" or "neighborly suggestion and pseudoallergy," rather than actual food-specific sensitivity (2). This dismissal, however, may not consider closely enough the relation between hormonal changes and the unusual attitudes toward specific foods often experienced during periods of hormonal fluctuation, such as menstruation or menopause, and seen in the prodromal stage of migraine. Such a link may be indicative of the role of particular foods and their biological effects on brain chemicals associated with the sensation of emotional sorrow or pleasure. The unusual attitudes toward food seen in the prodromal stage of migraine, along with the co-occurring disruptions of emotion (euphoria, lethargy, depression), may be a clue about the possible relationship of migraine not only to neurotransmitter regulation in the blood vessels, but also to food and its ability to affect migraine-related brain chemistry. Despite such dismissals, food may well be a trigger of migraine attack worthy of exploration, potentially able to reveal further insight into the interplay of neurotransmitters, food, and pain/pleasure in migraine headache.

Beyond what is externally manifest during migraine headache, what is known of the internal pathogenesis (cause) and pathophysiology (changes associated with dysfunction)? At present, knowledge of headache, migraine or otherwise, is relatively limited, and what is known has been only recently discovered through new brain-imaging technology. Previously, little to no information on brain activity during migraine was available, and theories of the disorder's cause and internal manifestation were vague and unsubstantiated. The most widely held past theory held stress to be the main contributor to all forms of headache, supposing it to somehow act on the arteries of the brain, thereby constricting blood vessels and reducing oxygen flow to the brain. The vessels were said to then inexplicably contract and irritate the nerves of surrounding arteries, which subsequently caused the sensation of headache pain (the brain itself does not feel pain) (4). More recent researchers now question the theory, reasoning, as did neurologist Neil Raskin, that: "If it were true, we would get a headache after running around the block," which also causes blood vessels in the brain to contract (4).

How then does the brain experience pain? If the brain itself does not feel pain, why do headaches produce severe head pain? The answer (at present) lies with the structures of the brain that do feel pain, which include the meninges (the three membranes that envelop the brain and spinal cord: the pia mater, dura mater, and arachnoid membrane), arteries and veins, paranasal sinuses, the eyes, and the cervical spine (2). The mechanisms causing head pain are the distension, inflammation, and traction of the blood vessels, ventricles, and dura mater (2). When pain-sensitive structures are irritated, the trigeminal nerve, the sensory pathway between the head and neck, acts as the pain conduit. Specifically, pain is relayed from the nerves that supply pain-sensitive structures at one end to the upper three cervical nerve roots at the other, from which nerves are sent up into the face and scalp (2). The location and function of the trigeminal nerve in the brainstem could account for the location of pain around the eye or temple in migraine headache and the sensation of unilateral throbbing pain of migraine (3). In experimental studies this theory has been supported by the parallel pain experienced when researchers irritate the blood vessels of the brain (6).

Revised notions have therefore begun to suspect that blood vessels themselves are more likely the "victims" of headache, rather than the cause, as originally argued (3). One of the first experiments to show such evidence for a different headache pathogenesis irritated the brains of laboratory rats (whose trigeminal stimulation occurs very similarly to humans') with a chemical or probe. The result was an observed wave of electrical impulses through the brain, followed by reduced electrical activity that "spread out from the point of contact like ripples on a pond" (3). This experimental approach was next tested with human brains, this time using positron-emission tomography to observe resultant activity. The result was the same--an electrical wave that researchers subsequently termed cortical spreading depression (CSD), seemingly sparked by the firing of abnormally excitable neurons at the back of the brain (7). These impulses were seen to spread across the cerebral cortex of the brain down the brainstem, where important pain centers are located, and to activate afferent (sensory) neurons through the trigeminal nerve (7), (8). As the wave passes, blood flow in the brain increases, then sharply drops--the pain of headache is thought to result from the activation of the brainstem and the resulting dilation of blood vessels (8). This drop may result from signals from the hypothalamus, itself influenced by such triggering factors as seasonal patterns, diurnal and biological mood swings, fatigue, or hormonal fluctuation (6)--all of which are suspected prompters of migraine pain, and interestingly, often related to depression as well. The stimulation of nerve cells in the brain stem by CSD thus increases blood flow to the scalp and face through a parasympathetic reflex (called the trigeminovascular reflex). 
Here, the neurotransmitter serotonin, along with norepinephrine, is implicated as the likely chemical messenger between the neurons originating in the brain stem, telling blood vessels to dilate or contract (3). This new migraine model thereby attempts to answer the question of whether headache pain is vascular or neurological by implicating them both in a cause and effect relationship called the "trigeminovascular reflex" (9).

With a growing knowledge of what underlies head pain, what then accounts for the often implicated emotional pain of migraine symptoms? Among researchers studying migraine headache in patients, "there is a consensus that migraine sufferers have a higher risk of anxio-depressive disorders than the general population," but the nature of the link remains uncertain (10). The relationship may be a result of chance; one disorder may cause the other; the two may each be incited by common environmental factors; or there may be a common biology underlying both conditions (as some researchers think most likely) (11). Interestingly, not only is depression often seen in patients who experience migraine, but many migraine symptoms mirror similar features of depressive and anxiety disorders. The euphoria of the prodrome and postdrome stages, as well as the pronounced lethargy of the prodrome, echo in miniature the drastic mood shifts of bipolar depressive disorder and some personality disorders. This may be explained by the similarity of brain activity found in depression and migraine, including activity in the hypothalamus, which is implicated in the hormonal and mood regulatory functions of the limbic system, and the neurotransmitter serotonin, which is linked to depression and the sensation of emotional suffering or pleasure. According to researcher Fred Sheftell, depression is mediated by the same neurotransmitters, and "migraine and depression often occur in the same people...More than 70% of people with migraine have a close relative who also suffers from the disorder" (9).

The significance of the relation between migraine headache and serotonin is further evidenced by the "significant comorbidity between chronic headache and psychological distress" (5). Both "frequent headache and frequent disability are associated with depression, anxiety, and impaired quality of life," while "headache associated with anxiety or depression tends to be more severe and often requires supplementary psychological treatment in addition to headache therapy" (5). In a study published in the medical journal Cephalalgia, researchers found a significant link between the abnormal perceptual experiences of migraine aura and mood changes, particularly increased depression and irritability. The spreading depression of cortical electrical activity was concluded to be responsible for the temporal lobe and limbic system dysfunction experienced in both migraine and depressive disorders (5). Further, in a study by Guillem, Pelissolo, and Lepine examining the lifetime prevalence of major depression in migraine sufferers, 34.4% of migraineurs were found to have experienced clinical depression, in contrast to only 10.4% of those without migraine. For bipolar I disorder, the prevalence was 6.8% among people with classical migraine, versus 0.9% in those without (5). This evidence opens a new avenue of discussion and calls for more investigation. Increased understanding of the neurological ties between migraine and depression could aid in the development of more comprehensive knowledge of migraine headache and how it can be treated. Beyond migraine, better comprehension of such neurological activity would also represent significant progress in the understanding of brain function and dysfunction in a more general sense.
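To put the prevalence figures above in perspective, a quick calculation of the prevalence ratios is illustrative. This is only a rough sketch using the percentages as quoted; the original study's statistics (odds ratios, confidence intervals) are not reproduced here, and the group labels are descriptive paraphrases.

```python
# Lifetime prevalence figures as quoted above (Guillem, Pelissolo, and Lepine).
depression = {"migraineurs": 0.344, "non-migraineurs": 0.104}
bipolar_I = {"classical migraine": 0.068, "no migraine": 0.009}

# Simple prevalence ratios as a crude measure of association.
dep_ratio = round(depression["migraineurs"] / depression["non-migraineurs"], 1)
bip_ratio = round(bipolar_I["classical migraine"] / bipolar_I["no migraine"], 1)

print(dep_ratio)  # 3.3 -- depression is roughly 3x as prevalent among migraineurs
print(bip_ratio)  # 7.6 -- bipolar I is roughly 7-8x as prevalent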

Finally, the question most crucial to migraine sufferers: is there a cure to eliminate any and all future migraine pain? And if so, when will it be found? Currently, a cure for migraine headache pain has yet to be found, though patients can seek a variety of treatments, including psychophysiological therapy, physical therapy, and pharmacotherapy (5). Pharmacotherapy is the most widely used method, whose popular drug treatments are classified as either prophylactic (preventative), analgesic (pain reducing), or abortive (to stop an acute attack in progress) (4). While no cure has yet been found, the discovery of CSD activity in the brain and the added understanding of the headache process to which it has contributed signal a significant advance in headache research. Studies of CSD could be useful in prophylactic migraine treatment by detailing how or why headache is triggered and just how "CSD activates the trigeminal nerve" (2). The recent discovery of a gene apparently linked to migraine heredity may also offer another step toward more effective treatment. In January of 2003, Italian scientists identified a gene, ATP1A2, on human chromosome 1 that was linked to severe migraines in a four-year study of six generations of a migraine-prone family. The scientists believe a mutation of the gene causes "a malfunction of the pump that shifts sodium and potassium through the cell" (10). The presence of the mutant gene was associated with migraine pain, the sensation of tingling hair, and the perception of flashing lights commonly associated with migraine aura (10). If such a gene can be definitively linked to the source of migraine pain, the development of more direct and effective preventative treatment could become a significantly nearer reality.

While much remains unknown about migraine headache, new research and experimental brain-imaging tools are actively seeking deeper and more comprehensive knowledge. Advances may be slow in the understanding of a disorder that is likely as old as man himself, but with each new discovery about the internal structures and functions of the brain, we come closer to better treatment, maybe even a cure. Because of the vastness of the brain's complexity, understanding one manifestation of dysfunction, such as migraine, may well also provide new leads to understanding other closely linked neurological disorders, such as depression or anxiety. Perhaps migraine disorder is a single feature in a chain of intertwined, co-occurring disorders, or perhaps it stems from a malformed gene passed down through generations of a family. With so much remaining unknown, each possibility can lead to fascinating new relationships and functions of the brain previously unimagined.

References

1)Headache, by Stephan D. Silberstein, on the Encyclopedia of Life Sciences web site.

2)Migraine, by J.M.S. Pearce, on the Encyclopedia of Life Sciences web site.

3) Brownlee, Shannon. The Anatomy of a Headache. U.S. News & World Report, 31 July 1989.

4)Migraine, by The Merck Manual of Diagnosis and Therapy on the Merck Manual web site.

5)Identification of Patients with Headache at Risk of Psychological Distress, by D.A. Marcus, Headache, May 2000 on the Web of Science web site.

6)Intrinsic Brain Activity Triggers Trigeminal Meningeal Afferents in a Migraine Model, by Hayrunnisa Bolay, Uwe Reuter, Andrew K. Dunn, Zhihong Huang, David A. Boas, and Michael A. Moskowitz, Feb 2002 on the Nature Medicine web site.

7)New Research Could Open Doors to Better Migraine Treatment, by CNN on CNN.com.

8)New Understanding into the Mechanics of Migraine, by Kathryn Senior, Lancet, 9 Feb 2002 on Lexis-Nexis.

9)Sheftell, Fred D. Migraine Headaches. Scientific American, Summer 1998.

10)Italian Scientists Discover Migraine Gene, by Rachel Sanderson, 21 January 2003 on MEDLINEplus.

11)Inheritance of Cluster Headache and its Possible Link to Migraine, by L. Kudrow, Headache, July 1994 on the Web of Science web site.

12)Prevalence of Frequent Headache in a Population Sample, by J. Liberman, R. B. Lipton, Al Scher, and W. F. Stewart, July-August 1998 on the Web of Science web site.


Of Two Minds
Name: Sarah Feid
Date: 2003-02-25 08:24:56
Link to this Comment: 4819

<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Intro
"I'm of two minds on the matter." "I can't make up my mind." "I'm having an internal argument." Our language is full of idioms that make it sound as if there were two disagreeing voices inside our heads. Often, that is indeed how it feels. But is that sensation physiologically supported? Can a brain fight with itself? Can there be multiple independent centers of consciousness in a single head? Until the 1960s, there was no way for us to test this feeling of internal disagreement. But when a surgery aimed at alleviating epileptic seizures also isolated the two hemispheres of the patient's brain, science was surprisingly afforded that opportunity.

Background
The left and right hemispheres of the brain are connected by a dense bundle of neurons called the corpus callosum. This bundle is primarily responsible for communication of information between the two hemispheres, connecting them with approximately 200 million callosal axons in humans (1). In some cases of multifocal epilepsy, the electrical discharges that cause seizures can start in one hemisphere and spread to the other by way of the corpus callosum, greatly increasing the severity of the fit. Sometimes this condition is unresponsive to medication, at which point the spasms can only be controlled with more drastic measures. (2)

In the early 1960s, Dr. Michael Gazzaniga began studying patients who underwent an operation that had been pioneered on animals by Drs. Ronald Myers and Roger Sperry, but which had never before been tested on human patients. In this procedure, called a commissurotomy, the surgeon opens the skull, lays back the brain coverings with a cerebral retractor, and cuts through the corpus callosum. While this prevents a seizure from spreading, it also prevents information from being passed between hemispheres. Thanks to Dr. P. J. Vogel, we now know that severing the anterior ¾ of the corpus callosum can effectively stop the spread of a seizure while allowing full communication between the hemispheres to remain. (3) However, the behavior of full-commissurotomy patients has been extensively documented, and provides fascinating insight into the specialization of the hemispheres, the nature of the brain, and the nature of consciousness itself.

Results
To understand these behaviors, one must first remember that the neurological wiring of the body is, for the most part, contralateral. Signals travel from the left side of the body to the right hemisphere of the brain and back, and vice versa. For example, the left hemisphere "sees" the right half of the visual field and moves the right hand.

Patients who have undergone a commissurotomy, "split brain patients," experience strange, acute post-operational symptoms: many have trouble speaking, or are completely mute; often they experience inter-manual conflict, where their hands cannot cooperate; when speech is possible, many remark that their left hand is behaving in a "foreign," "alien" manner, and they express surprise that it is acting so "purposefully." These symptoms fade over time. The long-term symptoms are much more difficult to distinguish in an everyday setting. Split brain patients function normally in social settings, except for slight memory problems. Pianists can still play the piano, artists can still paint. Once in an experimental setting however, more phenomena can be observed which point to the dramatic impact of full commissurotomy on cerebral function. (2)

Commissurotomy patients are usually tested with a tachistoscope, a device that presents to the patient a screen with a focal point in the center. The tachistoscope then flashes a word, picture, or scene in one visual hemifield while the patient fixates on the central point. The flash is faster than the eye can move, ensuring that only one hemisphere of the brain receives the input. (4) When a key and a ring are flashed simultaneously in the left and right fields respectively, a normal viewer reports seeing "keyring". A split brain patient reports seeing only "ring". Even more notably, if the patient is told to use his left hand to pull the object he saw out of a bag, he pulls out a key. When asked what it is he pulled out (without looking), the patient says, "A ring." (5)

Why the apparent duality?

The explanation requires a rethinking of the brain. Think of the hemispheres as separate, independent entities – the left hemisphere is commonly the center of language, while the right hemisphere is mute. (2) When the patient was asked what he saw, the "speaking" hemisphere replied truthfully: it saw a ring. It received no information from the right hemisphere (which saw the left visual field) because of the commissurotomy, and so had no way of knowing about the key. When the patient was asked to use his left hand to pull out what he saw, the right hemisphere, which saw a key, told the left hand to pull out a key. Subtle tactile sensations are contralaterally wired, so the left hemisphere could not feel the key; thinking all that was shown was a ring, it assumed that the left hand pulled out a ring.
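The logic of this explanation can be captured in a short toy model. This is a sketch of the reasoning only, not of anatomy; all the names below (the dictionary keys, the function) are illustrative inventions.

```python
# Toy model of the split-brain tachistoscope trial described above.
# Assumes fully contralateral visual wiring and a cut corpus callosum.
CONTRALATERAL = {"left visual field": "right hemisphere",
                 "right visual field": "left hemisphere"}

def split_brain_trial(left_field, right_field):
    stimuli = {"left visual field": left_field, "right visual field": right_field}
    # With the corpus callosum severed, each hemisphere receives
    # only its contralateral visual field and shares nothing.
    seen = {CONTRALATERAL[field]: item for field, item in stimuli.items()}
    # Speech is produced by the left hemisphere; the left hand is
    # controlled by the (mute) right hemisphere.
    return {"verbal report": seen["left hemisphere"],
            "left-hand choice": seen["right hemisphere"]}

result = split_brain_trial(left_field="key", right_field="ring")
print(result)  # {'verbal report': 'ring', 'left-hand choice': 'key'}
```

The model reproduces the dissociation: the patient says "ring" while the left hand retrieves the key, with neither "side" aware of the conflict.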

These startling findings are one example of a long series of experiments conducted to pinpoint the nature of the dual-hemisphere brain. Researchers over the years have examined everything from language capability and pitch acuity to academic strengths and compensation mechanisms. Among other things, they have found that the left brain is better at language, math, and analytical processing, while the right brain is superior in visuospatial processing, music, and form. Both hemispheres can control shoulder and upper arm movement, but fine hand motion and visual/auditory processing are controlled only by the contralateral hemisphere. (2)

Over time, the hemispheres develop compensatory "clueing" mechanisms. For example, a patient may begin speaking the names of objects out loud to get her left hand to retrieve them. If she is trying to figure out what she is holding in her left hand without looking, the left hand may dig the point of a pencil into its palm, shooting ipsilaterally-wired pain signals to both hemispheres and causing the left hemisphere to realize that it's holding something sharp.(6)

The hemispheres seem to have different opinions, too – a split brain patient named Paul awoke from surgery to show an unheard-of amount of language skill in his right hemisphere. Though it could not speak, there was not only word association, but also complete sentence comprehension. Researchers were able to ask each hemisphere the same questions and compare the written answers. When asked, "Who are you?" both hemispheres answered, "Paul." But when asked, "What do you want to be?" the right hemisphere answered, "Automobile racer," while the left replied, "Draftsman." (1)

Philosophy
So here we have a picture of a dual consciousness, a picture so powerful we refer to each hemisphere as a separate person. How accurate is this image? Do we really have two separate centers of identity, which work together until they are split? Is this a real-life Jekyll and Hyde condition, with two minds in the same head? How separate is split hemispheric consciousness in reality? Gazzaniga comes to an interesting conclusion:

After many years of fascinating research on the split brain, it appears that the
inventive and interpreting left hemisphere has a conscious experience very different
from that of the truthful, literal right brain. Although both hemispheres can be viewed
as conscious, the left brain's consciousness far surpasses that of the right. Which raises
another set of questions that should keep us busy for the next 30 years or so. (7)

Not only does he believe that each hemisphere has its own consciousness, he clearly finds the left hemisphere to be superior to the right. In contrast, Dennett, author of Consciousness Explained, states:

When Dr. Jekyll changes into Mr. Hyde, that is a strange and mysterious thing. Are
they two people taking turns in a single body? But here is something stranger:
Dr. Juggle and Mr. Boggle [standing for the left and right cerebral hemispheres of a
single body] too, take turns in one body. But they are as like as identical twins! Why
then say that they have changed into one another? Well, why not: if Dr. Jekyll can
change into a man as different as Hyde, surely it must be all the easier for Juggle to
change into Boggle, who is exactly like him. We need conflict or strong difference
to shake our natural assumption that to one body there corresponds at most one agent (6)

His argument here is that, since there are no significant differences in personality between the two hemispheres (as there are with Jekyll and Hyde), there are not two separate consciousnesses.

In his book, Dennett seeks to discredit a theory he calls the Cartesian Theater of the mind. In the Cartesian Theater theory, the mind consists of a sum of parts that feed into a central ethereal "theater," a site of synthesis, a nucleus of consciousness with input from everywhere in the body.(8) That theory has interesting parallels to the brain's being the nucleus of the nervous system, interpreting inputs from all over; but in this theory, there is a central locale where this synthesis is projected. Where is that in a split brain patient?

To apply that theory to the brain, we would see a single "consciousness," independent of the individual hemispheres, yet viewing their information in a sort of mental simulcast. Without the corpus callosum, this simulcast would have the handicap of receiving not a single synthesized feed of information from an integrated brain, but two distinct and sometimes contradictory lines of input. The mental "picture" is split, and confusing for a while, but in the end it is comprehensible by the "consciousness." This would account for the disorienting acute post-operational symptoms, and the subsequent return to normal social function. In this model, it is not the consciousness that is split, simply the picture presented to the consciousness.

Conclusion
So which is it? Does splitting the brain split consciousness into two distinct instantiations? Or does splitting the brain merely disjoint the "theater projection" presented to the single immaterial consciousness of an individual? That is a question open to debate. Those who believe in a transcendent human soul may subscribe more easily to the Cartesian Theater application, while devoted followers of the brain=behavior philosophy might more readily agree with Gazzaniga. Perhaps the contradictions evidenced by split brain behavior point to a dramatically strange Jekyll-Hyde situation, with a foreign and mute brain-half in control of the left side of the body. Or maybe, when their hands fight each other, commissurotomy patients are simply experiencing what we all do: they can't make up their minds.

References

(1) Split Brain Consciousness An extensive educational site about the brain and split brain studies, on Macalester College's site.

(2) The Split Brain A paper by J. Bogen et al. Bogen was one of the first extensive researchers of this phenomenon. This paper on Caltech's site is an excellent comprehensive resource.

(3) Splitting the Human Brain Article by Paul Pietsch PhD on the ShuffleBrain website, a resource a la Serendip.

(4) Neuroscience for Kids A good, simple, but detailed description provided by the faculty of the University of Washington.

(5) Two Brains or One? The Split Brain Studies A history of Sperry and the early studies.

(6) Dennett on the Split Brain A philosopher's review of Dennett's book, on the Psycoloquy website.

(7) Human Neurobiology: Split Brain Research An overview on the Educational Cyber-Playground website.

(8) Stage Effects in the Cartesian Theater: A review of Daniel Dennett's Consciousness Explained Another philosophical review, on the Psyche website.


The Correlation between Drug Tolerance and the Env
Name: C. Kelvey
Date: 2003-02-25 08:36:40
Link to this Comment: 4820


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

When considering the dynamics of brain and behavior, another component that enters the equation is environment. If brain equals behavior, then changes to either should result in an alteration to the other component. The question that arises is whether a change in the environment produces change in brain chemistry and therefore, behavior. A connection between brain, behavior and environment may be observed in the context of drug tolerance. A collection of questions seems essential to consider when attempting to correlate the brain's development of an observable drug tolerance and the environment.

• Does the environment affect drug tolerance? How?

• What observations correlate environmental cues and a tolerance to drugs?

• How can an understanding of the correlation between environment and tolerance be applied to medical care and treatments?

Tolerance is caused by the brain's ability to adapt to or compensate for the presence of a chemical (1). Tolerance developed after a repeated exposure to a drug is due to two possible biological processes. One process involves a decrease in the concentration of the drug at the effector site due to changes in the absorption, distribution, metabolism or excretion of the drug. The other process involves changes in the sensitivity towards a drug due to adaptive changes that diminish the initial effects of a drug. The nervous system is able to adapt and thereby reduce the initial effects of a drug by using two methods. The first method involves a change in the number or properties of drug-sensitive receptors. The second method provides a coordinated compensatory response to counteract or oppose the effects of a drug (2).

In terms of drug administration, various observations indicate that the nervous system adapts through a compensatory response. A delayed recovery to the initial stimuli would be expected if the receptors were numerically or physically altered. However, after terminating a drug treatment, a delayed recovery to the initial stimuli has not been observed. Instead, the body readapts to the drug-free state (2). Furthermore, reactions that are almost directly opposite to the desired effect of the drug are observed. These reactions are termed withdrawal symptoms and are observed when no drugs are administered and compensatory processes operate unopposed (2). If the nervous system provides compensatory responses, then the question arises as to whether environmental cues initiate the response.

Environmental cues that initiate responses were observed by the Russian physiologist Ivan Pavlov (1849-1936). Pavlov made the initial observations pertaining to 'classical conditioning'. Pavlov correlated environmental cues and physiological changes as he observed dogs salivating in response to a collection of cues that signaled feeding time. Without the stimulus of food present, there was an observable response to the anticipated stimulus as the dogs salivated in preparation for the imminent arrival of food (3).


Siegel et al. (1982) applied Pavlov's model of 'classical conditioning' to the administration of drugs (4). Siegel et al. observed that the anticipated stimuli signaled by environmental cues provided an additional drug tolerance. An experiment was designed with three groups of rats. For thirty days, one group received injections of heroin in the Colony room (normal living conditions) and placebo (dextrose) in the Noisy room the next day, while the second group received placebo in the Colony room and heroin in the Noisy room. The third group received placebo in both the Colony room and the Noisy room. After 30 days, the rats were injected with a potentially lethal dose of heroin. Of the rats that had never been previously injected with heroin, there was a 96% mortality rate. There was a 64% mortality rate among rats that were administered heroin in a different room from where they had previously been injected with the drug. Of the group of rats that received heroin injections in the same environment, there was only a 30% mortality rate. (5)
These observations show that while there is a tolerance to the drug itself through repeated exposure, environmental cues result in an additional tolerance.
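A rough decomposition of the mortality figures above makes the two layers of tolerance explicit. The group labels are descriptive paraphrases (not the paper's own terminology), and the subtraction is only an informal illustration, not the study's statistical analysis.

```python
# Mortality rates from the Siegel et al. (1982) experiment described above.
mortality = {
    "no prior heroin": 0.96,
    "heroin in an unfamiliar room": 0.64,
    "heroin in the familiar room": 0.30,
}

# Informal decomposition: tolerance from drug exposure alone vs. the
# additional tolerance associated with the familiar environmental cues.
pharmacological_effect = mortality["no prior heroin"] - mortality["heroin in an unfamiliar room"]
cue_effect = mortality["heroin in an unfamiliar room"] - mortality["heroin in the familiar room"]

print(f"drug exposure alone: -{pharmacological_effect:.0%} mortality")  # -32%
print(f"familiar cues added: -{cue_effect:.0%} mortality")              # -34%
```

Notably, the cue-associated reduction is comparable in size to the reduction from drug exposure itself, which is the point the paragraph above draws from the data.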

As the nervous system attempts to maintain homeostasis, disturbances created by a drug trigger compensatory reactions. If environmental cues were coordinating the brain to engage in specific reactions and develop a drug tolerance, then differences within the brain would be expected. The neurons in the hippocampus and amygdala are postulated to participate in a mechanism of learning and memory necessary for processing and assigning value to drug-associated cues (6). Indeed, Mitchell et al. (2000) observed that a tolerance induced by environmental cues compared to a tolerance without consistent contextual pairing involved different patterns of neuronal activity in both the amygdala and the hippocampus (6).


The conclusions drawn from the previously mentioned observations suggest an additional tolerance caused by environmental cues. Such results can be applied to medical care and treatment. Within medical terminology, "drug overdose" may be a misnomer in many cases: an overdose may result not only from an alteration of the dosage. When ten drug addicts who had experienced near-death overdoses were questioned about their environment while injecting heroin, seven out of the ten claimed to have been shooting up in a new and unfamiliar setting (7).

Changing the environment where a drug is administered may be advantageous for those patients suffering from chronic illnesses and limitations to treatments due to developing tolerances. Typically, the development of a drug tolerance results in either an increase in the dosage, which often involves toxic side effects, or a change in the prescription if the option is available. Considering the role environment plays in drug tolerance offers an alternative: a patient may limit the progression of a developing tolerance by changing the environment. Astonishing medical professionals, a patient suffering from cancer endured a treatment that was expected to become ineffective within two years, yet it continued to be effective after almost five years (8). The lack of tolerance was attributed to the fact that the patient had moved twice during the years of treatment and therefore changed the environmental cues (8).

The treatment of patients in rehabilitation centers may also be based on observations made pertaining to environmental cues and physiological adaptations. Since an addict's response to drug-paired stimuli reflects a learned compensatory response, clinicians may weaken the response and onset of a craving through a repeated, controlled exposure to the drug-paired environmental cues, without the administration of the full drug dose (9).

Due to the observations mentioned, the model for drug tolerance within the nervous system can be modified. Traditionally, drug tolerance was a direct consequence of prolonged and repeated input of a drug resulting in a change within the nervous system. Instead however, in addition to the drug, there are numerous additional inputs pertaining to the environment in which the drug is administered. The collection of additional inputs causes conditioned responses within the nervous system. The inputs provided by environmental cues contribute to the observable drug tolerance and explain the increased level of tolerance that is conditional to a consistent setting.


References

1)Tolerance Definition Information

2)The American College of Neuropsychopharmacology Descriptive account for the adaptive processed that regulate tolerance to behavioral effects of drugs

3) Biography of Ivan Pavlov

4) Siegel, S., Hinson, R.E., Krank, M.D. and McCully, J. Heroin "overdose" death: contribution of drug-associated environmental cues. Science 216, 436-437 (1982)

5)University of Plymouth, Department of Psychology page, a clear outline of the experiment

6) Nature Neuroscience

7) Siegel, S. Pavlovian conditioning and heroin overdose: reports by overdose victims. Bull. Psychonomic Soc. 22, 428-430

8)A Practical Application of Environmentally-Induced Tolerance Theory

9) American Psychological Association


Where am I Going? Where Have I Been: Spatial Cogn
Name: Erin Fulch
Date: 2003-02-25 08:44:34
Link to this Comment: 4821


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


In the complex dissection of the human brain evolving in our course, great strides have been made on the path to comprehension of thought and action. Evidence concerning the true relationship of mind, body, and behavior has been elucidated through discoveries of the neural pathways enabling active translation of input to output. We have suggested the origins of action, discussed stimuli both internal and external, as well as concepts of self, agency, and personality interwoven with a more accessible comprehension of physical functionality. However, I remain unable to superimpose upon the current construct of brain and behavior a compatible notion of awareness of self. What are the cognitive and neural mechanisms involved in understanding the spatial relationships between oneself and other objects in the world? How do we even become aware of space and the environment in which we live? What element of the nervous system governs those processes, which enable human beings to navigate through space?

The term "spatial cognition" is used to describe those processes controlling behavior that must be directed at particular location, as well as those responses that depend on location or spatial arrangement of stimuli (1). Navigation refers to the process of strategic route planning and way finding, where way finding is defined as a dynamic step-by-step decision-making process required to negotiate a path to a destination (2). As a spatial behavior, negotiation demands a spatial representation; a neural code that distinguishes one place or spatial arrangement of stimuli from another (1). What, though, serves as such a representation in navigation and from where are these representations derived? The processes occurring within the hippocampus provide such representations.

The hippocampal mode of processing is concerned primarily with large distances and long spans of time. These processes demand a very specific form of spatial representation, which relates locations to one another as well as to landmarks in an environment, rather than simply to the agent of action. Spatial attention and action, which result from encoded sensory information, are controlled by the parietal neocortex (1). Information relating to a location, and the stimuli derived from that location, is encoded in sensory cortices. Informed by this egocentric information, allocentric representations provide a basis from which one's current location and orientation can be computed from one's relationship to sensory cues in the environment. This particular set of locations is referred to as a cognitive map. Cognitive maps are those overall images or representations of the spaces and layout of a setting, which ultimately function in the recognition of location (2).

Cognitive maps can also be more tangibly understood as the internal representations of the world and its spatial properties stored in memory (3). This memory can include what is present in the immediate environment, what the attributes of that environment are, where a goal environment or destination is located, and how to physically progress to that location. These maps cannot be interpreted as cartographic maps existing within the brain, but rather as an assembly of incompletely integrated, discrete pieces such as landmarks, routes and regions (4). These pieces are determined by physical, perceptual, or conceptual boundaries. Neural representations of these elements of space are maintained over time, and the brain must solve the problem of updating them each time physical location is altered and, subsequently, a receptor surface is moved (5). With every movement of the eye, we introduce new objects into our visual field. These novel objects activate a new set of retinal neurons. Despite this constant change, we experience the world as stable and remain capable of movement through space. Hence, we are granted the ability to visually accept two-dimensional information, convert it to layout knowledge of the simultaneous interrelations of locations and, through the complex workings of the human brain, undertake creative navigation.

Embracing a new comprehension of the human perception of physical space, and reveling in the completion of my first web paper, I suddenly become aware of a striking truth. Before me I see nothing more than a two-dimensional space, images on my computer screen and virtual addresses and locations, through which I have myself been traversing and navigating. And yet I feel as though I have been using the very capabilities and cognitive resources on which I depend to find my way to my next class. There remain myriad questions to answer about the human capacity for spatial cognition, and about how spatial learning develops through personal experience and through internal and external stimuli.

Web Sources
1) Models of Spatial Cognition , a paper by the Institute of Cognitive Neuroscience.

2) Psycholoquy , Golledge article of great interest.

3) Spatial Cognition , a paper by Hanspeter A. Mallot.

4) MITECS , The MIT Encyclopedia of the Cognitive Sciences.

5) The Mind's View of Space , an academic paper.


Through Different Eyes: How People with Autism Exp
Name: Alanna Alb
Date: 2003-02-25 09:07:58
Link to this Comment: 4822


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Many of us have heard of the neurological disorder called autism, and have a general sense of what the term "autism" means and of the typical behaviors that belong in its category. Yet I must question how many of us who do take an interest in autism really understand how having this disorder can totally distort one's perception of what one experiences in the world. A person with autism senses things differently than we normally do, and also responds to them in other ways – what we would call "abnormal behaviors". Why is this so? MRI research studies have shown that the brains of autistic individuals have particular abnormalities in the cerebellum, brain stem, hippocampus, amygdala, the limbic system, and frontal cortex (7). This provides substantial evidence that autistic behaviors must be in some way caused by these abnormalities. The problem is that we do not know exactly how or why these abnormalities cause someone with autism to experience the world differently than we do. This underlying issue of autism has always greatly intrigued me, and yet the topic of sensory integrative dysfunction in autism has been overlooked for many years. Articles and documents addressing this feature of autism have begun to appear only recently. While conducting research for my paper, I found it a challenge to find articles that specifically addressed the topic I so wanted to learn about. Thus, the ultimate goal of my discussion is to reveal a misunderstood, hidden world – the complicated sensory dysfunctions that underlie autistic spectrum disorder.

What have we found out so far about how people with autism experience the world? All the information that we do know has been pieced together from observations of autistic behaviors, and recently, the personal accounts of high-functioning autistic persons. Dr. Temple Grandin, a professor at Colorado State University who has autism, has been able to provide us with an in-depth look into the sensory world of autism: "I pulled away when people tried to hug me, because being touched sent an overwhelming tidal wave of stimulation through my body...when noise and sensory over-stimulation became too intense, I was able to shut off my hearing and retreat into my own world" (7). Tito Mukhopadhyay, a 14 year old boy from India with severe autism, has also been able to give us a somewhat clearer picture of what he experiences: "I am calming myself. My senses are so disconnected, I lose my body. So I flap [my hands]. If I don't do this, I feel scattered and anxious...I hardly realized that I had a body...I needed constant movement, which made me get the feeling of my body" (2).

These accounts have provided a special glimpse into the sensory disorders that accompany autism. It is fascinating to see how Dr. Grandin and Tito are living examples of how the autistic person perceives the world. At first glance, the two testimonies seem very much alike to me. Both of these autistic persons' nervous systems are constantly overwhelmed by the sensory input that their bodies receive. However, a much closer look reveals key differences between the two. Dr. Grandin is a high-functioning autistic person whose nervous system receives too much sensory input. Her brain is painfully overwhelmed by the flood of information, and in response she withdraws from the source of input by "shutting off" one or more of her senses in a desperate attempt to find relief. Tito, on the other hand, is a low-functioning autistic person who is nevertheless still able to communicate what he is feeling to the rest of the world. According to his testimony, Tito's nervous system receives so little input that he cannot sense a connection with his own body. His hand flapping is his attempt to calm himself and gain a sense of his body's existence. The comparison between Tito and Dr. Grandin demonstrates an unmistakable yet perplexing truth: no two autistic people are alike. Although they may share common behaviors, these behaviors appear in all sorts of combinations and vary in intensity. It is my opinion that this irregular pattern of behaviors is partly what has allowed the sensory side of autism to be ignored for so long. I find it unfortunate that researchers in the past probably cast these sensory issues aside simply because they were too baffling.

Observed autistic behaviors such as hand flapping, tapping and/or mouthing objects, toe walking, rocking back and forth, head banging, and vocalizing, along with the testimonies of various autistic individuals, have led researchers to believe that those with autism are severely over-sensitive, under-sensitive, or both to outside sensory stimuli. Autistic persons have reported visual distortions and impaired depth perception of their environment, noxious sensations, and auditory, proprioceptive, tactile, and kinesthetic impairments (1). This is evidence that their nervous systems do not process sensory information correctly, so they feel overwhelmed by the abundance or lack of input they receive. In response to such confusing input, they exhibit abnormal behaviors in an attempt either to reduce the amount of input their nervous system is receiving or to increase it. Behaviors like tapping or vocalizing let them know where their "boundaries" exist in their environment, since they cannot see the world the same way we do (4).

The distortions in sensory perception have been linked to certain brain abnormalities discovered in brain autopsies and MRI images of different people with autism. Normally, our internal "brain maps" give us a sense of our bodies and involve the regions of the brain that deal with the senses and movement. MRI images show that autistic persons have scrambled brain maps (2). In other words, the information connections for sensory functions still exist, but they are located in the wrong parts of the brain. For example, face-recognition areas in the brain of autistic persons have actually been found in the frontal lobes, quite contrary to the specialized location of face-recognition in the normal brain (2). Autistic brains have been found to be larger than average, and they show an extraordinary amount of electrical discharge in the hearing regions. The cortical columns of the brain contain many more cells than the norm and also make extra connections between neurons. This excess circuitry is believed to cause problems in sensory function (10). Abnormalities found in the brain stem, cerebellum, hippocampus, amygdala, and the limbic system may also explain many sensory processing problems. For example, the amygdala, the emotion center of the brain, is underdeveloped, as is the hippocampus. The hippocampus controls sensory input as well as learning and memory, so immature growth in this region of the brain would most definitely explain some common autistic behaviors. In the cerebellar cortex, it has been discovered that there is a significant lack of Purkinje cells. How this abnormality relates to neurodevelopment and mental function is still unclear to researchers (9).

As of today, researchers recognize the common pattern of autistic behaviors, and they have located what and where the abnormalities in the brain are. They agree that these abnormalities may be contributing to the behaviors often observed in autism, but exactly how and why is still widely debated. Various researchers have come up with their own interpretations of the connections between the autistic brain and autistic behavior. For example, in terms of visual perception, researchers have debated whether autistic persons really do have severe visual distortions of what they see, even though they are by no means blind. Many have asked how and why some autistic persons rely on touching objects to recognize their identity and location despite an apparent ability to see. We know that parts of the autistic brain that control vision may be under- or over-developed, but it is not understood why autistic persons may sometimes see just fine while at other times behaving as if they were truly blind. From what I have read, some professionals question whether these visual experiences are truly caused by physiological problems in the brain or are mere hallucinations. The second argument seems highly unlikely to me when so many apparent abnormalities in the autistic brain have been detected. It only makes sense that the visual disturbances would be attributed to something physiological, not psychological.

Unfortunately, there has also been an overwhelming reluctance among professionals to rely on the testimonies of autistic persons who are capable of describing their condition. I was rather shocked to come upon this fact while conducting my research. I do not understand why they would refuse to listen to the ones who suffer from the disorder, since they are the only ones who can actually explain what it means to live in the world of autism. Do these researchers believe that the words of a mentally disabled individual are not credible? Such negative attitudes toward society's mentally disabled have only delayed the quest to solve the baffling puzzle of autism.

In conclusion, we are left with more questions than before, and no definite answers. I have explained the complications surrounding sensory integrative dysfunction in autism, hoping to make others aware of how much it affects those living with autistic spectrum disorder. Autistic people respond to a lack or abundance of sensory input by flapping their hands, shutting off certain senses, and exhibiting other abnormal behaviors. Several abnormalities have been found in the autistic brain, but many researchers debate the connections between these abnormalities and autistic behavior. These debates, as well as unfavorable attitudes toward the mentally disabled, have only slowed our progress in the search for answers. I can only hope that in the future improved research studies and technology, as well as increased awareness and compassion in society, will help to improve our knowledge and understanding of sensory dysfunctions in autism.

Bibliography and Additional Links:


1)Can Foundation Page,A Case Study of Distorted Visual Perception in Autism


2)Autism Today Page, A Boy, a Mother and a Rare Map of Autism's World


3)Autism Today Page, Different Sensory Experiences/Worlds


4)Autism Today Page, Possible Visual Experiences in Autism


5)Autism Today Page, Reconstruction of the Sensory World of Autism


6)Autism Today Page, Auditory Processing Problems in Autism


7)Autism Info Page, My Experiences with Visual Thinking Sensory Problems and Communication Disorders


8)Autism Today Page, An Inside View of Autism


9)Pub Med Page, Nicotinic Receptor Abnormalities in the Cerebellar Cortex in Autism


10)Pub Med Page,
Stereological Evidence of Abnormal Cortical Organization in Individuals with Autism


11)Autism and Related Conditions Page, Sensory and Motor Disorders


12)National Center for Biotechnology Information Page, Neurofunctional Mechanisms in Autism


13)Autism Today Page, Sensory Disorder


Bradykinesia
Name: Laurel Jac
Date: 2003-02-25 09:28:53
Link to this Comment: 4823


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Perception is an intangible part of every being. It cannot be explained, defined, or nailed down the way most scientists would like. In some ways, perception can be taught: a person's circumstances and background cause him or her to perceive a situation in a particular way. In other ways, perception is unpredictable and ever changing. Even here, attempting to describe the indescribable, there are flaws in the last two sentences, because they are based on the writer's perceptions of perception. It is too subjective for a "scientific" definition. What does it mean for a person suffering from bradykinesia? If the individual understands the condition, she will realize that her perceptions are not always correct. She may perceive herself to be making a fist, or spreading her fingers, when in fact she has not accomplished this. (1) A blind and deaf person may have perceptions about the world around her, but most likely her only correct perceptions are those about herself, such as "I am moving my arm" or "I am swinging my legs." In that person the external stimuli are ineffective, whereas a person with bradykinesia can react completely and at a normal speed only to external stimuli: because of damage to signal pathways, the internal stimuli are ineffectively activated. (1)

Bradykinesia comes from the Greek for "slow movement", and it is one of the cardinal features of Parkinson's disease (2), although it is also associated with other diseases. For patients suffering from Parkinson's disease, it is usually the most tiring and frustrating of the associated conditions. Fine motor movement is among the first areas of the body affected. A common test, therefore, is to ask the patient to tap her finger: normal individuals tap their fingers at 4 or 5 Hz, while someone afflicted with bradykinesia can usually manage only about 1 Hz.(3) There is no cure for bradykinesia, though certain surgeries may lessen the condition. Hope remains for the future as researchers continue to explore different possibilities, examining causes and treatments that may lead to a cure and to more clues about Parkinson's disease, Huntington's disease, and the other conditions with which bradykinesia is associated. (4)

Bradykinesia affects not only the speed of movement but also the ability to complete a motion. While walking, the arms no longer swing but remain lax at the person's sides. (2) If a person suffering from bradykinesia is asked to make a fist without looking, she can tell that her movements are slow. She does not, however, realize that she never makes a complete fist; the fingers may be only slightly bent. Slowness of movement by itself is not bradykinesia. It can be associated with depression, stroke, or any kind of brain injury, but in those cases the motion, though slow, is complete. The shuffling stride and monotonous voice of a Parkinsonian are examples of bradykinesia: not only is the person moving the feet and legs slowly, she is unable to make a full stride. (1) Bradykinesia is both a motion disorder and a perception disorder, and a therapy treating both aspects has yet to be found.

Most cases of bradykinesia do not affect the entire body, but it is possible for the whole body to be afflicted. In severe cases, patients show a noticeable, unnatural stillness. While seated, they make none of the small movements normal individuals make, such as crossing and uncrossing the legs or arms, shifting the angle of the head, or tapping the fingers. Bradykinesia in the face can lead to what is called "mask face" because of the constant lack of expression. Loss of voice volume and lack of intonation are also common in those with bradykinesia.(4)

Since the most obvious symptoms of bradykinesia can apply to other diseases, and there are several causes for the condition, it is important that all bases be covered: a thorough family history, drug use, and any pre-existing conditions must all be considered. MRI generally rules out stroke and tumor. Sometimes a lumbar puncture is taken to measure the presence of metabolites and neurotransmitters in order to rule out metabolic disorders such as dopa-responsive dystonia. Drugs such as calcium-channel blockers, neuroleptics, and serotonin-reuptake inhibitors can be causes of bradykinesia. (4) Other causes include dementia, depression, drug-induced parkinsonism, and repeated head trauma.(5)

The problem of bradykinesia lies in the internal connections and stimuli within the nervous system. It is not a behavior that can be overcome with training. When there is an interruption of a signal pathway in the nervous system, the brain has an amazing ability to reroute the signal, creating a new pathway. The new pathway, however, does not carry the signal as effectively: it takes much longer, and thus the translation of the signal into motion is much slower than before. Bradykinesia is reduced in patients who undergo therapies that attempt to restore the signal pathway by dealing with the interruption.(2) Even during such treatments, however, the patient may experience what is called the "freezing phenomenon" of Parkinson's disease. The entire body becomes frozen, and the person is literally trapped within her own body, unable to help herself. Although the specifics are not known, it is suspected that the freezing comes from a dopamine deficiency in a specific portion of the nervous system called the substantia nigra. Since people who suffer from bradykinesia respond better to external stimuli than to internal stimuli, triggering an internal stimulus with an external one can break the freezing phenomenon's hold over the body.(6)

One of the identifiable chemical imbalances in people suffering from Parkinson's disease is the depletion of dopamine within specific regions of the brain. One treatment, therefore, is L-dopa, a precursor of dopamine, which is designed to convert to dopamine within the brain, compensating for the dopamine losses.(4) This type of medication does much to slow the progression of the disease, but it has little effect on bradykinesia. Surgeries are also used to alleviate the symptoms of Parkinson's disease. The two main surgeries performed are the thalamotomy and the pallidotomy; of these, only the pallidotomy affects bradykinesia. The surgery involves placing a heat lesion in the globus pallidus internus to correct the abnormal discharge of nerve cells in that area of the brain. This treatment existed before the introduction of L-dopa in the 1960s. L-dopa became overwhelmingly popular, but because of occasionally drastic side effects, not all Parkinson's disease patients can take the medication, and there seemed to be little long-term success with it. Despite this, it continues to be one of the main treatments for Parkinson's disease.(7)

As long as there is creativity and variation of perception, research will continue to explore and probe further into the hows and whys of diseases such as Parkinson's. While research on the larger diseases continues, new information on the symptomatic conditions will surface and help to explain the large and small pictures. Presently, much research is taking place. Neurotrophic proteins are being researched as ways to protect nerve cells from deteriorating. Neuroprotective agents such as naturally occurring enzymes are being investigated in their relationship to the deactivation of so-called free radicals. Explorations continue in neural tissue transplantation and genetic engineering.(8) The possibilities that research suggests are exciting. It holds the promise that future generations will only dig deeper, learn more, and question more. Bradykinesia is part of every day life for some. Others have never heard of it. Perhaps someday research will reveal something exciting that may lead to more information that will lead to effective treatment and better understanding of the functions within the brain.

References

1)Curing Parkinson's disease in our lifetime

2)Parkinson's disease caregiver website

3)Awakenings: the Internet focus on Parkinson's disease

4)We Move

5)General Practice Notebook

6)Freezing Phenomenon in Parkinson's disease

7)Neuroscience Clinic

8)Holistic Medicine


From Entropy to Avalanching Sand Piles: Important
Name: Geoff Poll
Date: 2003-02-25 09:30:02
Link to this Comment: 4824


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The year is 1967, and Theodore Bundy, an average American college student, has fallen in love with Stephanie, a dark-haired co-ed at the same state university. He convinces her to go on a few dates, but she quickly loses interest, later citing his lack of ambition. The rejection on his heels, Bundy shifts gears and spends the next six years of his life transforming himself into the law student of her dreams. When they meet again Bundy holds the upper hand, and Stephanie falls in love. A short time after the small wedding ceremony, Bundy abandons Stephanie during a ski vacation and she never hears from him again.(2)

In the context of this short historical blip from the life of America's most "normal" serial killer the ensuing killing and mutilation spree may be explained in any number of ways. Biologically we could look for an imbalance in neurotransmitter firing or an oversized development in the frontal lobe of his brain. Sociologically we could point to society's need to produce deviants in order to see itself more clearly. A psychoanalyst might notice that Stephanie and most of his victims bore a strange resemblance to Bundy's mother, of whose identity he was deceived until late in adolescence.

Each of these explanations provides its own compelling paradigm for looking at 'abnormal' behavior, yet each leaves great gaps in the understanding of our own 'normally' irregular behavior. We will forever be attracted to deviance models as a way of examining that which we are not, but 'normal' human behavior is also sporadic. More rigorous models are needed to take in the inconsistency, pick out the places where patterns begin to emerge, and go there to seek a more profound summary of observations.

With the help of these models we will find a place among us for naturally occurring Ted Bundys, but more important is the perspective we gain in looking at our own variable behavior against the backdrop of millions of years of evolution. At the end we will have few definitive answers, but many notable implications for the way we perceive our world on many different levels.

For a jumping-off point we start with the smallest example of randomness found in nature: the atom. The Second Law of Thermodynamics describes a system beginning with a large but already partly dissipated amount of energy radiating from the sun and ending when that energy disperses at the level of each tiny atom, whose 'random' movement is propelled by that energy.(10)

The study of entropy is the study of the amounts of energy dispersed in a given process at a certain temperature. Although the Second Law is a founding law of nature, life does not arise from nature through entropy but through the blocking of entropy. While each particle carrying a potential amount of energy will unload that energy and spread it to as many less energetic particles as possible, living systems work to harness the energy by creating boundaries and walls, both physical and chemical. (10)
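The quantity described here, energy dispersed in a process at a given temperature, has a standard textbook form; as a sketch (the notation below is the conventional one, not anything from the papers cited):

```latex
% Clausius definition: the entropy change for a reversible
% transfer of heat q at absolute temperature T.
\Delta S = \int \frac{\delta q_{\mathrm{rev}}}{T}

% Boltzmann's statistical form: W counts the microscopic
% arrangements consistent with the macroscopic state, so more
% "spread out" energy means more arrangements and higher S.
S = k_{B} \ln W
```

The second expression is the one that connects most directly to the essay's theme: dispersal of energy is, statistically, just the system drifting toward its most numerous arrangements.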

The human body presents us with a tangible example of this process in action. Still thinking about tiny particles dispersing their energy to other tiny particles, entropy can be observed in just about every human biological process as there is always energy flowing in or out of the body. The human body therefore exists as an open system in a state that is far-from-equilibrium. This state is marked by the disorderly movement of energy always leaving the system and the constant need for replenishment through the metabolism of food and oxygen. We do contain some of the energy, storing it as proteins and carbohydrate molecules, but it is in the disorder that the true nature of the human body becomes apparent. (10)

If we were to map out human behavior the way we do the behavior of tiny particles, we would find the same trend towards randomness. Whether an individual or an atom, objects under the sun move in ways that we cannot predict. Chaos theory (CT) centers on this principle.(9)

So named for the chaotic behavior first observed in gaseous molecules, "chaos theory" is something of a misnomer. The bulk of the theory deals with looking at this chaos not through linear equations, as was previously done, but through three-dimensional nonlinear maps. A nonlinear map is one way to describe the activity of a system whose multiple pieces interact dynamically over the course of time. Each piece moves without repeating itself, and without direct correlation to any other particular piece in the system. Using this mapping technique, scientists observe beautifully ordered patterns emerging from an otherwise chaotic set of data points. These data points are simple representations of the type of structures that make up most of the world's complex systems. (9)
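A standard toy illustration of this kind of nonlinear behavior is the logistic map; the sketch below (function names are my own, and this is an illustration rather than any model from the cited sources) iterates it from two nearly identical starting points and shows the hallmark of chaos, sensitive dependence on initial conditions:

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# In the chaotic regime (r = 4), two trajectories that start
# almost identically become uncorrelated within a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return every value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # perturbed by one part in ten million

# Early on the two trajectories agree closely; later they bear
# no resemblance, even though the rule is fully deterministic.
print(abs(a[5] - b[5]))                                 # still tiny
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))  # order one
```

The rule itself is a one-line deterministic equation, which is exactly the point of the paragraph above: unpredictability here comes from the nonlinearity, not from any randomness in the rule.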

To better understand the idea of a dynamically acting complex system we turn to Per Bak's sand piles and their potential to self-organize, as living systems do, to a point of criticality. The model begins when grains of sand are dropped continuously onto a table. The sand forms a pile within the table's confines, and at some point the pile will crumble, or avalanche. Repeating the dropping of sand over time, Bak found that the sand always organizes itself into a predictably shaped pile, but the avalanches are never predictable: in a given time period there will be a certain number of small and large avalanches, but it is impossible to say when they will come.(3)
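Bak's experiment is easy to mimic in a few lines of code. The sketch below is a simplified version of the Bak-Tang-Wiesenfeld sandpile (the grid size and names are my own choices): grains land on random cells, any cell holding four or more grains topples one grain onto each neighbor, and grains that fall off the edge are lost.

```python
import random

SIZE = 10  # a small square "table"

def drop_grain(pile):
    """Drop one grain at a random cell and relax the pile.
    Returns the avalanche size (number of topplings)."""
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    pile[r][c] += 1
    topplings = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if pile[i][j] < 4:
            continue  # already relaxed by an earlier toppling
        pile[i][j] -= 4
        topplings += 1
        if pile[i][j] >= 4:
            unstable.append((i, j))  # still overloaded
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:
                pile[ni][nj] += 1
                if pile[ni][nj] >= 4:
                    unstable.append((ni, nj))
            # grains pushed past the edge simply fall off
    return topplings

pile = [[0] * SIZE for _ in range(SIZE)]
sizes = [drop_grain(pile) for _ in range(20000)]

# The pile self-organizes to its critical state: most drops cause
# no avalanche at all, yet occasionally one grain triggers a
# cascade that sweeps across the whole grid.
print(max(sizes))
```

Each grain follows the same trivial rule; the unpredictable, occasionally enormous avalanches are a property of the pile as a whole, which is exactly the system-level behavior the essay goes on to describe.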

Looking closer at the sand pile we see many tiny grains, each alone acting randomly according to its fall and any number of variables it experiences. Together the grains form a larger, more stable structure. At this point there are two forces acting on the pile. Entropy tells us that a structure approaching its critical point, where it has grown to maximum size and strength, is full of potential energy that will be dispersed if given the right activation boost. At the same time, the pile's stability to this point is mediated by forces that will continue to hold the pile stable. Alone, a grain of sand has no say in either process, neither building a stable pile nor causing its collapse. These results are completely system dependent.

The data this model produces are consistent with those of larger sand piles, like mountains and their tendency to avalanche, as well as stock market activity and other social systems. Bak's most compelling parallel is a translation of the sand pile to a living and evolving ecosystem. In this "landscape," each species acts in relation to the greater system the way a grain of sand acts in relation to its pile. The species, defined as a group of individual organisms acting at the same fitness level, is always working towards the critical state characterized by maximum fitness within the ecosystem.(6)

In the ecosystem model, the collapse of the species at its critical level is the extinction of a species or a mutation to a 'new' species. Both of these events are very regular functions of the natural evolution of the ecosystem. Again it does not make sense for a species at its critical level to mutate or become extinct, but in an ecosystem there are countless species all working at the same time for the same goal of maximizing their fitness, which produces a nonlinear network.(6)

When most of the species can stabilize at a high fitness, the system will remain in a state of stasis. While any species would like to remain in stasis, there will always be a species with a lower fitness level. Being deficient, this species faces only a small barrier to passing its criticality, and it is likely to mutate or become extinct. This mutation of a seemingly unimportant species does not command much attention in itself, but if we remember the web that makes up the ecosystem we will quickly understand its significance. (6)

If our one species were directly connected to only two other species, the mutation or extinction would upset the immediate landscape of these two, and their fitness, previously stable, would have to be reevaluated. Each of the two species has direct connections to two other species, and so it goes on through the network of animal and plant species, as well as the self-regulated temperature and chemistry making up the ecosystem. The almost immediate result (in terms relative to natural evolution) of this small change is a major shift in a landscape to which each other species' fitness is no longer suited. (6)
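The dynamic just described, where the least-fit species mutates and its neighbors' fitness must then be re-evaluated, is essentially Bak and Sneppen's evolution model, and it too fits in a few lines. This sketch (the ring size and step count are my own choices) puts species on a ring and repeatedly re-draws the fitness of the weakest species and its two neighbors:

```python
import random

# A minimal sketch of the Bak-Sneppen evolution model: N species
# sit on a ring, each with a random fitness between 0 and 1. At
# each step the least-fit species "mutates" (its fitness is
# re-drawn), and so do its two neighbors, since their landscape
# has just changed under them.

N = 100
random.seed(1)  # fixed seed so the run is reproducible
fitness = [random.random() for _ in range(N)]

thresholds = []  # fitness of the species selected at each step
for step in range(50000):
    weakest = min(range(N), key=fitness.__getitem__)
    thresholds.append(fitness[weakest])
    for idx in (weakest - 1, weakest, (weakest + 1) % N):
        fitness[idx] = random.random()  # -1 wraps around the ring

# Over time the selected minimum fitness rises and then hovers
# just below a self-organized threshold (roughly 2/3 in the
# one-dimensional model), while avalanches of linked mutations
# sweep through the ring.
print(max(thresholds[-1000:]))
```

No species "decides" anything; the threshold and the avalanches of successive mutations emerge from the network of neighbor relations, which is the point the surrounding paragraphs are making.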

The point to be taken from this model is that the resulting changes and catastrophes are a normal part of the evolution of a landscape, and that these changes come from within the network and not from outside influences. The environmental pressure that "causes" the avalanche is constant. The dependent variables are the fitness of a given species and the natural forces, or barrier, keeping it at its critical state. (6)

Making a quick jump from one complex system to another, we see that the nervous system (NS) of an individual human being, one of the main communication networks in our bodies, is 99% internal communication and relies little on input from the outside world. Focusing on the NS and looking to its smallest distinguishable mechanism, we find the neuron to be a self-regulating system.(7)

Along each axon directing information from one neuron to another, there are sets of gated potassium (K+) channels. Particles of K+, following entropy, move towards dispersing their energy, but they can only move as far as the axon walls that contain them. When the gates open and let the charged K+ through, it flows out and leaves behind a negative charge that pulls it right back in again. The system continues this way until it reaches equilibrium, a critical state where the chemical push from within the axon is balanced by the electrical pull from outside. The system, forming a potential battery, waits in stasis for an activation energy. (7)
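The balance point described here, where the chemical push of K+ outward is exactly offset by the electrical pull back in, is captured by the Nernst equation; as a sketch, using textbook values rather than anything from the cited sources:

```latex
% Nernst equation: the membrane potential at which outward
% diffusion of an ion is exactly balanced by the electrical
% force pulling it back in.
E_{K} = \frac{RT}{zF}\,\ln\frac{[K^{+}]_{\mathrm{out}}}{[K^{+}]_{\mathrm{in}}}

% With typical mammalian concentrations (about 5 mM outside,
% 140 mM inside, z = 1, T about 310 K), this gives roughly
% -90 mV: the charged, waiting "battery" the text describes.
```

The negative sign is the whole story of the battery: because K+ is far more concentrated inside, the cell interior sits at a negative potential relative to the outside, storing energy that a passing signal can release.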

That battery is the beginning of an extensive internal system; at the other end of the scale, some ten billion neurons interact through 1,000 billion synapses. (1) This is not a linear system. It is complex and dynamic, with many loops for feedback and self-regulation. With a bit of imagination we can picture a landscape which is the human body, where the resting potential of the axon is the critical state of the neuron and every thought, movement, or reaction is an avalanche.

Neurons fire in all-or-nothing blasts, but the different firing patterns within networks of neurons represent cascades of varied sizes. Just as in a natural landscape the species that last evolved is most likely to arrive with a low inhibitory barrier and fitness level and be the next to evolve again, full thoughts and muscle motions may very well be the result of cascading bursts of neuronal activity.

At the same time, on a scale a bit more gradual than a neuron firing, we see development in the body, both physically and "mentally," which makes sense in light of the constant extinction/mutation/evolution that goes on according to this model. The neurons do not change physically. Instead, what we see and perceive is always the result of the patterns of those neurons we have had since birth, and the constantly evolving relationships among them.

The human body is always working to maximize its individual fitness and the fitness of the landscape in the context of the larger landscape of the world. The NS is one example of this, but it is not the end of the story. Varela also includes the Endocrine and Immune Systems in his model of human cognition and perception. The Santiago theory he developed with Maturana states that the mind should be thought of not as an object but as a process, involving all of these systems as well as the outside environment. The two scientists do not deny that a physical world exists, but they see no separation between us, our perception, and that world. (1)

It is our ability to think abstractly that makes the separation. That ability seems to come from the cortex, a subset of our NS. If all three main systems contribute to our perception of the world, we lose something by conceptualizing abstractly. Simpler organisms, whose cognition does not allow abstract reasoning, still perceive the world well enough to get by. There have been interesting experiments with simpler organisms, cited by Paul Grobstein, concerning the increase in "reliable input/output relations in the spinal cord" when parts of the NS are removed from an animal.(11) Such animals perceive more efficiently, and so the question becomes: how is the rest of their perception affected?

We can observe this phenomenon in quadriplegics, who have part of their systems cut off. After observing a reflex reaction of his or her own foot, a quadriplegic may very well claim, "I cannot move my foot." Though the brain functions, the person's perception of the world is not accurate.(7) A quadriplegic has 'consciousness,' and so in observing this behavior we see that consciousness is just a small part of our potential for perceiving the world.

Ted Bundy was asked countless times, in many different ways, whether what he did was wrong, but his answer can only incriminate his consciousness. When we ask the question, we speak to one piece of him: the piece that communicates with the world through language. As we have seen, even within humans there is so much more than we are able to readily perceive or articulate.

With or without Bundy's conscious opinion, our model provides little help in theorizing about the events leading up to any particular burst of activity or catastrophe. Each one occurs as the result of a network of activity in a landscape: multiple members working toward a state of criticality determined by their natural inclination toward entropy and by the self-regulation imposed to fight those forces and use the energy handed down by the sun. If we maintain this perspective, any conclusions we make will be nonmaterial and irreducible.

Our models do not offer us conclusive answers. The beauty of this way of thinking is the lack of control it gives us. Nevertheless, there are serious implications for every scientific field, and the subject deserves serious thought. Since we are all living in a landscape, my questions tend toward how this looks as a mapping of our lives in relation to each other. We like to think of ourselves as existing forever in periods of stasis, yet I look around and do not see a species that has maximized its fitness. Wars and serial killers are two quick examples of why the system as we are running it, or participating in it, deserves to be looked at.

*The probability of a species mutating is p = e^(-b/t), where b = barrier and t = mutation parameter; the sign is negative so that higher barriers make mutation less likely. The time between mutations is long, but the mutation itself is fast. The higher b is, the higher the fitness of the animal and the greater its stability (the barrier equals stability when a species is at its maximum fitness); the lower b is, the easier it is to make a change.

References

Non-Web References:

1) Capra, Fritjof. Web of Life. New York: Doubleday, 1996.

2) Rule, Ann. Stranger Beside Me. New York: New American Library 1996.

3) Kauffman, Stuart. At Home in the Universe. New York: Oxford University Press, 1995.

4) Softky, W.; Holt, G. "A Physicist's Introduction to Brains and Neurons." Physics of Biological Systems. New York: Springer, 1997.

5) Bak, P.; Paczuski, M. "Mass Extinctions vs. Uniformitarianism in Biological Evolution." Physics of Biological Systems. New York: Springer, 1997.

6) Flyvbjerg, H.; Bak, P.; Jensen, M.H.; Sneppen, K. "A Self-Organized Critical Model for Evolution." Modelling the Dynamics of Biological Systems. New York: Springer, 1995.

7) Class notes. Bryn Mawr College. Biology 202, Professor Grobstein.

Web References:

8)Introduction to Chaos Theory, short but easy to understand

9)Chaos Theory, very good and comprehensive

10)Second Law of Thermodynamics, for non scientists, very easy to use

11)Variability in Brain Function and Behavior, interesting article by our prof


The Correlation between Thinking and Consciousness
Name: Shanti Mik
Date: 2003-02-25 09:44:48
Link to this Comment: 4827


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

It is inherent to human beings to have the ability to reason and think, but how exactly do we go about doing this? If thinking is merely a matter of input and output, how is it that a person can build on previous knowledge? Where is this knowledge stored? Can we "max out" our brain, or is it elastic? And is intelligence past knowledge, or is it something we come programmed with? We cannot hope to answer all these questions, but it is interesting to note that if these questions were asked of five different people, their responses would differ because of their previous knowledge. They have all learned differently, and have all learned to think differently. All these questions also bring up general ideas we have when we discuss the notions of thinking and learning. Because the human brain is so complex, it is not easy to peer inside and decide whether it is the firing of neurons that helps us learn or something else. Why are some people able to pick up mathematics incredibly quickly while others struggle, and why do some people pick up philosophy while the mathematicians struggle with that?

To understand thinking we first have to believe in the mind. Our first question, then, is how we know that the mind exists. One philosopher to tackle this problem was René Descartes in the seventeenth century. He believed that the behavioral sign indicating the presence of a mind was language. Thinking is one distinct function of the mind, Descartes said, and thinking produces thoughts. Language is what expresses these thoughts, and therefore we can say that language implies thinking (1). The thinking in turn implies the mind. Descartes then faced an interesting dilemma: if communication is evidence of a mind, do animals have minds, since they are able to communicate with each other? Descartes believed that animals were mere machines and therefore did not have minds. Yet this becomes harder to maintain once we think of evolution. If, through evolution, humans did in fact come from less complex animals, then where did the mind come from? If we wish to keep with phylogenetic continuity, then we have to say that animals also possess minds. Given the gradual increase of complexity from animal to human, is it possible for a mind to just pop in somewhere, or is it that animals have a lesser form of mind, and that the mind as well as the body gradually became more complex over time? I am inclined to believe that all organisms have some form of mind. If animals do not possess minds and do not think, how is it possible to teach a dog to fetch or roll over? The dog itself reveals that it too is learning.

There are generally three types of behaviorists, each of whom sees the brain in a different way: methodological behaviorists, radical behaviorists, and mediation behaviorists. Methodological behaviorists believe that consciousness exists but hold that it is within each person's brain and cannot be studied by science, for it is unique to each person. They believe that science demands that we study only behavior(1). Radical behaviorists, on the other hand, feel that a person's consciousness may cause behavior. What is going on in your head will have an output or will be directly related to your actions: for example, taking medication for your headache, or calling a friend because you feel sad. These behaviorists also resist the idea of "unseen machinery"; if the machinery cannot be seen by anyone, a radical behaviorist rejects it(1). Mediation behaviorists study human learning and find that humans learn differently than animals, and they try to capture this difference through mediation. Animals respond directly to stimuli, whereas a mediation behaviorist believes that a human learns by responding to the environment and can control his or her own responses(1).

The problem with discussing consciousness is that it becomes hard at times to distinguish whether a person is conscious. Descartes believed that the behavioral sign indicating the mind was language, and from this came all the rest of his assertions. Yet would a person who is mute also be considered unconscious, or what about a person who is deaf and therefore cannot respond to the language of others? Does sign language count as another language that can indicate the presence of a mind in a deaf or mute person? I would assert that a human who is born deaf or mute, while not able to use the language that most humans use vocally, still shows the presence of a consciousness and the ability to learn, since he or she is able to learn something like sign language. This ability to learn, even though a mute person cannot communicate the same way the rest of us do, is evidence that if the brain cannot communicate its thoughts through the voice, it will find another way. It also demonstrates that language may not be the sole determiner of consciousness. Body language could then also be seen as a form of communication and language, as could our actions, which is why we say at times that actions speak louder than words.

The question then becomes: regardless of whether consciousness is evidenced by language or by the body, how does thinking actually occur in the brain? To understand this, we need to understand what the brain is made of. The brain is "Four pounds and several thousand miles of interconnected nerve cells (about 100 billion)" that control movement, sensations, thoughts, and emotions(3). Those billions of neurons are all interconnected and all transmit signals at an astonishing rate. We then have to understand how things are understood by the brain. It all begins with an image, or distal stimulus. A distal stimulus is any object in the world that a human being can perceive. Take a book as an example. The distal stimulus becomes a proximal stimulus, which is the retinal image of the book. Finally, the book becomes a percept, which is where your brain recognizes the object as a book(2). The percept is what you see and in the future becomes your interpretation of the same object. Now you know what a book looks like, and if you come across a similar-looking object, your brain perceives that also as a book. As a child, this is how you perceive the world. Your parents hold an object in front of you, repeat its name, and try to have you make the same sounds. Then they show you how it is used. It is through this form of association that a child learns what a bottle is and what they can do to get one. This is also found in conditioning, which is one theory of how people learn.

The first question is how many different types of learning there are. Is there just associative learning, the instrumental and Pavlovian conditioning pair, or is there something else, an immediate association between stimuli and movements? Instrumental and Pavlovian conditioning, a dichotomy in itself, represent a type of associative learning. Pavlovian conditioning was developed by the physiologist Ivan Petrovich Pavlov (1849-1936). It follows the law of contiguity, which says that two ideas will become associated if they occur in the mind at the same time. From this Pavlov derived his hypothetical brain theory, which says that "stimuli set up centers of activity in the cerebrum and that centers regularly activated together will become linked, so that when one center is activated, the other will be too"(1). This theory holds that the more times you encounter a stimulus, if you are given the same response each time, the more strongly you will learn to associate that response with that stimulus. Yet a question arises from this theory: what if, the next time a person encounters that stimulus, a different response is given? Can the brain then attribute both responses to the same stimulus and also differentiate the two? I believe so. The human mind possesses great elasticity, and it is part of growing up as a human that we discover this, for we find that sometimes we are given the same response to different stimuli. Our parents tell us we cannot go somewhere, when the first time we asked it was to go to the mall and the second time it was to go to a friend's house. A great deal of learning also comes through patterning(6), which is also related to perception. We start forming classes of information, where similar stimuli or similar responses begin to be categorized. Through the process of conditioning, we find that the more we are exposed to in life, with all its myriad responses, the more complex our categories become.
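
The idea that repeated pairings strengthen an association has a standard quantitative form in learning theory: the Rescorla-Wagner rule, in which associative strength V grows on each paired trial in proportion to what remains to be learned. The sketch below is an illustration added here, not something specified by the paper's sources, and the learning rate and asymptote are assumed values:

```python
def rescorla_wagner(trials, learning_rate=0.1, asymptote=1.0):
    """Update associative strength V once per stimulus-response pairing:
    V += learning_rate * (asymptote - V). Repetition drives V toward
    the asymptote, echoing Pavlov's linked 'centers of activity'."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += learning_rate * (asymptote - v)
        history.append(v)
    return history

history = rescorla_wagner(50)
# Association strengthens quickly at first, then levels off
# as V approaches the asymptote.
print(round(history[0], 3), round(history[-1], 3))
```

The shrinking increments capture why early pairings matter most and why a well-learned association changes only slowly when a new response appears.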

Learning is not something a human can do optimally all his or her life, and even at an age when a human can learn the most, many factors can stunt it. Humans are known to learn more at 20 years of age than at 60(5), yet within the population of 20-year-olds, not all learn the same. There are many incredibly smart 20-year-olds, and there are many who are unable to learn without spending hours poring over material. What causes this? Are intelligence and learning hereditary, or is, in the words of one behaviorist, "The essence of intelligence ... the adequate response to a stimulus"?(4) We believe that the greater a person's response to stimuli, the greater their intelligence. But isn't intelligence itself rather arbitrary? Perhaps it would be better to say that a person's greater response to stimuli is evidence of their ability to learn and respond to their surroundings.

As we come to a greater understanding of learning and human behavior, we realize that the two concepts are not as simple as we initially thought. What indicates that a person is learning? Is it possible to measure the amount of material that a person learns every day, and does thinking indicate learning? We have been thinking about consciousness and trying to understand how closely learning and thinking are related. Our brains are in a constant state of consciousness. We are constantly alert to what is going on around us, and we are constantly observing. It would be impossible to measure the amount of information the brain absorbs, but as we realize its elastic potential, we see how great the pursuit of knowledge really is. Our brains, unlike those of animals, have an incredible ability to reason, think, and pursue ideas. Our thoughts can be independent of our actions, and we have the ability to critically observe the world around us. It is this gift that we are given as humans, and it reinforces the idea that a mind is a terrible thing to waste.


References

1) Harris, Richard J, Leahey, Thomas H. Learning and Cognition. New York: Simon & Schuster, 1997.

2) Galotti, Kathleen. Cognitive Psychology In and Out of the Laboratory. New York: Wadsworth Publishing Co.,1999.

3)Brain Source A great site to find out about learning, brain disorders, damaged brains and the effects of stress on the brain

4)What is Intelligence?

5)Brain Connection

6)Understanding about learning and the brain


Reality versus Fiction: The Mysterious Illness Sc
Name: Nicole Meg
Date: 2003-02-25 09:56:22
Link to this Comment: 4828


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Imagine being functional your entire childhood and teenage life. You attend class, study, work, and juggle a myriad of activities. You have friends with whom you socialize in your free time. You are becoming more independent and learning to care for yourself. Now suppose the newscaster on television starts talking directly to you, or that someone calling with the wrong number is really a government spy, or that you are going out to lunch with the president. You lose control of your life, as you can no longer discern reality from wildly absurd fantasy. Available medical treatment is imperfect, and it is difficult to secure your compliance. Friends and family watch your behavior deteriorate, heartbroken, and know not how to react. Such is the tragic web that schizophrenia weaves into the lives of its victims.

When thinking about the relationship of brain to behavior, what more dramatic condition comes to mind than schizophrenia? One percent of the population, or 2 million Americans alone, are affected by schizophrenia. Imagine living in a town with a population of 10,000 people: it is likely that ten of your neighbors and community members would suffer from schizophrenia. The tragic illness is not limited to one race or culture; it is found all over the world. It affects slightly more men than women, with the first signs of schizophrenia appearing between 16 and 25 years of age, although it has been observed among children as well. (1) Sufferers experience psychosis, or psychotic episodes, when they are unable to distinguish between what is real and what is not. According to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), for a person to be diagnosed schizophrenic they must exhibit two or more of the following symptoms in the active phase of the disorder: delusions; auditory, visual, olfactory, and/or tactile hallucinations; disorganized thinking and/or speech; negative symptoms (absence of normal behavior); and catatonia. Of those diagnosed, there are three types of schizophrenic patients: disorganized (characterized by lack of emotion and disorganized speech), catatonic (characterized by reduced movement, rigid posture, or sometimes too much movement), and paranoid (strong delusions or hallucinations). (2) The greater tendency for monozygotic twins to share the illness shows that genetics plays a role, but because the concordance is not 100%, genetics cannot be the sole factor. Considering that only 1% of the population is affected by schizophrenia, the elevated risk among relatives—8% for non-twin siblings, 12% for a child with one schizophrenic parent, 12% for a fraternal twin, 40% for a child of two affected parents, and about 47% for identical twins—further supports the influence of genetics.
A 2000 issue of Science identified a small region on chromosome 1 (1q21-q22) as genetically involved in schizophrenia, but little more about it is currently understood. (2) Brain structure has also been studied: enlarged lateral ventricles are frequently reported in affected patients. Does this mean that all people with larger lateral ventricles have schizophrenia? Do all schizophrenic patients have large lateral ventricles? Currently, testing has suggested only that large lateral ventricles are characteristic of schizophrenic patients. Reduced size of the hippocampus, increased size of the basal ganglia, and abnormalities of the prefrontal cortex have also been associated with the illness but are not necessarily indicative of schizophrenia. (2) Elevated levels of D4 receptor binding have been found during autopsies of schizophrenic patients. (4) Deth's research shows that dopamine can stimulate the methylation of membrane phospholipids through activation of the D4 dopamine receptor and, in summary, suggests that schizophrenia may result from an impairment in the ability of D4 dopamine receptors to modulate NMDA receptors at nerve synapses via phospholipid methylation.

One difficult-to-define factor associated with schizophrenia is the environment. In an April 2001 article in the Proceedings of the National Academy of Sciences, U.S. and German scientists found that a third of newly diagnosed schizophrenics had genetic material from HERV-W retroviruses in their cerebrospinal fluid, as opposed to only 5% of chronic sufferers and 0% of healthy people. Although the origin of the virus is unknown, it seems to trigger the onset of schizophrenia. (3) According to a study in the British Journal of Psychiatry, lack of DHA (docosahexaenoic acid), a fatty acid in breast milk, during infancy increases the risk of schizophrenia in adults. (7) Is schizophrenia the result of one factor, all of the factors, or a select few? Although the specific combination is unknown, schizophrenia is most likely the result of many of the factors listed above; current studies, however, suggest that genetic, physiological, and biochemical influences outweigh the influence of environment alone.

How well do current medications treat schizophrenia? Although the illness is as yet incurable, antipsychotics, or neuroleptics (such as clozapine, risperidone, quetiapine, and olanzapine), have been shown to control symptoms effectively in most patients. (7) Most of these medications have negative side effects, such as agranulocytosis, a loss of white blood cells, so continued development of more effective drugs with fewer side effects is important. If schizophrenia is not triggered entirely by genetics, physiology, and/or biochemistry, further study of environmental stimuli may be the key to developing lifestyle changes and effective counseling approaches that alter the environment of an at-risk or affected person. Continued research into biological causes, and into effective medications to treat them, is also critical.

How frightening it must be for an individual to ponder the likelihood of succumbing to the illness while struggling to accommodate an affected family member. What I really want to know from future research is this: if someone is predisposed to the disorder to the extent that they have a schizophrenic family member, what, if anything, can they do to prevent it from afflicting them? According to a list compiled by family members of affected patients affiliated with the World Fellowship for Schizophrenia, social withdrawal is the most common warning sign of the illness's onset; excessive fatigue, deterioration of social relationships, inability to concentrate or cope with minor problems, apparent indifference to even highly important situations, decline in academic and athletic performance, deterioration of personal hygiene, frequent aimless moves or trips, drug or alcohol abuse, bizarre behavior, inappropriate laughter, low tolerance to irritation, and forgetfulness are also noted.

References

1) NIH
2) Schizophrenia
3) National Institute of Mental Health, Genetic Link
4) Scientific American
5) Scientific American Link to Viruses
6) Schizophrenia home page
7) Yale Bulletin
8) Schizophrenia at the National Institute of Mental Health
9) World Fellowship for Schizophrenia


The Spirit Molecule (DMT): An Endogenous Psychoact
Name: Neesha Pat
Date: 2003-02-25 11:04:31
Link to this Comment: 4829

"The feeling of doing DMT is as though one had been struck by noetic lightning. The ordinary world is almost instantaneously replaced, not only with a hallucination, but a hallucination whose alien character is its utter alienness. Nothing in this world can prepare one for the impressions that fill your mind when you enter the DMT sensorium."- McKenna.

N,N-dimethyltryptamine (DMT) is a psychoactive chemical in the tryptamine family that causes intense visuals and strong psychedelic mental effects when smoked, injected, snorted, or swallowed orally (with an MAOI such as harmaline). DMT was first synthesized in 1931 and demonstrated to be hallucinogenic in 1956. It has been shown to be present in many plant genera (Acacia, Anadenanthera, Mimosa, Piptadenia, Virola) and is a major component of several hallucinogenic snuffs (cohoba, parica, yopo). It is also present in the intoxicating beverage ayahuasca, made from Banisteriopsis caapi. This drink inspired much of the rock art and the paintings drawn on the walls of native shelters in tribal South America: what would be called 'psychedelic' art today (Bindal, 1983). The mechanism of action of DMT and related compounds is still a scientific mystery; however, DMT has been identified as an endogenous psychedelic: it is found naturally in the human body and takes part in normal brain metabolism. Twenty-five years ago, Japanese scientists discovered that the brain actively transports DMT across the blood-brain barrier into its tissues. "I know of no other psychedelic drug that the brain treats with such eagerness," said one of the scientists. What intrigued me were the questions: how and why does DMT alter our perception so drastically if it is already present in our bodies?

DMT is known as the "spirit molecule" because it elicits, with reasonable reliability, certain psychological states that are considered 'spiritual.' These are feelings of extraordinary joy, timelessness, and a certainty that what we are experiencing is more real than everyday reality. DMT leads us to an acceptance of the coexistence of opposites, such as life and death, good and evil; a knowledge that consciousness continues after death; a deep understanding of the basic unity of all phenomena; and a sense of wisdom or love pervading all existence. The smoked DMT experience is short but generally incredibly intense. Onset is fast and furious, hence the term "mind-blowing" used to describe the effect. It is a fully engaging and enveloping experience of visions and visuals, which vary greatly from one individual to the next. Users report visiting other worlds, talking with alien entities, profound changes in ontological perspective, fanciful dreamscapes, frightening and overwhelming forces, and complete shifts in perception and identity followed by an abrupt return to baseline. "The effect can be like instant transportation to another universe for a timeless sojourn" (Alan Watts).

The physical and psychological effects of DMT include a powerful rushing sensation, intense open-eye visuals, radical perspective shifting, stomach discomfort, overwhelming fear, color changes, and auditory hallucinations (buzzing sounds). One of the primary physical problems encountered with smoked DMT is the harsh nature of the smoke, which can cause throat and lung irritation. Surprisingly, however, DMT is neither physically addictive nor likely to cause psychological dependence (Szara, 1967).

Most subjects were reported to have had memorable positive experiences; however, the possibility of unpleasant experiences is not ruled out. The writer William Burroughs tried it in London and reported on it in most negative terms. Burroughs was working at the time on a theory of neurological geography, and after trying DMT he described certain cortical areas as 'heavenly' while others were 'diabolical.' In Burroughs' pharmacological cartography, DMT propelled the voyager into strange and decidedly unfriendly territory. The lesson, however, was clear: DMT, like the other psychedelic keys, could open an infinity of possibilities. Set, setting, suggestibility, and temperamental background were always present as filters through which the ecstatic experience could be distorted (Jacobs, 1987).

The brain, however, is where DMT exerts its most interesting effects. The brain is a highly sensitive organ, especially susceptible to toxins and metabolic imbalances. In the brain, sites rich in DMT-sensitive serotonin receptors are involved in mood, perception, and thought. Although the brain denies access to most drugs and chemicals, it takes a particular and remarkable fancy to DMT. According to one scientist, "it is not stretching the truth to suggest that the brain hungers for it." A nearly impenetrable shield, the blood-brain barrier, prevents unwelcome agents from leaving the blood and crossing the capillary walls into the brain tissue. This defense extends to keeping out the complex carbohydrates and fats that other tissues use for energy, as the brain uses only glucose and simple sugars as its energy sources. Amino acids are among the few molecules transported across the blood-brain barrier, and thus to find that the brain actively transported DMT into its tissues was astounding. DMT is part of a high-turnover system and is rapidly broken down once it enters the brain, giving it the ability to exert its effects in a short period of time. Researchers labeled DMT 'brain food,' as it was treated in a manner similar to how the brain handles glucose. In the early nineties, DMT was thought to be required by the brain in small quantities to maintain normal functioning, and only when DMT levels crossed a threshold did a person undergo 'unusual experiences.' These unusual experiences involved a separation of consciousness from the body, in which psychedelic effects completely replaced the mind's normal thought processes. This hypothesis led scientists to believe that, as an endogenous psychedelic, DMT may be involved in naturally occurring psychedelic states that have nothing to do with taking drugs, but whose similarities to drug-induced conditions are striking.
DMT was considered to be a possible 'schizotoxin' which could be linked to states such as psychosis and schizophrenia. "It may be upon endogenous DMT's wings that we experience other life-changing states of mind associated with birth, death and near-death, entity or alien contact experiences, and mystical/spiritual consciousness" (Strassman, 2001).

Hallucinogenic drugs have multiple effects on central neurotransmission. In 1997, it was found that, like LSD and psilocybin, DMT increases the metabolic turnover of serotonin in the body. Serotonin is found in specific neurons in the brain that mediate chemical neurotransmission. Axons of serotonergic neurons project to almost every part of the brain, affecting overall communication within the brain. Early in the research on hallucinogens, it was determined that hallucinogenic drugs structurally resemble serotonin (5-HT), and thus researchers thought that DMT bound itself to serotonin receptors in the cerebral cortex (Strassman, 1990).

Further, an increase in the excretion of 5-hydroxyindoleacetic acid (5-HIAA) suggests the involvement of serotonin in DMT action, and elevated blood levels of indoleacetic acid (IAA) are seen at the time of peak effects, implying its role as a metabolite. The relationship between DMT and serotonin led researchers to become interested in the pineal gland. The pineal gland in humans regulates homeostasis of the body and body rhythms. A dysfunction could be associated with mental disorders presenting as disturbances of normal sleep patterns, seasonal affective disorder, bipolar disorder, and chronic schizophrenia. Strassman proposed that the pineal gland, besides producing melatonin, is associated with 'unusual states of consciousness.' For example, it possesses the highest levels of serotonin in the body and contains the necessary building blocks to make DMT. The pineal gland also has the ability to convert serotonin to tryptamine, a critical step in DMT formation. In addition, 5-methoxy-tryptamine, a precursor of several hallucinogens, has been found in pineal tissue [Bosin and Beck, 1979; Pevet, 1983] and in the cerebrospinal fluid [Koslow, 1976; Prozialeck et al., 1978].

Pinoline (made by the pineal gland), DMT and 5-MeO-DMT have been proposed to be produced in the brain each night and to contribute to vivid dreams. The interior dialogue produced on a DMT trip leads some scientists to believe that the language centers are also affected. More speculatively, some have claimed that DMT allows awareness of processes at a cellular or even atomic level. Could DMT smokers be tapping into the network of cells in the brain or into communication among molecules themselves?

According to Grof, the major psychedelics do not produce specific pharmacologic states (e.g., toxic psychosis) but are unspecific amplifiers of mental processes (Grof, 1980). At the same time, the identification of 5HT2 receptors as possibly being involved in the action of hallucinogens has provided a focal point for new studies. Is there a prototypic classical hallucinogen? Until we have the answers to such questions, we continue to seek out the complex relationship between humans and psychoactives.


WWW Sources

Szara, S. Hallucinogenic Drugs: Curse or Blessing? Am J Psychiatry 123: 1513-1518, 1967.

Strassman, R. Human Hallucinogenic Drug Research: Regulatory, Clinical and Scientific Issues. Brain Res 162, 1990.

Jacobs, B.L. How Hallucinogenic Drugs Work. American Scientist 75: 385-92, 1987.

Bindal, M.C., Gupta, S.P., and Singh, P. QSAR Studies on Hallucinogens. Chemical Reviews 83: 633-49, 1983.

Grof, S. Realms of the Human Unconscious: Observations from LSD Research. Jeremy Tarcher Inc., LA, 1980, pp 87-99.

www.erowid.org

http://www.drugabuse.gov/pdf/monographs/146.pdf


Anxiety Disorders
Name: Irina MOis
Date: 2003-02-25 14:33:18
Link to this Comment: 4833


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Irina Moissiu Web Paper # 1
As I got close to the Embassy Suites, where Lincoln Financial Group was holding
their interviews, I felt myself get tense. "What if people are in the lobby and they see me
in jeans? Would that make a bad impression?" After a long debate with myself, I decided
that it was nearly midnight and that people would not be awake. I walked into the lobby,
got my room key and went up. We all had our own suite so it was clear that Lincoln had
some money to spend. As I tried to fall asleep, I became more and more restless. I began
thinking about all the things that could go wrong. I couldn't sleep. 3am rolled around.
Then 6am. At 7am I got up, showered, put my suit on and walked out of the room. I
immediately turned around because I realized that I had forgotten my name tag. As I tried
to open the door with the plastic key, I realized I was trembling so badly that I could not get
the stupid key in the door. I finally managed to enter the room and get my name tag and I
proceeded to stab my finger with the safety pin of the tag. The pin kept slipping because
my palms were sweaty. I took a deep breath, cleaned myself, cursed myself for being
clumsy, and went downstairs to eat.

The elevator doors opened and I saw over 150 people in the lobby. I nearly
fainted. I felt like my lungs would not expand and for a second everything went black. I
quickly walked over to the bathroom and slapped myself a couple of times. Splashing
cold water on my face would have been out of the question given that I was wearing
mascara. I asked myself to get a grip (several times) and walk out of the bathroom. I
was so nervous that I hung my head and walked over to the food hoping to avoid any eye
contact. I looked at the food and I wanted to eat because I was hungry, but my nausea
got in the way. I finally had to look up and then I saw the rest of the name tags. "OH MY
GOD!" Cornell, University of Penn., Princeton, Yale, Columbia. I wanted to start
crying but there were too many people around. I thought "you might as well go home.
There is no way in hell you will get a job given the competition."

Time came to mingle with upper management. At this point I started sweating.
My heartbeat was so strong that I could almost feel it impair my speech. Soon enough,
my stomach and intestines began working overtime. Thankfully the bathroom was nearby.

Symptoms so far: tremors, diarrhea, sweating, palpitations, muscle tension, sleep disturbances.

I returned to Bryn Mawr where more fun times were awaiting me. This is a brief
on the rest of the month:

Lots of snow = lots of car trouble.

Computer crashes. Thought we saved everything on FTP. Nope! Some files got
lost. But at least I ordered another computer (which I had no money for. But hey! What
are credit cards for!!!?)

I work about 30 hours a week so I can't get to Guild to work on a computer. I fall behind in all my classes because 4 ½ out of 4 ½ classes are computer/web based.

The new computer arrives. It turns out that the hard drive is not spinning properly and I have to wait for another new computer.

Another interview in NYC. I get there a half hour late due to traffic and I make a total jerk of myself.

Hell week is coming up and I am dorm president without a computer to keep in
close touch with people. Lots of other social issues to deal with which should not be
mentioned.

It's only downhill from here because papers are due, midterms are coming up, I
have to start my thesis, and more interviews.

Symptoms over the last month: muscle tension, easily fatigued, edgy, difficulty
concentrating, irritability, sleep disturbances, need to cry when more than 5 things go
wrong in one day.

I would say that for the most part I am stressed out because there are a lot of
things happening at the same time. I would not consider myself to have any sort of
anxiety (although I did suffer from panic attacks a couple of years ago). I expected my
senior year to be a difficult one and I think I am doing just fine. I strongly believe that many things in life can be solved through the power of suggestion. When I feel like things are getting out of control, I look in the mirror and I try to put things in perspective. Will I really die if I don't get a job with Lincoln? Absolutely not. That is why my
interviews went fine regardless of how nervous I was at first. I think it is perfectly
normal to be scared when meeting very important people for the first time; or even when
you meet new people. Nobody wants to be viewed as "socially retarded" so we all worry
about the kind of image we project out to other people. Does a person have a disease if
they are shy or nervous or irritated? I don't think so. I believe that some of these
disease/illnesses are conjured up socially and by pharmaceutical companies to get people
to think they need a pill to make all their problems go away. But being nervous or shy is
normal for the majority of the human population. So if everyone listened to the
symptoms of Generalized Anxiety Disorder (GAD), the majority of us would be taking a pill.

According to Wyeth, anxiety is "a feeling of unease and fear that may be
characterized by physical symptoms such as palpitations, sweating, and feelings of
stress." GAD is "a psychiatric condition in which the main symptoms are chronic and
persistent apprehension and tension that are not related to any situation in particular.
There may be many unspecific physical reactions, such as trembling, jitteriness,
sweating, lightheadedness, and irritability." (1) The symptoms of GAD are thought to be
muscle tension, fatigue, restlessness, difficulty concentrating, irritability, and sleep
disturbance, and most people will have at least 3 of these symptoms. (1) They also
mention that these symptoms should persist over 6 months but I can easily see these
symptoms persisting if (for example) I was going through a painful divorce which would
take more than 6 months.

Is it maybe possible that we as adults are taking on too many responsibilities?
Maybe we work too many hours and we don't have enough time to relax. Therefore we
are always juggling 100 things at the same time. The more things we juggle, the higher
the chances that something will go wrong, the higher the stress. The brain may be a
wonderful and complex thing, but it needs to rest every now and then.

So maybe a lot of adults are suffering from GAD due to high stress levels.
However when one looks at the symptoms for Social Anxiety Disorder (SAD):
palpitations, tremors, sweating, diarrhea, confusion, and blushing, it is very clear that
these are too general and can be interpreted by some people as a sign of illness.
According to the Anxiety Disorders Association of America, "individuals with the
disorder are acutely aware of the physical signs of their anxiety and fear that others will
notice, judge them, and think poorly of them."(2) Don't most of us worry about what
other people think of us? Don't most of us get nervous when we meet new people or
when we have to perform? Plenty of people suffer from stage fright. It is a matter of
self-control, and with enough experience most performers learn to deal with it. The more
I look at it, the more it seems like we want to be perfect at everything and we want a
quick simple fix. Meditation or counseling just takes too long and we want to see results
immediately.

As if it were not enough to subject adults to these ridiculous, socially constructed
illnesses, we have decided to put our children through the same traumas. SAD has a
slightly different template to go by when it is applied to children. The Cincinnati
Children's Hospital Medical Center states: "Social anxiety disorder (in children) is
characterized by fear and anxiety in social situations, extreme shyness, timidity, and
concerns about being embarrassed in front of others. Situations that trigger anxiety and
are often difficult for children with social anxiety disorder are speaking in front of the
class, talking with unfamiliar children, performing in front of others, taking tests, and
interacting with strangers." (3) How many children would not be shy to perform in
public and talk to strangers? The criteria used to diagnose this illness are too broad and
would apply to the majority of children. The symptoms for children are: avoiding eye
contact, speaking softly, trembling, fidgetiness, and nervousness. I would say that not a
lot of kids tremble. However all the other symptoms are exhibited by both shy and
hyperactive children depending on their activity.

Although I agree that there are some individuals who would benefit from the use
of medication, I believe that the majority of people need to understand that we are not
perfect and we can't expect ourselves to be perfect at all times. And if we notice our
imperfections, our first choice in treatment should not be a pill. It is true that we are all
very busy and that meditation, for example, would require too much time. But what is
the point of taking medication whose side effects are just as bad as your
"illness" and which can potentially be addictive (like Paxil, which is used to treat SAD)? (4)

Not all illnesses/diseases have strong medical support. Homosexuality was
considered a disease until the late 1970s. It is important to understand that although the
brain is complicated, there is no reason for doctors or pharmaceutical companies to create
more diseases/illnesses based on social constructs rather than scientific facts.

References

1)

2)

3)

4)



Orpheus unraveled? A conversation on sound and br
Name: Katherine
Date: 2003-02-25 21:20:29
Link to this Comment: 4837


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In Macedonian hills, the music of Orpheus was said to possess certain magical qualities, having powers strong enough to alter the very behavior of people and animals. Among its abilities, the notes of Orpheus' lyre were said to calm the guard-dog of Hades (1), to cause the evil Furies to cry, and to tame the deadly voices of the Sirens (2). Was this power simply a divine and magical gift with no other explanation, or can we explain more specifically the connections between music and behavior?

Sound is an important input affecting the nervous system. The brain reacts to sound input because information signals are able to travel from the outside environment, across action potentials and through the neural network into the brain. Such signals, electrical in nature, can be detected by the electroencephalogram, or EEG, which measures the electrical activity of the brain (3). Such electrical activity has been shown to correspond to different states of consciousness within an individual, as the EEG reveals different brain activity depending upon the mental state and the actions of the person being observed. Four different types of brain waves are defined—beta, alpha, theta, and delta—and each corresponds to a different state of activity and consciousness. Beta waves, vibrating at a frequency of 13-50 hertz, are those experienced during the normal and alert waking state; alpha waves at 8-13 Hz occur during relaxation; theta waves (4-7 Hz) in the "halfway" moments between waking and sleeping; and delta waves, the slowest at 0.5-4 Hz, during sleep. Interestingly, the brain has been shown to emit the slower theta waves even in the waking state, when undergoing such activities as chant and meditation (4).
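The band boundaries described above can be sketched as a simple lookup (a minimal illustration; the function name is my own, and the cutoffs follow the figures in this paragraph, though sources vary on the exact boundaries):

```python
# Classify an EEG frequency (in Hz) into the brain-wave bands described above.
# Boundaries follow the text (0.5-4 delta, 4-7 theta, 8-13 alpha, 13-50 beta);
# the theta/alpha cut is placed at 8 Hz so the bands are contiguous.

def eeg_band(freq_hz):
    """Return the brain-wave band name for a frequency in hertz."""
    if 0.5 <= freq_hz < 4:
        return "delta"   # sleep
    elif 4 <= freq_hz < 8:
        return "theta"   # "halfway" between waking and sleeping; chant, meditation
    elif 8 <= freq_hz < 13:
        return "alpha"   # relaxation
    elif 13 <= freq_hz <= 50:
        return "beta"    # normal, alert waking state
    return "outside the bands described here"

print(eeg_band(10))  # alpha
print(eeg_band(5))   # theta
```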

For thousands of years, music has been regarded as possessing unique powers in affecting the human experience. Specifically, music has been associated with healing abilities, and has been used for such purposes throughout history. Traditionally, the types of sound responsible for healing are characterized by distinct rhythms, and by specific emphasis on repetition that stems from those rhythms. The existence of repetitive beat seems to aid in the achievement of meditative state. Shamans are well known for their use of drum beats to access healing powers both within themselves and for the people they wish to treat (5).

It has been suggested that in the meditative state—a state of extreme awareness and internal mental calmness—the two hemispheres of the brain become synchronized in brain wave production, rather than generating signals of varying frequencies and amplitudes. It would thus make sense that the repetitive nature of chant, and the underlying beat of music, is central in the unifying and rhythmic effect that such practices have on the brain. Specifically, we find the underlying repetitive drone, a constantly held baseline tone, in numerous types of spiritual chant, including the Hebrew, Byzantine, Arabic, Tibetan and Gregorian traditions. The Om sound is also an important tone in traditional chanting practice which calls upon repetition and harmonics for its restorative effects.

A striking example relating rhythm, brain function, and health is found in a story which occurred forty years ago among a group of Benedictine monks in southern France, when changes in their behavior resulted in health complications. After the Vatican II council, it was decided that the monks no longer needed to perform their typical 6-8 daily hours of chanting, and could rather use that time for other chores and activities. Interestingly, it was after this change that a vast majority of the group began to suffer excessive fatigue, no longer able to survive well on the limited sleep with which they once functioned normally. Health officials were called, and advised the men to get more sleep. This was done, but still the symptoms persisted. Again a doctor was called, this time deciding that the monks were undernourished and should begin eating meat. This new regimen was followed, but to no avail, and the monks suffered yet more tiredness and lethargy. Finally, it was prescribed by Dr. Alfred Tomatis, a French ENT specialist who acknowledged the impact of structured sound on brain function, that the monks recommence their many daily hours of chanting; and this, interestingly, is what seemed to solve the problem and bring vitality back. It was thus believed that the singing of chant, and specifically the accessing of high frequency sounds and harmonics (6), had a way of altering brain wave activity and energizing the nervous system and brain (7). It seemed possible that vocal harmonics could actually create new neurological connections.

Numerous other examples claim such positive effect of chanting on brain function. "Omkar" recitation and chanting has been shown to increase concentration and memory, and to reduce fatigue (8). In 1993, a study performed at the University of California at Irvine and published in Nature examined the effect that listening to certain types of music might have on learning and memory (9). A group of students listened to each of the following three selections for ten minutes prior to taking a spatial reasoning test: Mozart's Sonata for Two Pianos in D major, a recording of relaxation music, and silence. According to the results, the students' scores improved after listening to the Mozart recording compared to the other two conditions, a result which spawned the term "Mozart effect" for the increased performance expected after such listening. The results, however, have their limits; the so-called "Mozart effect" was found to last only 10-15 minutes, and is helpful specifically in tests of spatial reasoning, rather than tests involving the recitation of numbers (10). This suggests that music has an effect on specific neural pathways, and those pathways are responsible for specific areas of mental functioning.

It has long been proposed by way of many religions, philosophies, and myths, that sound possesses creative force. Many religions relate the creation of the universe by way of a speech-act, in which the god literally speaks the world into existence. In Mesopotamian myth, the cosmos is formed musically by the lyre of Ur. Sound, in many instances, is painted as the fundamental moving force of the universe. This idea of 'sound creating form' began to migrate into the world of the sciences as well: the scientists H. Jenny and G. Manners in the mid 20th century studied the shapes and patterns created by sound vibrations as projected onto mediums of sand and iron filings, introducing a discipline which has been termed Cymatics.

It is fascinating to think about the primacy of voice and sound in the health and existence of life on earth. Indeed, voice can be thought of as a virtual map of the human topography. It has been suggested that it is the speaking ability of the I-function which helps us to define our "true selves," and allows us to separate this internal self from the outside world. Myths have even suggested that the "Ur-sound" introduces the fundamental frequencies lying at the base of all life. These ideas offer interesting models for the physical action and effects of sound on living systems. It is clear that sound, and structured voice, does indeed have a profound effect on brain physiology—and hence affects our behavior—as the lyre of Orpheus has always shown.

References

1) Ovid, Metamorphoses, Book X, lines 25-32.

2) Homer, Odyssey, XII.90.1.

3) Neuroscience website from the University of Washington, Electricity in the body: action potential, resting potential.

4) Journal article from PubMed, A study of electroencephalogram in meditators.

5) Website of Shamanism Today, informational page about Native American shamans.

6) Website for Healing Sounds, information about chanting and harmonics.

7) A further discussion of harmonics and chanting.

8) Website for Yoga Point, publication of research on Omkar recitation.

9) Journal article in Nature, Music and spatial task performance. McLachlan, JC.

10) Neuroscience website from the University of Washington, details on tests and results.


The Placebo Effect: Redefining the Role of the Min
Name: Christine
Date: 2003-02-25 23:49:25
Link to this Comment: 4838


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The mind has often been referred to as the organ of consciousness. Daily functions such as thinking, breathing, and most any task we do rely heavily on use of this precious organ. However, through the use of placebos, it is becoming clear that the mind may have an even greater influence on our daily lives, influencing our perceptions of well-being. The placebo, whose name is Latin for "I shall please," is a sugar pill that is given under the guise of being a medication thought to treat an ailment. The use of placebos has shown us that the mind has tremendous potential to induce physiological changes in our body based solely on its perceptions. For example, as we swallow a sugar pill thinking that it is Prozac, we may actually physically feel fewer symptoms of depression as a result of the mind's perception (1). The placebo effect (the phenomenon of perceived benefit from a mock stimulus) has recently opened further doors to our understanding of how the mind works. Once thought of as an inactive, harmless mock substance, placebos have now been shown to induce brain activity. Therefore, the perceived benefit that was once laughed off as fooling the patient may actually be a consequence of very real physical responses created by the mind, generating a very real benefit. This paper will explore the various placebo studies that have helped us define and redefine the role of the mind.

The placebo effect is a powerful effect that can consistently induce a perceived benefit. Once the placebo was identified as a tool capable of generating a desired response, it became more widely used as a control in clinical trials. As a result, the placebo has been extensively studied throughout history, and yields a significant amount of data on its clinical effect. Irving Kirsch and Guy Sapirstein used meta-analysis to analyze 19 clinical trials (1). In total, the results of 1460 patients receiving antidepressant medication and 858 receiving a placebo sugar pill were analyzed. This analysis combined the results from these 19 different studies and generated an effect size (calculated as the mean of the experimental group minus the mean of the control group, divided by the pooled standard deviation, SD). Once Kirsch and Sapirstein subtracted mean placebo response rates from mean drug response rates, they found a mean medication effect of 0.39 SDs (1). For each type of medication, the effect size for the active drug response is between 1.43 and 1.69 SDs, and the placebo response is between 74% and 76% of the active drug response. This means that roughly 75% of the improvement attributed to antidepressants was due to the placebo. The perception of clinical benefit from antidepressants was largely attributable to the perceptions of the mind, and not to the actual chemical make-up of the pills patients were taking. The placebo effect shows us that the mind heavily influences our perceptions of wellness and health.
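The effect-size arithmetic described above can be sketched as follows (a minimal illustration: the function implements the stated definition, but the group means and SDs below are made-up numbers chosen only to reproduce the ~75% ratio, not the study's actual raw data):

```python
import math

def effect_size(mean_treat, mean_control, sd_treat, sd_control, n_treat, n_control):
    """Effect size (Cohen's d): difference in group means over the pooled SD."""
    pooled_sd = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_control - 1) * sd_control**2)
                          / (n_treat + n_control - 2))
    return (mean_treat - mean_control) / pooled_sd

# Hypothetical depression-score improvements versus no treatment (illustrative only):
drug_vs_none    = effect_size(12.0, 0.0, 8.0, 8.0, 1460, 858)  # active drug response
placebo_vs_none = effect_size(9.0, 0.0, 8.0, 8.0, 858, 858)    # placebo response

# The study's key ratio: what share of the drug response does the placebo reproduce?
print(placebo_vs_none / drug_vs_none)  # 0.75 here, echoing the ~75% figure
```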

The placebo effect is often thought of as an act of fooling the mind into perceiving a benefit that has no physical basis. This depiction of the mind as a naïve and foolish organ may be incomplete and ill-representative of the mind's abilities. Indeed, the mind may orchestrate a physical response in the body based on its perceptions alone. In one study done by Luparello et al (2), 40 asthmatics were exposed to a single placebo twice, and told each time that it was a different substance being administered. Each of the participants was told that they were inhaling a bronchoconstrictor as part of an industrial air pollutant study, when they were really only inhaling saline, an inert substance. Of the 40 asthmatics, 12 had extreme asthmatic attacks, and 7 others experienced a significant increase in airway resistance, although not enough to induce a full-blown attack. None of the other patients in the study had any adverse reactions, suggesting that the experience was specific to the asthmatic condition. When the same patients were told that the inhaler contained a bronchodilator, the attacks were reversed and their airways opened up. Three minutes after administration of the placebo, the mean thoracic gas volume ratio rose from 0.07 to 0.13 L/sec/cm H20/L (normal, 0.13 to 0.35) (2). Here, the same saline solution that had caused discomfort in the 19 patients also brought about relief when it was administered a second time: the same placebo produced either harm or benefit depending on what the patients believed it to be. While the mind may be labeled as an organ easily fooled by placebos, whose benefit has no physical basis, it is clear that the mind may have an even greater role in behavior. The mind's perceptions of either an irritant or bronchodilator induced airways to physically open or close, and thereby physically experience relief or an attack.
The mind may have greater physiological capabilities, and may have a greater role in physical as well as perceived responses that we experience daily.

The concept of a placebo has previously been thought of as a benign sugar pill that is inactive. However, a recent study by Leuchter et al (3) suggests that a placebo may actually activate a distinct and separate part of the brain, the prefrontal cortex. In this study, 51 depressed patients received one of two medications, fluoxetine (24 patients) or venlafaxine (27 patients), in a nine-week placebo-controlled study. The brain activity of each of the patients (after administration of the pills) was then analyzed through the use of quantitative electroencephalography (QEEG). Both QEEG power and cordance, which measures blood flow and energy use in the brain, were examined. Among the three arms (fluoxetine, venlafaxine, and placebo), the placebo responders were the only group showing a significant increase in brain activity. In the prefrontal region, the placebo had a treatment response of 0.98 compared to the treatment response of the fluoxetine, which was 0.06 (3). Administration of a placebo appears to be an active treatment, rather than the no-treatment comparison it has been thought to provide. In this case, the placebo not only induces brain activity, but it does so in a way that is distinct from the way the experimental drugs (antidepressants) do. The mind therefore is not an organ that is fooled by the placebo effect into generating a standard set of brain functions specific to what it believes the placebo to be. Rather, the mind creates a response that is specific to the placebo, and distinct from what we previously called "active" substances (or experimental drugs). The mind therefore must have the ability to discern what is a placebo and what is an experimental drug, and thereafter generates a physical response accordingly. This sophisticated ability of the mind to discriminate further shows us that the mind is a complex organ that is not fooled, but creates very informed responses.

Placebos can no longer be thought of as the wool being pulled over the mind's eyes. These sugar pills induce the mind to create a very real and physical response that may be specific to the placebo; as a result, use of a placebo can become a very seductive treatment option for many. With the on-going use of placebos, both as a control, and potentially as a treatment alternative, several issues emerge: Is the use of placebos ethical? Furthermore, can it be guaranteed that placebos will generate a safe, and effective, result? While these pills may seem benign by being less active than experimental drugs, the risk for harmful and unethical consequences still does exist.

Results from one recent study have brought this issue to the forefront. Freed et al conducted a study examining the outcomes of 40 patients, ages 34-75, who had severe Parkinson's disease (4). In this study, the patients either underwent neuronal transplantation surgery or sham surgery (placebo). These patients were randomly assigned to the different groups. In the patients who underwent the sham surgery, holes were drilled into their skulls but the dura (the outermost of the three meninges) was not penetrated. While all of the patients had hoped to receive this neuronal transplant, only half actually did. The rest had the placebo surgery. Freed et al found that although there was no notable effect among the older patients in either transplantation or placebo surgeries, the younger transplantation recipients showed much improvement as compared with the placebo surgery group (4). No one from the placebo surgery group benefited from the procedure. Results were measured using the standardized scoring system of the Unified Parkinson's Disease Rating Scale (UPDRS) and the Schwab and England scale. They measure symptoms of Parkinson's disease including mentation/mood and performance in the activities of daily living, respectively. Freed et al further analyzed results looking for growth of transplants by using 18F-fluorodopa PET scans. These tests all concluded that the only group that benefited from the study was the younger transplantation group, leaving many concerned due to the lack of improvement in their condition. Half of those in the placebo group experienced additional pain, and some experienced trauma. In addition to not benefiting from the procedure, many experienced significant pain from the placebo surgery. In this case, the mind could not be induced into generating the type of physical response that is desired from this surgery. And further, the potential for pain as well as harm are also clear in this example. 
It is clear that the ethics behind placebos, given that they are active substances that can induce very real physical responses, need to be taken seriously. The mind is a complex organ that may not always respond in the way that we hope it will.

The placebo effect has shed great light on the complex functions of the mind. The mind has the remarkable ability to generate a physiological and real response to placebos. Furthermore, the mind can discern a placebo from an experimental drug, as we see through the specific activation of the prefrontal cortex by the placebo. The mind has functions and capabilities that are larger than just thinking, breathing and walking. It not only controls our perceptions of our well-being, but may control the physicalities of our well-being more extensively than was previously thought. While the placebo effect has yielded important information on the powers of the mind, we need to think more responsibly about the use of placebos, and the potential effects of these active stimuli on the brain. Given that placebos do activate the brain, we need to re-address our notions of these pills as inactive sugar pills. What if placebos could have the potential to affect the mind in a way that is not positive? What if placebo pills, and furthermore surgeries, could be harmful to the patient? The ethics of placebos, and the role of the mind in responding to them, should not be underestimated as we move forward in our studies of how the mind works. Our well-being depends on it.

References

1) Kirsch, Irving, PhD and Guy Sapirstein, PhD. Listening to Prozac but Hearing Placebo: A Meta-analysis of Antidepressant Medication. Prevention & Treatment, Volume 1, June 1998.

2) Luparello, T.J., Lyons, H.A., Bleeker, E.R. & McFadden, E.R. (1968). Influences of suggestion on airway reactivity in asthmatic subjects. Psychosomatic Medicine, 30, 819-825.

3) Leuchter AF, Cook IA, Witte EA, Morgan M, & Abrams M. (2002) Changes in brain function of depressed patients during treatment with placebo. American Journal of Psychiatry, 159: 122-129.

4) Freed CR, Greene PE, Breeze RE, Tsai WY, DuMouchel W, Kao R, Dillon S, Winfield H, Culver S, Trojanowski JQ, Eidelberg D, & Fahn S. (2001) Transplantation of Embryonic Dopamine Neurons for Severe Parkinson's Disease. New England Journal of Medicine, 10:710-719.


Multiple Sclerosis
Name: Nia Turner
Date: 2003-02-25 23:54:19
Link to this Comment: 4839

Neurobiology and Behavior 202 Nia Turner
Professor Paul Grobstein 2/25/03

The primary objective of this paper is to raise fundamental questions regarding multiple sclerosis, and to explore possibilities that attempt to answer these inquiries. Second, the prospective outcome is to provide a solid knowledge base from which my peers may begin to understand the relationship between multiple sclerosis and neurobiology and behavior. The first question to address in the general schema of this essay is: What is Multiple Sclerosis?

Multiple Sclerosis, commonly referred to as MS, is considered an autoimmune disease that affects the central nervous system (CNS). The key to understanding MS is to recognize its relationship to the human immune system. The immune system is an intricate network of specialized cells and organs that defends the body against attack by foreign agents, also known as antigens, such as bacteria, viruses, fungi, and parasites. In multiple sclerosis, however, this defense turns inward: the immune system identifies the body's own tissue, particularly the white matter of the central nervous system, as foreign, and consequently destroys the myelin. Myelin is a fatty tissue, rich in protein and lipids, that protects and insulates the nerve fibers that carry electrical impulses. The central nervous system is made up of the brain, spinal cord, and optic nerves; thus MS can affect several areas of the human anatomy.

Multiple Sclerosis could be described as the loss of myelin in multiple locations throughout the body, which exposes the nerve fibers and leaves scarring called sclerosis. The next question that should be addressed is: What are the principal functions of myelin? In addition to protecting nerve fibers, myelin is fundamental to the conduction of electrical impulses between the brain and other parts of the body. The transmission of electrical impulses is hindered as a direct consequence of damaged or destroyed myelin. One may ask: why does the body begin to attack and destroy the myelin? This question ultimately leads to the inquiry: What causes multiple sclerosis? The response is that the exact origin of MS is unknown; scientists and researchers suspect that the damage to the myelin results from an abnormal response by the body's immune system. In other words, science cannot yet explain this phenomenon. However, research is making advances in the area of MS, and the future for those who are affected by multiple sclerosis appears more optimistic.

In recent years, scientists have created a group of tools that give them the capacity to target the genetic influences that make an individual prone to multiple sclerosis. Molecular genetics has provided a model in which these instruments are used to isolate and determine the chemical structure of genes. Since the 1980s, scientists have applied the tools of molecular genetics to human diseases attributed to defects in individual genes. Now, some scientists are convinced that one may be "susceptible to MS only if she or he inherits an unlucky combination of alterations in several genes." As an example of progress in MS research, by 1996 up to twenty locations that may contain genes contributing to MS had been reported, but no single gene was documented to have a significant impact on susceptibility to multiple sclerosis. Once a gene that accounts for a predisposition to MS is identified, researchers can address the influence that gene has on the immune system and on the neurological side of the disease. This leads me to the following query: Who gets MS? Anyone may develop multiple sclerosis, but there are some known patterns. The most widely accepted and documented epidemiological observations of MS are:

* MS occurs significantly more often at latitudes of 40º or more from the equator than at lower latitudes closer to it; in the U.S., it occurs more often in states above the 37th parallel than in states below it.
* An individual born in an area with a higher risk of developing MS who moves to an area of lower risk acquires the risk of the new residence, if relocation occurs prior to adolescence.
* MS is more prevalent among Caucasians (particularly those of northern European ancestry) than other races.
* Multiple Sclerosis is two to three times as common in women as in men.
* In specific populations, a genetic marker has been linked to MS.

These factors are diverse in nature and could be considered environmental, immunologic, viral, and/or genetic. Approximately 400,000 Americans have been diagnosed with multiple sclerosis, and every week about 200 more people are diagnosed. On a much larger scale, MS may affect 2.5 million individuals worldwide.
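The prevalence figures above can be sanity-checked with simple arithmetic. A minimal sketch (the 400,000, 200-per-week, and 2.5 million figures are the cited numbers; the derived quantities are my own back-of-the-envelope arithmetic):

```python
# Back-of-the-envelope check on the MS prevalence figures cited above.
US_PREVALENCE = 400_000        # Americans diagnosed with MS (cited figure)
NEW_CASES_PER_WEEK = 200       # new US diagnoses per week (cited figure)
WORLD_PREVALENCE = 2_500_000   # worldwide estimate (cited figure)

annual_new_cases = NEW_CASES_PER_WEEK * 52
print(f"Implied new US cases per year: {annual_new_cases:,}")
print(f"US share of worldwide cases: {US_PREVALENCE / WORLD_PREVALENCE:.0%}")
```

This yields roughly 10,400 new US diagnoses per year, and shows the US accounting for about one sixth of the worldwide estimate, consistent with the figures quoted in the sources.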

As people are affected either directly or indirectly, what can be done to promote awareness? The response is to become empowered by seeking education about multiple sclerosis and sharing your knowledge with others. Part of becoming more informed about this autoimmune disease is being aware of the symptoms. The symptoms of MS are unpredictable and vary from individual to individual, but some of the most common are bladder, bowel, and sexual dysfunction; abnormal fatigue; lack of balance and muscle coordination; and cognitive, emotional, and vision problems, among others. MS is often misdiagnosed because of this array of symptoms. The two fundamental signs for confirming multiple sclerosis are signs of disease in different parts of the nervous system and signs of at least two separate exacerbations of the disease. Once an individual experiences symptoms and is diagnosed with MS, what options are available as far as treatment is concerned?

Unfortunately, there is no cure, but there are treatments that have proven effective in slowing the progression of the disease. The medications most commonly used to treat MS are Glatiramer Acetate, Interferon beta-1a, Interferon beta-1b, and Mitoxantrone. The interferon medications are proteins manufactured by a biotechnological process from one of the naturally occurring interferons. By contrast, Mitoxantrone belongs to the general group of medicines called antineoplastics. It acts in MS by suppressing the activity of the T cells, B cells, and macrophages that are thought to lead the attack on the myelin sheath. People diagnosed and properly treated for multiple sclerosis may live a normal life expectancy and maintain an active lifestyle. Much depends upon the state of mind of the individual. My mother is a primary example that MS does not have to control one's life; on the contrary, an individual has the power to control it.

There are several important aspects of living with this disease rather than at its mercy. First, one needs to acknowledge the existence of the disease. Second, an individual should allow time to process the idea of living with it. Third, one should become an active agent in fighting the disease by becoming informed. Fourth, a person should be willing to make adjustments, which may alter one's lifestyle. In conclusion, an individual should not be afraid of multiple sclerosis, but dare to live a fulfilling life.

Bibliography

1)http://www.nationalmssociety.org

2)http://www.ninds.nih.gov/health_and_medical/pubs/multiple_sclerosis.htm

3)http://www.undestandingms.com/ms/articles/cognitive.asp

For further information check out

4)http://web.lexis-nexis.com/universe/document?_

5)http://www.docguide.com/news/content.nfs/NewsPrint/

6)http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?



"Why? The Neuroscience of Suicide": Further Resea
Name: Clarissa G
Date: 2003-02-26 00:53:54
Link to this Comment: 4840


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

While this writer had some rudimentary knowledge of the impact serotonin has on the brain, "Why? The Neuroscience of Suicide" by Carol Ezzell piqued my curiosity about the role that serotonin levels, and the process by which serotonin is absorbed in the brain, play in suicidal patients. The article was recently posted on the Neurobiology and Behavior website as supplemental reading for the spring semester 2003 class. In it, Carol Ezzell weaves her own personal experience with informative reporting on groundbreaking neuroscience research on suicide. Through further research I discovered various articles on a group of scientists from Columbia University studying the differences in the brains of people who have attempted or completed suicide.

It is widely accepted that the level of serotonin present in the brain has a significant effect on the behavior of an individual, specifically on an individual's mood. SSRIs (Selective Serotonin Reuptake Inhibitors) are common medications that treat major depression, thus affecting the mood of the individual and, some would argue, improving the quality of life of people who suffer from clinical depression.

The amount of serotonin in the brain has an effect on an individual's behavior. "Low levels of the chemical are associated with clinical depression." (1) According to an article in Time Domestic entitled "Suicide Check," serotonin may not reach some parts of the brain in adequate amounts in suicide victims. The article cites a study by Dr. John Mann of the Columbia University College of Physicians and Surgeons in New York City. Dr. Mann's study "...focuses on a section of white matter-the orbital cortex-that sits just above the eyes and modulates impulse control. In autopsies of 20 suicide victims, Mann's group found that in almost every case, not enough serotonin had reached that key portion of the brain." (1)

Roughly seven years later, Carol Ezzell revisits Dr. Mann's research with his colleague Victoria Arango. Arango's research, presented in 2001 at a conference of the American College of Neuropsychopharmacology, had determined that "people who were depressed and died by suicide contained fewer neurons in the orbital prefrontal cortex" and that "in suicide brains, that area had one third the number of presynaptic serotonin transporters that control brains had but roughly 30 percent more postsynaptic serotonin receptors." (2) This means that the brain is trying extra hard to deliver whatever serotonin it can produce to the correct part of the brain. This system of serotonin production and absorption is known as the serotonergic system, and it is what Arango believes is deficient in people who attempt or commit suicide. (2)

Arango and Mann are developing a positron emission tomography (PET) test that measures the serotonergic system. The test would monitor the serotonin-using areas of the brain "in patients who have the most skewed serotonin circuitry – and are therefore at highest risk of suicide." (2)

None of these articles in any way suggests that an individual with a serotonin deficiency will be a victim of suicide. Victoria Arango of the New York State Psychiatric Institute says the occurrence of suicide "starts with having an underlying biological risk," but "life experience, acute stress and psychological factors each play a part." (2) The possibility of this predisposition means that in the future, diagnosis and prevention of suicide could become easier.

Today individuals are conscious of genetic traits that give them a higher chance of heart attacks or breast cancer, and they have the ability to alter their behavior (i.e., smoking and nutrition) accordingly. Would the same be applicable to suicide? Before it became a reality, sufferers of this disorder would have choices for taking care of themselves, using therapy and medication as ways of avoiding disastrous consequences. New developments in this area of neuroscience will have an effect on the care given in the future to suicidal patients.

References

1) Bipolar Disorder: Complete Digest of Information
2) ScientificAmerican.com
3) ReutersHealth.com
4) Psychiatric Institute, 2000


Right Before My Very Eyes
Name: Patricia P
Date: 2003-02-26 15:22:56
Link to this Comment: 4846


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

"I'll believe it when I see it" is one of many common catch phrases included in our everyday vernacular. A person who declares this is asserting that they will not be fooled by another's assumptions or perceptions of the world. This understanding raises a great sense of security within us concerning the things that we do see, and inversely, an unavoidable sense of insecurity in those beliefs that are not supported by vision. Do you believe in ghosts? Angels? Out-of-body experiences? Would you believe if you could see them? Maybe not. But it is possible to offer those who are withholding their stamp of approval on things that exist but cannot be seen a better summary of evidence, one which could make the inability to see something an invalid criterion for belief. Could a summary of evidence be compiled to support this claim: our vision is incomplete, incorrect, and can even be so misleading as to create something within the brain that does not exist at all, shedding light on a brain that is more of a visionary and less of a reporter?
Human beings rarely contemplate the significance of their own blind spot, the place where the processes of neurons join together to form the optic nerve; here the brain receives no input from the eye about this particular part of the world. What I discovered while entertaining myself with a simple eye exam, aimed at revealing the capabilities of the brain in the face of the eye's blind spot, was fundamental to my exploration of the trust we place in vision, and so I will explain it briefly. Our brain can ignore a dot that exists on the page and "fill" the spot with the color of its surroundings, no matter what the color. However, it is not that our brain cannot conceive of an image or of a shape to fill this place. Continuing with the experiment, you find that the brain will continue a line that is obstructed by the black dot, covering the sides of the dot in the surrounding color and transforming the image before you into a line within your brain. A line that is absolutely not there. This reveals more than just a weakness in the eye; it reveals an ability of the brain! (1)
John Whitfield, in "A Brain in Doubt Leaves it Out," argues that this is due to the hemispheres of the brain and their disagreements with one another. In short, "The left hemisphere seems to suppress sensory information that conflicts with its idea of what the world should be like; the right sees the world how it really is." (2) This would explain why people with paralysis due to an injury affecting the right side of their brain might deny that they are disabled. More specifically, the parietal lobe can be held responsible for this editing of what the brain wants to see. When it is tampered with, injured patients can witness their legs vanish before their eyes. (2) This further damages the credibility of our perceptions. It is alarming to acknowledge that the brain has the power to create, superimpose, and even remove.
Some still may not be satisfied that, with manipulation of our brains (either by covering one eye to explore our blind spot or by prodding the parietal lobe to cause a disappearing act), we have conclusive evidence to discard sight as a valid criterion for believing or not believing in the existence of anything. This is understandable. However, there are many instances where the brain displays misconceptions and contortions due to a preconceived notion of the world. David Whitaker and Paul V. McGraw reported in Nature Neuroscience (3) that we tend to distort what our brain finds unfamiliar, simply because of a lack of understanding or of a previous model. For example, when people were asked to identify the degree of tilt of italicized letters in one display, and then in a display of its mirror image, an interesting fact about perception was revealed. The mirror image was consistently reported as slanted 2 1/2 degrees more than the letters italicized in the commonly seen clockwise way. However, the same exaggeration of tilt was not made with shapes or characters, leaving the misinterpretation to exist in a place within the brain that deals with "memory and meaning." (3) Without being provoked in any way, the brain continues to distort at a higher level of understanding.
Can a brain be stimulated to see an entire body that does not exist? Helen Pearson found that it can. Through electrode stimulation of the right angular gyrus, patients reported experiences such as this: "I see myself lying in bed, from above." (4) We have established that the mere fact that you may be able to see a body (or a phantom limb) does not mean it exists. In much the same way, not seeing another body or entity, perhaps a ghost or an angel, does not shed enough light on whether or not it actually exists. To question vision's position as the warden of reality, and to come to terms with its capacity to present us with inaccurate, distorted, and often dreamlike perceptions of reality, is vital in opening the window of possibilities to what exists past our noses.
And so we could take from this that all individuals are, to some degree, "blind." Blindness leads the brain to create. In fact, when tested in a large group, people's brains tend to "create" even without the excuse of missing information attributed to the blind spot. And if we go further and take the findings of John Whitfield and Helen Pearson into account, our brains are even capable of making our surroundings act in peculiar ways, or of causing completely new surroundings to appear. We must gain a new sense of self-awareness when we evaluate what exists beyond ourselves. The ability of our brain to ignore and create forces us to examine the fabric of what we consider reality. Beyond the existence of ghosts and angels, we must question whether our entire perception of the outside world is always incomplete, leaving us more open to explore things that our eyes may not see. There may be an argument for believing something, even without "having seen it with your own eyes."

References

1)Serendip Vision Exam, a site within the course's homepage that explains and tests the boundaries of our blind spot.
2)Article, "A Brain in Doubt Leaves it Out," by John Whitfield, concerning the behaviors of the brain when it is unsure and the origins of those reactions.
3)Article, "A Different Angle," on studies of the brain's response to degrees of change in similar but mirror-image italic print.
4)Article, "Electrodes Trigger Out-of-Body Experiences," on the capabilities of the brain to create whole figures.


Attention Deficit/ Hyperactivity Disorder: the int
Name: Jennifer H
Date: 2003-02-26 20:25:26
Link to this Comment: 4851


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

One summer afternoon, I recall playing in my mother's office while a client's mother, beaming with life, emotion, and energy, proclaimed that her son was 'cured'. I was a bit baffled at the time, because I had seen the boy every day for the past month and he didn't appear to be 'broken', at least physically. My mother, Dr. Janice Hansen, had done it: she had taken the boy off prescription medication and re-patterned his 'pathways', thereby curing the boy, who had been diagnosed with Attention Deficit/Hyperactivity Disorder (AD/HD). My mother humbly explained to me that she had just given that child a 'chance' to reach his full potential and that his 'pathways' were like a puzzle in which she had simply inserted the missing piece. Granted, I was only 7 years old when this occurred, yet it has prompted me to question the intentions and effects of medications on the body and the mind. This web discussion expresses my growth and research as an intellectual exploring the intricate web of diagnostic criteria for AD/HD, and questions their relationship to the behavioral criteria of a Gifted Child.

Attention Deficit/Hyperactivity Disorder is a neurobiological disorder whose origin has yet to be determined by research. Still, the research that has been conducted seems to support the notion that it is not caused by environmental issues (4). There have been studies of the stages of fetal development which I believe can aid the scientific community's understanding of this disorder, because there is a connection between premature birth and consequent neurobiological damage. Neuralization is one of the later systems to form during the development of the embryo (5); therefore, the embryo's neurological connections would be affected by premature birth. Although the origin of this disorder is important, the question addressed here is whether there is misuse of treatment (prescription medications) and proper diagnosis of the disorder.

My great expectation of encountering studies that conveyed the overprescription (misuse) of medication for AD/HD fueled my quest. Conversely, according to the Journal of the American Medical Association (1), a study indicated "little evidence of widespread over-diagnosis or misdiagnosis of AD/HD, or of widespread overprescription of methylphenidate (a common drug used to treat AD/HD)". Yet the authors also stipulated that some cases may have been misdiagnosed due to insufficient evaluation of the child in question. Could these experimenters be implying that the medical diagnostic criteria on which the medical/psychiatric field bases its evaluations are inaccurate?

The DSM-IV diagnostic criteria for Attention Deficit/Hyperactivity Disorder illustrate the symptoms that the medical community utilizes to make its diagnoses (2). The AD/HD diagnosis is divided into three categories: those who exhibit symptoms of inattention, those who display symptoms of hyperactivity-impulsivity, and those who demonstrate symptoms of both. However, the behavioral symptoms utilized to diagnose AD/HD are not specific to this disorder alone. I have found that the symptoms used as diagnostic criteria for identifying individuals with AD/HD and those utilized to diagnose 'gifted' children are similar.

Research indicates that in many cases, a child is diagnosed with AD/HD when in fact the child is gifted and reacting to an inappropriate curriculum (Webb & Latimer, 1993). The key to distinguishing between the two is the pervasiveness of the "acting out" behaviors. If the acting out is specific to certain situations, the child's behavior is more likely related to giftedness; whereas, if the behavior is consistent across all situations, the child's behavior is more likely related to ADHD. It is also possible for a child to be BOTH gifted and ADHD (3). The following compiled lists highlight the similarities and where the problem in diagnostic criteria exists:

Characteristics of Gifted Students Who Are Bored:
* Poor attention and daydreaming when bored
* Low tolerance for persistence on tasks that seem irrelevant
* Begin many projects, see few to completion
* Development of judgment lags behind intellectual growth
* Intensity may lead to power struggles with authorities
* High activity level; may need less sleep
* Difficulty restraining desire to talk; may be disruptive
* Question rules, customs, and traditions
* Lose work, forget homework, are disorganized
* May appear careless
* Highly sensitive to criticism
* Do not exhibit problem behaviors in all situations
* More consistent levels of performance at a fairly consistent pace
(Cline, 1999; Webb & Latimer, 1993)


Characteristics of Students with ADHD:
* Poorly sustained attention
* Diminished persistence on tasks not having immediate consequences
* Often shift from one uncompleted activity to another
* Impulsivity, poor delay of gratification
* Impaired adherence to commands to regulate or inhibit behavior in social contexts
* More active, restless than other children
* Often talk excessively
* Often interrupt or intrude on others (e.g., butt into games)
* Difficulty adhering to rules and regulations
* Often lose things necessary for tasks or activities at home or school
* May appear inattentive to details
* Highly sensitive to criticism
* Problem behaviors exist in all settings, but in some are more severe
* Variability in task performance and time used to accomplish tasks.
(Barkley, 1990; Cline, 1999; Webb & Latimer, 1993)
Compiled from (3)
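The pervasiveness heuristic described above (situational acting out suggesting giftedness, acting out across all settings suggesting AD/HD) can be sketched as a toy decision rule. This is purely illustrative; the function, its name, and its inputs are my own invention and not a clinical instrument:

```python
# Toy sketch of the pervasiveness heuristic described above (not a clinical tool).
def screen(acting_out_by_setting: dict[str, bool]) -> str:
    """Given observed acting-out behavior per setting, apply the rule of thumb:
    behavior in every setting suggests AD/HD-like pervasiveness; behavior in
    only some settings suggests a situational (possibly giftedness) response."""
    observations = acting_out_by_setting.values()
    if all(observations):
        return "pervasive: consistent with AD/HD; evaluate further"
    if any(observations):
        return "situational: consider giftedness/curriculum mismatch"
    return "no acting out observed"

print(screen({"home": True, "school": True, "playground": True}))
print(screen({"home": False, "school": True, "playground": False}))
```

Even this crude sketch makes the essay's closing worry concrete: the rule's output depends entirely on which settings the observer samples and how honestly behavior is recorded, which is exactly where the subjectivity lies.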

The diagnostic criteria for AD/HD and for Gifted Children are inadequate, and alternative methods for distinguishing between the two similar sets of behavioral symptoms need to be determined and perfected in order to ensure that the treatment received is appropriate to the condition at hand. The key to distinguishing between these similar behavioral symptoms is observation prior to diagnosis; however, this places a great deal of power and responsibility in the hands of the doctor/observer, and it is a highly subjective process. How safe does this make you feel?


References

1) Journal of the American Medical Association, a rich resource of articles regarding AD/HD; the article entitled 'Little evidence found of incorrect diagnosis or over-prescription for AD/HD' is of interest.

2)Methylphenidate and ADHD webpage, DSM-IV diagnostic criteria enclosed

3)ERIC Clearinghouse on Disabilities and Gifted Education, Great resource for diagnostic criteria for gifted children


4) Deficit Disorder Organization homepage , Questions on possible theories of origin.

5) Wolpert, Lewis. Principles of Development-2nd edition. Oxford University Press, publisher, 2001.


Autism: A lack of the I-function?
Name: Kate Shine
Date: 2003-02-27 03:25:24
Link to this Comment: 4857



Autism: A lack of the I-function?
Name: Kate Shine
Date: 2003-02-27 03:34:55
Link to this Comment: 4858


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In the words of Uta Frith, a proclaimed expert on autism, autistic persons lack the underpinning "special feature of the human mind: the ability to reflect on itself." (3) And according to our recent discussions in class, the ability to reflect on one's internal state is the job of a specific entity in the brain known as the I-function. Could it be that autism is a disease of this part of the mind, a damage or destruction to these specialized groups of neurons which make up the process we perceive as conscious thought? And if this is so, what are the implications? Are autistic persons inhuman? Which part of their brain is so different from "normal" people, and how did the difference arise?

The specific array of symptoms used to diagnose an individual as autistic does not appear as straightforward as Frith's simple statement. It seems hard to fathom that they could all arise from one similar defect in a certain part of the brains of all autistics. Examples of these symptoms include a preference for sameness and routine, stereotypic and/or repetitive motor movements, echolalia, an inability to pretend or understand humor (3), "bizarre" behavior (4) and use of objects (2), lack of spontaneity, excellent rote memory (2), folded, square-shaped ears (3), lack of facial expression, oversensitivity, lack of sensitivity, mental retardation, and savant abilities.

Obviously not all autistics exhibit all of these characteristics. Psychologists, however, often believe certain symptoms to be more indicative of the condition than others. The word autism stems from a Greek root meaning, roughly, "selfism." Autistics are described as very self-absorbed, and some academics refer to a short list of three characteristics to diagnose them: impairment of social interaction, impairment of communication without gestures, and restrictive and repetitive interests. (3) Tests have also been designed in an attempt to diagnose the condition decisively. One of these is the Sally-Anne test, in which autistic children usually show an inability to understand the perception of another child in a scenario or to understand that she could believe something false. (4) Another test involves two flashing lights: autistic children will look at a light if it flashes, but while other children show a tendency to move their eyes toward a new light, autistic children will not. (3)

Observation of the behavior of autistics makes it clear that they do interact with their world and understand certain aspects of it to a degree. However they often appear intensely focused on one perception or sensory experience and are unable to integrate multiple factors of emotion, intention, or personality in the way most people do. As a result of this inability to perceive order in all of the circumstances of their environment, they often find themselves in a world that seems very chaotic and random.

Jennifer Kuhn hypothesizes that many of the symptoms of autism are defense mechanisms stemming from a feeling of helplessness to control a situation. (3) Rats, when forced to jump at one of two doors that are randomly chosen either to open onto food or to stay closed and hurt the rat, will always pick the same door (Lashley and Maier, 1934). Thus the repetitive behavior of autistics, their insistence on routine, and their echolalia can all be explained as attempts to calm themselves by achieving an inner order. They are effectively forsaking experimentation in an attempt to avoid failure. This might be evidence for damage to an I-function, as we have discussed that observation and experimentation are innate behaviors of humans, and that they require personal reflection.

However, many autistics do not exhibit behaviors such as echolalia all the time but rather more frequently when they are in an unfamiliar environment. (1) And I have observed in myself a tendency to get a nervous repetitive motion in my foot in certain situations or even a rhythmic movement of my head when I am concentrating very intensely, but I am still confident of my ability to reflect on my own thoughts.

One of the most obvious ways to find out if autistics share a collective brain abnormality that causes their unique array of characteristic behaviors is to look into the structure of the brain itself. In studies of autistic brains using scanning images as well as autopsies, scientists have generally not been able to agree on a single consistent abnormality. One observation has been that the neurons in the limbic systems of autistic subjects are often smaller and more densely packed than those of other individuals. Monkeys whose limbic systems were removed in experiments showed weaknesses in social interaction and memory, as well as exhibiting locomotor stereotypies similar to those of autistics. (2)

Another area that often exhibits abnormality in autistic brains is the cerebellum, especially in the CA1 and CA4 regions, where there are fewer dendritic arbors and fewer Purkinje and granule cells. (2) Purkinje cells are involved in regulating cell death, and where they are few in number other cells may crowd each other and fail to develop networks properly. This could account for autistics' tendency to be overwhelmed by stimulation. The cerebellum is also responsible for muscle movement, and other studies have found significantly fewer facial neurons (400, as compared to 9,000 in a control brain), which could also account for the absence of facial expression. (3) Other areas of the brain which some studies have argued are abnormal in autistic brains are the left hemisphere, which controls language (5), (6); the temporal lobes, which control memory (2); and the brain stem (3).

However, many of these findings are still considered controversial or inconclusive, and some may apply not only to autism but to many other developmental disorders. And even if these observations are accurate, there is still no explanation for why these differences in brain structure occur together.

One very convincing hypothesis to explain these occurrences has been proposed by Patricia M. Rodier. Inspired by the fact that relatives of autistics are much more likely than the general population to be diagnosed with the disorder, Rodier became convinced that some definite genetic factor must contribute to it. However, an identical twin of an autistic sibling will also be autistic only about 60% of the time, which makes an argument for environmental influence as well.

She then came upon a study of children whose mothers were exposed to the drug thalidomide during pregnancy, and found that a full 5% of those children were diagnosed as autistic. By combining information about the drug with other birth defects in the ears and faces of the children, she was able to estimate that autism began 20 to 24 days after conception, when the ears and the first neurons in the brain stem are starting to grow. When she examined the brains of previously autopsied autistics, she noticed that they showed evidence of particularly short brain stems, and often a small or even absent facial nucleus and superior olive as well.

This information suggests that the genetic factor which causes these abnormalities in the primitive brain could then go on to cause the secondary abnormalities in the various more sophisticated parts of the brain already mentioned. Rodier has identified a variant of a gene known as Hoxa1 which is present in 40% of autistics but only 20% of non-autistics, and which is found more often in relatives who are autistic than in those who are not. Although this is by no means a conclusive argument that autism is genetic, there could be other variants of that gene which contribute to autism, or as-yet-undiscovered genes which decrease the risk of it, that could account more strongly for genetic determination. (3)
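
Those percentages can be turned into a rough effect size. Here is a back-of-envelope sketch (purely illustrative arithmetic; the 40% and 20% figures are the ones quoted above, and the odds-ratio framing is my own, not Rodier's):

```python
# How much does carrying the Hoxa1 variant raise the odds of autism,
# given the frequencies quoted in the text? (Illustrative only.)
p_variant_autistic = 0.40   # variant present in 40% of autistics
p_variant_control = 0.20    # and in 20% of non-autistics

# odds = p / (1 - p); the odds ratio compares the two groups
odds_autistic = p_variant_autistic / (1 - p_variant_autistic)
odds_control = p_variant_control / (1 - p_variant_control)
odds_ratio = odds_autistic / odds_control

print(round(odds_ratio, 2))  # 2.67
```

An odds ratio of roughly 2.7 fits the essay's point: the variant is associated with autism, but far too weakly to be a sole cause.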

So could there be specific genes which actually code for the I-function? This is possible, but there is still certain evidence for environmental factors causing some of the symptoms of autism. In-utero exposure not only to thalidomide but also to rubella, ethanol, and valproic acid has been linked to the disorder. And there is a significant population of people who believe that eliminating the glutens and caseins found in milk, wheat, and other products relieves the buildup of certain chemicals in the brain and makes autistics better able to fit in with society. (8) A recent study of abused children has also claimed that the left hemisphere of the brain, specifically areas in the limbic system dealing with memory and expressing language, can be overexcited and damaged as a result of intense sexual or physical abuse. (5) Perhaps all of these environmental factors do additional damage to the different sectors of the I-function.

What does all of this say about autistic people? Do the characteristic differences in their brains necessarily indicate that they are lacking this "I-function" or awareness part of the mind that makes someone human? I do not think so. The fact that rats and monkeys can show similar symptoms when these areas of their brains are damaged destroys this argument. The self-awareness or I-function does not appear to be limited to humans. And it may just be that the I-function operates differently or is damaged in the minds of autistics. There are huge degrees of severity in what are called the "autism spectrum diseases." These can be defined to include dyslexia, ADHD, Asperger's syndrome, pervasive developmental disorder, and more, with autism being the most severe. And any person can have symptoms of autism without being fully "autistic." But even severe changes in the I-function system should not necessarily be considered defects.

It is entirely possible that autistic people have thoughts and self-reflections which they cannot communicate through language or through society's "normal" modes of expression. While they do seem to have a diminished capacity to reflect on as much of the world at once as other people do, it may be that what they do reflect on is just as meaningful, or more so. Autistic savants, for example, show an amazing ability in one area such as music, art, or calculation which often surpasses what it seems a "normal" person could ever achieve. And these creations are not just byproducts of the rote memory and habit that some scholars insist is all that exists in autistics. It is my personal opinion, after viewing the art of the autistic savant Richard Wawro (9), that the work and the artist are infused with at least as much emotion and unique perspective as any "normal" person's.

Frith also argues that autistics are "...not living in a rich inner world but instead are victims of a biological defect that makes their minds very different from those of normal individuals." (4) But who is it that has the authority to decide what a "normal" individual is? The defense mechanisms that autistics display are evidence that they do experience distress and anxiety, but Freud argued that non-autistic or what Frith would call "normal" individuals also display a number of common defense mechanisms as a result of the anxieties in their lives, and this view is still widely accepted. Just because these may appear more normal or subtle to us does not mean they are not important.

In a society of autistics, we might feel a great deal of incoherence at how they organize their world, and we might find ourselves the ones unable to communicate. In the movie "Molly," which concerns a fictional autistic woman who becomes normal and reflects on her experiences, there is a speech that illustrates this point beautifully. "In your world, almost everything is controlled....I think that is what I find most strange about this world, that nobody ever says how they feel. They hurt, but they don't cry out. They're happy, but they don't dance or jump around. And they're angry, but they hardly ever scream because they'd feel ashamed and nothing is worse than that. So we all walk around with our heads looking down, but never look up and see how beautiful the sky is." (10) If autistic people can possibly use their minds to see a beauty we cannot imagine, who are we to call them the abnormal victims? If they can only see the world in a sincere way and not understand irony or humor, is that necessarily a bad thing? We cannot truly know what autistics are aware of internally unless we have experienced their world.


References


1) Stereotypic Behaviors as a Defense Mechanism in Autism , from Harvard Brain website, Jennifer Kuhn, 1999.

2) Neurobiological Insights into Infantile Autism , Amy Herman, 1996.

3) Rodier, Patricia M. "The Early Origins of Autism." Scientific American February 2000: 56-63.

4) Frith, Uta. "Autism." Scientific American June 1993, reprinted 1997: 92-98.

5) Teicher, Martin H. "The Neurology of Child Abuse." Scientific American March 2002: 68-75.

6) Autism and Savant Syndrome, web paper by Sural Shah.

7) Considerations of Individuality in the Diagnosis and Treatment of Autism, web paper by Lacey Tucker.

8) Diagnosis: Autism, Mothering Magazine, Patricia S. Lemer, 2003.

9) Paintings by Richard Wawro, an online gallery of an autistic savant's work.

10) Duigan, J. (Director). (1998) Molly. [Video-tape]. Santa Monica, CA: Metro-Goldwyn-Mayer Pictures Inc.


Bipolar Disorder: An Overview
Name: Kate Tucke
Date: 2003-02-27 10:05:08
Link to this Comment: 4861


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Bipolar disorder, also commonly known as manic-depressive disorder, affects about two million American adults, roughly 1% of the population aged 18 or over.(1) One out of every five people with bipolar disorder commits suicide.(2) Obviously this is a serious illness, and yet very little is understood about it. The causes are unclear, the symptoms vary widely, and the genetic relationship is still fuzzy. There is no cure, but with proper treatment most individuals can live normal lives.

Bipolar disorder is a mood disorder, which means that those who have it experience extreme mood swings. There are two types of mood disorders, unipolar depressive disorders (what most people know as simply depression) and bipolar disorders. Bipolar disorder is characterized by at least some abnormal elevation of the mood as well as mood cycles over time.(3) There are four different types of cycles that a bipolar patient might experience. These are mania, hypomania, depression, and mixed episodes.

Mania begins with heightened energy and creativity and develops into a feeling of euphoria or of extreme irritability. People experiencing a manic episode often lack insight into their illness, deny that there is a problem, and get very angry with anyone who points out the mania. To be considered a manic episode, these conditions must last at least a week and interfere with the normal functioning of the person's life. In addition there must be at least four of the following symptoms: needing little sleep but having lots of energy, talking very fast, having racing thoughts, being easily distracted, having an inflated feeling of greatness and importance, or being reckless without considering consequences. Sometimes psychotic symptoms are also present in the form of hallucinations and delusions.(4)

Hypomania is a milder form of mania. The person experiences an elevated mood and is very productive. Hypomania feels good, so patients often stop taking their medications during a hypomanic episode. This is problematic because hypomania often escalates to mania or depression.

A major depressive episode must last at least two weeks and must make it difficult for the person to function. The patient experiences feelings of sadness and loses interest in normally enjoyable activities. S/he must also experience at least four of the following symptoms: difficulty sleeping or sleeping too much, difficulty eating or overeating, problems concentrating and making decisions, feeling slowed down or too agitated to sit still, feeling worthless, guilty or having low self-esteem, or thoughts of suicide or death. During depression, hallucinations and delusions can also occur.

A mixed episode has symptoms of both mania and depression, either simultaneously or alternating frequently within the same day.(5)
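
The duration and symptom-count thresholds described above can be summarized in a toy sketch (the function name and return labels are my own invention, and this is only a restatement of the essay's description, not actual diagnostic criteria):

```python
def classify_episode(elevated_mood, depressed_mood, days, n_symptoms):
    """Toy restatement of the thresholds described in the text.
    Not clinical criteria; purely illustrative."""
    if elevated_mood and depressed_mood:
        # symptoms of both poles at once or in rapid alternation
        return "mixed"
    if elevated_mood:
        # mania: at least a week of elevated mood plus >= 4 listed symptoms;
        # milder or shorter elevation is treated here as hypomania
        return "mania" if days >= 7 and n_symptoms >= 4 else "hypomania"
    if depressed_mood:
        # major depression: at least two weeks plus >= 4 listed symptoms
        if days >= 14 and n_symptoms >= 4:
            return "major depression"
    return "subthreshold"

print(classify_episode(True, False, 10, 5))   # mania
print(classify_episode(False, True, 21, 4))   # major depression
```

Even this oversimplified sketch shows why diagnosis is hard: a short or mild elevated period falls into the hypomania bucket, which, as noted below, patients and doctors frequently overlook.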

There are different patterns of episodes which lead to different classifications of bipolar disorder. Bipolar I Disorder consists of manic or mixed episodes, and almost always depressive episodes too. Bipolar II Disorder consists only of hypomania and depression. This type is harder to recognize because during a hypomanic episode patients just seem more productive. People frequently overlook the hypomania and simply seek treatment for depression. Anti-depressants, however, could trigger a manic episode or make cycles more frequent. Rapid-Cycling Bipolar Disorder occurs when a person experiences at least four different episodes in a year, in any combination. This accounts for five to fifteen percent of bipolar patients and is more common in women.(6)

Depending on the person, bipolar disorder can manifest itself in different ways. Some people have equal periods of mania and depression, while others experience one more frequently. The average person has four episodes in the first ten years. Men are more likely to start with mania, while women are more likely to start with depression. The cycling can sometimes be seasonal. Episodes can last days, months, or years. Without treatment, mania usually lasts a few months, while depression can last for over six months. Some people have normal periods in between episodes, while others constantly experience mild ups and downs.(7)

The treatment for bipolar disorder can be aimed either at ending the current episode or at preventing future episodes. Treatment can include medications (mood stabilizers, antidepressants or antipsychotics), education, or psychotherapy. Medications are almost always used in treatment. Mood stabilizers provide relief from acute episodes of both mania and depression and can also be used to prevent them. Lithium was the first known mood stabilizer and is still commonly prescribed. The other most common mood stabilizer is divalproex, an anticonvulsant. It was originally used to treat seizures, but has been found to be effective in managing both euphoric and mixed episodes. It is particularly effective for rapid-cyclers and for situations complicated by substance abuse or anxiety disorders. It is often preferred because it can be started at higher doses and therefore takes effect faster than lithium. Other anticonvulsants used as mood stabilizers are carbamazepine, lamotrigine, gabapentin, and topiramate.(8) Antidepressants and antipsychotics are also prescribed when those symptoms are present.

The cause of bipolar disorder is largely unknown. The neurotransmitters in the brains of bipolar patients differ from those of unaffected individuals, which suggests that there may be an abnormality in the genes that regulate neurotransmitters.(9) Bipolar disorder tends to run in families. A number of genes have been identified that could be linked to it, which implies that several different biochemical problems may be occurring at the same time.(10) Biochemical, neurophysiologic, and sleep abnormalities have been linked to bipolar disorder, but none of them are specific to the disorder. It is unknown how unipolar, bipolar I, and bipolar II are related to each other.(11) It is possible that bipolar disorder makes people more vulnerable to physical and emotional stress, which is why life events can trigger episodes, but these stresses are not the cause of the disorder. Rather, an "inborn vulnerability interacting with an environmental trigger" starts an episode.(12)

One of the large problems with bipolar disorder is that many who suffer from it go undiagnosed. In a 1999 university study, 40% of the bipolar subjects were previously misdiagnosed as unipolar. As mentioned previously, this is dangerous because the use of anti-depressants alone can trigger a manic episode or lead to rapid-cycling. It is very important to identify individuals who suffer from bipolar disorder. 12% will commit suicide, usually during a depressive episode.(13) Many others will struggle with substance abuse as a way to self-medicate. One study showed that 60% of hospitalized bipolar patients had some form of lifetime substance abuse. 48.5% used alcohol and 43.1% used drugs.(14)

Many famous people have had bipolar disorder, including Teddy Roosevelt, Robert Schumann, Vincent van Gogh, and Sylvia Plath.(15) Manic episodes can trigger great creative periods, some of which are responsible for great works of art, music and literature. Bipolar disorder is obviously a serious illness which, if left untreated, will at the very least disrupt a person's life. Fortunately, there are many ways to treat bipolar disorder, even though the causes of the illness are not understood. Treatment can be difficult, as the illness is often misdiagnosed. Even once a proper diagnosis is obtained, patients may need to try different combinations of treatment before finding success.

References

(1)Mayo Clinic, an article about treatment of Bipolar Disorder

(2)Goodwin & Jamison, Manic Depressive Illness, p. 228, referenced on a personal page with a summary of personal experiences with Bipolar Disorder

(3)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(4)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(5)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(6)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(7)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(8)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(9)Mayo Clinic, an article about treatment of Bipolar Disorder

(10)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(11)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance

(12)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(13)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance

(14)Substance Abuse, an article linking Substance Abuse and Bipolar Disorder

(15)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance


Depression: All in our heads?
Name: Stephanie
Date: 2003-03-02 01:45:19
Link to this Comment: 4888


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Is depression a figment of the Western world's cultural imagination? Do we create madmen of those who cannot keep up with the go-go-go attitude that defines American culture? Even if the condition we know as depression can be traced back to a certain state of the brain, is it culture that considers this state of the brain to be a mental illness? How do culture and the brain conspire together to produce the condition known as clinical depression?

To appreciate the complexity of the current understanding of depression, one must first look at how attitudes toward depression in the Western world have changed over time. During the Renaissance, physicians believed that melancholy, a condition characterized by sullenness and fits of anger, was caused by an imbalance of fluids in the body. An improper balance of blood, phlegm, yellow bile and black bile in the body of the suffering individual was thought to be the cause of melancholy and a host of other mental illnesses (1). In the late 1800s a German psychiatrist developed the current classification of a condition of low spirits and mental gloom, defined separately from other mental illnesses. It was not until 1905 that the word "depression" was used to describe this collection of symptoms (2).

The early 20th century was dominated by Freud's theory that depression was a disease of the mind. Self-help proponents like Tony Schirtzinger, who suggests that "we [modern Americans] got depressed because we were like kids in a candy store", demonstrate that this idea still exists today (3). During the mid-20th century the medical world began to consider depression not only a disease of the mind, but also a disease of the brain.

Around this time, drugs began to be used to treat depression. Non-medical treatments were also developed, including psychoanalysis that focused on forcing the patient to confront irrational beliefs, and cognitive therapy that attempted to stop the negative feelings that allegedly led to depression. Beginning in the 1980s, antidepressant drugs began to target serotonin, working by inhibiting the re-uptake of this neurotransmitter (4).

Today, the medical world sees depression as primarily a disorder of the neurotransmitters. It is thought that in depressed people serotonin and norepinephrine levels are decreased and corticotropin-releasing factor (CRF) levels are increased (5). Serotonin is a neurotransmitter that helps one to control powerful feelings such as fear and anger. In studies, animals with less serotonin were more likely to engage in impulsive, aggressive behavior (6). CRF is known as the "stress hormone". It prepares the body to deal with a dangerous situation. New research suggests that depressed people have a larger amount of this stress-inducing chemical in their spinal fluid (7).

This research might lead someone to believe that depression is strictly a neurobiological problem, but it is impossible to separate depression from culture. Culture influences every area of the disease: from diagnosis to treatment. Depression is diagnosed by a series of symptoms whose very definitions and descriptions require a cultural context. The interpretation of these symptoms is also dependent on culture. Grcbic writes that "in many parts of the world, people with symptoms relating to depression or anxiety, often do not view their problems as needing psychiatric or psychological intervention" (8). This is not because the symptoms are not real, but because "what is seen as abnormal and harmful in one culture, may be adaptive and accepted in another." (9).

The new wave of immigration in the United States is bringing with it new mood disorders, giving reason to rethink the way that mood disorders and other mental illnesses are thought about. There are quite a few culture-bound and culture-specific illnesses, most of them mental diseases. Culture-bound illnesses are defined as illnesses that are recognized in non-Western cultures but "do not have a one to one correspondence with a disorder in the west" (10). Anorexia nervosa is an example of an illness that anthropologists consider to be a culture-bound disease. The cultures of both North America and Western Europe are structured in such a way as to produce an illness that is unique to these cultures (10). This is not to say that biology is not involved. As more research is done in the area of eating disorders, scientists are suggesting that serotonin may play a key part in the development of anorexia and bulimia.

That depression exists cannot be denied, but it seems that the cutoff between sadness and depression is man-made. This theory does not go against the neurobiological evidence. If the "brain=behavior" theory is right, then what doctors would classify as normal sadness would also involve some sort of change in brain makeup. Someone has to draw the line. Why is it that in the U.S. depression is treated but not hwa-byung (Korean suppressed anger syndrome) (11)? Is it because the brain structures of Korean nationals are uniquely susceptible to hwa-byung? This might be true, but more likely it is because Western medicine does not recognize that particular set of symptoms as an illness, or because culture actually creates a set of circumstances that allows the brain to be changed in such a way as to create the illness.

Extending this to depression, it can be said that Western society facilitates depression by creating the circumstances for the illness to manifest itself and by giving a particular set of symptoms a name. The realization that depression is more than just a disease of neurotransmitters should not take away from the seriousness of the illness. Perhaps a more comprehensive view of depression can lead to more holistic treatments. It might also help us to take seriously the symptoms of illnesses that we might not have heard of. Often treatment of depression does involve both medication and therapy, but understanding that depression is in some ways a social construction might empower sufferers to fight it. Depression attacks its victims from more than one angle, and it needs to be fought that way.

References


1) Clinical Depression Then and Now

2) Oxford English Dictionary

3) Depression in the Culture

4) A Brief History of Depression

5) Depression

6)Serotonin and Judgement

7) Depression and Stress Hormones

8) Impact of Culture and Stigma

9) Culture-bound Syndromes, Cultural Variations, and Psychopathology

10)Glossary of Culture-bound Syndromes

11) Kersham, Sarah. "Freud Meets Buddha." The New York Times 18 Jan. 2003:B1.


The Effects and Usages of Psilocybe Mushrooms
Name: Lara Kalli
Date: 2003-03-06 07:58:21
Link to this Comment: 4969


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

One of many naturally-occurring hallucinogens (or "entheogens", the term preferred by those who experiment frequently with these chemicals), psilocybe mushrooms have been in use for thousands of years as both a recreational and mind-expanding tool. Native Americans in both Central and South America regularly used them in religious ceremonies; they were believed to have divine properties. Popularized by the psychedelic movement of the 1960's, so-called "magic mushrooms" are now a staple of modern counterculture. (1)

There exists very little definitive or concrete knowledge of the full effects of psilocybe mushrooms on the brain. The primary active chemical in these mushrooms, 4-phosphoryloxy-N,N-dimethyltryptamine or psilocybin, is a 5-HT2A and 5-HT1A post-synaptic receptor agonist. (2) In other words, the chemical structure of psilocybin is similar to that of the neurotransmitter serotonin, and it appears to take effect by binding to and activating serotonin's receptors. (3) Once ingested, psilocybin is eventually broken down into 4-hydroxy-N,N-dimethyltryptamine or psilocin, which is approximately 1.4 times more potent by weight and a more unstable compound than psilocybin. (3)
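
That 1.4 figure has a simple chemical rationale worth sketching: psilocybin is converted to psilocin mole for mole (the phosphate group is removed), so the potency-by-weight ratio should roughly equal the ratio of the two molecular weights. The molecular weights below are my own addition, not taken from the sources cited here:

```python
# Sanity check on the ~1.4x potency-by-weight figure quoted above.
# Equal molar amounts are delivered after dephosphorylation, so potency
# per gram should scale with molecular weight. (Illustrative only;
# molecular weights are my addition, not from the text.)
MW_PSILOCYBIN = 284.25  # g/mol, C12H17N2O4P
MW_PSILOCIN = 204.27    # g/mol, C12H16N2O

ratio = MW_PSILOCYBIN / MW_PSILOCIN
print(round(ratio, 2))  # 1.39
```

This lands close to the ~1.4 potency factor the sources report, which is consistent with psilocybin acting largely as a prodrug for psilocin.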

The most common way to use psilocybe mushrooms is simply to eat them, although it is also viable to drink them in a tea. Once consumed, the user generally begins to feel the effects about thirty to sixty minutes later. The mushroom "trip" usually lasts four to six hours. As with all entheogens, the specific effects of mushrooms vary widely from person to person, but usually the trip begins with feelings of vague unrest or anticipation and a sense that reality has been undefinably altered or augmented in some way, often accompanied by nausea or general gastrointestinal discomfort. Once the trip is fully underway, one may experience quickly changing moods – often, the user will start laughing for no reason whatsoever – confusion, increased mental and physical energy, and most notably intense feelings of spiritual awareness, insight or revelation. Open-eye visuals, such as the appearance of moving patterns all over one's surroundings, are common at higher doses; closed-eye visuals will occur in almost every case. There is usually a two- to six-hour comedown period in which the user is not actually tripping but continues to have a sense of reality having been altered somehow. Sleep is also very difficult during this period. (1)

Since mushrooms have historically never been the object of a widespread popular usage fad, as LSD was during the 1960s, they have received considerably less media, governmental and scientific attention; thus, mushrooms might be regarded as a less significant substance. However, some recent studies have shown that psilocybin may in fact be an extremely useful tool in research on various forms of mental pathology. A study done in a hospital in Switzerland showed that stimulation of the receptors upon which psilocybin acts may moderate the brain's excessive release of dopamine, which is characteristic of acute psychosis. (2) An even more recent set of studies examined the effects of psilocybin and the dissociative anaesthetic ketamine on the AX-Continuous Performance Task – a test that measures the subject's ability to process auditory and visual context-dependent information – and on the generation of mismatch negativity – the part of the auditory event-related potential that is induced by infrequent changes in a repetitive sound. (5), (6) From these data, scientists could begin to distinguish and understand the neuropharmacology of some of the abnormal cognitive processes of schizophrenia. It is believed that psilocybin will prove extremely useful in the gradual understanding of the mechanisms of this disease and other forms of psychosis.

Furthermore, more experienced recreational users of psilocybe mushrooms view them as an invaluable tool for the acquisition of self-knowledge or universal understanding. It can hardly be denied that the effects of these mushrooms – and entheogens in general – have the potential to teach the user a great deal about himself, due to the deeply personal and introspective nature of the experiences they generate. In Psilocybin: The Magic Mushroom Grower's Guide, Terence McKenna expressed his belief that psilocybe mushrooms were the mechanism by which humans would eventually evolve into a higher state: "...it is the occurrence of psilocybin and psilocin in the biosynthetic pathways of my living body that opens for me and my symbiots the vision screens to many worlds. You as an individual and Homo sapiens as a species are on the brink of the formation of a symbiotic relationship with my genetic material that will eventually carry humanity and earth into the galactic mainstream of the higher civilizations." (7)

Psilocybin and psilocin are currently Schedule I substances under the Controlled Substances Act, meaning that the Drug Enforcement Administration has deemed them to be highly dangerous and without medical value. (4)

References

1. The Vaults of Erowid, an excellent and comprehensive resource on many psychoactive substances, not anti-drug use

2. The Good Drug Guide, an article on psilocybin

3. The Lycaeum, another comprehensive, not anti-drug use resource

4. homepage for the Drug Enforcement Administration

5. an article on the relationship between the effects of psilocybin and certain cognitive defects in schizophrenia

6. information about mismatch negativity

7) The Mushroom Speaks, Terence McKenna's theories about the nature of psilocybe mushrooms



Feeling SAD? Let There Be Light!
Name: Rachel Sin
Date: 2003-04-07 22:29:26
Link to this Comment: 5305


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

It's wintertime, and you are gathered for the holidays with all of your family and friends. Everything seems like it should be perfect, yet you are feeling very distressed, lethargic and disconnected from everything and everyone around you. "Perhaps it is just the winter blues," you tell yourself as you delve into the holiday feast, aiming straight for the sugary fruitcake before collapsing from exhaustion. However, the depression and other symptoms that you feel continue to persist from the beginning of winter until the springtime, for years upon end without ceasing. Although you may be tempted to believe that you, like many millions of other Americans, are afflicted with a case of the winter blues, you are most likely suffering from a more severe form of seasonal depression known as Seasonal Affective Disorder, or SAD. This form of depression has been described as a form of a unipolar or bipolar mood disorder which, unlike other forms of depression, follows a strictly seasonal pattern. (5).

During the winter, many of us suffer from "the winter blues," a less severe form of seasonal depression than SAD. Still others have a pre-existing condition, such as pre-menstrual syndrome or depression, which is exacerbated by the coming of winter (2). In general, many people suffer from some form of sporadic depression during the wintertime. We may feel more tired and sad at times. We may even gain some weight or have trouble getting out of bed. Over 10 million people in America, however, may feel a more extreme form of these symptoms. They may feel so constantly lethargic and depressed that social and work-related activities are negatively affected. This more extreme form of the "winter blues" is SAD. Typical SAD symptoms include sugar cravings, lethargy, depression, weight gain, and a greater need for sleep (1). Onset of these symptoms usually occurs in October or November, and the symptoms disappear in early spring. Frequently, people who suffer from SAD react strongly to variations in the amount of light in their surrounding environment. Patients with SAD often note that the farther north they live, the more distinct and severe their symptoms become. In addition, SAD patients note that their depressive symptoms worsen when the amount of light indoors decreases and the weather is cloudy (4). Symptoms of SAD most commonly first appear in one's late twenties or thirties, though the disorder has occasionally been diagnosed in children. Of all patients who suffer from SAD, 70-80% are female (1).

Multiple theories exist as to the origins of SAD. The exact cause is currently unknown, yet doctors believe that stress, heredity and the chemical makeup of the body all play a role in its initiation. A strongly held belief is that a lack of sufficient sunlight can lead to the disorder. Simply being in a room without windows for an extended period of time can trigger a depressive episode in SAD patients. The body's circadian rhythms, which regulate an individual's daily sleep-wake cycles, are disturbed when sunlight is less prevalent. This is why individuals with wintertime SAD often have difficulty waking up in the morning during the longer nights of winter (3). In addition, a disruption of the body's circadian rhythms is known to be a likely cause of depression (6). Thus, during the winter months, or after time spent in a dark room, when the circadian rhythms can potentially be disrupted, a person who suffers from SAD may experience severe depressive episodes.

Scientists have also hypothesized that the disorder could be due to a decrease in serotonin, a key neurotransmitter in the brain. When the brain is deficient in serotonin, depression has been known to result. The production of serotonin is thought to be stimulated by the presence of sunlight, and people with SAD who suffer from depression have been found to have lower serotonin levels in their brains. In addition, research has found that SAD could also be due to an increase in the level of melatonin, a hormone which is connected to sleep (3). The higher level of this hormone in SAD patients during the lengthier periods of darkness in winter has also been thought to be linked to their depressive symptoms. Although SAD is most commonly seen during the winter, the condition can also occur during the summer months. These patients plan trips to colder climates during the summer to relieve their depression. This rarer form of seasonal depression causes symptoms such as agitation, weight loss, inability to sleep, anxiety and a loss of appetite.

In patients who suffer from the more common winter form of SAD, an increase in the number of dark hours per day is thought to be one of the main triggers of depressive symptoms. According to Dr. Daniel Kripke, a psychiatry professor at the University of California San Diego, the photoperiod (the daily period of light) shortens during the wintertime as daylight hours decrease, and many mammals' seasonal responses are governed by this photoperiod. Kripke explained that mammals whose seasonal responses were dominated by the shorter winter photoperiod experienced a change in appetite and lethargy during the winter. It therefore seems reasonable that one of the main proposed treatments for SAD (aside from medication and psychotherapy, which have been proven effective) is an added source of light during the longer periods of darkness that occur during the wintertime. In fact, multiple tests conducted in Japan, the United States and Europe from 1986 through 1995 have shown that a strong light source is highly effective in treating both nonseasonal and seasonal forms of depression (6). In this form of SAD treatment, the light, which is between 10 and 20 times stronger than a regular room light, is placed several feet from the patient. The patient sits in front of this light in the morning (so as not to induce insomnia, which nighttime light therapy may cause) for at least half an hour (3). Side effects are minimal, and the treatment is very simple to undergo.
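The link between latitude and winter photoperiod can be made concrete with the standard solar declination formula. The sketch below is purely illustrative (the function name and the simplified astronomy are my own, not drawn from the sources cited in this paper); it estimates daylight hours for a given latitude and day of year, showing how mid-December days shorten markedly as one moves north.

```python
import math

def day_length_hours(latitude_deg, day_of_year):
    """Approximate hours of daylight from latitude and day of year,
    using a simplified solar declination model."""
    # Cooper's approximation of the solar declination, in degrees
    decl = 23.44 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    x = -math.tan(math.radians(latitude_deg)) * math.tan(math.radians(decl))
    x = max(-1.0, min(1.0, x))  # clamp to handle polar day / polar night
    half_day_deg = math.degrees(math.acos(x))  # sunrise-to-noon hour angle
    return 2.0 * half_day_deg / 15.0  # Earth rotates 15 degrees per hour

# Mid-December (day ~350): daylight shrinks sharply as latitude increases
for lat in (30, 45, 60):
    print(f"latitude {lat}N: about {day_length_hours(lat, 350):.1f} h of daylight")
```

Running the loop shows roughly ten hours of daylight at 30°N shrinking toward five or six hours at 60°N, which fits the reported pattern that SAD symptoms become more severe at more northern latitudes.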

Although multiple light sources can be used in the treatment of SAD, one must be careful to choose a form of light therapy that is both effective and safe. For example, in a 1992 study by Lam et al., tests were conducted to determine which wavelengths of light were most efficacious in treating SAD patients. It turned out that UV light at a wavelength of around 300 nm was not at all helpful in the treatment of seasonal depressive disorder. In addition, this short wavelength can cause harmful burns, as we all know from spending long days in the sun at the beach. Light therapy devices should therefore use filters that prevent wavelengths below 400 nm from being emitted (5). In addition, full-spectrum light used in rooms has not been efficacious in the treatment of SAD. The light used in treatment must be of higher intensity, at least ten to twenty times stronger than a typical room light bulb (2). Neither UV light nor ordinary full-spectrum room light should be used in the treatment of SAD; both have proven ineffective in light therapy trials.

Even though neither full-spectrum room lights nor the 300 nm UV-emitting device was effective in treating SAD patients, devices fitted with the wavelength filter are safe and have proven very useful in the treatment of SAD. Most often, high-intensity light boxes which do not emit UV wavelengths have been used to treat the symptoms. Depending on the individual, light boxes of higher or lower illuminance (measured in lux) can be used. A normal light bulb emits somewhere between 200 and 500 lux. SAD patients may select a very high-intensity light box which emits 10,000 lux and requires a shorter therapy time. Other patients prefer a weaker box (a minimum of 2,500 lux is required), though these boxes require a longer time spent in front of the light (1).
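The trade-off between box intensity and session length can be illustrated with a simple proportionality. Assume, purely for illustration, that the effective light "dose" is roughly illuminance multiplied by exposure time, and take 30 minutes at 10,000 lux as the reference session; both the linearity and that exact pairing are assumptions of this sketch, not clinical guidance from the sources.

```python
# Illustrative assumption: equivalent light "dose" ~ illuminance (lux) x time.
REFERENCE_LUX = 10000      # high-intensity light box
REFERENCE_MINUTES = 30     # assumed session length at 10,000 lux

def session_minutes(box_lux):
    """Estimated session length giving the same light dose as the reference."""
    return REFERENCE_MINUTES * REFERENCE_LUX / box_lux

for lux in (10000, 5000, 2500):
    print(f"{lux:>6} lux -> about {session_minutes(lux):.0f} minutes")
```

Under this assumption, a 2,500-lux box would call for a session roughly four times as long as a 10,000-lux box, which matches the paper's observation that weaker boxes require more time in front of the light.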

In addition, light visors (which also emit high-intensity light) are available and are worn by the SAD patient. The visors are advantageous for patients who do not wish to sit idly in front of a light box for half an hour or more; treatment can be administered while the patient is in motion. Also, a special dawn simulator can be used. This very strong light is placed beside the patient's bed, and its intensity is slowly increased so as to reach its greatest strength when the patient is set to wake up. Studies conducted in 1992-1993 in Seattle (where the sky is frequently overcast in the winter) by Avery et al. have shown dawn simulators to be highly effective (6). In fact, the Seattle study group described the improvement as significant even when the final emission was as low as 250 lux!

The only side effects seen from these forms of light therapy, all of them rare, have been eye strain, nausea, irritability and headaches, and these often subside as the patient becomes acclimated to the therapy (4). Outdoor light, which provides just as much illumination as the light boxes (even when the sky is cloudy), has also been shown to improve SAD symptoms; symptoms have been ameliorated in patients who took a daily 60-minute walk.

Many millions of Americans suffer from depression during the winter. However, a fascinating and often neglected fact is that those ten million who are afflicted with the more severe symptoms seen in SAD may suffer because of a simple deficiency in the amount of light present during the winter! Their brains may react in response, and a disruption of circadian rhythms, decrease in serotonin, and an increase in melatonin can result. As a consequence of these reactions in the brain due to the lack of light, the SAD patient can become severely depressed. So while trying to light up the lives of family members and friends with SAD through encouragement, please inform them about the many types of light therapy which are currently available — both you and the SAD patient will be glad that you did.
 

References

1) Seasonal Affective Disorder, from Northern County Psychiatric Associates of MD

2) Information and Frequently Asked Questions about Seasonal Affective Disorder, from PhoThera Light Products

3) Seasonal Affective Disorder, from Mayo Clinic

4) Seasonal Affective Disorder, from Nation's Voice on Mental Illness

5) Seasonal Affective Disorder, from mentalhealth.com

6) Light Treatment for Nonseasonal Depression, from Psychiatric Times


The Pathways of Pain
Name: Neesha Pat
Date: 2003-04-08 14:07:28
Link to this Comment: 5324


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In 1931, the French medical missionary Dr. Albert Schweitzer wrote, "Pain is a more terrible lord of mankind than even death itself." Today, pain has become the universal disorder, a serious and costly public health issue, and a challenge for family, friends, and health care providers who must give support to the individual suffering from the physical as well as the emotional consequences of pain (1).

Early humans related pain to evil, magic, and demons. Relief of pain was the responsibility of sorcerers, shamans, priests, and priestesses, who used herbs, rites, and ceremonies as their treatments. The Greeks and Romans were the first to advance a theory of sensation, the idea that the brain and nervous system have a role in producing the perception of pain. But it was not until the Middle Ages and well into the Renaissance (the 1400s and 1500s) that evidence began to accumulate in support of these theories. Leonardo da Vinci and his contemporaries came to believe that the brain was the central organ responsible for sensation. Da Vinci also developed the idea that the spinal cord transmits sensations to the brain. In the 17th and 18th centuries, the study of the body and the senses continued to be a source of wonder for the world's philosophers. In 1664, the French philosopher René Descartes described what to this day is still called a "pain pathway" (5).

What prompted me to research the various pain pathways was my grandmother's arthritis. She has suffered for many years with severe joint pain and, in the past, has been treated with corticosteroids. Currently, she is taking Celebrex (a COX-2 inhibitor), a relatively new drug in the family of 'superaspirins'. What impressed me was how far medical research has come in the quest to conquer pain and interfere with the 'pain pathway'. "The philosophy that you have to learn to live with pain is one that I will never understand or advocate," says Dr. W. David Leak, Chairman & CEO of Pain Net, Inc. (4). The focus of this paper is on the numerous avenues explored by researchers and two methods of treatment that offer promising results.

What is pain? The International Association for the Study of Pain defines it as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" (1). It is useful to distinguish between two basic types of pain, acute and chronic, as they differ greatly (7).

Acute pain, for the most part, results from disease, inflammation, or injury to tissues. This type of pain generally comes on suddenly, for example, after trauma or surgery, and may be accompanied by anxiety or emotional distress. The cause of acute pain can usually be diagnosed and treated, and the pain is self-limiting, that is, it is confined to a given period of time and severity. In some rare instances, it can become chronic (1).

Chronic pain is widely believed to represent disease itself. It can be made much worse by environmental and psychological factors. Chronic pain persists over a longer period of time than acute pain and is resistant to most medical treatments. It can, and often does, cause severe problems for patients. There may have been an initial mishap, such as a sprained back or serious infection, or there may be an ongoing cause of pain, such as arthritis, cancer, or an ear infection, but some people suffer chronic pain in the absence of any past injury or evidence of body damage (1). Many chronic pain conditions affect older adults. Common chronic pain complaints include headache, back pain, cancer pain, arthritis pain, neurogenic pain (pain resulting from damage to the peripheral nerves or to the central nervous system itself) and psychogenic pain (pain not due to past disease or injury or any visible sign of damage inside or outside the nervous system) (2).

Pain is a complicated process that involves an intricate interplay between a number of important chemicals found naturally in the brain and spinal cord. In general, these chemicals, called neurotransmitters, transmit nerve impulses from one cell to another (5).

The body's chemicals act in the transmission of pain messages by stimulating neurotransmitter receptors found on the surface of cells; each receptor has a corresponding neurotransmitter. Receptors function much like gates or ports and enable pain messages to pass through and on to neighboring cells (10). One brain chemical of special interest to neuroscientists is glutamate. During experiments, mice with blocked glutamate receptors show a reduction in their responses to pain. Other important receptors in pain transmission are opiate-like receptors. Morphine and other opioid drugs work by locking on to these opioid receptors, switching on pain-inhibiting pathways or circuits, and thereby blocking pain (8).

Another type of receptor that responds to painful stimuli is called a nociceptor. Nociceptors are thin nerve fibers in the skin, muscle, and other body tissues that, when stimulated, carry pain signals to the spinal cord and brain. Normally, nociceptors respond only to strong stimuli such as a pinch. However, when tissues become injured or inflamed, as with a sunburn or infection, they release chemicals that make nociceptors much more sensitive and cause them to transmit pain signals in response to even gentle stimuli such as a breeze or a caress. This condition is called allodynia, a state in which pain is produced by innocuous stimuli (1).

Scientists are working to develop potent pain-killing drugs that act on receptors for the chemical acetylcholine. For example, a type of frog native to Ecuador has been found to have a chemical in its skin called 'epibatidine', derived from the frog's scientific name, Epipedobates tricolor. Although highly toxic, epibatidine is a potent analgesic and, surprisingly, resembles the chemical nicotine found in cigarettes. Also under development are other less toxic compounds that act on acetylcholine receptors and may prove to be more potent than morphine but without its addictive properties (1).
The idea of using receptors as gateways for pain drugs is a novel one, supported by experiments involving substance P. Investigators have been able to isolate a tiny population of neurons, located in the spinal cord, that together form a major portion of the pathway responsible for carrying persistent pain signals to the brain. When animals were given injections of a lethal cocktail containing substance P linked to the chemical saporin, this group of cells, whose sole function is to communicate pain, was killed. Receptors for substance P served as a portal, or point of entry, for the compound. Within days of the injections, the targeted neurons, located in the outer layer of the spinal cord along its entire length, absorbed the compound and were neutralized. The animals' behavior was completely normal; they no longer exhibited signs of pain following injury or had an exaggerated pain response. Importantly, the animals still responded to acute, that is, normal, pain. This is a critical finding, as it is important to retain the body's ability to detect potentially injurious stimuli. The protective, early warning signal that pain provides is essential for normal functioning. If such work could be translated clinically, humans might be able to benefit from similar compounds introduced, for example, through lumbar (spinal) puncture (2).

Another promising area of research using the body's natural pain-killing abilities is the transplantation of chromaffin cells into the spinal cords of animals bred experimentally to develop arthritis. Chromaffin cells produce several of the body's pain-killing substances and are part of the adrenal medulla, which sits on top of the kidney. Within a week or so, rats receiving these transplants cease to exhibit telltale signs of pain. Scientists believe the transplants help the animals recover from pain-related cellular damage. Extensive animal studies will be required to learn if this technique might be of value to humans with severe pain (8).

One way to control pain outside of the brain, that is, peripherally, is by inhibiting hormones called prostaglandins. Prostaglandins stimulate nerves at the site of injury and cause inflammation and fever. Certain drugs, including NSAIDs (nonsteroidal anti-inflammatory drugs), act against such hormones by blocking their production. Other drugs act on the blood vessels: blood vessel walls stretch or dilate during a migraine attack, and it is thought that serotonin plays a complicated role in this process. For example, before a migraine headache, serotonin levels fall. Drugs for migraine include the triptans: sumatriptan (Imitrex), naratriptan (Amerge), and zolmitriptan (Zomig). They are called serotonin agonists because they mimic the action of endogenous (natural) serotonin and bind to specific subtypes of serotonin receptors (8).

The explosion of knowledge about human genetics is helping scientists who work in the field of drug development. We know, for example, that the pain-killing properties of codeine rely heavily on a liver enzyme, CYP2D6, which helps convert codeine into morphine. A small number of people genetically lack the enzyme CYP2D6; when given codeine, these individuals do not get pain relief. CYP2D6 also helps break down certain other drugs. People who genetically lack CYP2D6 may not be able to cleanse their systems of these drugs and may be vulnerable to drug toxicity. CYP2D6 is currently under investigation for its role in pain (5).

The link between the nervous and immune systems is an important one. Cytokines, a type of protein found in the nervous system, are also part of the body's immune system, the body's shield for fighting off disease. Cytokines can trigger pain by promoting inflammation, even in the absence of injury or damage. Certain types of cytokines have been linked to nervous system injury. After trauma, cytokine levels rise in the brain and spinal cord and at the site in the peripheral nervous system where the injury occurred. Improvements in our understanding of the precise role of cytokines in producing pain, especially pain resulting from injury, may lead to new classes of drugs that can block the action of these substances (1).

Medications, acupuncture, local electrical stimulation, and brain stimulation, as well as surgery, are some treatments for chronic pain. Some physicians use placebos, which in some cases have resulted in a lessening or elimination of pain. Psychotherapy, relaxation and meditation therapies, biofeedback, and behavior modification may also be employed to treat chronic pain (3). Some of the latest developments include COX-2 inhibitors and chemonucleolysis (7).

COX-2 inhibitors ("superaspirins") are thought to be particularly effective for individuals with arthritis. For many years scientists have wanted to develop the ultimate drug: a drug that works as well as morphine but without its negative side effects. Nonsteroidal anti-inflammatory drugs (NSAIDs) work by blocking two enzymes, cyclooxygenase-1 and cyclooxygenase-2, both of which promote production of hormones called prostaglandins, which in turn cause inflammation, fever, and pain. Newer drugs, called COX-2 inhibitors, primarily block cyclooxygenase-2 and are less likely to have the gastrointestinal side effects sometimes produced by NSAIDs. In 1999, the Food and Drug Administration approved two COX-2 inhibitors: rofecoxib (Vioxx) and celecoxib (Celebrex). Although the long-term effects of COX-2 inhibitors are still being evaluated, they appear to be safe. In addition, patients may be able to take COX-2 inhibitors in larger doses than aspirin and other drugs that have irritating side effects, earning them the nickname "superaspirins" (7).

Chemonucleolysis is a treatment in which an enzyme, chymopapain, is injected directly into a herniated lumbar disc in an effort to dissolve material around the disc, thus reducing pressure and pain. The procedure's use is extremely limited, in part because some patients may have a life-threatening allergic reaction to chymopapain. (3)

Thus, we see that over the centuries, science has provided us with a remarkable ability to understand and control pain with medications, surgery, and other treatments. (3) Today, scientists understand a great deal about the causes and mechanisms of pain, and research has produced dramatic improvements in the diagnosis and treatment of a number of painful disorders. (9) For people who fight every day against the limitations imposed by pain, the work of scientists holds the promise of an even greater understanding of pain in the coming years. Their research offers a powerful weapon in the battle to prolong and improve the lives of people with pain: hope (1) .


References

1) National Institute of Neurological Disorders and Stroke
2) American Pain Society
3) American Academy of Pain Management
4) PainNet, Inc.
5) International Association for the Study of Pain
6) The MayDay Pain Project
7) Pain Treatment: Janssen-Cilag Pharm.
8) American Chronic Pain Organization
9) Rest Ministries Chronic Illness
10) Worldwide Congress on Pain


The Pathways of Pain

Accomplishments of Medica
Name: Neesha Pat
Date: 2003-04-08 14:18:27
Link to this Comment: 5325



<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In 1931, the French medical missionary Dr. Albert Schweitzer wrote, "Pain is a more terrible lord of mankind than even death itself." Today, pain has become the universal disorder, a serious and costly public health issue, and a challenge for family, friends, and health care providers who must give support to the individual suffering from the physical as well as the emotional consequences of pain (1).

Early humans related pain to evil, magic, and demons. Relief of pain was the responsibility of sorcerers, shamans, priests, and priestesses, who used herbs, rites, and ceremonies as their treatments. The Greeks and Romans were the first to advance a theory of sensation, the idea that the brain and nervous system have a role in producing the perception of pain. But it was not until the middle ages and well into the Renaissance-the 1400s and 1500s-that evidence began to accumulate in support of these theories. Leonardo da Vinci and his contemporaries came to believe that the brain was the central organ responsible for sensation. Da Vinci also developed the idea that the spinal cord transmits sensations to the brain. In the 17th and 18th centuries, the study of the body and the senses continued to be a source of wonder for the world's philosophers. In 1664, the French philosopher René Descartes described what to this day is still called a "pain pathway" (5).

What prompted me to research about the various pain pathways was my grandmother's arthritis. She has suffered for many years with severe joint pain and in the past, has been treated with corticosteroids. Currently, she is taking Celebrex, (COX-2 inhibitor) which is a relatively new drug in the family of 'superaspirins'. What impressed me was how far medical research has come in the quest to conquer pain and interfere with the 'pain pathway'. "The philosophy that you have to learn to live with pain is one that I will never understand or advocate," says Dr. W. David Leak, Chairman & CEO of Pain Net, Inc. (4) . The focus of this paper has been on the numerous avenues explored by researchers and two methods of treatment that offer promising results.

What is pain? The International Association for the Study of Pain defines it as: An unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage (1). It is useful to distinguish between two basic types of pain, acute and chronic, as they differ greatly. (7)

Acute pain, for the most part, results from disease, inflammation, or injury to tissues. This type of pain generally comes on suddenly, for example, after trauma or surgery, and may be accompanied by anxiety or emotional distress. The cause of acute pain can usually be diagnosed and treated, and the pain is self-limiting, that is, it is confined to a given period of time and severity. In some rare instances, it can become chronic (1). Chronic pain is widely believed to represent disease itself. It can be made much worse by environmental and psychological factors. Chronic pain persists over a longer period of time than acute pain and is resistant to most medical treatments. It can, and often does, cause severe problems for patients. There may have been an initial mishap such as a sprained back, serious infection, or there may be an ongoing cause of pain such as arthritis, cancer, ear infection, but some people suffer chronic pain in the absence of any past injury or evidence of body damage(1) . Many chronic pain conditions affect older adults. Common chronic pain complaints include headache, back pain, cancer pain, arthritis pain, neurogenic pain (pain resulting from damage to the peripheral nerves or to the central nervous system itself) and psychogenic pain (pain not due to past disease or injury or any visible sign of damage inside or outside the nervous system) . (2)

Pain is a complicated process that involves an intricate interplay between a number of important chemicals found naturally in the brain and spinal cord. In general, these chemicals, called neurotransmitters, transmit nerve impulses from one cell to another (5).

The body's chemicals act in the transmission of pain messages by stimulating neurotransmitter receptors found on the surface of cells; each receptor has a corresponding neurotransmitter. Receptors function much like gates or ports and enable pain messages to pass through and on to neighboring cells (10). One brain chemical of special interest to neuroscientists is glutamate. During experiments, mice with blocked glutamate receptors show a reduction in their responses to pain. Other important receptors in pain transmission are opiate-like receptors. Morphine and other opioid drugs work by locking on to these opioid receptors, switching on pain-inhibiting pathways or circuits, and thereby blocking pain (8).

Another type of receptor that responds to painful stimuli is called a nociceptor. Nociceptors are thin nerve fibers in the skin, muscle, and other body tissues, that, when stimulated, carry pain signals to the spinal cord and brain. Normally, nociceptors only respond to strong stimuli such as a pinch. However, when tissues become injured or inflamed, as with a sunburn or infection, they release chemicals that make nociceptors much more sensitive and cause them to transmit pain signals in response to even gentle stimuli such as breeze or a caress. This condition is called allodynia -a state in which pain is produced by innocuous stimuli . (1)

Scientists are working to develop potent pain-killing drugs that act on receptors for the chemical acetylcholine. For example, a type of frog native to Ecuador has been found to have a chemical in its skin called 'epibatidine', derived from the frog's scientific name, Epipedobates tricolor. Although highly toxic, epibatidine is a potent analgesic and, surprisingly, resembles the chemical nicotine found in cigarettes. Also under development are other less toxic compounds that act on acetylcholine receptors and may prove to be more potent than morphine but without its addictive properties(1) .

The idea of using receptors as gateways for pain drugs is a novel idea, supported by experiments involving substance 'P'. Investigators have been able to isolate a tiny population of neurons located in the spinal cord, that together form a major portion of the pathway responsible for carrying persistent pain signals to the brain. When animals were given injections of a lethal cocktail containing substance P linked to the chemical saporin, this group of cells, whose sole function is to communicate pain, were killed. Receptors for substance P served as a portal or point of entry for the compound. Within days of the injections, the targeted neurons, located in the outer layer of the spinal cord along its entire length, absorbed the compound and were neutralized. The animals' behavior was completely normal; they no longer exhibited signs of pain following injury or had an exaggerated pain response. Importantly, the animals still responded to acute, that is, normal, pain. This is a critical finding as it is important to retain the body's ability to detect potentially injurious stimuli. The protective, early warning signal that pain provides is essential for normal functioning. If such work could be translated clinically, humans might be able to benefit from similar compounds introduced, for example, through lumbar (spinal) puncture (2).

Another promising area of research using the body's natural pain-killing abilities is the transplantation of chromaffin cells into the spinal cords of animals bred experimentally to develop arthritis. Chromaffin cells produce several of the body's pain-killing substances and are part of the adrenal medulla, which sits on top of the kidney. Within a week or so, rats receiving these transplants cease to exhibit telltale signs of pain. Scientists believe the transplants help the animals recover from pain-related cellular damage. Extensive animal studies will be required to learn if this technique might be of value to humans with severe pain (8).

One way to control pain outside of the brain, that is, peripherally, is by inhibiting hormones called prostaglandins. Prostaglandins stimulate nerves at the site of injury and cause inflammation and fever. Certain drugs, including NSAIDs (nonsteroidal anti-inflammatory drugs), act against such hormones by acting on the blood vessels. Blood vessel walls stretch or dilate during a migraine attack and it is thought that serotonin plays a complicated role in this process. For example, before a migraine headache, serotonin levels fall. Drugs for migraine include the triptans: sumatriptan (Imitrix), naratriptan (Amerge), and zolmitriptan (Zomig). They are called serotonin agonists because they mimic the action of endogenous (natural) serotonin and bind to specific subtypes of serotonin receptors (8).

The explosion of knowledge about human genetics is helping scientists who work in the field of drug development. We know, for example, that the pain-killing properties of codeine rely heavily on a liver enzyme, CYP2D6, which helps convert codeine into morphine. A small number of people genetically lack the enzyme CYP2D6; when given codeine, these individuals do not get pain relief. CYP2D6 also helps break down certain other drugs. People who genetically lack CYP2D6 may not be able to cleanse their systems of these drugs and may be vulnerable to drug toxicity. CYP2D6 is currently under investigation for its role in pain (5).

The link between the nervous and immune systems is an important one. Cytokines, a type of protein found in the nervous system, are also part of the body's immune system, the body's shield for fighting off disease. Cytokines can trigger pain by promoting inflammation, even in the absence of injury or damage. Certain types of cytokines have been linked to nervous system injury. After trauma, cytokine levels rise in the brain and spinal cord and at the site in the peripheral nervous system where the injury occurred. Improvements in our understanding of the precise role of cytokines in producing pain, especially pain resulting from injury, may lead to new classes of drugs that can block the action of these substances (1).

Medications, acupuncture, local electrical stimulation, brain stimulation, and surgery are some treatments for chronic pain. Some physicians use placebos, which in some cases have resulted in a lessening or elimination of pain. Psychotherapy, relaxation and medication therapies, biofeedback, and behavior modification may also be employed to treat chronic pain (3). Some of the latest developments include COX-2 inhibitors and chemonucleolysis (7).

COX-2 inhibitors ("superaspirins") are thought to be particularly effective for individuals with arthritis. For many years scientists have wanted to develop the ultimate drug: one that works as well as morphine but without its negative side effects. Nonsteroidal anti-inflammatory drugs (NSAIDs) work by blocking two enzymes, cyclooxygenase-1 and cyclooxygenase-2, both of which promote production of hormones called prostaglandins, which in turn cause inflammation, fever, and pain. Newer drugs, called COX-2 inhibitors, primarily block cyclooxygenase-2 and are less likely to have the gastrointestinal side effects sometimes produced by NSAIDs. In 1999, the Food and Drug Administration approved two COX-2 inhibitors: rofecoxib (Vioxx) and celecoxib (Celebrex). Although the long-term effects of COX-2 inhibitors are still being evaluated, they appear to be safe. In addition, patients may be able to take COX-2 inhibitors in larger doses than aspirin and other drugs that have irritating side effects, earning them the nickname "superaspirins." (7)

Chemonucleolysis is a treatment in which an enzyme, chymopapain, is injected directly into a herniated lumbar disc in an effort to dissolve material around the disc, thus reducing pressure and pain. The procedure's use is extremely limited, in part because some patients may have a life-threatening allergic reaction to chymopapain. (3)

Thus, we see that over the centuries, science has provided us with a remarkable ability to understand and control pain with medications, surgery, and other treatments. (3) Today, scientists understand a great deal about the causes and mechanisms of pain, and research has produced dramatic improvements in the diagnosis and treatment of a number of painful disorders. (9) For people who fight every day against the limitations imposed by pain, the work of scientists holds the promise of an even greater understanding of pain in the coming years. Their research offers a powerful weapon in the battle to prolong and improve the lives of people with pain: hope (1) .


References

1)National Institute of Neurological Disorders and Stroke

2)American Pain Society

3)American Academy of Pain Management

4)PainNet.Inc

5)International Association for the Study of Pain

6)MayDay Pain Project, The.

7)Pain Treatment: Janssen-Cilag Pharm.

8)American Chronic Pain Organization

9)Rest Ministries Chronic Illness

10)Worldwide Congress on Pain


A Hidden Compulsion: Obsessive-Compulsive Disorder
Name: Amelia Tur
Date: 2003-04-12 21:47:44
Link to this Comment: 5357


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Obsessive-compulsive disorder, commonly known as OCD, is a type of anxiety disorder and was one of the three original neuroses as defined by Freud. It is characterized by "recurrent, persistent, unwanted, and unpleasant thoughts (obsessions) or repetitive, purposeful ritualistic behaviors that the person feels driven to perform (compulsions)." (1) The prime feature that differentiates OCD from other obsessive or compulsive disorders is that the sufferer understands the irrationality or excess of the obsessions and compulsions, but is unable to stop them. What differentiates people with OCD from otherwise healthy people with milder forms of obsession and compulsion is that the obsessions and compulsions interfere with the sufferer's life to the point of extreme distress: they consume a large proportion of the person's time and disrupt daily routine, functioning on the job, normal social activities, and relationships with others. (1) (3)

Some of the typical compulsions that someone with OCD may exhibit include an uncontrollable urge to wash (especially the hands) or clean, to check doors repeatedly to make sure that they are locked, to confirm multiple times that appliances are switched off, "to touch, to repeat, to count, to arrange, or to save." (1) Obsessions that someone with OCD may display include fixation on dirt and contamination, the fear that one may act upon destructive or violent urges, an overdeveloped sense of responsibility for the welfare of others, objectionable blasphemous or sexual disturbances, other socially unacceptable impulses, and an overbearing concern with the arrangement and symmetry of items. Obsessions may be present along with compulsions, or compulsions may be present alone; oftentimes the compulsions are designed to reduce the anxiety associated with an obsession. (1) (2) (5)

The most common presentation of OCD is washing. The person is compelled to wash their hands many times a day and has constant thoughts of dirt, germs, and contamination. The person may spend up to several hours each day washing their hands or showering, and generally attempts to avoid items that they perceive as sources of contamination. (1)

A second subtype of the disorder is extreme doubt joined with compulsive checking; while some sufferers are preoccupied with symmetry, most people exhibiting this subtype are concerned with the safety of others. While checking something, such as the status of an appliance, would alleviate the doubt of a typical person, when a person with OCD checks something, their doubt is often heightened and often leads to even more checking. (1)

Those with OCD are usually quite aware of their irrational or extreme fears and behaviors, but are unable to control them. Because such behavior seen in OCD sufferers is often considered "crazy," otherwise normal people who suffer from OCD are often driven to hide their symptoms. Many are able to do so with remarkable success, as they are normal in all other aspects of their lives. This tendency to hide symptoms may explain why recent epidemiological studies show an incidence rate of around 2%, rather than the 0.05% previously thought. The Zoloft informational site on OCD states that as many as one in fifty Americans, up to five million people, have OCD at some point in their lives. Around a third of the cases of OCD in the United States begin in adolescence, a second third in young adulthood, and the final third later in life. While more boys than girls show evidence of OCD early in life, men and women exhibit OCD in equal numbers in adulthood. (1) (5)
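The prevalence figures quoted above are easy to check with back-of-the-envelope arithmetic; the following is a minimal sketch, in which the U.S. population figure of roughly 250 million is an assumption for illustration rather than a number from the source:

```python
# Rough consistency check of the OCD prevalence figures cited above.
us_population = 250_000_000  # assumed approximate U.S. population (not from the source)

# "One in fifty Americans" works out to about five million people.
one_in_fifty = us_population // 50
print(f"1-in-50 estimate: {one_in_fifty:,} people")  # 5,000,000

# The newer ~2% incidence estimate versus the older 0.05% figure:
ratio = 0.02 / 0.0005
print(f"Revised rate is {ratio:.0f}x the earlier estimate")  # 40x
```

The forty-fold gap between the two estimates is what makes the "hidden symptoms" explanation plausible: a disorder that sufferers conceal well would be badly undercounted by older survey methods.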

OCD is largely resistant to typical psychotherapy, which tries to get at the source of the conflict by going back to early childhood. However, OCD is more receptive to behavioral therapy, wherein the person with OCD is presented with the feared or triggering situation but is prevented from carrying out the accompanying compulsion. OCD is generally resistant to drugs used in the treatment of anxiety, depression, and psychosis. However, the symptoms generally ease with medications that influence the brain's serotonergic system. (1)
Serotonin is a neurotransmitter that must be cleared from the synapse by reuptake before signaling can occur again. Clomipramine, fluvoxamine, and fluoxetine are currently the only psychoactive drugs that block the reuptake of serotonin, and all are effective in the treatment of OCD; this leads doctors to believe that the cause of OCD is closely related to the reuptake of serotonin in the brain. It is thought that OCD may be closely related to tic disorders, such as Tourette's syndrome; it has been suggested that OCD and Tourette's share a common genetic basis. Examination of the brains of those with OCD with PET and MRI scans has shown abnormalities in the cortex and basal ganglia coupled with decreased caudate volume. More simply put, it is thought that in both OCD and tic disorders there is a defect in a circuit that runs from the frontal lobe to the basal ganglia to the thalamus and back to the frontal lobe. Defects in this circuit may have a genetic basis or may arise from immunological factors, as suggested by the appearance of OCD and tic disorders after a Streptococcus infection. (1) (2)

Scientists have found that mice will exhibit OCD-like symptoms when their Hoxb8 gene is manipulated. In a study done at the University of Utah in Salt Lake City, one of the two Hoxb8 genes was manipulated in utero in mice from an inbred line, so that the mice would be identical to others from the line except for the manipulated gene. The mice with the manipulated gene groomed themselves compulsively, often leading to bald patches of skin and open sores. Furthermore, these mice would also compulsively groom other mice in their cages just as forcefully. The scientists feel that this could lead to some interesting findings about OCD, as well as about trichotillomania, a rare disorder in which people pull out their hair, as the hox genes are largely similar in all vertebrates and could have a large influence on behavior. The Hoxb8 gene has also been shown to be active in brain areas associated with animal grooming and with human OCD symptoms. This research helps to show that OCD most likely has a biological basis as well as a psychological one, rather than a psychological basis alone. (4)

While both the Access Science articles and the information from the Zoloft website stressed that the causes of OCD most likely have much to do with problems in the nervous system, there were some differences in how the information was presented. In the Access Science articles, there seemed to be a sense of distancing from those with OCD; the main article on OCD made sure to differentiate between people with "normal" obsessions and compulsions and those people who have OCD. There was a clinical air to the articles, as if they were aimed at people researching OCD for a project or for general information, rather than at people who might have, or know someone with, OCD. The Zoloft site, on the other hand, presented its information in a way that conveyed the sense that anyone could have OCD, and that those with OCD are not unusual for having it. The information was geared toward those with OCD, their family members, and people who could possibly have it. (1) (2) (3) (5)


References

1) Obsessive-Compulsive Disorder, An article on obsessive-compulsive disorder by Joseph Zohar on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

2) Anxiety Disorders, A section of an article on anxiety disorders by Daniel S. Pine on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

3) Neurotic Disorders, An article on neurotic disorders by Marshal Mandelkern on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

4) Ancient Gene Takes Grooming in Hand, An article by Bruce Bower found through McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

5) Understanding Obsessive Compulsive Disorder (OCD), An informational site about OCD, from the makers of Zoloft, which is used in the treatment of OCD and other anxiety disorders.


Love in the Brain
Name: Clare Smig
Date: 2003-04-14 15:45:42
Link to this Comment: 5367


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Does brain equal behavior? Some people have argued that they have difficulty saying it does because they find it hard to believe that our individual, tangible brain controls emotions that many consider to be intangible, such as being in love. This paper will discuss the role that the brain actually plays in love: why we are attracted to certain people, why we feel the way we do when we are around them, and whether or not this is enough to say that in the case of love, brain does equal behavior.

The first stage of romantic love begins with attraction. Whether you have been best friends for a long time or you just met the person, you begin your romantic relationship when there is that feeling of attraction. But why are we attracted to some people and not to others? Some research and experimentation suggests that pheromones play a role in attraction ((1), (2), (3), (4)). Although the existence of pheromones in humans and the method by which individuals detect them is still under debate and requires further research, a study by Stern and McClintock on pheromones in women's underarm secretion gives the most solid evidence for the existence of human pheromones ((5)). It has been hypothesized that the brain detects these pheromones through an organ known as the vomeronasal organ (VNO), by receptors, or by the terminal nerve in the nostrils ((5)). Despite the fact that pheromones and how they are detected in humans is controversial, it has been suggested that selectivity for certain pheromones might explain why we are only attracted to certain people ((6)).

Research agrees, however, that whether or not pheromones exist, they are not the only reason we are attracted to an individual. Other factors such as social and environmental influences, genetic information, and past experience contribute to who we are and who we find attractive physically and emotionally ((5), (7)). In addition, an experiment by McClintock showed that women were attracted to the smell of a man who was genetically similar, but not too similar, to their fathers ((1)). Therefore, our genetic information might play a role in whether or not someone is desirable in order to avoid inbreeding or, on the other end of the spectrum, to avoid the loss of desirable gene combinations. Inevitably, however, it is our brain that processes another individual's appearance, lifestyle, how they relate to past individuals we have met, and, possibly, their pheromones. Then, based on this information, we decide, within our brain, whether or not this person is worth getting to know.

Almost immediately thereafter, it is uncontroversial that when someone experiences an attraction for someone else, their brain triggers the release of certain chemicals. These adrenaline-like chemicals include phenylethylamine (PEA), which speeds up the flow of information between nerve cells; dopamine; and norepinephrine (the latter two of which are similar to amphetamines). Dopamine makes you feel good, and norepinephrine stimulates the production of adrenaline. Together, these chemicals explain why, when we are around someone we are attracted to, we feel a "rush" and our heart beats faster ((8)). However, if you have ever been in love, you know that these feelings somewhat subside as you become more comfortable with someone and move from that attraction and "lust" stage to love.

But what is the role of the brain in the stage of love? One chemical, oxytocin, plays an important role in romantic love as a sexual arousal hormone and makes women and men calmer and more sensitive to the feelings of others. Physical and emotional cues, processed through the brain, trigger the release of oxytocin. For example, a partner's voice, look or even a sexual thought can trigger its release. Attachment to someone has been linked to chemicals released from the brain known as endorphins that produce feelings of tranquility, reduced anxiety, and comfort. These chemicals are not as exciting as those released during the attraction stage, but they are more addictive and are part of what makes us want to keep being around that person we are in love with. In fact, the absence of these chemicals when we lose a loved one plays a part in why we feel so sad ((8), (9)). But is that it? Are chemical releases triggered by the brain when we think of or are in the presence of our partner all there really is behind those "I love you's"?

Other research has shown that certain areas of the brain are linked with being in love with someone. It is possible that our feelings for our partner are somehow stored in our brain. Researchers have found that when individuals are shown pictures of their loved ones, areas of the brain with a high concentration of receptors for dopamine are activated. Moreover, MRI images of the brains of these individuals showed that the brain pattern for romantic love overlapped the patterns for sexual arousal, feelings of happiness, and cocaine-induced euphoria. This overlapping yet unique pattern indicates the complexity of the emotions that comprise romantic love ((6), (10)). These results did not occur when the individuals were shown pictures of non-romantic loved ones. Another similar experiment showed activity in the medial insula and the anterior cingulate of the brain. The former is a part of the brain associated with "gut feelings" and the latter is associated with feelings of euphoria ((11)).

Other areas of the brain that have been associated with love include the septal area, which has been associated with pleasure, and the frontal lobe, the most highly evolved part of our brain, which has been associated with higher mental functions such as trust, respect, desire for companionship, etc ((12)). Finally, the amygdala, which has direct and extensive connections with all the sensory systems of the brain and with the hypothalamus, is considered to be the emotional center of the brain. Therefore, it most likely also plays a role in the emotions surrounding love ((13), (14)). Consequently, it is highly likely that as we become more attached to someone through experience and time together, our love for them is processed and stored in our brain.

So if everyone in love is experiencing the same chemicals and activating the same areas of the brain, what makes love such a special experience? What makes your feelings any different from anyone else's? It is the person that you have fallen in love with. It is they and only they who can do or say the right things and touch you the right way so that those chemicals are released and those areas of the brain are activated. Finally, is love just a function of our brains? As shown by the brain-imaging experiments described above, the pattern that formed when individuals saw their loved ones was complex, as are our brains. Although it is your partner's brain that enables them to act or say the things that trigger your brain to respond with those chemicals of attraction and attachment, everyone's brain is individual; it makes up an individual "you" and that unique and special experience that we call love. Although this does not rule out other areas that many believe play a role in love, such as the soul, it shows that the brain plays a vital, if not ultimate, role in all aspects of love, and that this role is extremely complex and unique.


References


1) Gupta, Sanjay. Chemistry of Love: Do pheromones and smelly t-shirts really have the power to trigger sexual attraction? Here's a primer. Time. Feb 2002: 78.

2) Herman, Steve. Main Attraction: The search for human pheromones continues. Global Cosmetic Industry. Global Cosmetic Industry. Dec 2000: 54.

3) Cutler, Winnifred B. et al. Pheromonal influences on sociosexual behavior in men. Archives of Sexual Behavior. Feb 1998: 13.

4) Smell and Attraction

5) Ben-Ari, Elia. Pheromones: what's in a name? Bioscience. July 1998: 505-511.

6) Love Chemistry: New studies analyze love's effects

7) Mating and Temperament

8) What is chemistry and chemicals in love relationships

9) Chemicals

10) Love in the Brain

11) BBC News- Health- How the brain registers love

12) My search for love and wisdom in the brain by Marian Diamond

13) Bower, Bruce. Brain faces up to fear, social signs. Science News. Dec 1994: 406.

14) Biology of Love


Touched With Fire
Name: Elizabeth
Date: 2003-04-14 16:24:17
Link to this Comment: 5368


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In Touched with Fire: Manic-Depressive Illness and the Artistic Temperament, Kay Redfield Jamison explores the compelling connection between mental disorders and artistic creativity. Artists have long been considered different from the general population, and one often hears tales of authors, painters, and composers who both struggle with and are inspired by their "madness". Jamison's text explores these stereotypes in a medical context, attributing some artists' irrational behaviors to mental disorders, particularly manic-depressive illness. In order to establish this link, Jamison presents an impressive collection of artists who have suffered from mental illness, whether diagnosed correctly during their lifetime or discovered in hindsight. Well organized and interesting, the book provides an ideal introduction to this still-evolving idea, offering the reader as many thought-provoking questions as answers and leaving the door open for further study.

Jamison begins with a brief explanation of manic-depressive illness and its effects on human behavior. The term "manic-depressive illness" refers to a variety of mental disorders which share similar symptoms but range greatly in severity. These disorders alter one's mood and behaviors, disrupt established sleep and sexual patterns, and cause fluctuations in energy level. Manic-depressive illness causes cycles of manic, energized highs followed by debilitating, lethargic lows, and such disorders usually develop early in life and intensify over time. The manic energy associated with these disorders may cause a person to require less sleep while raising energy levels and increasing one's rate of thinking. These symptoms can stimulate creativity and lead to an elevated level of productivity. Conversely, during the attendant lows, the afflicted person experiences lethargy and hopelessness. Artists, in particular, often experience a creative block during their depressive periods, resulting in an intense frustration with their decreased productivity. In turn, this frustration may drive an artist to substance abuse, or even suicide. Depression, however, is not the only cause of detrimental and possibly dangerous changes in behavior. Mania does not simply produce more creative energy: during a manic period, one tends to lose one's grasp on reality, which can prompt irrational impatience, excessive spending, and impulsive sexual relations. Both manic and depressive periods alter behavior significantly and pose a threat to the patient's life.

After outlining the effects of manic-depressive illness on human behavior, Jamison presents profiles of some of the numerous artists who have suffered from some sort of mental disturbance. Among the most notable names are Picasso, van Gogh, Hemingway, Fitzgerald, and Poe. Additionally, Jamison discusses a number of lesser-known artists who were affected by manic-depressive illness. Indeed, one cannot help but wonder if these men and women were prevented from reaching greater career heights by their mental disorders. Many studies concerning the link between creativity and mental illness have been conducted in recent years, with compelling results. For instance, a study by Dr. Colin Martindale of eminent English and French poets found that over half of these artists had suffered nervous breakdowns, been institutionalized, or suffered from hallucinations and delusions. Likewise, a study by Dr. Arnold Ludwig found that the highest rates of "mania psychosis and psychiatric hospitalization [were] in poets" (1). Poets, of course, are not the only artists to suffer from manic-depressive illness. Indeed, Jamison lists a staggering number of painters, sculptors, and composers who also lived and created with severe mood disorders. Additionally, Jamison presents a detailed account of the madness of George Gordon, Lord Byron. Included are shorter, but equally compelling, case studies of artists describing their mental illness and tracing the progression of the disease through family bloodlines.

However compelling the link between creativity and mental illness is, there are some flaws to consider. Most obviously, not all creative and productive artists have a mental disorder. Some writers and painters lead perfectly normal, healthy lives, finding their inspiration and creativity in other sources. One must not conclude that creativity stems from madness, although many of the artists discussed by Jamison attribute their particular genius to the manic drive associated with mental illness. One could also argue that madness hinders creativity rather than encouraging it: many artists lose months to the stifling depression that follows manic periods, spending those potentially productive months unable to create. This depression, severe enough on its own, can deepen greatly when an artist is frustrated with his or her inability to produce work at the manic level once enjoyed. Moreover, although manic-depressive illness is more common in the artistic community than in the general population, the majority of artists do not suffer from any mental illness. This fact is easy to overlook when one considers the caliber of artists who have suffered from mental illness. Many prominent artists have been very upfront about their illnesses, and some even attribute their abilities to "madness". This gives the general public the impression that all truly great artists suffer from some form of mental illness, which, of course, is a false assumption.

Jamison ends her book with a short discussion of how the medical community can deal with manic-depressive illness through medication. This is a complicated issue, as many artists feel heavy medication will deprive them of their inspiration and interfere with their creativity. It is a thorny, and relatively new, question, and Jamison merely outlines the controversy without offering an opinion on what should be done, leaving the door open for further research. Mental illness in artists is a fascinating subject, and Jamison does an excellent job of providing a thorough portrait of many artists who have grappled with manic-depressive illness, in addition to exploring how these disorders affect creativity and productivity. Jamison also maintains an awareness of the objections to her attempts to draw a correlation between mental illness and the artistic community, and addresses these issues accordingly.


References

1) Jamison, Kay Redfield. Touched with Fire: Manic-Depressive Illness and the Artistic
Temperament. Ontario: Free Press, 1993.


The Brains of Violent Males: The homicidal & suici
Name: Nicole Jac
Date: 2003-04-14 22:06:07
Link to this Comment: 5374


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"It becomes increasingly evident that some of the destruction which curses the earth is self-destruction; the extraordinary propensity of the human being to join hands with external forces in an attack upon his own existence is one of the most remarkable of biological phenomena."
-Karl Menninger (1).

Violence is everywhere in our society: in movies, television programs, video games, and professional sports such as boxing and wrestling. In 2000, 28,663 deaths were related to firearms; 58% were reported as suicides and 39% as homicides (2). The objective of this paper is to qualitatively evaluate and compare the brains of male murderers and male suicide victims. Even though more females attempt suicide, males are used for comparison because males are four times more likely to die from a suicide attempt (3). Male suicidal individuals have a higher completion rate because they are more likely to kill themselves in a violent manner (e.g., using a gun).
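The cited percentages can be turned into approximate counts with simple arithmetic. A quick sketch follows; only the total and the two percentages come from the source, and the derived counts are rounded:

```python
# Derive approximate counts from the 2000 firearm-death figures cited above.
total_firearm_deaths = 28_663
suicide_share = 0.58   # 58% reported as suicides
homicide_share = 0.39  # 39% reported as homicides

suicides = round(total_firearm_deaths * suicide_share)    # ~16,625
homicides = round(total_firearm_deaths * homicide_share)  # ~11,179
remainder = total_firearm_deaths - suicides - homicides   # the ~3% the source does not break down

print(f"Suicides:  ~{suicides:,}")
print(f"Homicides: ~{homicides:,}")
print(f"Remainder: ~{remainder:,}")
```

The striking point for the paper's argument is the ordering: firearm suicides outnumber firearm homicides by roughly half again as many deaths.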

At first glance, most people would argue that homicide and suicide are opposite behaviors, yet the relationship may not be that straightforward. If one assumes that the brain dictates behavior and that suicide and homicide are independent behaviors, one would expect researchers to find differences between the brains of suicide victims and murderers. At the other extreme, suicide and homicide can be considered similar behaviors because in both cases an individual engages in killing someone; the only thing that differs is where the killing impulse is directed. Homicide is directed toward the external world, whereas suicide is aggression turned inward. When the cause of unhappiness can be attributed to an external source, the extreme response is rage and homicide; in the absence of an external source, the extreme response is likely to be depression and suicide (4). The definition of suicide and homicide as extreme responses to unhappiness is not new. It implies that these behaviors are similar; therefore one would expect similarities in the brain, and the possibility that one individual can be suicidal and homicidal simultaneously. Freud first introduced the notion that suicide and homicide were not opposite behaviors. Freud believed that every aspect of human behavior carries within it its very opposite. In this sense, the desire to kill others is also the desire to kill oneself.

Current research supports the notion that homicide and suicide are not unique behaviors characterized by distinctive brain anatomy and chemistry. It should be noted that homicide is the only crime that regularly results in the offenders taking their own life after committing the crime (5). Of the data collected on suicide victims and murderers, there are comparable deficiencies in the pre-frontal cortex and the serotonergic system. Mechanistically, the dysfunction of the prefrontal cortex in suicidal individuals and murderers is believed to be due to reduced levels of circulating serotonin.

Serotonin is a neurotransmitter involved in the regulation and inhibition of impulsive behaviors. The effects of serotonin are predominantly inhibitory, and considerable research indicates that serotonin is essential for self-control. Furthermore, serotonin regulates mood, arousal, aggression, impulse control, and sexual activity. Low levels of serotonin coupled with testosterone can often lead to violence and aggression. In a study of violent prisoners, it was found that the offenders had significantly lower levels of serotonin in their brains (6). Research has shown that 95 percent of the brains of suicidal individuals show an altered serotonergic system, which is characterized by reduced serotonin activity. Decreases in pre-synaptic serotonin nerve terminal binding sites have been observed in the suicidal brain and there are more postsynaptic serotonin receptors in the prefrontal cortex of suicide victims, which suggests that the body is trying to compensate for low levels of serotonin (7).

The serotonergic neurons of the raphe nucleus have long projections that terminate in the prefrontal cortex, so it is logical that a reduced level of serotonin would affect this target site. Research has confirmed that the prefrontal cortices of murderers and suicide victims differ from those of normal individuals. The prefrontal cortex is often referred to as the "executive" region, since it is where humans think, imagine, and make informed decisions (8). Damage to the prefrontal cortex manifests itself as impulsivity, loss of self-control, immaturity, and altered emotionality. Although correlation does not indicate causation, researchers have occasionally used prefrontal cortex injury as an indicator of the likelihood of engaging in aggressive acts. In normal individuals the frontal lobe is very active, whereas in the brains of murderers it can be quite inactive (9, 12). Similarly, suicide victims often have fewer neurons in the prefrontal cortex than normal subjects (10). These findings indicate that the prefrontal cortex may be involved in a restraint mechanism that is sub-optimal in suicidal and homicidal individuals.

One noteworthy example of the involvement of the serotonergic system in homicide and suicide is the case of Donald Schell. Schell had been taking Paxil, an antidepressant in the class of prescription drugs called selective serotonin reuptake inhibitors (SSRIs), for only two days when he shot and killed his wife, his daughter, and his granddaughter, and then took his own life. There appeared to be no motive for the murder-suicide (11). Numerous examples of such murder-suicides are reported in the media and in some psychological literature; however, there is little neurobiological research providing firm evidence that links antidepressant use with murder and suicide. How suicidal or homicidal ideation develops in an individual taking antidepressants remains unclear.

The problem with comparing the homicidal and the suicidal brain is that we may never know the cause of these brains' deviations from "normal". Did the killing change the chemical make-up of the brain, or is an abnormal brain responsible for producing the killing behavior? The brain of a murderer can be examined at any time, but researchers must rely on postmortem studies for clues about the chemistry of the suicidal brain, and the trauma of suicide itself may influence the chemical activity of the brain in the moments after death. One might expect differences between the brains of murderers and individuals who merely engage in suicidal ideation, but no differences between the brains of murderers and suicide victims, since in both cases the individual has killed. Yet a person who attempted suicide and happened to fail should have a brain similar to that of an individual who succeeded, because the intent was the same.

Current research shows that there are several shared features between the suicidal and the homicidal brain; however, future research may challenge such claims. Perhaps researchers are examining the wrong part of the brain or attending to the wrong neurotransmitter system. Additionally, methodological flaws in experimental design, such as a faulty selection of comparison groups, may produce poor data. Problems may also stem from a lack of comparability across individuals in age, educational level, and past history, and some studies used very small sample sizes, which may undermine the reliability of their results. A more carefully controlled, systematic study is necessary to fully understand the similarities and differences between the suicidal and homicidal brain.

References


1) Menninger, Karl. (1938) Man Against Himself. New York: Harcourt Brace.
2) A fact sheet on suicide and homicide.
3) Suicide Prevention Fact Sheet, issued by the CDC (National Center for Injury Prevention and Control).
4) Suicide Facts.
5) Suicide Information & Education Centre.
6) Relative Murder, a web page by the Discovery Channel.
7) An article written by J. John Mann, a researcher investigating the neurobiology of suicide.
8) Decision-making processes following damage to the prefrontal cortex, article from Brain (2002).
9) "What's different about a Killer's Brain?", Whitley Strieber's web page.
10) Why? The neuroscience of suicide: Physical clues to suicide, Scientific American article.
11) Paxil & murder/suicide, a story about how the maker of Paxil was held liable in a murder/suicide case.
12) The mind of a killer, ABC News web page containing pictures of a normal brain and a murderer's brain.

Other interesting sites
A legal history of the "serotonin defense".
Into the Mind of a Killer, a page with good information and several pictures.
Violent Brains, includes some pictures.


The Insanity Defense- Should it be judged legally
Name: Priya Ragh
Date: 2003-04-14 23:17:13
Link to this Comment: 5378


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Former U.S. President Ronald Reagan was shot by a man named John Hinckley in 1981. The president, along with several members of his entourage, survived the shooting despite serious internal and external injuries. The Hinckley case is a classic example of a 'not guilty by reason of insanity' (NGRI) case. The criminal justice system under which all men and women are tried rests on a concept called mens rea, a Latin phrase meaning "guilty mind." Under this concept, Hinckley committed his crime oblivious to the wrongfulness of his action. A mentally ill person, including one with mental retardation, who cannot distinguish between right and wrong is protected and exempted by the court of law from being unfairly punished for his or her crime. (1)
What is "insanity," and why is this subject of so much controversy? Although I do not have a clear definition of insanity, most socially recognized authorities such as psychiatrists, medical doctors, and lawyers agree that it is a brain disease. However, if we assume it is a brain disease, should we link insanity with other brain diseases like stroke and Parkinsonism? Unlike the latter two, whose causes can be medically accounted for through behavioral deficits such as paralysis and weakness, how can one explain the criminal behavior of people like Hinckley? (2)
Much of my skepticism over the insanity defense concerns how this act of crime has shifted from being a medical condition to coming under legal governance. The word "insane" is now a legal term. A neurological illness described by doctors and psychiatrists to a jury may explain a person's reasoning and behavior; it seldom, however, excuses it. The most widely known rule in the insanity defense is the M'Naghten rule, which arose in 1843 during the trial of Daniel M'Naghten, who pleaded that he was not responsible for the crime because he suffered from delusions at the time.
The rule states, " A defendant may be excused from criminal responsibility if at the time of the commission of the act, the party accused was laboring under such a defect of reason, from a disease of mind, as not to know the nature and the quality of the act he was doing..." (3)

The problem with this defense is that insanity is examined either from a legal angle or from a psychoanalytical one, which involves talking to people and having them take tests. There is, however, no scientific proof confirming a causal relationship between mental illness and criminal behavior based on a deeper neurological understanding of the brain. The psychiatrist finds himself or herself in a double bind: with no clear medical definition of mental illness, he or she must answer questions of legal insanity, which turn on beliefs about human rationality and free will rather than on concrete scientific facts. Let me use a case study to elaborate my argument that the law in this country continues to regard insanity as a moral and legal matter rather than one based on scientific analysis.
The insanity defense of Andrea Yates: the country was absolutely appalled when it heard that Yates, a mother of five, had killed each of her children in a horrific family slaughter. Feelings about the case were extremely polarized: sympathy (concluding that some uncontrollable impulse must clearly have been at work) versus retribution, punishment versus treatment. What causes a human being to behave and act in such a manner? The criminal justice system and modern science approach the question from interestingly different perspectives. The M'Naghten rule allows little room for negotiation over the details of the crime. Essentially, the defendant is declared either sane or insane; a defect of reason from a disease of the mind makes the act either excusable or not. The dichotomy left no shades of gray until the 20th century, when advances in neuroscience indicated that certain mental diseases were caused in part by factors outside the control of the afflicted individual, and the discovery that medication could successfully alter behavior was a scientific breakthrough. (4)
The legal system, feeling quite insufficient on its part, developed modern rules such as the "irresistible impulse" test, which covered diseases like kleptomania and pyromania, and the Durham test of 1954, which stated that a defendant is insane only if his or her criminal act "was a product of a mental disease or defect." The legal system readily removed the latter rule from practice after noticing that mental health experts had too large a role in determining what caused the defendant to commit the crime, yet seldom answered what actually produced the action. Lawyers felt that, left to doctors and psychiatrists, the very notion of criminal responsibility would be softened, because all such behavior would then be categorized as mental illness.
It was the John Hinckley case that re-popularized the tough M'Naghten rule, creating an ambiguous relationship among law, psychiatry, and neuroscience. Insanity, at the end of the day, is a legal determination. The fact that the legal system had the authority to create and terminate the various defense rules shows that it is a very malleable system of practice.
"Contradictions inevitably emerge where the laws of man are confused with the laws of nature: The former can be broken; the latter cannot. These contradictions are what make the "insanity defense" in a trial like that of John Hinckley Jr. appear to be an affront to justice and common sense." - Donald E. Watson (5)

Nonetheless, in today's insanity cases, mental health experts, doctors, and scientists have important roles to play. They can inform the jury of the nature of the defendant's mental illness, the likelihood that the crime might be repeated, and whether the defendant may bring harm upon himself or herself. However, as in any court case, there will always be divided opinions among the mental health experts, depending on whether they testify for or against the defendant.
The key to rebalancing "insanity" from a legal standpoint toward a more medical one is a breakthrough in neuroscience in which well-established causal connections between mental illness and criminal acts are demonstrated and made admissible. There has been progress, with medical experts introducing new concepts such as battered woman syndrome, which recognizes that abusive conduct against women can result in acts of self-defense.
One might wonder whether criminals use the insanity defense simply to escape punishment.
Perhaps this skepticism about relying on legal methods to determine an "insane" person's fate overlooks the fact that, in reality, there are extremely few insanity cases and even fewer successful ones. For instance, a 1970s study in Wyoming revealed that the insanity defense was used in only 0.47% of all criminal cases (Melton, 1997, p. 187) (6).
Moreover, the same study observed that only one person of those who used the defense was acquitted. In the legal realm, insanity and sanity stand in a reversible state: when the defendant no longer tests positive on legal tests, an insane person miraculously becomes sane. Unfortunately, the law does not account for or recognize the physical, emotional, or psychological states that may or may not be reversible.


References

(1) All About the Insanity Defense, Mark Godo

(2) Does Insanity "Cause" Crime?: Thomas Szasz, M.D., The Myth of Mental Illness (1960)

(3) M'Naghten Rule

(4) The Yates case: commentary for United Press International; Susan Crump is a former prosecutor in Houston

(5) Donald E. Watson, MD, taught and did research in neuropsychology; teaches at UC Irvine Medical School

(6) Statistics


Vasovagal Syncope
Name: Laurel Jac
Date: 2003-04-14 23:33:27
Link to this Comment: 5379

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

My best friend "Dirk" can easily be picked out of a crowd. His 6'7" stature, impressive muscle mass, very blond hair, big blue eyes, and booming voice cause many people to stare at him; once, in Europe, a Japanese couple asked if they could take a picture of him. Addicted to weight lifting and athletics, my friend does not always enjoy admitting that he is a computer engineer (yes, my 22-year-old buddy is still afraid of the geek label). There is something else Dirk will not readily admit: he faints at the sight of blood. In fact, many things can trigger his fainting spells: blood, vomit, overheating, etc.

Dirk lives next door to my parents; we grew up together. Recently, he and my sister ran over from his house to ours, a distance of about 50 feet. My sister had not worn shoes; when they got to our house, they walked through two rooms before Dirk got dizzy. My sister had cut her foot, and the blood spread over the tile floor made Dirk turn his head away and sit down. My mother ran to the rescue (Dirk's, not my sister's). She helped him breathe deeply, and luckily he avoided fainting.

A few Christmases ago, Dirk caught a stomach virus. He made it to the bathroom just in time, but seconds after vomiting he fell to the floor, blocking the door. His parents frantically tried to open the door and to rouse him by shouting, for what was probably five minutes but seemed like an eternity to them at the time. Eventually they revived him.

The summer before that Christmas, Dirk was golfing with his high school's golf team on a hot July afternoon. At the end of the course, he and his coach walked to the parking lot. All of a sudden, Dirk toppled like a tree onto the pavement, suffering a concussion on top of fainting. Dirk's condition is called vasovagal syncope. Stubborn as he is, he often gets angry with his mother, a nurse, for fussing over him. But his mother knows from 22 years of experience that whether the trigger is a particularly hot and humid day or a vaccination, Dirk will pass out unless he takes the proper precautions: resting, breathing deeply, and staying hydrated.

Vasovagal syncope, also known as fainting, neurocardiogenic syncope, or neurally mediated syncope, is a very common condition, occurring in roughly half of all people at least once in their lives; three percent of the population experiences it repeatedly. It is not a serious condition.(2) A vasovagal response begins with a decrease in the volume of blood returned to the heart, which activates the baroreceptors(2) and prompts the sympathetic nervous system to increase the force of each contraction of the heart. The opposing parasympathetic nervous system is then alerted to slow the heart rate and dilate the surrounding veins and arteries. These responses cause blood pressure to drop very low, producing syncope (loss of consciousness).(1) Most patients are young and healthy, although vasovagal syncope can also occur in elderly people with preexisting cardiac problems. Extremely hot weather and elevated blood-alcohol levels are typical triggers. Some patients suffer frequent attacks, while others experience them only sporadically.(3)

While a person is standing, blood tends to settle in the legs. Maintaining the position for a long time can decrease blood pressure, which means the brain may not receive a proper blood supply; lack of sufficient oxygen and nutrients can lead to syncope. In general, though, people do not pass out simply from standing upright, because mechanisms within the body control blood pressure. In a condition such as vasovagal syncope, these mechanisms do not perform their functions appropriately, misfiring, over-firing, or under-firing; faulty mechanisms in the nervous system make syncope a possibility.(3) The same adjustments occur in everyone as we change posture, but those with vasovagal syncope have an abnormal reflex to this information: the messages from the baroreceptors flood in, and in overcompensating, the brain halts the messages that would constrict vessels and communicates the reverse. Vessels dilate, less blood reaches the brain, and fainting ensues.(2)

In general, physicians use the tilt table test to determine whether a diagnosis of vasovagal syncope is warranted. The patient is placed in a quiet room on either a hydraulic lift or a swinging bed that can rotate between 60º and 90º, moving the patient from a supine to a head-up position. Heart rate and blood pressure are monitored throughout the test. Although technique varies somewhat among practicing physicians, the patient is usually monitored in the supine position for five minutes while baseline heart rate and blood pressure measurements are taken. Next, the patient is moved into the head-up tilt position, and changes in blood pressure and heart rate, along with the appearance of symptoms, are noted every three to five minutes. If the patient experiences syncope, he or she is returned to the supine position and considered diagnosable. If no symptoms of vasovagal syncope appear after at least ten minutes, the tilt test is often repeated, this time with the patient given isoproterenol, which often shortens the time before syncope occurs. When isoproterenol is used, physicians usually require a loss of consciousness in the patient in order to support a diagnosis of vasovagal syncope.(4)

Medicinal treatments include beta-blockers, fludrocortisone, midodrine, and SSRIs (selective serotonin reuptake inhibitors). Beta-blockers block the adrenaline system, preventing the abnormal sympathetic reflex that precedes the drop in blood pressure. Fludrocortisone signals the kidneys to retain more salt and water, thus increasing blood pressure. Midodrine tightens blood vessels. SSRIs are usually used to treat depression or anxiety, but they also block the signaling in the brain that triggers blood vessels to dilate further.(2)

Instead of subjecting themselves to medications, patients with vasovagal syncope tend to choose lifestyle changes, and in most cases this is all that is necessary to control the condition. Recognizing personal triggers and learning to sense when an episode is about to occur are both necessary, and increasing water and salt intake can prevent attacks. Blood is largely composed of water and salt, so increasing the intake of each may raise blood pressure, possibly preventing syncope. More of both are needed especially in hot weather or during vigorous exercise.(2)

Patients who seek treatment outside of modern medicine often turn to licorice root, also known as sweet root, one of whose active ingredients is glycyrrhizic acid. It has been used worldwide for thousands of years to treat a variety of ailments, and recent studies have shown that it is useful in the treatment of heart disease. When consumed in large quantities, licorice root can raise the level of aldosterone in the blood. Although this would be considered harmful in most individuals, the increase in aldosterone can raise blood pressure, which is helpful to those with vasovagal syncope. If blood pressure rises around the time of a vasovagal episode, it might counteract the activity of the parasympathetic nervous system, reducing the chance of syncope.(5)

Although vasovagal syncope is not a serious medical condition, those who suffer from its effects, like Dirk, cannot write off its impact on their lives. Tough guy to the core, he believes that taking beta-blockers or any other medication would be admitting that he is incapable of controlling himself. As I researched vasovagal syncope for this paper, I tried to explain to him the physiological processes occurring in the body that have nothing to do with personal choice, and although he was interested, he still maintains that he can control the episodes. I hope this paper proves to everyone else who does not have a hard head that vasovagal syncope is not a matter of choice, apart from the choice of avoiding known triggers. I have asserted throughout this paper that vasovagal syncope is not a serious condition; it does, however, provide an interesting platform for research. When I began researching, I assumed I would find evidence of scientific research beyond merely describing the process of syncope. Instead, much of the knowledge seems to have been gleaned from observation of drug effects, so no treatment has been specifically designed for vasovagal syncope. Perhaps more research will lead to more conclusive knowledge about the condition.

References

1)Med Help International, This website offers a forum for those with medical questions, allowing them to ask the advice of a physician.

2)London Cardiac Institute, This organization provides information to patients on several conditions. The patients are referred to the pages by their physicians.

3)Karen Yontz Women's Cardiac Awareness Center, Health Wise Physician's Corner provides information about several medical conditions.

4)Tilt Table Test

5)Health and Age


Yellow Voices and Orange-Foam Squid: Questions abo
Name: Danielle M
Date: 2003-04-14 23:37:02
Link to this Comment: 5381


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"What first strikes me is the color of someone's voice. [V--] has a crumbly, yellow voice, like a flame with protruding fibers. Sometimes I get so interested in the voice, I can't understand what's being said." --From Synesthesia: A Union of the Senses by Robert E. Cytowic

What would you make of the preceding account? Would you think the speaker was...crazy?...on drugs?...making a play for attention? Would you be skeptical if the speaker told you it was her natural way of perceiving the world? In truth, it is an example of the way in which about one in every 25,000 people observes the world (1). The term (I hesitate to use "scientific term" for reasons I'll discuss further on) given to the condition, synesthesia, derives from the Greek roots syn, meaning together, and aesthesis, to perceive, and conveys the principal features of the synesthete's perceptual state (2). In synesthetic perception, stimuli activate not just one sense but several. An oral stimulus isn't a taste alone; it may also be a shape, a color, a movement (1). For example, a synesthete might explain that the taste of "squid produces a large glob of bright orange foam, about four feet away, directly in front of me" (3). Such joint perceptions are automatic and involuntary, just as ordinary perceptual experience is, and, unlike imagined images or ideas, synesthetic perception is not only vividly real, but "often outside the body, instead of imagined in the mind's eye" (2).

Though accounts of synesthetic experience are receiving increased study and documentation, many in the scientific community remain partially unconvinced, if not wholly dismissive. Lacking sufficient empirical, objective data to depict the synesthetic experience, synesthetes and researchers of the condition have had to combat doubt, disregard, and ridicule in defense of the condition's reality and validity. The question raised by synesthesia then becomes: why does science discount first-person evidence to such an extent? If a condition has little to no "objective" or empirical "proof," does that mean it can't exist? If researchers can produce no computer read-out, no resonance imaging, no technologically generated chart, should the scientific community turn up its nose?

The existence of synesthesia has been questioned and discussed for nearly 300 years, and it received the most enthusiastic investigation between 1869 and 1930 (12). At the time, many were fascinated by the unconscious mind and its possible links to synesthesia; yet interest began to wane when researchers were unable to reach any definite conclusions (4). A decade later, the advent of the behaviorist movement caused synesthesia to finally fall "off scientists' radar" entirely (3). The principles of behaviorism cast science as a strictly empirical field of study in which "anything meaningful" (5) had to be quantifiable or measurable by "objective" machines (4). Humans became scientific "subjects," individuality of experience was discounted, and "subjective experience, such as synesthesia, was deemed inappropriate for scientific study" (4). Studying and measuring behavior was held to be the only sufficiently reliable means of gathering information about the human experience; consciousness and subjective experience were irrelevant, or worse, simply wrong (5). Influenced by these popular behaviorist principles, modern scientists likely view reports of synesthesia as fine and well, but ultimately worthless as testable, provable scientific information. What use has "hard science" for whimsical metaphors about squid and orange foam?

These behaviorist-influenced attitudes make substantiation of synesthesia troublesome, for while the phenomenology of synesthesia makes it apparent that the condition is a conscious experience, that experience has yet to be fully captured by technological data (4). Because of the subjective nature of perception, the only true understanding researchers have gleaned has typically come from first-person accounts, and therein lies the trouble. Validation of the synesthetic experience "is largely aesthetic" and fundamentally ineffable, "a phenomenon whose quality must be experienced first-hand," and is thus difficult for science to accept (4). Luciano da Costa, in an article critiquing synesthesia research, exemplifies such scientific skepticism:

"The very features considered for its [synesthesia's] diagnosis...rely heavily on phenomenological evidence, which, of course, is subjective. There is no doubt that as such, i.e., without cogent physiological or anatomical substantiation, synesthesia is destined to be treated with understandable scientific caution...[E]ven if thousands of documented cases were available, that would not be enough to qualify synesthesia as a real physical phenomenon" (6).

Da Costa's sentiments echo the contemporary scientific reluctance to accept synesthesia as scientifically valid, and further illustrate the field's "refusal to take any subjective reports seriously" (4). So then, if something can't be objectively verified, can we believe in its existence? I'd assert that yes, we can--and we should. When dealing with human experience, subjective data is an inevitable part of research, and offers "substantially more than nothing" to the effort of making sense of human behavior (5).

What are the reasons for this refusal to entertain subjective validity? In addition to holding to behaviorist principles, the scientific community may be leery of synesthesia for other reasons, including its close relation to typical conceptions of metaphor and artistic expression. Creative language abounds with metaphors resembling synesthetic perceptions: sharp cheese, loud shirts, blue music. Hence the resulting "classic fallacy" of shelving synesthesia as "mere metaphor" (7)--that is, as the product of an active or overly fanciful imagination. This reaction is somewhat understandable, after all, because it is so difficult for synesthetes to portray exactly what synesthetic perception is like. Faced with such ambiguity, the best external observers can do is to approximate it to what they already know. The closest approximation most scientists are familiar with is metaphor and imaginative language, as it's regularly used in common discourse "to describe everything from food and wine to art and music" (1). Thus, it's understandable that scientists, without any other point of reference, would associate synesthetic expression with their own concept of metaphor. And if we conceive of our metaphoric descriptions with deliberate effort, wouldn't synesthetes be doing so as well? We don't experience cross-modal sensations, but we can create the idea of them--couldn't synesthetes just be convincing themselves that these creations are real? When one synesthete told others of her music/shape synesthesia, they could only assume she was "making it up to get attention," "had an overactive imagination," or "was spoiled and wanted attention" (8). Without experiential reference, it's quite difficult just to conceive of such radically different perception, and how can you accept what you can hardly imagine? As one synesthete explained, "Synesthesia isn't easy to fathom. People who don't have it have a hard time understanding...what it's like" (9). While understandable, to dismiss synesthesia on the basis that it seems impossible in one's personal conception of reality is to needlessly constrict the possibility of human experience and to hinder progress into new and challenging realms of inquiry.

Another reason is the belief that subjective reports make for problematic evidence. This is true in part: the subjectivity and individuality of perceptual experience make synesthesia not only difficult to describe, but also difficult to compare across synesthetes (10). Because different synesthetes--even those with the same sensory pairings--usually don't report identical responses, that variability has been interpreted as proof that synesthesia isn't real (8). Moreover, because synesthetes aren't data-reading machines, they don't "have access to [information about] the neural and cognitive processes" underlying their synesthesia, which necessarily limits the scope of information they can provide (10). These problems should not, however, result in the outright rejection of subjective reports. As there are significant aspects of synesthesia that can't be described or captured using third-person methods, first-person accounts of synesthesia are essential to its understanding. The subjective contributions of synesthetes are therefore quite valuable as research tools, and could offer significant contributions to knowledge about the condition (10).

Finally, scientists may further doubt synesthesia's reality because of the oft-held idea that if a person's experience of the world diverges dramatically from the "norm," that person must be mentally ill, or at least notably distressed. Yet synesthetes are neither. Very rarely, for those who experience responses across many senses, synesthesia becomes so overpowering as to be uncomfortable or even unpleasant; but for the majority, synesthesia is quite normal, even pleasurable (8). Clinically, the mental state of the majority of synesthetes is balanced, and the results of standard neurological exams prove normal (4). But "how is it that these perfectly healthy and otherwise neurologically normal people can experience color when there is not color there to be perceived?" (9). One synesthete encountered such doubts when she mentioned her color/word perceptions to a teacher, and was promptly reported as a schizophrenic (8). The teacher (like many scientists) must have assumed that no one could have such a radically altered idea of reality and be mentally stable. Abnormality is most often thought of as an obstacle, a deficit, a problem; synesthesia is not. As such, it challenges long-held conceptions of normal versus abnormal, and typical beliefs about reality and unreality. If a person experiences their world in a way radically different from us, aren't they necessarily crazy? Can someone be so different and yet be "normal?" This last question (to which I would argue yes) is potentially the most frightening of the questions raised by science's treatment of synesthesia. It implies that someone could perceive reality in a way wholly distinct from our own idea of reality and not be "crazy." This then raises the question: what, then, is crazy? If it's not always what we thought, could we be crazy? Small wonder scientists prefer to distance themselves from synesthesia and its "whiff of mysticism" (8).

Thus, while skepticism toward first-person accounts has merit, an uncompromising insistence on empirical data sacrifices a profound source of information to an excessive insistence on externality and objectivity. Rather than swing to one extreme (pure objectivism) or the other (pure subjectivism), perhaps there is a compromise in which each approach can be seen as mutually edifying. First-person reports can not only lay a basic framework for experimentation and the generation of new hypotheses, but can also be used to compare objective results to actual subjective experience. First-person data could also help to better explain experimental inconsistencies by providing insight into "individual differences between synesthetes" (10). Without this sort of real-world check of empirical data, the results would be substantially less valuable and applicable outside the laboratory. In the end, ignoring first-person accounts would not be "a victory for objective science," but a victory for "an intolerantly narrow vision of science" (5).

If something can't be imagined with our own personal experience, does that then mean it can't exist in another? Is our notion of reality the only truth? I don't have the ultimate answers to such sticky questions, but I do think that to dismiss synesthesia out of an unspoken fear of raising these queries and challenging the sense of complacency they offer would be both craven and detrimental. We shouldn't capitulate to fear or forego discovery in order to maintain a sense of stability. Synesthesia should be explored as far as possible, allowing first-person accounts to generate new theories and support what information researchers discover. In the end, conclusions about the reality of divergent human perception shouldn't rest exclusively with inflexible (and possibly unattainable) standards of objectivity. Scientists would be foolish to insist on it as they have in dealing with synesthesia; such disregard would deaden the nature of inquiry, of stretching beyond previous bounds and moving in new circles of thought and conception. We shouldn't allow the pursuit of knowledge to be so enslaved by technological, objective information that we doubt our own knowledge of the world around us. To do so would be to further distance human experience from the ability to know or posit truth.

References

1)Synesthesia, an interview with Richard Cytowic, on the ABC Radio National Transcripts site.

2)Synesthesia and the Synesthetic Experience, a site by the Massachusetts Institute of Technology.

3)Everyday Fantasia: The World of Synesthesia, a journal article by Siri Carpenter on the APA site.

4), 5) Synesthesia and Method, a journal article by Kevin B. Korb, on the Psyche site.

6) Synesthesia - A Real Phenomenon? Or Real Phenomena?, a journal article by Luciano da Costa, on the Psyche site.

7) Hubbard, E.M. and V.S. Ramachandran. "Psychophysical Investigations into the Neural Basis of Synesthesia." The Royal Society 268 (2001): 979 - 981.

8)Cytowic, Robert E. Synesthesia: A Union of the Senses. 2nd ed. Cambridge: Massachusetts Institute of Technology, 2002.

9)"Blue Cats and Chartreuse Kittens" by Patricia Lynne Duffy, a review of Duffy's book by Alison Motluk, on the salon.com site.

10) Towards a Synergetic Understanding of Synesthesia: Combining Current Experimental Findings with Synesthetes' Subjective Descriptions, a journal article by Daniel Smilek and Mike J. Dixon, on the Psyche site.


Implications of Post-Traumatic Stress Disorder for
Name: Stephanie
Date: 2003-04-14 23:42:15
Link to this Comment: 5382


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


War is a complex concept that is increasingly difficult to understand, particularly in an age that allows live images of combat to be beamed around the world. Many war films depict the brutalities of war and the effects war has on participants, but it seems that these representations merely skim the surface. The 20th century was an era that saw a significant amount of military action: World Wars I and II, the Cold War, Vietnam, and the Gulf War - millions of men fought, some survived and live among us today. Unfortunately, the war experience for many veterans is traumatizing and as a result, many have been diagnosed with Post-Traumatic Stress Disorder (PTSD). This disorder is often quite mentally debilitating; this, then, raises the question of the social implications of the disorder as well as whether this has any bearing on the necessity of war.

At the minimum, PTSD is a branch of emotion that stems from stress or anxiety. Stress is not uncommon among humans, as it can be caused by something as simple as gridlock or an argument. When we feel stressed, our body is attuned to exhibit the fight-or-flight response, during which "the body releases chemicals that make it tense, alert, and ready for action" (1). PTSD, however, is a specialized form of stress, for it occurs after traumatic events; these may include car accidents, earthquakes, rape, or military combat. People suffering from PTSD experience paranoia and flashbacks, and generally have difficulty engaging in normal daily activities (2). One Vietnam veteran diagnosed with the disorder explains that he often has extreme emotional outbursts: " 'I developed a nasty temper, became very nervous, and have bad dreams that take me back into the war, like it's happening all over'" (3). The flashbacks that many veterans experience suggest that PTSD is largely related to the way they remember the event.

In order to create a memory, the brain releases chemicals "that etch these events into its memory bank with special codes" (4). However, when one experiences a traumatic event, the memory becomes vivid because the context surrounding the event is so significantly different from anything the victim has ever experienced before. For instance, many Americans may find that they can remember what they ate for breakfast on the morning of Tuesday, September 11, but cannot recall what they ate on any other Tuesday. It seems, then, that during a traumatic event, our senses are heightened - this may be due to the fight-or-flight response that readies us for action. Since we sense (or know) that something is amiss, our brain releases more chemicals that allow us to be more alert; this in turn may be the mechanism that helps us to remember traumatic events so well.

Although many of us may have vivid recollections of 9/11, we may not necessarily feel traumatized by it - unless, of course, it directly affected our life in some way. Veterans, however, are often completely traumatized by war because it is such an unnatural (though increasingly common) experience. The majority of men thrust into combat are in their 20s and at this time are still developing as people. Consequently, when they are plucked from their homes and placed in an extremely foreign environment, they are forced to shed their identity, thereby necessitating "radical breaks ... between self-systems as a consequence of participation in distinct social systems" (5). This results in the construction of a "fractured" self that often leads to confusion upon veterans' return home: they feel different, and people say they seem different, yet they cannot seem to recover who they were before the war. This confusion is mainly due to the brutality that war entails: killing - the taking of another human life - is an act that is taught in basic training as necessary for survival. One veteran describes his experience in basic training:

'In basic training, I felt wonderful. I felt physically perfect, mentally alert ...
Naturally there was a certain amount of brainwashing that goes through basic
[training]. They would constantly pound it into you that you have to kill to
survive. You know, you are going to Vietnam and you are going to fight a war
and all you are going to do is kill, kill, kill, kill'(5).

This is a particularly compelling story because it juxtaposes the military's image as a highly organized and efficient unit with the reality of war as chaotic, terrifying, and anything but meticulously executed. Because young recruits who enter basic training may come to feel "physically perfect" or like a machine, they are unprepared for the terror they encounter when engaging in hand-to-hand combat, which becomes an act of stalking when soldiers are faced with the enemy. Not only must they face another person whom they have been instructed to kill, they must also face their own mortality as well as the enemy's. At precisely this instant, two people must decide who will blink first, fire, and save their own life at the expense of another who happens to be in the same position. In this way, combat appears to be extremely traumatizing due to psychological fragmentation.

Furthermore, it is also the repetition of killing that seems to have the potential to create a physical memory. One normally becomes inured and desensitized to a particular act after executing it many times, and to some degree this happens with killing as well; yet even though there might be less hesitation, the soldier still experiences the rush of adrenaline and then the emotional crash that follows the incident. Here, a veteran describes this phenomenon:

'We just kind of stumbled on the Vietcong and fortunately for us they had their
weapons laying on the ground. We had ours on our person. This one gook
reached for his automatic weapon and I shot him ... The first instant that it
happened was like a flow of adrenalin or whatever, I felt great. It gradually sank
in and made me sick. And then the next day I found out the kid was only 14 and
that did not help' (5).

This is similar to what many other veterans have told researchers about their experiences in combat. It seems that the "high" of killing may be in direct response to soldiers' survival mentality - once the enemy is eliminated, you are safe because you have diminished the threat against your own life. One veteran states that "Getting out safe was the only thing on [his] mind" and that "[He] only thought about survival. Every time somebody got killed [he] was glad it was not [him] lying there" (5). Another echoes the sentiment expressed by the previous veteran regarding his experience with killing which he says "felt great" but when he fully realized what had happened "the effect was one of depression, one of a nonfuture, discontent" (5).

It is these feelings of failure and wrongdoing that often make re-assimilation into civil society so difficult for veterans. This seems related to the idea of self-control in that soldiers do not have complete control over their actions while in combat since they are merely carrying out orders. Of course, they can make conscious choices like whether to fire their weapon or not, but because survival has become so ingrained in their minds, it seems that true self-control is obliterated from their being. It seems that upon returning home veterans may find the contrast between civilization and war overwhelming. In the war situation they lost control over their actions, whereas at home they reclaim it, but feel out of practice in controlling themselves. This harkens back to the idea of memory and how veterans perceive and remember the trauma they experienced.

Research shows that many veterans with PTSD exhibit significant changes in the make-up of the hippocampus and the medial prefrontal cortex, parts of the brain that govern learning and memory as well as our responses to fear and stress (6). Thus, because PTSD veterans' brains are different, they may also have trouble learning new material due to the neurological effects of the disorder.

There are myriad social implications of PTSD, which include difficulty feeling any strong emotion, particularly love, feeling disconnected from reality, and feeling guilty (7). All of these reactions impact veterans' relationships and their ability to function in society. If the effects of war are so debilitating, then why is war necessary? Even though wars are fought against so-called "evil," is there anything more evil than completely annihilating humans' sense of self - aside from the havoc that war wreaks on the area in which it is fought? It seems that the social implications of PTSD for combat veterans call into question the justification of war - for while war does occur between countries, it is carried out by people, by fellow human beings who should never have to bear witness to such extreme horrors.


References

1) Stress info
2) American Psychiatric Association
3) Kulka, Richard A., et al. Trauma and the Vietnam War Generation. New York: Brunner/Mazel Publishers, 1990.
4) Post-traumatic Stress Disorder, Wars, and Terrorism
5) Wilson, John P., et al., eds. Human Adaptation to Extreme Stress. New York: Plenum Press, 1988.
6) The Invisible Epidemic: Post-Traumatic Stress Disorder, Memory and the Brain
7) Post-Traumatic Stress Disorder, Understanding the Pain


False Memory Syndrome And The Brain
Name: Kathleen F
Date: 2003-04-15 00:37:07
Link to this Comment: 5384


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In the mid-nineties, a sniper's hammering shots echoed through an American playground. Several children were killed and many injured. A 1998 study by psychologists Dr. Robert Pynoos and Dr. Karim Nader, experts on Post-Traumatic Stress Disorder among children, of the 133 children who attended the school yielded a very bizarre discovery. Some of the children who were not on the school's grounds that day obstinately swore they had very vivid personal recollections of the attack happening (1). The children were not exaggerating, or playing make-believe. They were adamant about the fact that they were indeed there, and that they saw the attack as it was occurring. Why would these children remember something so harrowing if they didn't actually experience it? What kind of trick was their brain playing on them? Why did it happen?

False Memory Syndrome (FMS) is a condition in which a person's identity and interpersonal relationships are centered on a memory of a traumatic experience which is actually false, but of which the person is strongly convinced (2). When considering FMS, it's best to remember that all individuals are prone to creating false memories. A common experiment in Introduction to Psychology courses includes a test similar to this one:
Look at this list of words and try to memorize them:

sharp thread sting eye pinch sew thin mend

After a few seconds, the students are asked to recall these words, and are asked the following questions: Was the word "needle" on the list? Was it near the top? The majority of the class will vehemently agree that needle was, in fact, on the list - and not only that, it was actually quite close to being the first word. Some will attest to having vivid recollections of seeing the word "needle" on the page. These students have created a false memory. Due to exposure to words similar or related to the word "needle," they have very genuine memories of actually seeing the word on the list. Likewise, in the school children's case, the false memories were created by exposure to the stories of those who actually underwent the trauma. Our brain uses three distinct processes to receive, store, and access information. These are: sensory information storage, which acts like a very small holding tank, briefly storing information upon impact; short-term memory, in which the brain accounts for what has just happened, also based mainly on the senses (this has a bit more durability than sensory information storage because the brain can interpret the information it receives); and finally, long-term memory, the process by which the brain stores away significant or enduring information for retrieval at a later date (3).
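The word-list demonstration above (a version of what memory researchers call the Deese-Roediger-McDermott, or DRM, paradigm) can be sketched in a few lines of code. This is purely my own illustration, not part of any cited study: the studied words are all semantic associates of an unpresented "critical lure" ("needle"), and a memory that relies on gist rather than a literal record will "recognize" the lure. The `associates` map and the overlap threshold are hypothetical stand-ins for real semantic association strengths.

```python
# Sketch of the DRM-style false memory demonstration described above.
# Hypothetical associate map and threshold; for illustration only.

studied = ["sharp", "thread", "sting", "eye", "pinch", "sew", "thin", "mend"]
critical_lure = "needle"  # never shown, yet frequently "remembered"

def was_presented(word, study_list):
    """Veridical memory: a literal check of whether the word was shown."""
    return word in study_list

# A participant's memory is associative, not a lookup table: recognition
# is driven by semantic overlap with the studied set. We stand in for
# that with a hand-coded map of associates (invented values).
associates = {"needle": {"sharp", "thread", "sting", "sew", "thin", "mend"}}

def felt_familiar(word, study_list, threshold=3):
    """Gist-based memory: 'remembers' any word sharing enough associates."""
    overlap = len(associates.get(word, set()) & set(study_list))
    return was_presented(word, study_list) or overlap >= threshold

print(was_presented("needle", studied))  # False: it was never on the list
print(felt_familiar("needle", studied))  # True: a false memory
```

The gap between the two functions is the point of the classroom exercise: the literal record says "needle" was absent, while the gist-based judgment confidently says it was there.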

So where exactly is this false memory of the sniper attack being stored? Because the brains of young children are not as fully developed as the brains of adults, it's interesting to consider that Jean Piaget, the well-known child psychologist, asserted that his earliest memory was of a botched kidnapping at the age of 2. He distinctly remembered details of watching his nurse try to fend off the kidnapper as he sat in his stroller, and the policeman's uniform as he chased the kidnapper away. Thirteen years after the alleged attack, the nurse admitted to Piaget's parents that she had fabricated the story. However, the story, told repeatedly by the nurse, crept into Piaget's psyche and expanded until it took on a life of its own. Piaget later wrote: "I therefore must have heard, as a child, the account of this story...and projected it into the past in the form of a visual memory, which was a memory of a memory, but false" (4).

Due to the way our brains work, remembering a kidnapping incident at the age of 2 could be nothing other than a false memory. The left inferior prefrontal lobe is not yet developed in infants (4). It is this lobe that is necessary for long-term memory. The complicated and sophisticated encoding necessary for remembering such an event could not occur in the infant's brain. However, the adult brain works in a far different way. Valerie Jenks, a woman living in Idaho, was raped at the age of 14. After a few seemingly trauma-free years, she began to become depressed shortly after her marriage and the birth of her first child. In therapy, Dr. Mark Stephenson convinced her to try hypnotherapy, and after her very first session, Jenks came to believe that she'd been sexually abused by her family and friends (5). In Freud's theory of "repression", the mind involuntarily expels traumatic events from memory to avoid overpowering anxiety and trauma (6). Aided by the memory of her rape at age 14, Jenks created a false memory - an elaborately fabricated memory of rape and molestation by her father and other family members. These memories were "repressed memories", said Stephenson. Further, Stephenson said she answered "yes" to many of his questions, not verbally, but by tapping the index finger of her left hand. These tappings were "body memories", claimed Stephenson. According to him, some patients have tried to explain their physical distress as coming from repressed "body memories" of incest. Therapists have told patients that "the body remembers what the mind forgets," and that many of the physical sensations they are experiencing during therapy (like Jenks' finger tapping) are symptoms of forgotten childhood sexual mistreatment. These memories, said Stephenson, are documented in cellular DNA (7). However, absolutely no scientific proof supports this notion of a "body memory". Memories are encoded and stored in the three processes of the brain discussed earlier. A physical pain or sensation is not evidence that abuse occurred (5). Jenks no longer sees Dr. Stephenson and is seeking compensation for the horrific trauma she endured because of his treatment.

With organizations such as the False Memory Syndrome Foundation popping up around the globe, the awareness of FMS is spreading. False memories, in their most fundamental condition, are very real and existent in our world, as shown in the aforementioned psychology experiment. However, it is only when therapists, armed with the notion of Freudian "repressed-memories" and bizarre concepts like "body memories", implant unhealthy and false ideas into the brains of their patients that havoc ensues.

References

1)Recovered Memory Therapy and False Memory Syndrome, Recent Legal and Investigative Trends by Dr. John Hochman, M.D.

2) Memory and Reality: Website of the False Memory Syndrome Foundation

3) BodytalkMagazine.com How Memory Works

4) The Skeptic's Dictionary False Memory

5) Salon.com Health and Body - The Story of Valerie Jenks

6) How Memory Really Works Freud's Notion of Repressed Memory

7) FAQ for the False Memory Syndrome Foundation


Anosognosia for Hemiplegia: A Window into Self-Aw
Name: Cordelia S
Date: 2003-04-15 01:23:16
Link to this Comment: 5385


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

You wake up in a hospital bed, scared, confused, and attached to a network of tubes and beeping equipment. After doctors assault you with a barrage of questions and tests, your family emerges from the sea of unfamiliar faces surrounding you and explains what has happened; you have had a stroke in the right half of your brain, and you are at least temporarily paralyzed on your left side. You wiggle your left toes to test yourself; everything seems normal. You lift your left arm to show your family that you are obviously not paralyzed. However, this demonstration does not elicit the happy response you expect; it only causes your children to exchange worried glances with the doctors. No matter how many times you attempt to demonstrate movement in the left half of your body, the roomful of people insists that you are paralyzed. And you are, you just do not know it. How is this possible? You are suffering from anosognosia, a condition in which an ill patient is unaware of her own illness or the deficits resulting from her illness (1).

Anosognosia occurs at least temporarily in over 50% of stroke victims who suffer from paralysis on the side of the body opposite the stroke, a condition known as hemiplegia (1). Patients with anosognosia for hemiplegia insist they can do things like lift both legs, touch their doctor's nose with a finger on their paralyzed side, and walk normally (2). These patients are much less likely to regain independence after their stroke than patients without anosognosia, primarily because they overestimate their own abilities in unsafe situations (3). However, the implications of the illness go far beyond those for patients who suffer from it; anosognosia brings questions of the origin of self-awareness to the forefront. How can someone lose the ability to know when she is or is not moving? Is this some type of elaborate Freudian defense mechanism, or is this person entirely unaware of her illness? How is self-awareness represented in the brain, and is this representation isolated from or attached to awareness of others? Though none of these questions are fully answerable at this time, research into anosognosia has provided scientists and philosophers with insight into some of these ancient questions of human consciousness.

The question of "denial" versus "unawareness" is at the heart of debate between psychologists and neurologists about the origin of anosognosia (3). Proponents of a psychological explanation for the disorder insist that patients are aware on some level of their paralysis, but deny this information, as it would be traumatizing to the image of the self to admit to a lack of ability to control one's own body (4). However, this theory is countered by the fact that anosognosia in stroke patients almost always occurs after a stroke in the right hemisphere of the brain; though a stroke in the left hemisphere is no less devastating to the body, patients with left hemisphere strokes nearly always fully recognize the impact of their strokes on their bodies (5).



Another explanation of anosognosia draws on the fact that this disorder and hemiplegia are nearly always accompanied by hemispatial neglect, in which the patient does not recognize or attend to visual information on the side of the visual field contralateral to the brain damage (6). Some researchers believe that since right-brain stroke patients can be inattentive to visual information on the left side, they may simply be displaying the same inattention to the left half of their bodies when they have anosognosia (4). In other words, if one pays no attention to the left arm, one would not notice if the left arm is doing something odd, like not moving.

However, one of the premier experts on anosognosia, Dr. Vilayanur Ramachandran, has pointed out a key flaw in this theory: though hemispatial neglect patients with right brain damage acknowledge seeing stimuli presented to the left visual field if they are brought to their attention, for example by being moved or set on fire, no amount of attention drawn to an anosognosiac's immobile limbs will make her acknowledge that her limb is paralyzed (4). Usually the anosognosiac will insist that her limb is moving. If pressed, she will cite her arthritis or a lack of motivation as a reason for her immobility. When forced, the patient may even venture completely out of the realm of reality in defending her ability to move, stating that the immobile limb belongs to someone else, or is not a limb at all. Dr. Edoardo Bisiach, another expert in the field, once saw a patient who claimed that his paralyzed hand belonged to Bisiach himself. When Bisiach held his own two hands together with the patient's immobile hand, and asked how it was possible that he had three hands, the patient calmly replied, "A hand is the extremity of an arm. Since you have three arms, it follows that you must have three hands" (4). Ramachandran insists that this type of unrealistic rationalization is particular to patients with anosognosia; a patient suffering only from hemispatial neglect will not justify her beliefs with peculiar stories, but will accept a doctor's diagnosis (4).

Ramachandran favors an explanation of anosognosia dependent on both psychology and neurology. He maintains that due to the vast amount of sensory information the brain regularly receives, it must have a filter of some sort that lets it process only necessary information (4). Ramachandran's idea is that the left hemisphere of the brain contains a schema of the body in its entirety, which is updated as needed by a section of the right hemisphere. This right hemisphere function compares incoming sensory information to the left-brain schema, and decides which discrepancies are worth informing the left-brain about (4). For example, while a few sneezes can be brushed off, a fever will bring the right brain to inform the left-brain that one is sick. Not all discrepancies in information change the left-brain's schematic representation of the body, but the most important or startling ones do. Ramachandran believes that the right brain's ability to detect these discrepancies is damaged in patients with anosognosia (1). Thus, the left-brain receives no information about a change in the body's ability to move, and the current representation of the body as fully mobile is maintained (7).
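Ramachandran's proposal can be pictured as a simple comparator loop. The toy simulation below is my own illustration (not Ramachandran's work, and the names `schema`, `right_hemisphere_update`, and `detector_intact` are invented for this sketch): a left-hemisphere body schema persists unchanged unless a right-hemisphere discrepancy detector flags incoming sensory evidence as important. Disabling the detector reproduces the anosognosic pattern, in which the schema of a fully mobile body is never revised.

```python
# Toy model of Ramachandran's schema/discrepancy-detector account of
# anosognosia. All names and structures here are illustrative inventions.

schema = {"left_arm": "mobile"}  # the left brain's standing body image

def right_hemisphere_update(schema, sensory_report, detector_intact=True):
    """Compare incoming evidence to the schema; revise only on a flagged
    discrepancy. With the detector damaged, the schema is returned as-is."""
    if not detector_intact:
        return schema  # damaged comparator: the anomaly never registers
    updated = dict(schema)
    for part, state in sensory_report.items():
        if schema.get(part) != state:
            updated[part] = state  # important discrepancy: update body image
    return updated

evidence = {"left_arm": "paralyzed"}

# Intact detector: the body image is revised to match reality.
print(right_hemisphere_update(schema, evidence))
# Damaged detector: the old image of a mobile arm survives untouched.
print(right_hemisphere_update(schema, evidence, detector_intact=False))
```

The design choice worth noting is that the left-brain schema is never consulted against raw evidence directly; everything passes through the right-hemisphere filter, which is why damage to that one component leaves the patient's self-model frozen rather than merely noisy.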



Recent experiments have shown that if any information of a change in the body's abilities is present in anosognosiacs, it is extraordinarily inaccessible to the I-function. Ramachandran asked three hemiplegic anosognosiacs and two stroke victims with hemiplegia and no anosognosia to choose between winning a small prize for completing a task involving one hand (i.e., stacking blocks) and winning a large prize for completing a task involving two hands (i.e., tying a bow) (4). The hemiplegics with no anosognosia consistently opted for the smaller prize. However, the hemiplegic anosognosiacs chose repeatedly to attempt the two handed task, never learning from their failures, and never recognizing their limitations (4).

Amazingly, many anosognosiacs also seem unable to recognize their own limitations in other people (7). In a recent experiment, Ramachandran found that two thirds of tested hemiplegic anosognosiacs were not able to recognize paralysis in another person (4). He suggests that this is because we have a schema for the bodies of others as well as ourselves, and that they are represented in close proximity in our brains (7). This idea was supported by recent research with monkeys, which showed that the same areas that were active when a monkey completed a certain task were also active when he watched another monkey complete the same task (7). This information suggests that self-awareness is crucial in awareness of others. However, this research is in its early stages, and has not yet been used in the treatment of anosognosiacs.

Two methods of treatment, one primitive and one modern, are used currently to help bring a sense of awareness of failure to anosognosiacs. The first method, invented by Bisiach, involves pouring cold water in the ear of the patient on the side of the paralysis (4). Since nerves in the ear contribute information about the body's balance to the brain, Bisiach figured by shocking these nerves he might startle the part of the body responsible for updating the body schema with new information (4). This appears to work astonishingly well; patients undergoing this treatment often fully realize their paralysis for several hours. The second method of treatment uses virtual reality programming to give patients repeated feedback about their failures in a safe setting (3). This type of program helped I.S., a man with anosognosia for hemispatial neglect, without hemiplegia. I.S. was determined to drive, and saw no reason why he should not, until he was treated with a virtual reality program simulating street crossing. Since I.S. did not pay attention to cars coming on his left side, he consistently had "accidents", which caused the program to make crashing noises and flash warnings. This type of confrontation with his limitations seemed to cause I.S. to begin trusting his doctors over his own sense of self (3). Presumably, the shock of knowing that if he followed the information given to his I-function by the rest of his brain, he would die, caused him to realize that he needed to learn new ways to perceive himself and the world around him, perhaps even by trusting others over himself.



The issue of trust stands out as key in anosognosia, a disorder in which the patient can no longer trust her own information about herself. This seems almost unthinkable, hence the reason I chose to open this essay by addressing you, the reader. Could you possibly believe someone else's information about your body over your own? And, if you ever learned to survive as someone who could no longer trust your brain (and, thus, yourself), could you ever again have any type of free will? Could you be creative or original without fully believing in your own mind? The fact that free will is so hard to imagine without an intact sense of self makes me appreciate the seemingly ridiculous lengths to which anosognosiacs will go to defend their perceptions. For if there were an element of choice involved, I might rather believe that a man could have three hands than believe I had lost the ability to perceive myself. Perhaps, in anosognosia, ignorance about one's own ignorance is bliss.


References

1)Some Selected Aspects of Motor Cortex Damage in Man, Lecture notes explaining hemiplegia, hemineglect, and anosognosia

2)Neuropsychology and Neuropathology, Brief summaries of discussions on several neuropsychological disorders, including anosognosia and prosopagnosia

3)Unawareness and/or Denial of Disability: Implications for Occupational Therapy Intervention, Article discussing unawareness as obstacle in treatment, gives several case studies (download as pdf file)

4)The Brain that Misplaced its Body, Article discussing Ramachandran's research and several specific case examples (search in archive for article)

5) Unilateral Hemineglect and Anosognosia of Hemiparesis and Hemiplegia , Extremely comprehensive graduate student project on said topics

6) Memory: A Neurosurgeon's Perspective, Only briefly touches on anosognosia, but interesting regardless

7)Mind Over Body, Update on research and speculation on impact of anosognosia on general self-awareness


Video Games: A Cause of Violence and Aggression
Name: Grace Shin
Date: 2003-04-15 01:31:09
Link to this Comment: 5386


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

There is a huge hype surrounding the launch of every new game system - Game Cube, XBox, and Sony Playstation 2 being just a few of the latest. Reaching everyone from four-year-old children to 45-year-old adults, video games have raised concern in our society about issues such as addiction, depression, and even aggression related to their play. A recent study of children in their early teens found that almost a third played video games daily, and that 7% played for at least 30 hours a week. (1) What is more, some of the games being played, like Mortal Kombat, Marvel Vs. Capcom, and Doom, are highly interactive in their violence, inviting the player to slaughter the opponent. The video game industry even puts warnings like "Real-life violence" and "Violence level - not recommended for children under age of 12" on box covers, arcade fronts, and even on the game CDs themselves.

In the modern popular game Goldeneye 007, bad guys no longer disappear in a cloud of smoke when killed. Instead, they perform an elaborate death maneuver: those shot in the neck, for example, fall to their knees and then onto their faces while clutching at their throats. Other games, such as Unreal Tournament and Half-Life, are gorier. When characters in these games are shot, a large spray of blood covers the walls and floor nearby, and when explosives are used, the characters burst into small but recognizable body parts. Despite the violence, these are among the more popular games on the market. (2) When video games first came out they were certainly addictive; now, however, there seems to be a strong correlation between the violent nature of today's games and the aggressive tendencies of game players.

On April 20, 1999, Eric Harris and Dylan Klebold launched an assault on Columbine High School in Littleton, Colorado, murdering 13 and wounding 23 before turning the guns on themselves. Although nothing is for certain as to why these boys did what they did, we do know that Harris and Klebold both enjoyed playing the bloody, shoot-'em-up video game Doom, a game licensed by the U.S. military to train soldiers to effectively kill. The Simon Wiesenthal Center, which tracks Internet hate groups, found in its archives a copy of Harris' web site with a version of Doom. He had customized it so that there were two shooters, each with extra weapons and unlimited ammunition, and the other people in the game could not fight back. For a class project, Harris and Klebold made a videotape that was similar to their customized version of Doom. In the video, Harris and Klebold were dressed in trench coats, carried guns, and killed school athletes. They acted out their videotaped performance in real life less than a year later... (3)

Everyone deals with stress and frustration differently. However, when frustration and stress are acted out in anger and aggression, the results may be very harmful -- mentally, emotionally, and even physically -- to both the aggressor and the person being aggressed against. Aggression is action: attacking someone, or a group, with the intent to harm. It can be a verbal attack -- insults, threats, sarcasm, or attributing nasty motives -- or a physical punishment or restriction. Direct behavioral signs include being overly critical, fault finding, name-calling, accusing someone of having immoral or despicable traits or motives, nagging, whining, sarcasm, prejudice, and/or flashes of temper. (4) The crime and abuse rate in the United States has soared in the past decade, and more children are being treated for anger problems than ever before. One cannot help but wonder whether these violent video games play even a slight part in the current statistics. I believe they do.

Calvert and Tan (5) compared the effects of playing versus observing violent video games on young adults' arousal levels, hostile feelings, and aggressive thoughts. Results indicated that college students who had played a violent virtual reality game had a higher heart rate, reported more dizziness and nausea, and exhibited more aggressive thoughts in a posttest than those who had played a nonviolent game. A study by Irwin and Gross (6) sought to identify the effects of playing an "aggressive" versus "nonaggressive" video game on second-grade boys identified as impulsive or reflective. Boys who had played the aggressive game, compared to those who had played the nonaggressive game, displayed more verbal and physical aggression toward inanimate objects and playmates during a subsequent free-play session. Moreover, these differences were not related to the boys' impulsive or reflective traits. Thirdly, Kirsh (7) also investigated the effects of playing a violent versus a nonviolent video game. After playing these games, third- and fourth-graders were asked questions about a hypothetical story. On three of six questions, the children who had played the violent game responded more negatively about the actions of a story character than did the other children. These results suggest that playing violent video games may make children more likely to attribute hostile intentions to others.

In another study, by Karen E. Dill, Ph.D., and Craig A. Anderson, Ph.D., violent video games were considered more harmful in increasing aggression than violent movies or television shows because of their interactive and engrossing nature. (8) The two studies showed that aggressive young men were especially vulnerable to violent games and that even brief exposure to violent games can temporarily increase aggressive behavior in all types of participants.
The first study was conducted with 227 college students who had records of aggressive behavior and who completed a measure of trait aggressiveness; they also reported habitually playing video games. It was found that students who reported playing more violent video games in junior high and high school engaged in more aggressive behavior. In addition, time spent playing video games in the past was associated with lower academic grades in college, which is a source of frustration for many students and a potential trigger for anger and aggression, as discussed in the previous paragraph.

In the second study, 210 college students played either Wolfenstein 3D, an extremely violent game, or Myst, a nonviolent game. It was found that the students who had played the violent game afterward punished an opponent for a longer period of time than the students who had played the nonviolent game. Dr. Anderson concluded, "Violent video games provide a forum for learning and practicing aggressive solutions to conflict situations. In the short run, playing a violent video game appears to affect aggression by priming aggressive thoughts." Although this study measured only a short-term effect, longer-term effects are likely as the player learns and practices new aggression-related scripts that become more and more accessible when real-life conflicts arise. (9)

The U.S. Surgeon General C. Everett Koop once claimed that arcade and home video games are among the top three causes of family violence. Although some studies have found video game violence to have little negative effect on players, many others have found a positive correlation between video and computer game violence and negative behavior such as aggression. Thus, to fully assess the effects of game violence on its users, the limiting conditions under which there are effects must be taken into account, including age, gender, and class/level of education. (10) Nevertheless, as the studies show, violent games do affect children, especially young teens, and I feel that there needs to be stricter regulation of the availability of these games to young children.

References

1) BBC News Web site in UK.

2) Game Research Website, covering the art, the business, and the science of computer games.

3) American Psychological Association, Article on the main study discussed in this paper.

4) Mental Help Net, Psychological Self-Help. This site has a lot of interesting links to mental illnesses and just understanding personalities.

5) Calvert, Sandra L., & Tan, Siu-Lan. (1994). Impact of virtual reality on young adults' physiological arousal and aggressive thoughts: Interaction versus observation. Journal of Applied Developmental Psychology, 15(1), 125-139. PS 527 971.

6) Irwin, A. Roland, & Gross, Alan M. (1995). Cognitive tempo, violent video games, and aggressive behavior in young boys. Journal of Family Violence, 10(3), 337-350.

7) Kirsh, Steven J. (1997, April). Seeing the world through "Mortal Kombat" colored glasses: Violent video games and hostile attribution bias. Poster presented at the biennial meeting of the Society for Research in Child Development, Washington, DC.

8) SelfhelpMagazine. Article under teen help. It is a great library of various mental disorders and personal growth topics!

9) American Psychological Association.

10) Internet Impact This paper is a collaborative essay consisting of research and policy recommendations on the impact of the Internet in society.


Inner Vision: an Exploration of Art and the Brain
Name: Alanna Alb
Date: 2003-04-15 01:45:11
Link to this Comment: 5387


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Is artistic expression intertwined with the inner workings of the brain more than we would ever have imagined? Author and cognitive neuroscientist Semir Zeki certainly thinks so. Zeki is a leading authority on research into the "visual brain". In his book Inner Vision, he ventures to explain how our brain actually perceives different works of art, and seeks to provide a biological basis for the theory of aesthetics. With careful attention to detail and organization, he manages to explain the brain anatomy and physiology involved in viewing different works of art without sounding impossibly complicated – a definite plus for scientists and non-scientists alike who are interested in the topic of art and the brain. Throughout the book, Zeki supports his arguments by presenting various research experiments, brain image scans, and plenty of relevant artwork to clarify everything described in the text. By focusing mostly on masterpieces ranging from Vermeer and Michelangelo to Mondrian and to kinetic, abstract, and representational modern art, he convincingly explains how the color, motion, boundaries, and shapes of these works are each received by specific pathways and systems in the brain specially designed to interpret that particular aspect of the art, as opposed to a single pathway interpreting all of the visual input.

The subject matter Zeki approaches here is no easy topic to explain clearly, especially since much remains to be discovered in the field itself. Yet Zeki does a superb job. In my neurobiology class, I recently learned that if we bang our arm or rub our hands together, it is not really the body that feels the pain of banging or the sensation of rubbing, but rather the brain itself. After hearing this idea, I was very surprised and excited to see that Zeki had devoted all of chapter 3 to dispelling the "myth of the seeing eye". Here he specifically points out that a painter does not paint with her eye; she paints with her brain (13). I was especially pleased that Zeki made this point, since we so easily forget that the eye is merely the organ through which the brain receives filtered input from the outside world. While it is true that our ability to see depends on the eye and the brain working hand in hand, and damage to either one will ultimately affect the other, only the brain is capable of transforming the necessary input so that we are able to see the painting before us; the eye only serves to transmit signals from the outside world to the brain. We tend to think that people see with their eyes because of well-meaning but incorrect figures of speech commonly used in society (13-14). For example, we may hear something like "she has a good eye for painting ocean scenes" or "she eyes the angle of the Golden Gate Bridge just right – look how well she can sketch every line and curve on paper". Hearing such statements often enough eventually misleads us into thinking that one sees only with the eye.

Another hotly debated issue within our class is whether the brain really does equal behavior. Until I read this book, I had always been skeptical of that equation; but now I have come to believe that the brain does equal behavior. The reason is Zeki's belief that "the function of art and the function of the visual brain are one and the same" and that "art is an extension of the functions of the brain" (1). It is not the statement itself that changed my opinion, but rather the way in which he goes about proving it in every chapter.

I was very surprised to discover that the retina of the eye is not connected to all of the cerebral cortex; rather, it is connected to only the portion of the cortex considered the "vision center", where images received by the brain are processed. And it does not end there; that is only the beginning. This vision center is made up of smaller vision centers, each intricately connected to the others. Each center is specialized to process a certain aspect of a visual image. For example, the center Zeki labels "V1" is considered the cortical retina or "seeing eye" of the brain, since it selectively redistributes information received from an outside image to more specialized centers for processing: information about color and wavelength, for instance, is sent to the area known as "V4" to be interpreted by the brain.

It should be noted that while damage to V1 would most likely result in total blindness, damage to one of the more specialized centers results in the inability of the brain to process the specific input relating to the damaged area. Zeki relates one experiment in which a sponge was presented to a woman who had damage in the "association" center next to V1, but no actual damage in V1 itself. The woman was still able to see the sponge, but found herself incapable of understanding what it actually was. For me, this discovery was solid evidence that the brain equals behavior, because the woman's damaged brain led directly to her behavior: her inability to understand what the object placed in front of her was. If one small damaged spot in the brain could affect her behavior that drastically, then in general, whatever happens to the brain happens inevitably to behavior. A deficit in the brain equals a deficit in behavior as well.

Is art not then a behavior of the artist? The artistic brain yields artistic behavior. All artists' brains are similar, but only in the sense that they are all capable of exhibiting some type of artistic behavior. With this one similarity in mind, I realize that all artists still possess brains that are very different from one another's. If this were not true, I do not believe we would see all of the various kinds of abstract, kinetic, and other forms of modern art on display in museums today. Different artistic brains yield different artistic behaviors, that is, different works of art (or even the ability to appreciate particular works of art). Thus, it only makes sense to conclude that non-artists also have brains that differ from artists', since they are incapable of producing artwork; in other words, they exhibit different behavior than the actual artist does. For the majority of the remaining chapters, I think this is a key point that Zeki distinctly presents to the reader.

What about people who are incapable of appreciating a particular kind of art? They too have different brains. For example, Zeki mentions that a person who is prosopagnosic will not appreciate portrait painting because she is incapable of recognizing faces, due to damage in the face recognition area of the brain. However, she can still enjoy looking at other kinds of art, since the parts of the brain responsible for interpreting them have not been affected: viewing different kinds of art activates different parts of the brain's vision center. Hence, the same is true for the artist herself: she can only paint works whose elements the undamaged parts of her brain's vision center are able to process.

Although Zeki has no way of proving this theory, I am especially intrigued by the way he tries to defend his idea, and he does so quite convincingly. Zeki devotes his last chapter entirely to a discussion of Monet's brain, actually questioning whether Monet's brain may have had some damage in its vision center. He comes to this conclusion after careful observation of Monet's paintings of the Rouen Cathedral. This is a shock to read at first, for who would ever glance at Monet's paintings and claim a possible abnormality in his brain? However, Zeki hypothesizes that Monet might have been dyschromatopsic, or limited in his ability to see colors (210). As uncomfortable as this makes me, I fear that Zeki might be right, for he specifically notes how Monet failed to account for the varied lighting conditions in his paintings of the cathedral; this is significant because Monet painted the cathedral in all kinds of weather and lighting conditions. In each of the paintings, I observed how the brightness of the depicted light seemed to be the same throughout the whole picture, which is rather odd when given some considerable thought.

One of my last questions about Zeki's book is how he would explain different artists managing to paint similar works of art. The answer is as simple as this: "different modes of painting make use of different cerebral systems" (215). I think the message he tries to communicate is that while different artists are capable of creating similar paintings, the paintings are never exactly alike, because each artist makes use of different visual pathways in the brain to create her own unique work of art. Here, I think, is another perfect example of how the brain equals behavior: different artistic brains create distinct works of art.

Overall, I think the book is deeply intriguing and engaging; it draws the reader in so intensely that she cannot break free until the very last page. Zeki brings to light many new ideas about the visual brain, taking what little we do know about the brain and distinguishing myth from fact. It is interesting to note how much of the book consists of hypotheses proposed by Zeki, since there is still so much about the physiological workings of the brain that we have yet to discover. Nevertheless, I found it fun to compare the known facts to the theories and make guesses as to what might someday be found true. This is a most delightful book, and I highly recommend it to anyone with even the slightest interest in uncovering the mysterious links between the brain and visual art.


Looking Out for Future Pain
Name: Luz Martin
Date: 2003-04-15 01:51:14
Link to this Comment: 5388


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


Pain is one method the body uses to interpret the outside world. Our skin is covered with sensory neurons that are responsible for acquiring information about the body's surroundings (6). The nerve endings involved in sensing pain are called nociceptors (6). Most of these sensory receptors and nociceptors originate from an area near the spinal cord (6). Information from the sensory neurons is passed through intermediate neurons either onto motor neurons, producing a physical movement, or up to the brain (1). In the brain, the information is interpreted and behavioral and emotional reactions are created (6). The definition used by the International Association for the Study of Pain describes pain as a sensory and emotional experience associated with actual or potential tissue damage (2).

Adults can verbalize the intensity of their pain and thus help monitor the effectiveness of treatment when there is damage to body tissue. But how can adults interpret pain in infants, who cannot verbalize their experience? What concerns should we have when treating tissue damage in babies? What about treating babies still inside the womb?

It has been noted that a newborn's sensory nerve cells have a greater response rate than an adult's (4). With such sensitive sensory nerve cells, the spinal response to a stimulus is also increased, and it lasts longer than in an adult (4). These sensitive nerve cells also cover a larger portion of a newborn's skin than of an adult's (4). The skin areas served by these sensory neurons are called receptive fields (4). Receptive fields help the nervous system keep track of where a stimulus was received (4). With larger receptive fields, babies are unable to pinpoint the exact location of a stimulus (4).

Since newborns have very sensitive sensory nerves, the same response is produced to any stimulus regardless of its intensity (4). A newborn may react to a pinch in the same way as to a soft touch (4), responding to non-harmful experiences as if they were potentially harmful (4).

Questions have been raised about what the fetus itself senses when surgery is used to address fetal abnormalities (1). Fetal surgery involves opening the mother's uterus to perform corrections on the fetus (1). Once corrected, the fetus is returned to the uterus to complete the normal term of development (1). Fetal surgery is a way of increasing the fetus's chances of surviving after birth (1): the abnormalities corrected are ones that would otherwise inevitably have prevented the baby from living after birth, producing effects such as respiratory failure, neurological damage, and heart failure (1).

The development of the nervous system may help determine the level of neural activity that can be interpreted as pain. The fetus is able to respond with reflex motions as early as 7.5 to 14 weeks (3). Although at 16-35 weeks the fetus displays patterns of reflexes, the spinal cord is not yet fully developed (3). Usually the responses observed are exaggerated (2). Although the fetus is able to respond to a stimulus, it needs the cortex as well as memory to experience pain (2). With memory, the fetus could interpret sensations as pain on the basis of past experiences, which would lead to anxiety (2). Lloyd-Thomas and Fitzgerald suggest that the exaggerated response helps the fetus react to stimuli that it is not yet able to synthesize and respond to directly (2). The fetus possesses receptor systems that have not yet matured (2). Without memory, the fetus cannot interpret the stimulus as pain, but the fetal nervous system does respond to protect against harmful tissue damage (2).

Although the fetus and the newborn may not feel pain, interest in the issue has been raised by observations of long-term consequences for the body's pain systems. An injury experienced early in development may affect the response to pain in childhood as well as adulthood (4).

It has been observed in adults that when a sensory nerve is injured, the nociceptive (pain) system is altered (5). The pain experienced should last only until the tissue damage is repaired (5). But when tissue damage also injures the sensory neuron itself, the nociceptor is no longer considered reliable, and the pain system ignores the damaged nociceptor's signals of pain sent to the brain (4).

The only way pain can then be sensed in the region of the injured nociceptor is for a neighboring, still-functional nerve cell to monitor the area (4). This area stays on pain alert and remains sensitive to touch even after the tissue damage has healed (4).

Some suggest that because the fetus may not necessarily feel pain, its sensory neurons are in danger of malfunctioning (4). The neurons may be permanently altered if surgery is performed without precautions against damage to the sensory neurons (4). Disabling nociceptors may cause the nerve terminals of healthy sensors to spread, altering the areas each sensory nerve covers (4). The nervous system is then left with an altered picture of where pain is coming from (4). Oversensitive areas may develop as a result, remaining sensitive even after the damage that triggered them is gone (4).

The sensation of pain is a system that incorporates physical sensation as well as past experiences. The experience of pain through emotions may affect how well a person can function in daily life. Considering ways to prevent pain receptor damage in developing nervous systems may help avoid future pain. There is no better time to catch a problem than before it begins.

References

1)Annual Reviews Medicine, Fetal Surgery, By Flake Alan W. and Michael R. Harrison; 1995

2)British Medical Journal, For Debate: Reflex responses do not necessarily signify pain British Medical Journal; 1996 (28 September), By Lloyd-Thomas, Adrian R. and Maria Fitzgerald.

3)New England Journal Of Medicine, Pain and its Effect in the Human Neonate and Fetus. The New England Journal Of Medicine, Volume 317, Number 21: Pages 1321-1329, 19 November 1987. By K.J.S. Anand, M.B.B.S., D.Phil., And P.R. Hickey, M.D

4)Medical Research Council, The Birth of Pain. MRC News (London) Summer 1998: 20-23. By Fitzgerald M.

5)Richeimer Pain Medical Group, Understanding Nociceptive & Neuropathic Pain; December 2000.

6)The Association of the British Pharmaceutical Industry, The Anatomy of Pain.


Anorexia Nervosa: An issue of control
Name: Annabella
Date: 2003-04-15 03:05:09
Link to this Comment: 5389


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

As medicine has progressed through the years, so have the avenues for diagnosing the various causes of many disorders. Recently there have been new discoveries about the disorder anorexia nervosa. Anorexia nervosa is a life-threatening eating disorder defined by a refusal to maintain body weight within 15 percent of an individual's minimal normal weight. (2) Other essential features of this disorder include an intense fear of gaining weight, a distorted body image, and amenorrhea (absence of at least three consecutive menstrual cycles when otherwise expected to occur) in women. (1) Theories about the causes of anorexia nervosa include the psychological, biological, and environmental. This paper will discuss the question of the multiple origins of anorexia nervosa, and attempt to identify a common underlying cause.

Conservative estimates suggest that one-half to one percent of females in the U.S. develop anorexia. Because more than 90 percent of those affected are adolescent girls and young women, the disorder can be characterized primarily as a women's illness. It should be noted, however, that children as young as 7 have been diagnosed, and women of 50, 60, 70, and even 80 fit the diagnosis. (5) Like all eating disorders, it tends to develop around puberty, but can appear at any major life change. One reason younger women are particularly vulnerable to eating disorders is their tendency to go on strict diets to achieve an "ideal" figure. This obsessive dieting behavior reflects today's great societal pressure to be thin, as seen in advertising and the media. Others especially at risk for eating disorders include athletes, actors, and models for whom thinness has become a professional requirement. (3)

The classic anorexic patient (although more and more variations are being seen) is an adolescent girl who is bright, does well in school, and is not objectively fat. She may be a few pounds overweight and begins to diet. She comes to relish the feeling of control that dieting gives her and refuses to stop. However grotesquely thin she may appear to others, she sees herself as fat. To combat the slowing of metabolism that accompanies starvation, severely restrictive eating is combined with excessive, frenetic physical activity. (7) The term anorexia, or absence of appetite, is a misnomer, because patients are often hungry and can be quite preoccupied with food. They may cook elaborate meals for others, hoard food, or establish intricate rituals around the food they do eat. These behaviors resemble those seen in some patients with obsessive-compulsive disorder, and may extend outside the arena of eating. Generally, patients with anorexia nervosa are very secretive and defensive about their eating habits and may deny any problems when confronted. They may dress in oversized clothes (in an effort to hide their wasted bodies) or refuse to have meals with others. (3)

The hallmark of anorexia nervosa is denial and preoccupation with food and weight; in fact, all eating disorders share this trait, including binge eating disorder and compulsive eating. One of the most frightening aspects of the disorder is that people with anorexia continue to think they look fat even when they are bone-thin. Their nails and hair become brittle, and their skin may become dry and yellow. Depression is common in patients suffering from this disorder. People with anorexia often complain of feeling cold because their body temperature drops (hypothermia). They may develop long, fine hair on their bodies as a way of conserving heat. Food and weight become obsessions as people with this illness constantly think about their next encounter with food. (6)

As previously stated, the causes of anorexia nervosa can be biological, psychological, or environmental. One psychological theory is that food intake and weight are areas a young woman can control in a life otherwise dictated by an overly involved family, which may include a parent unduly concerned with weight or appearance. (8) From this stems the idea that one cause of anorexia may be the development of control issues. Many anorexics admit that they began the downward spiral toward anorexia when they started to perceive that they had lost control of their lives. This is especially common in college students who feel they have no control over the direction of their lives. Consider, for example, the case of Betsy, a hard-working college student in her second year. She is anxious about her future. Like her mother, Betsy gets depressed, and she goes through periods when she feels very discouraged about life in general. She is also uncertain about how she looks, though her friends think she looks great. (5) In an interview conducted with her psychologist, she stated, "I began to stop eating as a way to gain some control over my life. I was shocked at the measure of relief that it gave me to have control of this tiny thing. From then on controlling what I ate became an obsession, a very twisted obsession." (5) It seems, then, that one psychological cause of anorexia is a perceived lack of control in one's life. Through controlling their food intake, anorexics reaffirm that they have control. Can all of the psychological causes of anorexia nervosa be traced to a control issue?

Since the symptoms of anorexia often appear around puberty, another hypothesis is that the girl is afraid of becoming a woman, and therefore diets away all signs of puberty (i.e., breasts, hips, menses). The full-time preoccupation with weight also allows her to avoid adolescent social and sexual concerns and potential conflicts with parents. (8) In the case of the pubescent girl, it is possible to trace the cause of anorexia past the initial preoccupation with weight to deeper control issues. The pubescent girl, in the throes of fluctuating hormones and massive physical change, fits the bill of a person who would feel as if she had lost control of her world. Her body is turning against her; what better way to reassert control than to regulate its intake of food and thus its shape? Puberty is also the time in which a girl begins to identify herself as a woman, both mentally and physically. She will look to her environment for female role models to copy. This leads to another proposed cause of anorexia: the media. The role of the American cultural ideal of thinness is thought to contribute, at least by encouraging initial dieting, but the scope of its influence is unclear. On any given day, 25% of American men and 45% of American women are actively dieting; moreover, children are starting to diet as early as first grade. Anorexia is now being seen in nonwestern countries that receive American television, so it may be that a cultural ideal of thinness is a potent catalyst. (5)

We have seen that there are various psychological instigators for anorexia, most of which can be traced back to issues of control. However, many researchers propose that anorexia also has a biological origin. Most advocates of the biological explanation believe that anorexia has a genetic basis (i.e., is passed down from parents to children). (7) According to a recent study, mothers and sisters of people with anorexia or bulimia are at higher risk of having one of these disorders. Compared to the rest of the population, these mothers and sisters have a risk for anorexia that is 11 times higher and a risk for bulimia that is 4 times higher. (4) At this point one might ask, is it similar genes or a similar environment that is causing anorexia? As in most human diseases, the answer appears to be "both." Better understanding of the genetic roots of eating disorders will make it easier to identify and understand the environmental factors. Twin studies have suggested that anywhere from 50 to 90% of the risk for anorexia is genetic. (4) To date, medical science has not found a specific gene or genes that contribute to anorexia and bulimia. But this doesn't mean medical science has no clues. Genes that affect appetite may be involved. These genes would regulate the feeling of "fullness" after eating. People with anorexia may feel full early, and so suppress their appetites completely. (7)

There is also the issue of a common environment that might explain the higher rates of anorexia nervosa among mothers and sisters. Family dynamics must come strongly into play. The boundaries between generations tend to be blurred in the families of persons with eating disorders. That is, parents and children are constantly involved in each other's problems. Other researchers point to early events in family life that cause a "paralyzing sense of ineffectiveness." (5) In both cases, a perceived lack of control in one's life (due to overbearing parents, or a sense of ineffectiveness) can be associated with the initial issues. At this point the question is posed: since there is a link between family members and anorexia, why do only certain members of the family have anorexia and not the entire family? The answer lies in the same genetic and environmental differences that explain differences in our appearance and health. Some family members will inherit genes that predispose to eating disorders, and others will not. Some family members will be exposed to environmental agents that trigger disease, and others will not. (5)

In this paper the question has been raised: what is the cause of anorexia nervosa? This question led to the idea that part of the cause of anorexia nervosa could be a perceived lack of control. To explore this we looked at the three main ideas behind the causation of anorexia. Most physicians believe that anorexia can trace its origins to psychological, genetic, or environmental issues. In both the psychological and environmental cases it was demonstrated that, upon closer look, there was evidence suggesting a deeper problem of perceived loss of control. Consider, for example, the pubescent girl who finds herself going through many changes and becomes anorexic. At first glance the anorexia was caused by a fear of becoming a woman. Is this fear of becoming a woman not indicative of a greater fear of loss of control, not only of one's body (which is changing daily), but also of one's mind (which is at the beck and call of hormones)? In this case it is easy to link the cause of anorexia back to a control issue. Through the same method we can trace control issues back to the environmental cause. The family environment was used as an explanation of why mothers and sisters are more highly susceptible to anorexia. However, that same theory then suggested that early events in communal family life (e.g., the death of a parent) could cause a "paralyzing sense of ineffectiveness," which in turn caused anorexia. This again suggests that a perceived lack of control is culpable as part of the cause of anorexia. Many conditions, spanning from genetic to psychological, help cause anorexia, and in most cases the common thread of control issues can be found in each. Future breakthroughs might identify even more causes of anorexia, and hopefully answer the genetic question. Yet no matter how much we research, there will always be a part of the human mind that remains a mystery.


World Wide Web Resources
1)Anorexia Nervosa, a site that gives a thorough explanation of what anorexia nervosa is, and its symptoms.
2)Medical Student Conducted Studies, an interesting site that gives a listing of studies that medical students are conducting, along with their results. In this paper it helped to clearly outline the effects of anorexia.
3)Causes of Anorexia, this site lists the different causes of anorexia, and their effects on the body.
4)Entrez-PubMed, a wonderful site that lists medical studies conducted by the government.
5)Hospital Practice, a site that lists articles written by various doctors. This is an extremely informative site that helped identify the broadly different causes of anorexia nervosa.
6)Link Between Depression and other Mental Illness, an article that discusses the links between mental illness and depression. It addresses a wide variety of issues.
7)Genetics and Eating Disorders, this site looks at the link between genetics and eating disorders. It also looks at the different medical studies that have been conducted on the topic.
8)On the Teen Scene, this site looks at the effect of anorexia nervosa on teenagers. It examines why the disorder affects this age group so severely, and its ramifications.


Postpartum Depression
Name: Nupur Chau
Date: 2003-04-15 03:23:35
Link to this Comment: 5391


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Most of us were appalled when, in 2001, Andrea Yates, a Texas mother, was accused of drowning her five children (aged seven, five, three, two, and six months) in her bathtub. The idea of a mother drowning all of her children puzzled the nation. Her attorney argued that it was Andrea Yates' untreated postpartum depression, which evolved into postpartum psychosis, that caused her horrific actions (1). He also argued that Andrea Yates suffered from postpartum depression after the birth of her fourth child, and that she had attempted suicide twice because of this very disorder (1). What is postpartum depression, and how can it cause a mother to harm her very own children, altering her behavior towards her children in a negative way?
One in ten women experiences postpartum depression (2), a condition that occurs after childbirth and often goes undiagnosed. One reason for the lack of diagnosis of postpartum depression is a milder, more common form of depression after childbirth, often known as the "baby blues". The baby blues occur in mothers three to five days after childbirth (2), and may last for as little as a couple of hours or as long as a couple of weeks (4). These symptoms include

* mild sadness
* tearfulness
* anxiety
* irritability, often for no clear reason
* fluctuating moods
* increased sensitivity
* fatigue (2)

The treatment for the baby blues consists of frequent naps, a proper diet, and plenty of support from partners, family, and friends (3). Generally, the baby blues subside without any sort of serious treatment. However, the baby blues may evolve into postpartum depression. One study discovered a link between postpartum depression and the baby blues: of the women who were diagnosed with postpartum depression six weeks after delivery, two-thirds had experienced the baby blues (2).
Postpartum depression is more serious than the baby blues, with mothers experiencing symptoms of greater severity and longer duration. Almost ten percent of recent mothers experience postpartum depression (3), which can occur anytime within the first year after childbirth (3). The majority of these women have the symptoms for over six months (2). These symptoms include

* Constant fatigue
* Lack of joy in life
* A sense of emotional numbness or feeling trapped
* Withdrawal from family and friends
* Lack of concern for yourself or your baby
* Severe insomnia
* Excessive concern for your baby
* Loss of sexual interest or responsiveness
* A strong sense of failure and inadequacy
* Severe mood swings
* High expectations and over demanding attitude
* Difficulty making sense of things (3)

Consequently, the treatment for postpartum depression is more intense than that for the baby blues. Among the many treatments, many mothers undergo intensive counseling, take antidepressants, or even receive hormone therapy (3).
In rare instances, postpartum psychosis is diagnosed (only one- to two-tenths of a percent of new mothers experience it (2)). When experiencing postpartum psychosis, new mothers can have auditory hallucinations, as well as delusions and visual hallucinations (4), making them lose their sense of what is real and what is false. Treatment is imperative and is often carried out under immediate hospitalization.
What is it about childbirth that leads to depression, whether in a mild or severe form, and how does it affect the brain? The causes of postpartum depression are vague and have yet to be defined by researchers. It is thought that hormonal changes may trigger the depression. During the nine months of pregnancy, a woman's levels of estrogen and progesterone increase significantly (4); however, within the first twenty-four hours after childbirth, the hormone levels drop dramatically, back to the levels they were at before pregnancy (4). It is this dramatic hormonal change that is believed to be the cause of postpartum depression among new mothers. This hormonal change can be compared to the "mood swings" that a woman experiences during her menstrual period, a time when her hormone levels are slightly irregular (1). Additionally, thyroid levels can also drop after childbirth, and thus may be a factor in postpartum depression (4).
Consequently, one way to prevent another case like Andrea Yates' is to take postpartum depression seriously. Because the baby blues are so common, postpartum depression and psychosis are often misdiagnosed as the baby blues or, even more frequently, not diagnosed at all.


References


1)Study Works! Online: What is Postpartum Depression?

2)Postpartum Depression and Caring for Your Baby

3) Postpartum Coping: the Blues and Depression

4)Frequently Asked Questions about Postpartum Depression


Creativity and Bipolar Disorder
Name: Nicole Meg
Date: 2003-04-15 03:27:43
Link to this Comment: 5392


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

History has always held a place for the "mad genius", the kind who, in a bout of euphoric fervor, rattles off revolutionary ideas, incomprehensible to the general population, yet invaluable to the population's evolution into a better adapted species over time. Is this link between creativity and mental illness one of coincidence, or are the two actually related? If related, does heightened creative behavior alter the brain's neurochemistry such that one becomes more prone to a mental illness like bipolar disorder? Does bipolar disorder cause alterations in neurochemistry in the brain that increase creative behavior through elevated capacity for thought and expression? Is this link the result of some third factor which causes both of the two effects?

Centuries of literature and innumerable studies have supported strong cases relating creativity--particularly in the arts, music and literature--to bipolar disorder. Both creativity and bipolar disorder can be attributed to a genetic predisposition and environmental influences. Biographical studies, diagnostic and psychological studies and family studies provide different aspects for examining this relationship.

A 1949 study of 113 German artists, writers, architects, and composers was one of the first to undertake an extensive, in-depth investigation of both artists and their relatives. Although two-thirds of the 113 artists and writers were "psychically normal," there were more suicides and "insane and neurotic" individuals in the artistic group than could be expected in the general population, with the highest rates of psychiatric abnormality found in poets (50%) and musicians (38%). (1) Many other similar studies revealed this disproportionate occurrence of mental illness, specifically bipolar disorder, in artistic and creative people, including a recent study of individuals over a thirty-year period (1960 to 1990). Overall, when comparing individuals in the creative arts with those in other professions (such as businessmen, scientists, and public officials), the artistic group showed two to three times the rate of psychosis, suicide attempts, mood disorders, and substance abuse. (1)

Another recent study was the first to undertake scientific diagnostic inquiries into the relationship between creativity and psychopathology in living writers. Eighty percent of the study sample met formal diagnostic criteria for a major mood disorder versus thirty percent of the control sample. The statistical difference between these two rates is highly significant, where p<.001. This means that the odds of this difference occurring by chance alone are less than one in a thousand. Of particular interest, almost one-half the creative writers met the diagnostic criteria for full-blown manic-depressive illness. (1) This is not to say that the majority of artists are bipolar but rather that there is a considerably higher incidence in bipolar disorder among artists than among the general population.

Collectively, these studies and numerous others have clinically supported the existence of a link between bipolar disorder and creativity. Now the question arises: Is bipolar disorder the result of above-average creativity, is above-average creativity the result of bipolar disorder, or are the two the result of some third factor that causes both effects? From the sources I have encountered, I believe a stronger case is made for the last possibility, although it is impossible to scientifically or psychologically answer that question at this time.

Predisposition to bipolar disorder is genetically inherited and current studies suggest the same for predisposition to creativity but is there a common genetic factor, which determines the expression of both traits? If there were, neither creativity nor bipolar disorder would implicitly cause the other. A recent study hypothesized that a genetic vulnerability to manic-depressive illness would be accompanied by a predisposition to creativity, which, according to the investigators, might be more prominent among close relatives of manic-depressive patients than among the patients themselves. Significantly higher combined scores from a creativity assessment test were observed among the manic-depressive patients and their normal first-degree relatives than among the control subjects, suggesting a possible genetic link between the two characteristics, as both are prevalent in families with a history of bipolar disorder and not as evident in control families. (1) A wide variety of artistic and creative talents, ranging from music to art to mathematics, were exhibited among the family members of the bipolar patients as well. The varied manifestations of creativity within the same family suggest that whatever is transmitted within families is a general factor that predisposes them to a creative mentality, rather than a specific giftedness in a single area. The coexistence of creativity accompanied by manic depression, whether expressed in bipolar patients or not expressed in their predisposed family members, suggests that a third factor, yet unidentified, may be orchestrating the expression of the two.

Assuming both creativity and bipolar disorder, or at least predisposition to the illness, are expressed simultaneously, what accounts for heightened creativity in people upon onset of bipolar disorder? A deficit in normal information-processing could be manifested in a severe behavioral disorder, but it could also favor creative associations between information units or a propensity toward innovation and originality. (2) The altered neurological structure and functioning in the frontal lobe, prefrontal cortex, hippocampus, hypothalamus and cerebellum associated with bipolar disorder may also allow for more creative thought.

People with bipolar mood disorders tend to be more emotionally reactive, which gives them greater sensitivity and acuteness. This, coupled with a lack of inhibition due to compromised frontal lobe processes, permits them unrestrained and unconventional forms of expressions, less limited by accepted norms and customs. They are more open to experimentation and risk-taking behavior, and, as a consequence, more assertive and resourceful than the mean. (2) (3) Characteristics of the bipolar disorder, such as lowered inhibition, allow for freer expression of previously contained ideas and the constant flux between manic and depressive states also gives an unusual kaleidoscopic perspective of the world. All of these factors can account for increased creativity once the illness erupts. (5)

The current model supports the existence of a relationship between creativity and bipolar disorder as coexisting effects caused by some third factor. Uncovering the origin of the relationship between creativity and bipolar disorder will require continued studies, particularly those implementing brain scans and genetic isolation techniques, aimed at identifying this mysterious third factor that would link the two traits together. (4) The new equipment and tests available, such as PET scans, magnetic resonance imaging and gene mapping, have complicated the process by offering new ways to explain bipolar disorder as a possible collection of disorders presenting closely similar symptoms. Hence, the third factor may actually be a combination of multiple factors, like environmental insults to fetal development, hormonal imbalances in the womb and inordinate stress during development, in addition to genetic factors. (6) When it is determined which of these factors, acting either alone or in various combinations, constitute the mysterious third factor, the origin of the relationship between creativity and bipolar disorder will be unveiled.


References

1) Jamison, Kay Redfield. Touched with Fire. New York: Simon & Schuster, 1993.

2) Journal of Memetics, an article addressing creativity, evolution and mental illness.

3)Bipolar Disorder, an educational resource about bipolar disorder.

4) Manic-Depressive & Depressive Association of Boston, an article discussing the genetics of bipolar disorder.

5) Diagnostic and Statistical Manual of Mental Disorders, an online version of the resource book.

6) From Neurons to Neighborhoods, a book that addresses early development of the brain.


Phantom Limbs: What is the Cause of Sensation?
Name: Melissa Os
Date: 2003-04-15 03:36:42
Link to this Comment: 5393


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Do amputee patients actually feel pain in missing limbs? If so, how and why? These are questions that have been asked for many years. One particular case study illustrates this phenomenon. In 1983 a man named Aryee lost his right arm in an accident during a storm at sea. An experiment was done on him, with the results related in this dialogue between patient and observer:
"See if you can reach out and grab this cup in your right hand. What are you feeling now?"
"I feel my fingers clasping the cup."
"Okay, try it again." (As the patient reaches for the cup, the doctor pulls it farther away.)
"Ouch! Why did you do that?"
"Do what?"
"It felt like you ripped the cup right out of my fingers." (2)
This dialogue between Aryee and the experimenter describes a sensation common to many amputee patients. Although they are missing an arm, finger, or leg, they have sensations like Aryee's, including tingling, itching, and even pain where the limb used to be. Thus these patients feel sensations "which seem to emanate from the amputated part of the limb" (4). Phantom limb is the name for this phenomenon.

Phantom limb is not a newly discovered occurrence among amputees; it has persisted as a medical mystery for hundreds of years. Even the words associated with the sensation, "phantom" and "phenomenon," suggest that it is somehow out of the ordinary and without a rational explanation. Nonetheless, there is a long documented history of phantom limbs. In 1872, Silas Weir Mitchell, who had worked at a hospital in Philadelphia during the Civil War, wrote the first clinical documents on patients who experienced feeling where their missing limb used to be located (2), (5). Today amputees are still experiencing this phenomenon. The sensation of phantom limbs "occurs in 95-100 percent of amputees who lose an arm or leg" (6). At first people merely feel sensations coming from their amputated limb; in time, however, these can develop into stabbing, burning, or cramping pains (4). In addition, temperature and texture can be felt, such as warmth, cold, and rough surfaces (6). While the phantom limb is nowadays no longer considered a myth but rather a legitimate, medically recognized sensation, the source of these sensations is still a topic of much debate. Neurobiologists have advanced many theories for the cause of phantom limbs. Two of these theories are based on two differing concepts of the brain: one that the brain is hardwired, the other that the brain, and particularly the cortex, can be reorganized.

There was a time when the belief was held in the scientific community that adult brains lost their plasticity after adolescence (3). The idea was that there was no capacity for change in the cortex, but that the brain was hardwired with particular functions that continue despite the fact that a limb might be missing. Michael Merzenich supported this theory with research on monkeys. He amputated the index finger of an adult monkey and recorded the signals that reached the cortical map (2). In the somatosensory cortex there is a representation of the human body called the homunculus, which is like a map of the body (2). The experiment resulted in the discovery that neurons in the index-finger region of the homunculus fired whenever fingers next to the amputated one were touched (2). In this experiment there was no evidence of neuronal growth, but rather "unmasking [which] sends new impulses to the previously empty region" (2). Existing axon branches are uncovered at some point after the amputation and continue to operate. This concept is linked to the idea that body image, or the perception of the body, is genetically hardwired in the brain (1), and is thus the cause of phantom limbs.

The other theory about the causation of phantom limbs is based on a discovery by T.P. Pons. In his experiment on the Silver Spring monkeys, he discovered that the face region of the cortex took over the amputated arm's cortex. Thus there is a reorganization of the brain, in which axonal branches sprout from the facial cortex across the amputated limb's area (2). Sensations in the phantom limb can simultaneously be felt in the face. This may mean that the actual physical sensations the patients are experiencing are being processed by the face region, but associated with the missing limb. "The brain has reorganized itself after the injury [amputation] such that neurons that were responsive to the missing inputs become responsive to remaining inputs" (3). For Pons, and for the neurobiologists who followed this line of thought, the brain could not be hardwired. Instead this experiment suggests that if the cortex can change and alter in such a way, it can't possibly be as set in stone as once thought.

Phantom limbs are an intriguing concept. The thought that our body can feel what is not there has been a much debated and analyzed part of the human nervous system. Although many theories have been tested and debated, the idea that seems most probable is that the brain reorganizes itself to deal with the change in the body. Given all that the brain can do and does do, it seems impossible for something like phantom limbs to happen in a hardwired brain; rather, the possibility that the brain changes along with the body seems highly likely. The only question that remains, and which may require further research, is why the face region of the cortex takes over the amputated limb region as opposed to other regions.

References


1) Biology Articles, paper on differing theories.

2)Biology Homepage for Macalester, discussion of phantom limbs.

3)College Biology Page, information on reorganization of cortex.

4)Harvard Biology page, article on phantom limbs.


5) Ramachandran, Vilayanur. "Phantom Limbs and Neural Plasticity." Neurological Review March 2000: 317-320

6)MIT Phantom Limb Page.


Your Brain, Your Enemy
Name: Marissa Li
Date: 2003-04-15 04:10:05
Link to this Comment: 5394

As ridiculous as it seems, sometimes you read someone's online profile and they say something profound. A carelessly placed quote prompted a debate in my brain that stirred up some ample paranoia. "The greatest mistake you can make in life is to be continually fearing you will make one." Thank you to the friend from high school whose personal AIM profile gave me a topic for my neurobiology paper, PARANOIA.
If it has been confirmed that brain equals behavior, then why don't we fear our own thought processes? Persons with paranoia disorder are not aware that they are in fear of their own brains, but in some respect fear of oneself and of what one's brain can create is exactly what persons with paranoia disorder experience. Everyone experiences small doses and bouts of paranoia on nearly a daily basis, but not everyone lives under its effects. Those with paranoia disorder deal with a constant nagging that they cannot control because it tends to control them; hence, your brain as your enemy. Though the causes of paranoia are not clearly defined in either social or medical fields, the obvious truth is that paranoia stems from the brain and the nervous system, causing persons to be "highly suspicious of other people" (4). According to studies, paranoia stems from several possible areas. "Potential factors may be genetics, neurological abnormalities, [and] changes in brain chemistry. Acute, or short-term paranoia may occur in some individuals overwhelmed by stress" (4).

In terms of genetics, paranoia is not defined as something strictly hereditary; however, there is a tendency towards its occurrence in families with members who have schizophrenia or other mental disorders (6). Socially speaking, paranoia appears to be passed down from parent to child through sheer exposure and environment. If certain personality traits are innate within a person, then the possibility of a genetic inclination towards paranoia does not appear way off base. This of course stems from the discussion of whether personality is developed or innate. In almost everything somebody does, his or her personality comes through. The question of nature versus nurture starts at the very root of the physical structure, straight from the brain and the nervous system, the decided director of behavior.

Biologically, some studies of schizophrenia and other psychoses have shown actual irregularities in the composition and functions of a paranoid brain. "The search [for abnormal brain chemistry] has become very complex, as more and more of the chemical substances that carry messages from one nerve cell to another—the neurotransmitters—have been discovered" (1). Many examinations of the paranoid brain have shown irregularities in the firing of neural circuits and decreased activity in the prefrontal cortex, causing what is assumed to be an "impaired ability to judge whether their [a person's] fears are rational" (6). Does simple misfiring cause paranoia disorder, or just paranoia itself, which is then perpetuated by a brain that insists upon creating fear within the individual? If everyone's neurotransmitters fall out of place now and then, what judges true paranoia? Is it created through an education of fear and insecurity, or is it completely biological, and then, of course, is biology affected by one's surroundings? That paranoia is created in the brain and nervous system is a given; the question, however, is what consumes the brain so much that it is forced to create, or innately have, a personality disorder.

Stress is another external factor, similar to the idea behind the environment's effect on sustaining paranoia disorder. Paranoia tends to be "more prevalent among immigrants, prisoners of war, and others undergoing severe stress" (6). This is usually seen as a more "acute" or temporary form of paranoia, but it is still a convincing argument for nurture as a cause of paranoia disorder.

Paranoia is not something easily curable. Because sufferers of their own internal fears are so self-absorbed, they are "immune to reason" (5). They are also difficult to treat "because the person may be suspicious of the doctor" (4). Though medications and therapy are very common attempted remedies for those with paranoia disorder, if a person cannot control his or her own mind, then how can that person be expected to change what the mind is doing to them? Because a paranoid personality has to deal with its own issues on a consistent basis, a person suffering from paranoia automatically gets drawn into isolation.

The basic symptoms of paranoia are "concern that other people have hidden motives, the expectation of being exploited by others, an inability to collaborate, a poor self image, social isolation, detachment, and hostility" (2). Everyone has moments of each trait, but the extent to which one harbors fears of isolation, detachment, and so on is what actually constitutes paranoia. Regardless of whether paranoia is a biological, chemical, natural, innate, external, or stress-induced condition, in any form it causes a human being to fear their own control and their own brain. Persons with paranoia believe within themselves that they have control, and yet they are the ones who force themselves to become "aware" of their surroundings and insecure around all those who surround them.


1) On the Couch: Faces of Paranoia

2) Paranoia

3) Paranoid Personality Disorder

4) Paranoid Personality Disorder

5) Self Protection or Delusion? The Many Varieties of Paranoia

6) Useful Information on Paranoia


Men are from Mars, Women are from Venus -- Brain a
Name: Arunjot Si
Date: 2003-04-15 05:51:18
Link to this Comment: 5395


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

If we were to examine a high school calculus classroom or the staff of an engineering program at a college or university, chances are that the male to female ratio would be significantly skewed. Why are women and men so different in their choices and behavior? The brunt of popular opinion focuses on the environmental cues that lead to our distinct behaviors. But is there also an innate biological basis for the choices and differing abilities of men and women? Cognitive functioning and brain processing differences between the two genders have been a point of interest and contention for many years. The purpose of this essay is to explore whether neuroanatomical and genetic differences between males and females play a role in the development of "gender-specific" behaviors, perceived intellectual strengths, and professional choices.

Equality regardless of gender or creed is an axiom crucial to modern society. And yet even in the 21st century, the number of women in certain "male dominated" professions has remained fairly unchanged. Many social theorists believe that women are discouraged from such professions and that, given an unbiased, level playing field, demand for these professions would be identical among males and females. Mary Pipher, a psychotherapist for adolescent females, writes, "With girls... their success is attributed to good luck or hard work and failure to lack of ability, with every failure, girls' confidence is eroded. All this works in subtle ways to stop girls from wanting to be astronauts and brain surgeons. Girls can't say why they ditch their dreams, they just 'mysteriously' lose interest" (10). Experiments have shown that women perform better when given tests that they believe are unbiased toward either gender. However, it may be naïve to accept Pipher's statement without studying the innate differences between males and females.

Contrary to popular belief, gender and anatomical sex are two distinct and separate constructs, as each develops at different times and in different parts of the body. John Money coined the term "gendermap" for the phenomenon that codes for masculinity or femininity (1). At a very early age, and through an interaction of both nature and nurture, this gendermap imprint is established. What makes gender identification and sex so frequently parallel is that gendermap development is also notably induced by hormones emanating from the developing fetus (1).

Behavioral Differences:

Though there are many similarities in the cognitive abilities of men and women, there are also discernible differences. For the most part, the behavioral differences between the intellectual capacities of the sexes have more to do with patterns of ability than with actual intellectual capacity (3). For one, attention and perception differ early on. Baby girls have been noted to gaze longer at objects than baby boys; later, girls rely on landmarks and memory for guidance. Boys, on the other hand, have better visual-spatial ability, such as aiming at stationary or moving targets, and detect minor movements in their visual fields more easily. The fact that males perform better in navigation fits the theory that, evolutionarily, many of these abilities would have been important for survival in hunter-gatherer societies, where males navigated unfamiliar terrain while hunting and females foraged nearby areas gathering food (3). Another difference lies in verbal ability. Women have repeatedly been shown to excel in language and in tasks involving manual dexterity and perceptual speed, such as visually identifying matching items. Men appear to have an advantage in tasks requiring quantitative and reasoning abilities and excel in math as well as science (14).

Neuroanatomical Differences:

There are epidemiological suggestions that neuroanatomic differences may contribute to the cognitive functioning of males and females, although the literature is by no means conclusive. While it would be ethically suspect to draw firm conclusions from observational anatomic research, it is useful to distill the anatomic differences cited in the literature to date.
Comparison of size shows that the male brain is on average 10% larger than the female brain, although women usually have a larger percentage of information-processing gray matter. A greater proportion of gray matter suggests a greater processing capacity, which is why the belief that greater head size indicates greater intelligence is invalid in this instance. Women's brains, albeit smaller, are more efficient, which would explain why the sexes score similarly on intelligence tests (9). Magnetic Resonance Imaging has shown that male brains contain more white matter and cerebrospinal fluid than female brains, which may contribute to better spatial and geometrical capability (11).

Other neuroanatomical discrepancies between the sexes lie in the hippocampus, hypothalamus, and corpus callosum. The right cerebral hemisphere is larger in males, possibly contributing to the aggressiveness typical of male behavior. One test that supports this asymmetry is that male rats given testosterone (the principal male sex hormone) develop a thicker right hemisphere (2). In females, the cerebral hemispheres are symmetrical, with both functioning equally in the processing of speech. This greater interchangeability of hemispheres may relate to increased language prowess in females (12). For all these dissimilarities, it is important to keep in mind that the published differences above represent only gross averages of study populations. They cannot and should not be used to prove causality directly, but rather suggest a potential avenue for further exploration.

Genetic and Physiological Differences:

The genetic makeup of individuals tends to dictate physiological differences. An individual with an extra Y chromosome, an XYY instead of XY genotype, will not only have a different phenotype but, it is argued, will be more aggressive due to the increased "maleness." XYY syndrome brings up another very intricate issue: the intersection of criminology and behavioral genetics, since XYY subjects may be more violent. Adoption and twin studies also show a genetic linkage to certain behavior. Identical twins are genetically identical, and because they show similar patterns of criminal behavior, it is suspected that such behavior is genetically linked (15).

Infants have been shown to have differences in behavior at a young and tender age, preceding much environmental influence. One study reported that the least sensitive female infant may be more sensitive than even the most sensitive male. Female infants appear more sensitive to noise and are also more social. "They are more inclined to "gurgle" at people and to recognize familiar faces than baby boys" (8). The behavioral differences between infants of different sexes appear very early in life, indicating that the mechanism controlling these behavioral patterns is innate and not learned from society.

The differences between the sexes may begin even earlier than the cradle, in perinatal existence. In a study by Emese Nagy, the heart rates of 99 newborns were measured with simultaneous video recording of their behavior. The study showed that alert newborns display differences in heart rate similar to those found in adults: males had a significantly lower baseline heart rate than females, suggesting that heart rate is gender dependent from birth onward (6).

Hormones also seem to play a major role in sexual differences. Sexual dimorphism according to some studies relies on the presence of androgens such as testosterone. If absent, this leads to female gendermaps. Others suggest the presence of estrogen influencing female gender. In rats, there is a region called SDN (sexually dimorphic nucleus) in the brain that is larger in males. Giving testosterone can actually increase the size of the SDN (2).

An intriguing issue is that estrogen and progesterone levels are much higher in women and fluctuate across the lifespan: at puberty, seasonally, during the menstrual cycle, and after menopause. Researchers have examined the contributions that these hormones make to behavior and learning through various tests. Females who have been exposed to high levels of testosterone, as in congenital adrenal hyperplasia, demonstrate more aggressive behavior as well as improved spatial skills, behavioral strengths of the opposite sex (14). Hormone replacement therapy is actually used in medicine today for post-menopausal women; increasing certain hormone levels improves their attention, similar to the situation of adolescent girls with dyslexia, whose reading abilities improve as their estrogen levels rise during puberty.

Environment:

Although there is much evidence suggesting that gender differences are based on natural and physiological distinctions in our bodies, environmental and social influences certainly play a major role as well. From day one, starting with their noisy entrance into this world, babies are viewed according to their gender, even color-coded right away. Oh, it's a girl! God forbid this child ends up wearing blue; she may have a social identity crisis. With their clothes, toys, and even rooms decorated accordingly, it would be difficult to remain untouched by the social stereotyping that goes on throughout life. Society plays a role in initiating and perpetuating the role-assignments of the genders, influencing decision-making and social behavior.

There are, however, cases that bear directly on the nature versus nurture debate, such as the infamous Joan/John case. In 1967, a twin boy suffered an accidental castration, and a medical decision was made to raise the child as a girl with the help of surgery. For the next two decades, many similar sex reassignments were performed. Twenty-five years later, this medical "success story" came back to haunt those same sexologists when Joan, though initially unaware of the facts, could not adjust to the change and chose to return to her former sex (7).

Conclusion:

In a society where the exploration of innate variation is a topic of ethical controversy, it is only prudent to approach any data or discussion on gender difference very gingerly. Data interpreted to show genetic biases for differences among humans in intelligence, motor learning capabilities, criminality, and a broad range of other behaviors has, unfortunately, been used to support racism and other forms of bigotry (3). Because of such societal interpretations, scientists are becoming much more cautious of the conclusions they draw in an effort to avoid discriminatory sociological or political ramifications (13).

Throughout this entire semester, we have discussed the proposition that brain equals behavior. Quantifying exactly what behavior to expect from a man or a woman is an enigma, as we are talking not just about the anatomical variations one is born with, be it an XX or an XY sex chromosome pair, but also about the changes that occur over one's lifespan. The brain is a structure that continues to evolve, whether because of developmental stages or the stimuli and experiences of life. While the dominance of nurture over nature is no longer assumed, the relative contributions of genetics and environment to sexual identity remain uncertain.

References

1) Gender Identity Disorder by Anne Vitale

2) The Role of Estrogen in Sexual Differentiation by Elaine Bonleon de Castro

3) Gender Differences in Cognitive Functioning by Heidi Weiman

4) Sex on the Brain - Biological Differences between Genders by Deborah Blum

5) Cognitive Development

6) Gender-Related Heart Differences in Human Neonates by Emese Nagy

7) Boys will be Boys: Challenging theories on Gender Permanence by Josh Greenberg

8) Neural Masculization and Feminization by Mary Bartek

9) Thinking about Brain Size

10) Gender Issues - Excerpt from "Reviving Ophelia" by Mary Pipher

11) Women's Brains - More Effective?

12) Speech Processing in the Brain

13) The Nature Versus Nurture Debate

14) The Genetic-Gender Gap

15) Explanations of Criminal Behavior


Why do only some depressed people commit suicide a
Name: Irina Mois
Date: 2003-04-15 06:56:48
Link to this Comment: 5396

Depression can be a very debilitating and devastating illness. Nonetheless, most people, through medical treatment, counseling, and/or emotional support from family and friends, can overcome it. The question remains: what about the people who do not successfully overcome this illness? What happens to them? The most obvious answer is that they end up committing suicide (although some end up living with the illness for the rest of their lives). I had always wondered what it is that makes one person strong enough, or, as some people believe, selfish enough, to commit suicide. Is it something in their brain chemistry, their personality, their surrounding environment, their diet?

Here are a few facts about depression and suicide:
1. "Suicide is the eighth leading cause of death in the United States and is among the three leading causes of death for those aged 15 to 34 years. For every person in the U.S. who dies by suicide, 10 people attempt suicide but survive." (1)

2. 80% of people who suffer from depression never attempt suicide. (2)

3. There are 1 million suicides per year worldwide. (5)

Depression can be caused by a number of things, such as diet, genetic makeup, or a traumatic event. However, it is believed that the final step before depression occurs takes place in the brain: serotonin, dopamine, and/or norepinephrine levels are disrupted, leading to depression. (6)

There are several theories on why only a fraction of depressed people commit suicide. There is evidence today to suggest that the pre-frontal cortex of suicide victims, where executive decisions are made, is malfunctioning; specifically, the serotonin-based "braking" system that restrains extreme actions. (2) Another part of the brain thought to differ in suicide victims is the brain stem. Mark Underwood, a neurobiologist at the New York State Psychiatric Institute, has found 30 percent more serotonin neurons in this area, along with lower serotonin activity; these neurons seem to be smaller and malfunctioning. (2) Because suicide victims seem to have more serotonin neurons, it is believed that they inherit this problem (given that one is born with a fixed number of neurons). However, environmental factors should also be taken into consideration. For example, people who have experienced abuse during their childhood are prone to greater impulsivity. (5)

Currently, Selective Serotonin Reuptake Inhibitors (SSRIs), which increase serotonin levels, are known to alleviate severe depression and, with it, the probability of suicide. Cognitive therapy has also been shown to reduce the probability of suicide without treating the depression itself; the person is still depressed but less likely to commit suicide. If that is the case, however, it is unclear how the suicidal individual overcomes the low serotonin activity and the small, malfunctioning serotonin neurons, since cognitive therapy does not change the chemistry of the brain. Therefore, it may be that low serotonin activity only causes depression and is not related to the probability of a depressed person committing suicide.

To complicate the situation, there are two types of suicidal thinking: that of a non-depressed person and that of a depressed person. Depressed suicides are likely to happen suddenly, whereas "normal" suicides (there is nothing normal about suicide, but the term is used here for people who do not suffer from depression) are more planned. Also, depressed individuals are more likely to first try cutting themselves in order to escape the mental pain. (7) If that does not work, they are likely to attempt suicide.

There are still several statements which do not make sense to me. One such statement is: "Serotonin, which influences how nerves in the brain transmit messages, seems to work abnormally in the frontal lobes of people who commit suicide. In other words, it doesn't trigger the normal restraint against extreme actions like suicide." (5) If that is the case, why is it that these people do not kill other people, or rob stores, or rape, or poison the NYC water system? If they can't control extreme actions, why is it that the only extreme action which they take is suicide? Once again, serotonin levels only seem to cause depression but may not be directly responsible for the increased probability of suicide.

Another statement which disturbed me was this description of depression: "Depression is a disorder of the brain and body's ability to biologically create and balance a normal range of thoughts, emotions, & energy." (4) First, how do we know what "normal" means? Who defines normal? Second, thoughts and emotions are not just created by the brain; there has to be some kind of outside input in order for a thought or emotion to be created. Maybe someone pissed you off, or maybe you had an experience which really left a mark on you. So I believe that it is a lot more complex than just "the brain creates thoughts/emotions." The claim about the creation of energy is even more dubious, since energy depends on nutrition, sleep, stress level, and so on. A definition like the one above is nothing short of complete ignorance on the part of the person who wrote it. The worst part is that it comes from a site where FAQs about depression are posted. I can only be outraged that its authors want to share and spread their ignorance to other people who are in desperate need of answers.

Although I have not found an answer to my original question, I do finally understand why it is that the topic of depression and suicide is taboo. It is clear from the sites I have visited that suicide is thought of as a brain disease. It is an action which is done on impulse and it implies that the person does not have control over their actions. This obsession with control (which we have partly covered in class) is extremely inappropriate in this case. Depression is a lot more complex and it is affected by biological, environmental, and genetic factors. A person's ability to control their depression/suicide tendencies may not even exist. We do not know enough about this illness to make it taboo.


1) Kevin Malone, M.D. and J. John Mann, M.D.
2) 3) 4) 5) 6) 7)


The Effects of Schizophrenia on the Brain
Name: Adina Caza
Date: 2003-04-15 07:20:52
Link to this Comment: 5397


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Schizophrenia is a severe mental illness that affects one to two percent of people worldwide. The disorder can develop as early as the age of five, though it is very rare at such an early age. (3) Most men become ill between the ages of 16 and 25, whereas most women become ill between the ages of 25 and 30. Despite these differences in the age of onset, men and women are equally at risk for schizophrenia. (4) There is as yet no definitive answer as to what causes the disorder. It is believed to be a combination of factors, including genetic make-up, pre-natal viruses, and early brain damage, which cause neurotransmitter problems in the brain. (3)

These problems cause the symptoms of schizophrenia, which include hallucinations, delusions, disordered thinking, and unusual speech or behavior. No "cure" has yet been discovered, although many different methods have been tried. Even in these modern times, only one in five affected people fully recovers. (4) The most common treatment is the administration of antipsychotic drugs. Other treatments that were previously used, and are occasionally still given are electro-convulsive therapy, which runs a small amount of electric current through the brain and causes seizures, and large doses of Vitamin B. (3)

Due to neurological studies of the brain, antipsychotic drugs have become the most widely used treatments. These studies show widespread abnormalities in the structural connectivity of the brains of affected people. (2) It was noticed that in brains affected by schizophrenia, far more neurotransmitters are released between neurons, which is what causes the symptoms. At first, researchers thought that the problem was caused solely by excesses of dopamine in the brain. However, newer studies indicate that the neurotransmitter serotonin also plays a role in causing the symptoms. This was discovered when tests indicated that many patients showed better results with medications that affect serotonin as well as dopamine transmission in the brain. (6)

New tests and machines have also enabled researchers to study the structure of schizophrenic brains using Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS). The different lobes of affected brains were examined and compared to those of normal brains, showing several structural differences. The most common finding was enlargement of the lateral ventricles, the fluid-filled cavities within the brain. The other differences are not nearly as universal, though they are significant: there is some evidence that the volume of the brain is reduced and that the cerebral cortex is smaller. (2)

Tests showed that blood flow was lower in frontal regions in afflicted people when compared to non-afflicted people. This condition has become known as hypofrontality. Other studies illustrate that people with schizophrenia often show reduced activation in frontal regions of the brain during tasks known to normally activate them. (1) Even though many tests show that the frontal lobe function performance is impaired and although there is evidence of reduced volume of some frontal lobe regions, no consistent pattern of structural degradation has yet been found. (2)

There is, however, a great deal of evidence that the temporal lobe structures of schizophrenic patients are smaller. Some studies have found the hippocampus and amygdala to be reduced in volume. Also notably smaller are components of the limbic system, which is involved in the control of mood and emotion, and regions of the Superior Temporal Gyrus (STG), a large contributor to language function. Heschl's gyrus (which contains the primary auditory cortex) and the planum temporale are diminished. The severity of symptoms such as auditory hallucinations has been found to depend upon the sizes of these language areas. (2)

Another area of the brain that has been found to be severely affected is the prefrontal cortex. The prefrontal cortex is associated with memory, which would explain the disordered thought processes found in schizophrenics. Tests done on humans and animals in which the prefrontal cortex has been damaged showed cognitive problems similar to those seen in schizophrenic patients. The prefrontal cortex has one of the highest concentrations of nerve fibers using the neurotransmitter dopamine, and scientists have learned that newer antipsychotic drugs, which increase the amount of dopamine released in the prefrontal cortex, often improve cognitive symptoms. They also found that the prefrontal cortex contains a high concentration of dopamine receptors that interact with glutamate receptors to enable neurons to form memories. This means that dopamine receptors may be especially important for reducing cognitive symptoms. (5)

While these drugs do help control the symptoms of schizophrenia, they do not get rid of the disorder. It is becoming clearer every day just what damage schizophrenia does to the brain, but researchers are nowhere near finding all of the answers. Different researchers are still arguing over the conclusiveness of the data that does exist, while other scientists are trying to discover the cause of schizophrenia: is it various genes, a virus, or trauma? This too is still a mystery. The only thing that is truly known is that the disorder is debilitating and that it affects nearly every portion of the brain. Obviously, much more research still needs to be done to help those who suffer from it.


References

1) E-Mental Health

2) E-Mental Health

3) National Institute of Mental Health

4) Psychiatry 24 x 7

5) Society for Neuroscience

6) Health-Center


The Roles of NREM and REM Sleep On Memory Consolid
Name: Alexandra
Date: 2003-04-15 07:21:25
Link to this Comment: 5398


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

All mammals exhibit Rapid-Eye-Movement, or REM, sleep, and yet on certain levels this type of sleep would seem disadvantageous. During REM sleep, which is when most dreams occur, the brain uses much more energy than during non-REM (NREM) sleep. (1) This "waste" of energy, coupled with the increased vulnerability that comes with the body's paralysis at this time, suggests that there must be a very important reason, or reasons, for the existence of REM sleep and, by extension, of dreams. Determining the function of dreams, however, has proved very problematic, with many arguments directly opposing each other. Some of the primary functions proposed for dreaming have been tied to its role in development, its production of neuro-proteins, and the way it may allow for the "rehearsal" of neurons and neuronal pathways. The influence of dreaming on learning is one of the hottest debates: some argue that dreams aid learning, others that dreams aid forgetting, and yet others that dreams have no effect on learning or memory. That REM sleep seems to aid in development suggests that it may also be connected to learning. Most scientists seem to believe that REM sleep aids in certain memory consolidations, although some argue that it actually leads to "reverse learning."

Before discussing the role of NREM and REM sleep in learning, it is necessary to clarify the identity of and differences between the two. NREM sleep is divided into stages based on the different brainwaves exhibited. REM sleep differs from NREM in that most dreams occur during REM sleep, although dreaming and REM are not synonymous. REM is also marked by increases in brain activation, breathing, and heart rate while the body becomes paralyzed. (2)


Although the precise role may be arguable, REM sleep seems to play a part in development. Newborn infants spend about half of their sixteen to eighteen hours of daily sleep in REM sleep, while adults spend only about an hour and a half. (1) This difference in both the amount and the percentage of REM sleep between infants and adults indicates the importance of REM sleep, or of dreams, in development. Several dream researchers have hypothesized that REM sleep may play an important role in infant brain development by providing an internal source of powerful stimulation, which would prepare the baby for the almost infinite "world of stimulation it will soon have to face," and also by facilitating the "maturation of the nervous system." (1)

The relative amount of REM sleep that other mammals exhibit, in connection with their level of development at birth, also supports the idea that REM sleep aids development. (1) Typically, animals born relatively mature, such as dolphins, giraffes, and guinea pigs, demonstrate low amounts of REM sleep, while animals born relatively immature, such as ferrets, armadillos, and platypuses, exhibit higher levels of REM sleep. (3) Humans fall in the middle of this spectrum, with platypuses having the most REM sleep and some species of dolphin and whale exhibiting none. (3)

Partially because infancy is the time when most new information must be taken in as a part of development, scientists have hypothesized the existence of a connection between REM sleep and learning. It seems that the time during infant development would be one of the times when most learning, or processing of information, occurs.

The most commonly held belief in the scientific community seems to be that REM sleep consolidates memories and aids in learning. An article in Science recently declared that "neuroscientists have long known that memory consolidation goes on during sleep." (3) A more recent discovery is that NREM sleep may also play a role, albeit a different one, in learning. Robert Stickgold from the Massachusetts Institute of Technology has found that different phases of sleep are tied to different types of learning. Learning visual skills depends on the slow-wave sleep of the first quarter of the night and also on the REM sleep of the last quarter, while learning movements relies much more on the NREM sleep of the later part of the night. (4) In an important study in 2001, Matthew Wilson, also of MIT, found that rats dream about their activities (i.e., running through a maze) during NREM sleep in addition to REM sleep. Unlike during REM replay, where the experience occurs approximately in real time, the memory segments replayed during NREM seemed to be snippets of experience. (5) Also, unlike REM sleep, slow-wave sleep seemed to replay only what had happened immediately before, not something from twenty-four hours earlier. (5) Because of this possible time delay in REM memory reactivation, REM replay might represent a more gradual reevaluation of slightly older memories. (5)

Most evidence for the memory consolidation hypothesis comes from indications that an increase in learning results in an increase in REM sleep, that memory processing occurs during REM sleep, and that sleep deprivation harms the ability to learn. (3) In general, it seems that having had enough REM sleep both before and after the learning of new information helps with remembering that information. The fact alone that REM sleep stimulates the learning region, the hippocampus, has been argued as an indication of REM's influence on learning. (6) Learning tasks that require high levels of concentration, or the acquisition of new skills, are followed by an increase in REM sleep. (1) Many studies show that learning after having reached a plateau can only take place with the help of REM sleep. A study at MIT has shown that volunteers' skill at key-tapping and speed-spotting tasks improved by 20 percent after one night's sleep following training, and increased even more with additional nights. (4) Karni and Sagi's finding that changes in the plasticity of the particular neuronal loci underlying perceptual learning may happen during sleep also argues for the importance of sleep to memory. (7) The increased production of proteins, which also occurs during deep sleep, may be tied to the learning process if those proteins are in fact associated with learning. (6) Finally, backing up the idea that suppression of REM affects memory consolidation is the study by Dinges and Kribbs showing that REM deprivation impaired performance on longer tasks, while performance on shorter tasks remained unimpaired. (5)

In apparent contradiction with the concept that REM sleep plays a part in remembering new information is the hypothesis that people actually dream to forget. Crick and Mitchison have proposed that "the function of dream sleep is to remove certain undesirable modes of interaction between cells in the cerebral cortex which could otherwise turn parasitic." (1) Their second hypothesis is that "if these hypothetical 'parasitic' modes of neuronal behavior do in fact exist, then it might be that they 'are detected and suppressed by a special mechanism." (1) Crick and Mitchison call this hypothetical process "reverse learning" or "unlearning," but explain that it "is not the same as normal forgetting." (1) They say that according to their model, "attempting to remember one's dreams should not be encouraged, because such remembering may help to retain patterns of thought which are better forgotten. These are the very patterns the organism was attempting to damp down." (1) Although dream-sleep may prevent "perpetual obsessions or spurious hallucinatory associations," Crick and Mitchison acknowledge that it would be difficult to test for the existence of the reverse learning mechanism. (1) Also it does not seem that people who routinely remember their dreams would be more prone to "hallucinations, delusions, and obsessions" than people who typically forget their dreams. (1) Doing a study to test for the existence of an increase in "parasitic" memories in individuals who take MAO inhibitors to treat depression, which block REM sleep, in comparison with people who have normal REM sleep cycles might help test out Crick and Mitchison's hypothesis.

Despite the many studies demonstrating the function of REM sleep in memory consolidation, there is debate about the validity of the claim that REM sleep aids in learning. For example, Jerome Siegel argues in Science that the evidence for the importance of the role of REM sleep in memory consolidation is contradictory and weak. (3) On the most basic level, the fact that people remember so few of their dreams may argue against dreams having a function in learning. (8) (3) He argues that learning does not result in an increase in REM sleep. For example, one of his objections is that Smith's study, which documented the higher density of REM in college students after intensive exams, assumes that a proper control group can be formed among human subjects, which is often very difficult to obtain. (3) Siegel also takes the lack of difference in the amounts of REM sleep between students with average IQs and those with high IQs to prove the lack of importance of REM sleep as a memory consolidator. (3) The problem with this assertion is that it assumes that intelligence is easily quantifiable. His faith in a study involving humans also contradicts his earlier objection to the Smith study. He argues against the deleterious effect of REM suppression by stating that the methods used on rats (i.e., the platform technique, in which the rat wakes up whenever it starts REM sleep because it falls into water) result in an increased level of stress, which could explain the decrease in learning ability. (3)

Even if objections to the particulars of certain experiments are valid, these objections cannot prove that REM sleep has no role in learning or memory consolidation. Although the precise roles that REM and NREM sleep play may not be fully understood, there seems to be a connection between both types of sleep and memory. REM sleep seems more conducive to the learning of extended or sequential information, since "playback" periods during REM far outlast the bursts of "playback" during NREM sleep. (5) Although my feeling that REM sleep must aid in learning and memory consolidation may be somewhat intuitive, I cannot help but think of my uncle Eli Ginzberg, who wrote one hundred ten books (mostly on economics) and always got at least nine hours of sleep a night. Although it seems fairly clear that sleep does aid in memory consolidation, what remains is to keep testing REM sleep, and the less-studied NREM sleep, for evidence as to whether and how each fulfills this task.


References

1)Lucidity Institute website, "Chapter 8: Dreaming: Function and Meaning"

2)First Webpaper on Serendip, a great neurological resource

3)"The REM Sleep-Memory Consolidation Hypothesis," article on the Center for Sleep Research's homepage, an interesting site for sleep disorders

4)Nature website, good for scientific articles

5)MIT News website, interesting articles

6)TALK ABOUT SLEEP, Inc., basic answers about sleep

7)Harvard Undergraduate Society for Neuroscience, connected to the Computer Science Program

8)UCSC Psych website


Child Abuse Changes "I" in More Ways than One
Name: C.Kelvey R
Date: 2003-04-15 08:36:50
Link to this Comment: 5399


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

If brain equals behavior, then a relationship should exist between structural differences in the brain of abused children and changes in the behavior of abused children. Child abuse has been understood to have psychological effects, which are manifested by specific behaviors. Abused children express an array of behaviors ranging from depression, anxiety, post-traumatic stress and suicidal ideations to aggression, impulsivity, delinquency, hyperactivity and substance abuse (1). Furthermore, ten to twenty percent of adult survivors of child abuse suffer from dissociative or post-traumatic stress disorder (2). While there is a normal developmental path for the brain, childhood trauma can disrupt the progress and direction along this path. In response to physical and psychological trauma, a child's brain structure will change in ways that do not occur in children not exposed to trauma. These changes in the brain structure cause changes in behavior, including better or worse strategies for coping with the trauma of abuse. Therefore, changes in the brain and behavior due to child abuse suggest that environmental stressors can profoundly influence the development of the brain, and that the structure of the brain in turn controls behavior. Thus, just as the self acts on the environment, the environment acts on the self. Accordingly, understanding the effect of child abuse on the nervous system may help develop a clearer understanding of a model of the nervous system that incorporates the "I-function". This model in turn raises questions on the permanency of any definition of the "self".

An explanation for changes in the brain due to child abuse may involve the production or survival of neurons. A genetically predetermined, stepwise sequence develops the lower brain initially, and then the higher brain centers develop gradually during childhood (3). Due to the sequential development of the brain, each stage depends on the healthy development of preceding stages. In addition, there is a genetically determined sequential growth, proliferation and overproduction of axons, dendrites, and synapses in different regions of the brain (3). In any brain, however, not all synaptic connections survive. Between the ages of three and eight, a child's brain has been shown to have twice as many neurons and connections as an adult brain (4). There are two environmentally dependent maturation processes of the brain with distinct critical periods. These critical periods are the times when the organizing systems of different functions are extremely sensitive to environmental inputs (5). Synaptic connections are eliminated if a particular experience does not occur during a critical period, and new synapses are not generated or promoted if an experience does not take place. For an abused child, instead of a period of comfort and security, there are often periods of stress and fear (3). Furthermore, exposure to stress hormones released as a result of abuse significantly changes the shape of the largest neurons in the hippocampus, kills neurons, or suppresses the production of new neurons (6). For example, glucocorticoids are released during stressful periods and circulate for months, killing neurons and reducing the volume of the hippocampus (2). Thus, just as natural experiences can affect the survival or degeneration of neurons, the survival or loss of neurons may be the cause of structural variations within the brains of children who experience abuse.

Research comparing the brains of abused children and control subjects, primarily conducted by Martin Teicher, an associate professor of psychiatry at Harvard Medical School, has shown that abuse seems to induce a cascade of molecular and neurobiological effects that alter the development of specific areas in the brain. These areas include the limbic system, left hemisphere, corpus callosum and cerebellar vermis (6). EEG abnormalities in the left hemisphere were observed in sixty percent of 115 youngsters with documented histories of abuse (7). Similar brain-wave abnormalities are often seen in people with a greatly increased risk for suicide and self-destructive behavior (6). The limbic system is the brain's emotional processing center and includes the amygdala and hippocampus. MRI scans also revealed an association between early maltreatment and a reduction in the size of the adult left hippocampus or amygdala (6). Bremner et al. compared MRI scans of seventeen adult survivors of abuse with seventeen control subjects and found that the left hippocampus of abused subjects was twelve percent smaller (7). Teicher also observed a reduction in the left side of the brain, suggesting that the right hemisphere was more active in the abused patients. Other research has found that these left hemisphere deficits may in turn contribute to the development of depression and increase the risk of memory impairments (8).

Animal and human studies have also shown that abuse can reduce the size of the corpus callosum by up to forty percent (9). The corpus callosum is the bundle of nerves that facilitates communication between the right and left sides of the brain. Teicher found that neglect was associated with a twenty-four to forty-two percent reduction in the size of various regions of the corpus callosum in boys, while sexual abuse had no effect. The opposite was true for girls, who showed an eighteen to thirty percent reduction due to sexual abuse, while neglect had no effect (8). The reduced integration between the right and left hemispheres may predispose patients to shift abruptly from the logical, rational, language-controlling left side to the creative and emotional right side, and to remain in one hemisphere rather than moving seamlessly between the two (9). Many survivors of childhood abuse tend to reside in their left hemisphere when they function well, but when traumatic thoughts arise, they retreat into the right hemisphere (9). In effect, abuse has rewired the nervous system to survive the traumatic experiences. Two primary adaptive response patterns to child abuse are the hyperarousal "fight or flight" response and the dissociative "freeze and surrender" response (10). Young children are most likely to use the dissociative response, which attempts to prevent memories from being integrated into consciousness (10). The ability to limit the integration of memories may be the result of separating the functioning of the two hemispheres. Furthermore, the relatively unilateral use and development of one side of the brain could account for dramatic shifts in mood or personality (8).

Additionally, the cerebellar vermis is more active in abused children with a greater blood flow in the area (8). The cerebellar vermis is involved in emotion, attention and the regulation of the limbic system and is very sensitive to elevated stress hormones, particularly glucocorticoids (8). The greater degree of blood flow in the region may be aimed at controlling the electrical activity within the limbic system (8). While the cerebellar vermis attempts to maintain an emotional balance, trauma may impair its ability and as a result an individual may be particularly irritable (8).

Child abuse's effects on the brain often have a direct connection to the survivors' behavior and interaction with their environment. Abused children who alter their sense of consciousness are effectively altering their sense of self. A sense of self arises from one's own sense of identity combined with an interpretation of others' reactions to one's behavior. For victims of abuse, those behaviors may include imaginary companions or reach the point of a multiple personality disorder. The internal inputs to the nervous system that are struggling to define oneself through specific actions may conflict with the external inputs that are forcing an opposing reaction. This conflict can cause a person's consciousness to fragment, as the person must act out different messages from the subconscious. The I-function is involved in coordinating subconscious thoughts into behavior and ultimately produces a reaction that may not be explained by a conscious thought. When the I-function receives signals, the response is to transmit the signals to the appropriate centers to organize appropriate behavior. The I-function tries to provide a coherent story with a seemingly logical response, while the story is split into two realities. One reality may be an internal and inherent fight or flight response, while the other reality is the external abuse, which forces a freeze and surrender response. While the system may expect inputs such as comfort and security, the reality of stress and rejection may cause a conflict that requires the nervous system to either become 'sick' or adapt its operating structure.

Although there may be a genetic template for the nervous system, environmental influences organize the nervous system and create individual connections that manifest in behavior. Examining neurons provides one method for noting physical changes in the brain due to abuse. Additional observations of connections between child abuse and biological changes in the brain support the contention that environmental factors influence the brain and give rise to distinct behaviors. Abusive experiences may literally provide the organizing framework in the brain of a child. The organization of the brain includes the role of the I-function in formulating behavior and ultimately defining self. Instead of a box, the I-function should be represented as a dotted line surrounding an area that cannot be permanently defined, because it is constantly changing as it adapts to varying realities. "Who am I?" is a question that can never be definitively answered. However, formulating a definition of "self" and "I" is significantly influenced by interactions throughout childhood. Therefore, it should be remembered that every individual has the power and choice to strengthen, as opposed to fragment, the mind of a child.

References

1)Relationship between Early Abuse, Posttraumatic Stress Disorder, and Activity Levels in Prepubertal Children

2)Hidden Scars: Sexual and other abuse may alter a brain region

3)Environmental Influences on Brain Development

4)Violent Changes in the Brain

5)Development of the Cerebral Cortex: XIII. Stress and Brain Development: II

6)Psychological Trauma and the Brain

7)Psychological Abuse May Cause Changes in the Brain

8)McLean Researchers Document Brain Damage Linked to Child Abuse and Neglect

9)"Abuse stunts one part of the young mind, says a new study"

10)"Childhood Trauma, the Neurobiology of Adaptation and Use-dependent Development of the Brain: How States Become Traits"


The Disabling Effects of Selective Mutism
Name: Ellinor Wa
Date: 2003-04-15 10:45:08
Link to this Comment: 5403


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


Among the vast range of anxiety-induced disorders that exist, selective mutism may be the most disabling to its victims. It has been estimated that approximately one in a thousand children suffer from this presumed psychiatric ailment, wherein the ability to speak is limited to the household or other areas of comfort. (2) Public places and schools elicit so much anxiety within these children that their natural capacity to speak is suppressed. Once a child under five years of age exhibits the behavior described for over a month, and without other speech-impeding barriers such as autism or learning a second language, he or she will most likely be diagnosed with selective mutism. (2)


Many hypotheses have been posed as to what causes selective mutism; however, no definite conclusions have been reached. In most cases anxiety disorders have been shown to be hereditary; thus, nearly all children who become selectively mute have family members who were afflicted with the same or a more serious anxiety disorder, such as obsessive-compulsive disorder, schizophrenia, or social phobia. The fact that anxiety disorders pass through generations implies that brain chemistry, or perhaps serotonin levels, may be inherited. Other causes of selective mutism have been speculated upon; however, little research has been conducted. Abuse, neglect, extreme shyness, extremely embarrassing experiences such as vomiting or having diarrhea in a classroom setting, or living in a home environment with exceptionally nervous parents may also lead a child to become selectively mute. These theorized causes tend to describe the background of children who have no similar disorders running in the family. (4)


Doctors, for the most part, lean toward medication for those afflicted with selective mutism; however, other methods are practiced as well. (6) Environmental support, confidence boosting, therapy, and behavioral management classes have been known to aid in these children's struggles. A strong emphasis has been put on the necessity of treatment, for it has been shown that without it the condition of those enduring selective mutism will only worsen. Eventually, the disorder will carry on into adulthood.


It should be clarified that children suffering from selective mutism do not choose not to speak, but rather cannot speak under anxiety-ridden circumstances. For instance, many cases have been described wherein a therapist will bribe one of these children with a toy. The therapist offers to give the child the toy (a Barbie and a truck have been used as examples) if the child can say what the object is. (1) The selectively mute child will strain to say the word, but cannot force the sound out.


Is the choice to speak or not to speak conscious or subconscious? It would seem that the desire to obtain the toy would be stronger than the desire to avoid the stressful situation of having to speak, since these children are attempting to formulate the sound. Thus, the desire is repressed by a subconscious decision. Moreover, the fact that anxiety disorders are hereditary suggests a lack of control over the ability to speak; low serotonin levels or genetics may inhibit these children's ability to express themselves verbally in high-stress situations. This suggests that selective mutism should be considered more a neurological disorder than a psychiatric one.


While the I-function has been known to instigate behavior that human beings cannot normally imitate of their own accord (for instance, replicating the same eye movements that occur while following a moving pencil), perhaps the I-function also inhibits behavior that human beings would normally be able to perform by themselves. A correlation could be made between the I-function and the disabled voice of the selectively mute, who intensely desire to speak, as seen in the previous example wherein the patients tried to speak in order to receive the toy. Maybe the I-function is genetically differentiated and may be altered through certain medications.


An interesting phenomenon has emerged in relation to selective mutism. The children who suffer from this disorder, which essentially limits the most basic form of human expression, usually compensate for that lack through artwork or music. It has been estimated that many of those afflicted, during the period in which they undergo immense amounts of silence, improve drastically in their creative abilities as artists and musicians. Often, their talent significantly surpasses that of their peers. Because most doctors prefer medical treatment for those who suffer from selective mutism, namely prescribing Prozac, the troubled children usually become verbally inclined once again. Once they are able to speak confidently, parents have reported that their children's artistic or musical abilities subside. Dr. Shipon-Blum, a specialist in the field of selective mutism, remarks, "Often, months after I have treated a child for selective mutism, parents will call me and want to know where their Picasso went." (1) Several explanations could be given for this finding: perhaps there is a correlation between brain chemistry and creativity, perhaps Prozac or other antidepressants stunt creativity, or perhaps great expression through creativity is a psychological method of release in the place of a lost sense.


Accepting medical treatment as effective implies that selective mutism is a brain disorder, wherein the chemistry of the brain is erroneous. Low levels of serotonin can be raised with the use of Prozac and other antidepressants. Also, it has been shown that most anxiety disorders are hereditary. If selective mutism is predestined, what accounts for the actual repression of speech as well as the associated artistic inclination? Conceivably, along with the hereditary aspect of selective mutism as a disorder, there could be a correlated hereditary gene for creativity. When the disorder is treated through medicine, the creative ability is diminished along with it. On the other hand, a possibility lies in the notion that the I-function could be responsible for impeding the natural faculties of a person given these unfortunate genes, and could be adjusted through the use of modern medicine.


References

1)Selective Mutism, a general information site on selective mutism
2)The Selective Mutism Foundation, a support site to better understand the disorder
3)Philadelphia Page, a site with excerpts about selective mutism from the Philadelphia Inquirer
4)Selective Mutism UK, an interesting article about the seriousness of selective mutism
5)Anxiety-Panic Website, a site which describes several other anxiety disorders
6)Mental Health web page, a helpful site providing several articles about selective mutism
7)Anxiety Network, illustrates well the treatment available for the selectively mute


Somnambulism and the I-Function
Name: Tiffany Li
Date: 2003-04-15 11:32:50
Link to this Comment: 5404


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


Somnambulism, or sleepwalking, belongs to a group of parasomnias. This disorder of arousal is characterized by complex motor behaviors initiated during stages 3 and 4 of non-rapid-eye-movement (NREM) sleep (slow-wave sleep) (3). Behaviors during sleepwalking episodes can vary greatly. Some episodes are limited to sitting up, fumbling and getting dressed, while others include more complex behaviors such as walking, driving a car, or preparing a meal (2). After awakening, the sleepwalker usually has no recollection of what has happened and may appear confused and disoriented. The behaviors performed while sleepwalking are said to be autonomous automatisms. These are nonreflex actions performed without conscious volition and accomplished independently of the I-function (3). This implies that everything done while sleepwalking is involuntary, because the exhibited behavior is not a result of the I-function's output. Therefore, if the I-function is not involved, what causes people to sleepwalk? What happens to the I-function during sleepwalking? What does this imply about brain and behavior?

Sleep is a succession of five recurring stages: four non-REM stages and the REM stage. Researchers have classified these stages of sleep by monitoring muscle tone, eye movements, and the electrical activity of the brain using an electroencephalogram (EEG) (4). EEG readings measure brain waves and classify them according to speed. Alertness consists of desynchronized beta activity, whereas relaxation and drowsiness consist of alpha activity (4). Stage 1 sleep includes alternating patterns of alpha activity, irregular fast activity and the presence of some theta activity. This stage is a transition between sleep and wakefulness (4). The EEG of stage 2 sleep contains periods of theta activity, sleep spindles, and K complexes (sudden, sharp waveforms). This stage is believed to help people enter deeper stages of sleep (4). Stage 3 sleep consists of 20-50 percent delta activity and stage 4 sleep of more than 50 percent delta activity (4). Stages 3 and 4 are characterized as slow-wave sleep and are the deepest levels of sleep. Approximately 90 minutes after falling asleep, people enter rapid-eye-movement (REM) sleep (4). REM sleep consists of rapid eye movements, a desynchronized EEG, sensitivity to external stimulation, muscle paralysis and dreaming (4).

Sleepwalking occurs during stages 3 and 4 of the sleep cycle, the deepest levels of sleep. This slow-wave sleep is normally characterized by synchronized EEG activity (4). This indicates that mental activity is very low during these stages of sleep. However, researchers have shown that the EEG of a sleepwalker has diffuse, rhythmic, high-voltage bursts of delta activity associated with abrupt motor activity (1). This is very different from the EEG activity normally associated with slow-wave sleep. In addition to the EEG results, they found that there is a decrease in regional cerebral blood flow in the frontoparietal cortices during sleepwalking (1). This indicates that sleepwalking is a dissociated state consisting of motor arousal and persisting mind sleep, which seems to arise from the selective activation of thalamocingulate circuits and the persisting inhibition of other thalamocortical arousal systems (3).

This study provides several interesting insights into the role of the I-function and its relationship with the nervous system. The results indicate that during sleepwalking there is no input from the I-function, given the lack of strong mental activity in the EEG. Thus the I-function is inactive during slow-wave sleep and somnambulism. The absence of the I-function in slow-wave sleep signifies that sleepwalkers, despite their outward appearance of being awake, are completely unconscious of their actions while they are sleepwalking. This also explains why sleepwalkers rarely recall what occurs during a somnambulism episode and often appear disoriented and confused upon awakening.

It is interesting to observe the differences in mental activity between REM sleep and slow-wave sleep. Both involve the production of complex behaviors; however, REM sleep has high mental activity associated with dreaming, while sleepwalking has very little mental activity (4). Evidence indicates that during REM sleep, the particular brain mechanisms that become active during a dream are those that would become active if the events in the dream were actually occurring (4). For example, McCarley and Hobson demonstrated that the cortical and subcortical motor mechanisms become active in dreams that contain movements, as if the person were actually moving (5). Therefore, in REM sleep the I-function is active and the person is conscious of his/her actions. In somnambulism, the I-function is inactive and the sleepwalker is unconscious of his/her behavior. The sleepwalker is passively producing complex behaviors. These observations indicate that the same outputs are produced using different parts of the brain.

If the I-function is not responsible for the behaviors generated during sleepwalking, what is? The results from the study indicate that the nervous system is responsible for somnambulism. It stimulates the motor outputs seen in sleepwalkers. This indicates that the nervous system can produce behaviors as complex as the ones produced by the I-function. Therefore the I-function is not as influential as it was once thought to be because the nervous system can generate the same outputs with or without its presence. In addition, the study provides evidence that an output can originate within the nervous system without any input from the external environment or the I-function.

The nervous system can function independently of the I-function, but the opposite is not possible. It is able to override the I-function and cause people to perform actions of which they are unconscious. This leads me to believe that the nervous system plays a greater role in our behavior than our I-function does.

Somnambulism is a fascinating behavior. It is induced by a dissociation between mental and motor arousal (1). It provides good insight into the correlation between the nervous system and the I-function. Sleepwalking demonstrates that the nervous system is capable of performing behaviors similar to those specified by the I-function, and that it can function independently from it. Despite my greater understanding of somnambulism, I was unable to determine why the nervous system causes people to sleepwalk. It has been shown that no dreaming occurs during these stages of sleep. Therefore, I do not understand what sleepwalkers are acting out. This question remains open for investigation.


Works Cited

1)Bassetti, C., Vella, S., Donati, F., Wielepp, P., Weder, B. SPECT during sleepwalking. Lancet 2000 Aug 5; 356(9228):484-85.

2, 3)Masand, P., Popli, A., Weilburg, J. Sleepwalking. American Family Physician 1995; v5 n3 p649.

4)Carlson, N. Physiology of Behavior. 7th ed. Allyn and Bacon, USA, 2001.

5)McCarley, R.W. and Hobson, J.A. The form of dreams and the biology of sleep. In Handbook of Dreams: Research, Theory, and Applications, edited by B. Wolman. New York: Van Nostrand Reinhold, 1979.


Human Pheromones: What are the implications?
Name: Kate Shine
Date: 2003-04-15 12:33:10
Link to this Comment: 5405


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Our class was initially shocked to learn about proprioception, as it was difficult to grasp the concept that our bodies could possess senses about themselves and the outside world of which our I-functions are unaware. Even more disturbing was the idea that these bodily perceptions could regulate our behaviors without our conscious thought or consent. The class seemed comforted, however, when this seeming sixth sense was found to regulate many functions we consider thoughtless and ethically unimportant, such as heartbeat and balance. These choices we could leave to our bodies as long as our I-functions had power over the meaningful concerns of life.

But the mounting evidence for the existence of human pheromones throws a wrench into the mechanism of this easy solution. Pheromones, chemical signals sent from one individual to another that affect behavior, are argued to influence meaningful behaviors once thought to be completely controlled by conscious personal choice, such as sexual willingness and attraction. They also introduce the possibility that we may be constantly communicating with each other and making interpersonal judgments of which we are unaware. The possible implications of this invisible sense are significant and far-reaching.

The first convincing evidence for the existence of human pheromones was presented in 1971, when Martha McClintock published a paper documenting the synchronization of the menstrual cycles of herself and her fellow female dorm mates. (10) It seemed likely that something pheromonal was at work, as this phenomenon mirrored a similar occurrence caused by pheromones in mice, known as the Lee-Boot effect. (2) McClintock provided further evidence for this a few years ago in a controlled experiment published in the journal Nature. (1) She found that secretions from the underarms of females in the follicular phase of menstruation significantly shortened the cycles of other female test subjects when applied under their noses, and secretions from the ovulatory phase accordingly lengthened their cycles. In addition, she noticed that certain females seemed much more sensitive to the secretions than others, with responses of lengthening or shortening ranging from 1 to 14 days.

McClintock also predicts that pheromones of social interaction may be found to affect humans in many more of the same ways they have been found to affect rats, including: age of puberty onset, interbirth intervals, age at menopause, and level of chronic oestrogen exposure throughout a woman's life. (1) Not only does this evidence point toward a type of invisible chemical communication between women, but the variable sensitivities of certain women compared to others indicate that certain women may be dominant over others in determining the cycles of the entire group. Further evidence of this invisible power structure is provided by Michael Russell, who performed a case study on a female colleague who had observed that it was always her own cycle to which other females synchronized. (2)

Other studies provide evidence that it is not only females who communicate with each other pheromonally, but that males and females can influence each other sexually as well. Various studies have found that sexual exposure to males causes irregularly cycling women to begin cycling more regularly (4,2), another well-studied occurrence in mice, known as the Whitten effect, that has been linked to pheromones (2). Dr. Alex Comfort also noted that during Victorian times the average age of the onset of menstruation was much higher than in post-Victorian times, when co-education of males and females became more acceptable. (2) So male pheromones may play a large part in the regulation of the hormones which cause menstruation. Women who have sex with men at least once a week have in fact been found to have fewer infertility problems and milder menopause than those who do not. (4) And sex may not be necessary, but rather just the exposure to men's pheromones, which are released only at close range. The implications of these findings are that male pheromones may be necessary for women to achieve optimum health, and that this may in part explain the female attachment to men.

Other studies by Russell found that by around 6 weeks of age almost all babies react more favorably to a pad containing the sweat of their mother than to a stranger's pad, and that people can identify their own sweaty shirts, as well as those of a strange male and female, with a relatively high rate of accuracy. (2) These functions in humans seem similar to the identifying functions of pheromones found in many other animals. (8)

But perhaps the most controversial human behaviors which may be influenced by pheromones are sexual preference and mate selection. In a case study, Kalogerakis found that at around the age of three a boy named Jackie began to strongly prefer the smells of his mother to those of his father, especially after she had recently had intercourse. The smells of the father at such times, until the boy reached six years old, caused aversion and some nausea. This behavior supports not only the theory of sexual attraction by pheromones but also Freud's theory of an innate "Oedipal complex" in young boys. (2) And in addition to women's health being beneficially affected over time by exposure to male pheromones, the moods of women have been shown to improve when they are exposed to the male steroid androstadienone. (12) One study also found that men and women are more attracted to individuals whose genetically based immunity to disease is most different from their own. (5) Companies have begun to market so-called "pheromone colognes" containing compounds meant to attract members of the opposite sex, and these colognes have been reported to have some success for both men and women. (4,5)

There are still many who are skeptical about the actual existence of a pheromone receptor in humans separate from the other smell receptors in the nose. The vomeronasal organ, which serves this purpose in other species, has long been thought nonexistent in humans beyond a certain fetal growth stage. However, there is a distinctive pit in the human nose containing nerve endings which may still serve this purpose, if the axons of these neurons terminate in separate, more primitive parts of the brain than those of the more common nasal sensory neurons. This has yet to be definitively proven, but it seems especially unlikely that menstrual synchronization could be caused by scent alone rather than by specific chemical factors operating independently of the I-function.

It may not seem important that women are unknowingly communicating and adjusting their menstrual cycles in order to achieve equilibrium with those around them, but the idea that some women may be dominant over others in this process is very interesting. What evolutionary quality do these women have, and what is its purpose? Is there an aspect to human personality and social rank which is inherent and unseen? There could in fact be many pheromonal factors working within and across the sexes all the time, influencing our preferences for social and sexual interaction. If Kalogerakis is to be believed, disturbing sexual developments such as Freud's Oedipal complex cannot simply be explained away, but are much more deeply rooted. And what might the implications for our society be with the increasing chemical complexity of the products we use, many containing animal and/or human pheromones? Although the ultimate decision of choosing a mate or selecting a friend must pass through the I-function, how do we know where our attitudes toward others come from? Could it all be more unconscious than we think?

References

1)Regulation of Ovulation by human pheromones, Stern and McClintock's publication in Nature 1998
2)Pheromones in Humans: Myth or Reality?
3) Chicago: Campus of the Big Ideas
4) Sexes: The Hidden Power of Body Odors, article from Time, 1986
5) Pheromones: Potential participants in your sex life
6) Following Your Nose to Optimal Sexual and Reproductive Health and Happiness, student webpaper 1999
7) Study finds proof that humans react to pheromones
8) Human pheromones: Communication through body odour, article in Nature 1998
9) Love Stinks: Intraspecies Chemical Communication, student webpaper 1999
10) Nailing Down Pheromones in Humans
11) Pheromones, student webpaper 1998
12) University of Chicago research points to new category of odorless chemical signals


The Hysteria Over Conversion Disorder
Name: Neela Thir
Date: 2003-04-15 19:24:37
Link to this Comment: 5409


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Scientists in fields connected to neurobiology and psychiatry remain mystified about the cause of Conversion Disorder. The disorder is characterized by the physical symptoms of a neurological disorder, yet no direct problem can be found in the nervous system or other related systems of the body. This fact alone is not unusual; many diseases and symptoms have unknown origins. Conversion Disorder, however, seems to stem from psychological events and emotions, ranging from the "trivial" to the traumatic, rather than from biological events. The extreme symptoms often disappear as quickly as they appear, without the patient consciously controlling or feigning them. Thus, Conversion Disorder serves as a significant example of how blurred the supposedly sharp divisions among mind, body, and behavior can be.

Conversion Disorder is diagnosed solely by the physical symptoms seen in patients. Symptoms can be divided into three groups: sensory, motor, and visceral. Sensory symptoms include anesthesia, analgesia, tingling, and blindness. Motor symptoms may consist of disorganized mobility, tremors, tics, or paralysis of any muscle group, including the vocal cords. Visceral symptoms include spells of coughing, vomiting, belching, and trouble swallowing (1). Most of these symptoms are strikingly similar to those of existing neurological disorders that have definitive organic causes. Conversion Disorder, on the other hand, defies the nerve patterns and functions from which the symptoms should follow. CT scans and MRIs of patients with Conversion Disorder exclude the possibility of a lesion in the brain or spinal cord, an electroencephalogram rules out a true seizure disorder, and spinal fluid analysis eliminates the possibility of infections or other causes of neurological symptoms (2). The abnormal behavior shown in Conversion Disorder cannot be accounted for biologically, and this fact has set off even more scientific theories about the ways in which biology must explain the phenomenon.

Optokinetic nystagmus (sub-cortically controlled steady tracking by the eye, followed by fixation of the eye on another object) can still be observed in patients with apparent blindness when they are shown a rotating striped drum, proving that their visual pathways operate correctly (2). Numb hands characterize a type of conversion disorder called "glove anesthesia"; the sensitivity stops at the patient's wrists. This clear demarcation does not correspond to any known nerve pattern or function. Likewise, patients with paralyzed arms whose shoulder muscles still work normally to correct the posture of their trunk contradict what we know about how the nervous system is structured (3). These examples demonstrate how the symptoms the patients express belie a properly working nervous system. Knowledge of neurobiology, or the lack of it, seems to influence how the symptoms play out: it has been shown that symptoms become more biologically traceable when the patient knows more about the body's physiological functioning. Learned information stored in the mind determines the physical symptoms unconsciously expressed by the patient. The conscious functions of the mind can work unconsciously through the nervous system to affect behavior.

Treatment of Conversion Disorder primarily involves psychotherapy. Often patients go into spontaneous remission, or they have a complete recovery shortly after visiting a psychiatrist (2). Despite the effectiveness of psychological treatment, patients are often incredulous when told their symptoms are "imaginary" or "mental." Because this is often counterproductive to the doctor-patient relationship, patients are not told that their condition is thought to be psychological; they are first treated as if the symptoms were organic (3). Associating "mental" with "imaginary" in the minds of doctors indicates a powerful assumption that something controlled by the mind and expressed through the body is not "real" and is therefore a feigned illness. The mind, with its intimate connection to the nervous system and to behavior by way of the larger structure of the brain, is more than a superficial layer of us as humans. It too is a part of the body, and a schism in the mind, whether it originates in military trauma, sexual abuse, or depression, remains a very real source of physical and medical illness.

This discussion of the distinction between mind and body is not new. Conversion Disorder was originally called "hysteria," and it has been described in documents hundreds of years old. The ancient Egyptians attributed it to a malpositioned or "wandering" uterus, and the name derives from this distinctly feminine problem. In the 1560s, the first documented study was published claiming that "hysteria" was located in the mind rather than the body. Three centuries later, the French neurologist Charcot hypothesized that hysteria originated from an organic weakness of the nervous system. Sigmund Freud became captivated by this idea and eventually replaced "hysteria" with the term "conversion" because he theorized that the symptoms were "intrapsychic conflicts" manifested (or converted) physically in the patient (3). This progression toward the psychological has been inverted in recent theory.

Increasingly, Conversion Disorder is being thought of and researched biologically rather than psychologically. Technology has supplied methods of testing elements of the nervous system, and yet no definitive causes have been identified. Only recently has it become common to think that Conversion Disorder arises from some sort of collaboration between the mind and the nervous system (3). This distinction between the "mind" and the "nervous system" (and the revelatory idea that they are indeed related) demonstrates a crucial partition that most scientists seem to make between the two. In my view, the mind is the cognitive function of a larger entity which can be called, as we do in class, "the brain." The brain encompasses both the biological stratum of the nervous system and the cognitive stratum of the mind. As debates over Conversion Disorder show, they are interrelated. The nervous system does not only determine the mind; it can also be influenced by the mind. Dividing them for study may be useful, but they should not be permanently separated. Parobek describes the state of Conversion Disorder research as a "Tower of Babel that obscures accurate identification and nomenclature" (3). This is apt: as in the myth, the relationship between the two elements of the brain creates a confusing cacophony that obscures precise causation, because that is its inherent structure. Nomenclature is a system of understanding and representing, and if Conversion Disorder, like the Tower of Babel, cannot be contained within one language or one scientific discipline, then perhaps the nomenclature, and the edifice of scientific research, needs to be broadened to include more "multi-linguistic" elements. The nervous system and the mind meet, exist, and function together in what is more appropriately called "the brain," and the brain, as a whole, determines behavior.

To understand Conversion Disorder completely will probably require a more multidisciplinary approach, rather than attempts to locate its cause and process on a purely biological level. When trying to pinpoint whether a patient's symptoms hail from "real" or malingering sources, the observed difficulty lies in the seeming dichotomy between mind and body. This dichotomy, however, remains a created one, built for the benefit of our own understanding. Yet in the case of Conversion Disorder, such compartmentalized scientific thinking seems to have prevented our understanding rather than facilitated it; by inspecting the trees, we are missing the forest.


References

1)PsychNet-UK

2)Emedicine: Instant access to the minds of medicine., Dufel, Susan M.D. "Conversion Disorder".

3)Parobek, Virginia M. "Distinguishing conversion disorder from neurologic impairment." Journal of Neuroscience Nursing. 04/97. Volume 29. Number 2. p. 128. Available via Infotrack: Expanded Academic (scroll down to E-journals, select Science Direct, and search for the title).


Why Can't I Speak Spanish?: The Critical Period Hypothesis
Name: Stephanie
Date: 2003-04-16 12:42:35
Link to this Comment: 5416


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"Ahhhhh!" I yell in frustration. "I've been studying Spanish for seven years, and I still can't speak it fluently."

"Well, honey, it's not your fault. You didn't start young enough," my mom says, trying to comfort me.

Although she doesn't know it, she is basing her statement on the Critical Period Hypothesis. The Critical Period Hypothesis proposes that the human brain is only malleable, in terms of language, for a limited time. This can be compared to the critical period for imprinting seen in some species, such as geese. During a short period of time after a gosling hatches, it begins to follow the first moving object that it sees. This is its critical period for imprinting. (1) The theory of a critical period for language acquisition is influenced by this phenomenon.

This hypothetical period is thought to last from birth to puberty. During this time, the brain is receptive to language, learning rules of grammar quickly through a relatively small number of examples. After puberty, language learning becomes more difficult. The Critical Period Hypothesis attributes this difficulty to a drastic change in the way that the brain processes language after puberty. This makes reaching fluency during adulthood much more difficult than it is in childhood.

The field of language acquisition is very experimental because scientists still do not completely understand how the brain deals with language. Broca's area and Wernicke's area are two parts of the brain that have long been identified as important for language. Broca's area lies in the left frontal cortex, while Wernicke's area lies in the left posterior temporal lobe. These areas are connected by a bundle of nerves called the arcuate fasciculus. Both Paul Broca and Karl Wernicke studied patients with lesions on their brains. The problems caused by these lesions led to the identification of Broca's area as the site of speech production and of Wernicke's area as tied to language comprehension. (2) The location of these areas, as well as the effects of anesthetizing one half of the brain, has led scientists to believe that language is primarily dealt with by the left hemisphere of the brain.

Recent studies have shown that activity in the planum temporale and the left inferior frontal cortex during acts of language are not unique to hearing individuals and therefore cannot be attributed to auditory stimuli. The same brain activity was shown in deaf individuals who were doing the equivalent language task in sign language. This adds more support to the idea of specific areas of the brain devoted to language. (3)

Noam Chomsky suggests that the human brain also contains a language acquisition device (LAD) that is preprogrammed to process language. He was influential in extending the science of language learning to the languages themselves. (4) (5) Chomsky noticed that children learn the rules of grammar without being explicitly told what they are. They learn these rules through the examples they hear, and the brain amazingly pieces these samples together to form the rules of the grammar of the language being learned. This all happens very quickly, much more quickly than seems logical. Chomsky's LAD contains a preexisting set of rules, perfected by evolution and passed down through genes. This system, which sets the boundaries of natural human language and gives a learner a way to approach language before being formally taught, is known as universal grammar.

The common grammatical units of languages around the world support the existence of universal grammar: nouns, verbs, and adjectives all exist in languages that have never interacted. Chomsky would attribute this to the universal grammar. The numerous languages and infinite number of word combinations are all governed by a finite number of rules. (6) Charles Henry suggests that the material nature of the brain lends itself to universal grammar. Language, as a function of a limited structure, should also be limited. (7) Universal grammar is the brain's method for limiting and processing language.

A possible explanation for the critical period is that as the brain matures, access to the universal grammar is restricted, and the brain must use different mechanisms to process language. Some suggest that the LAD needs daily use to prevent the degenerative effects of aging. Others say that the brain filters input differently during childhood, giving the LAD a different type of input than it receives in adulthood. (8) Current research has challenged the critical period altogether: in a recent study, adults learning a second language were able to process it (as shown through event-related potentials) in the same way that another group of adults processed their first language. (9)

So where does this leave me? Is my mom right, or has she been misinformed? The observation that children learn languages (especially their first) at a remarkable rate cannot be denied. But the lack of uniformity in the success rate of second-language learning leads me to believe that the Critical Period Hypothesis is too rigid. The difficulty of learning a new language as an adult is likely a combination of a less accessible LAD, a brain out of practice at accessing it, a complex set of input, and the self-consciousness that comes with adulthood. This final reason is very important. We interact with language differently as children, because we are not as afraid of making mistakes and others have different expectations of us, resulting in a different type of linguistic interaction. Perhaps the LAD processes those types of interactions better. There is not yet enough research to make a conclusive statement, but even the strictest form of the Critical Period Hypothesis does not say that language learning is impossible in adulthood. I guess that doesn't get me off the hook. I'd better keep studying.


References

1) Learning Who is Your Mother, The Behavior of Imprinting by Silvia Helena Cardoso, PhD and Renato M.E. Sabbatini, PhD.

2) The Brain and Language Language page of the Neurobio for Kids website.

3) Brain Wiring for Human Language Scientific American article.

4) Universal Grammar [Part 1] Forum area of Gene Expression website.

5) The Biological Foundations of Language, Does Empirical Evidence Support Innateness of Language? by Bora Lee.

6) Evolution of Universal Grammar by Martin A. Nowak, Natalia L. Komarova, and Partha Niyogi.

7) Universal Grammar by Charles Henry.

8) A concept of 'critical period' for language acquisition, Its implication for adult language learning by Katsumi Nagai.

9) Brain signatures of artificial language processing: Evidence challenging the critical language hypothesis by Angela Friederici, Karsten Steinhauer, and Erdmut Pfeifer.


Speaking of Happiness: The Interplay between Emotion and Cognition
Name: Kat McCorm
Date: 2003-04-17 00:01:20
Link to this Comment: 5428


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"And this is of course the difficult job, is it not: to move the spirit from it's nowhere pedestal to a somewhere place, while preserving its dignity and importance."

I cry. There is pressure behind my eyes, my skin turns blotchy, my lips tremble, and mucus clogs my airways, making it difficult to breathe. I hate crying in front of others: not because I want to hide how upset I am, but because the second most people perceive my emotional state as fragile, they assume my reasoning and mental functions are also unsound. The outward expression of an inward instability is something we save for those we know and trust best; they do not view our emotionality as a weakness, because they already know us to be strong. Crying is represented in our culture as a lack of control. When upset, the "ideal" is to keep a cool head (and a poker face), not allowing emotions to enter into the decision-making process. However, I submit that without our emotional base, rationality would have no reason or foundation upon which to operate.

A multitude of opinions exist on the subject: are emotions more a function of the heart or of the head? According to Antonio Damasio (1), emotions and feelings are an integral part of all thought, yet we as humans spend much of our time attempting to disregard and hide them. In another view (2), experience is the result of the integration of cognition and feelings. In either view, it remains indisputable that emotions are not what we typically make them out to be: the unwanted step-sister of our cultural sweetheart, reason. Reason in our culture denotes intelligence, cognition, and control. Emotion seems such a "scary" concept to our collective mind because it can be so overwhelming, and can cause us to lose the control we are so reluctant to relinquish. Consequently, the perceived division between emotion and reason has produced further polar divisions that we experience on a daily basis: the great schism between the humanities and the sciences, for example. However, as a recent NIMH study on emotion and cognition points out (3), this historical division between emotion and cognition is losing its utility as research progresses. The integration of the two concepts is reflected in interdisciplinary interest: from neurobiology to psychology, the implications are far-reaching.

An early interpretation of the relationship between emotion, cognition, and physiology was that of William James, who thought of emotions as the results of physiological processes of the autonomic nervous system (6). According to his school of thought, first comes cognition, then a physiological response, and then an emotion. In response to an event such as the death of a friend, cognition first registers what this means, then the body begins to cry, and because we are crying, we begin to feel sad. A later theory was proposed by Walter Cannon and Philip Bard. This theory (4) proposes that in response to a stimulus, a signal is sent to the thalamus, where it splits, one half going toward the experience of an emotion and the other half toward producing a physiological response. Most modern neurobiologists do not agree with this theory, however. The question remains: what physiological structure can we pinpoint as the source of our emotions?

A recent NIMH study attempted to trace emotional and cognitive memory to a physical structure in the brain, and found that the amygdala has a large role in the storage and integration of both. There is much evidence that emotions and cognitive processes are in some way interdependent. One example is the research of Monica Luciana (3). According to her findings, spatial working memory is influenced by goals and emotional states, which she tested through the use of dopamine and serotonin agonists and antagonists. When dopamine agonists were used (emulating an emotion of happiness), subjects' performance on spatial-ability tests improved. The use of serotonin agonists (emulating negative emotions) slightly impaired performance, as did dopamine antagonists. In the same study, depressed patients were found to have increased blood flow to the amygdala, and therefore increased amygdala activity. Reflecting on this, I observed that many of my own depressed periods occurred at times when friends and family described me as "thinking too much." Could this be related to overactivity of the amygdala, a structure which serves as an integration center and "memory enabler"? Shakespeare describes a similar experience in "As You Like It" (5): "it is a melancholy of mine own, compounded of many simples, extracted from many objects, and indeed the sundry contemplation of my travels, which, by often rumination, wraps me in a most humorous sadness."

I love. I, the self, love something in the environment around me. But this is not just some ethereal feeling which cannot be placed or defined or seen, which can only be felt and described by musicians and poets. Love, traditionally, is placed on an invisible pedestal, floating mysteriously above us, our rationality, our flesh. But I love. I feel weak in the knees, butterflies in the stomach, head over heels in love. Somehow, we still connect this otherworldly experience with physical sensations. And yet we are reluctant to bring love from its abstract home into our own bodies, our own minds, afraid that if we recognize our emotion as something totally contained within our brains, something totally human, we will wake to find some of the wonder, tenderness, and luster gone. Is this also reflective of some human insecurity? Not until we can bridge the illusory gap between our emotions and our cognition can we fully understand the relationship between our brain and our behavior.

References

1) A.R. Damasio, Descartes' Error, 1994

2) Thinking, Emotions, and the Brain

3) From Neurobiology to Psychopathology: Integrating Cognition and Emotion, on the NIMH website

4) Laughing out Loud to Good Health.

5) William Shakespeare (1564–1616). The Oxford Shakespeare. 1914. , on the bartleby website.
6) Theories of Emotion--Understanding our own Emotional Experience.


Review: The Forbidden Experiment by Roger Shattuck
Name: Geoff Poll
Date: 2003-04-17 02:05:13
Link to this Comment: 5431


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

It is one of the oldest unanswered questions in all of science, taking a comfortable backseat in importance only to "Where did we come from?" and "Why are we here?" Though less exciting than its company in the pool of great unanswerable questions, the paradoxical "Nature or nurture?" question (paradoxical because the two cannot be pieced apart as the question implies) has tormented truth-bound scientists for years, presenting them with hypotheses that cannot be empirically tested. Recent advances in genetics have opened multitudes of new possibilities for those who would study the pure effects of environmental variables on a now-controlled-for nature. While this makes for exciting news in animal testing, we do not allow ourselves to manipulate other human beings for the sake of collecting data. Yet our strong moral stance does not diminish our curiosity, and so the question must be asked: what would we do if a case in which a human had already been manipulated, by no will of our own, fell into the hands of science? How far would we go?

Every couple hundred years, one of these cases, by chance or by an act of true cruelty, falls into the hands of scientists eager to make the most of such a "misfortune." Roger Shattuck's The Forbidden Experiment follows one of the more prominent cases of recent history, that of Victor, the Wild Boy of Aveyron.

The book takes little time to pique the reader's curiosity with the tale of a "savage" twelve-year-old wandering out of the woods of southern France on a cold January evening in 1800. Without a known history or the ability to communicate with his captors, Victor, as he was later named, was assumed to have lived in the wild for at least six years and probably more. In the midst of an intellectually lively France, Victor found immediate fame and was brought to Paris so that the best scientists could take advantage of studying a human raised almost completely in isolation. The story is a short one, lasting only the 28 years that Victor lived in Paris society before his death in 1828, and his period of immediate interest to the Paris scientific community ended long before that. (1)

To an intellectually enlightened community the contradiction should have been apparent: Victor was detained and forced to learn how to live as a proper human being in civilized society, but he was treated every step of the way as if he were an animal. The inconsistencies are nowhere more apparent than in the oftentimes tortured mind of the main scientist overseeing Victor's progress, the young and gifted Dr. Itard. It is Itard who subjects Victor to inhumane treatment, from keeping him on a leash during walks around the grounds to using violence to arrive at a desired level of obedience. Victor, by most observers' accounts, was not human but some kind of savage creature, which justified that type of treatment. Itard was one of the only scientists who disagreed, but his own belief that Victor was more than an animal had already condemned his methods. (1)

The focus of Shattuck's narrative falls within the five years Itard spent attempting to give back to Victor the human qualities we all grow up with, but which Itard believed had lain dormant during the years Victor spent in the wild. The author's treatment of Itard makes for most of the interesting reading of the book. Shattuck clearly admires Itard's determination, his passionate curiosity, and his faith in the innate goodness of the almost altogether unresponsive being in front of him. It is through much of Itard's own pen that we get not only the notable events of Victor's progress but a look inside the mind of a man who needed to believe that we are made of something more than the right combination of a biological basis and a timely social training. It is that last element, the human one, that Itard searched for. (1)

But his search was misguided, and from his original diagnosis onward he never frees himself from a strict idea of what success with Victor would be. He sees his biggest challenge as that of language, and in the end it is his inability to give Victor the gift of speech that leaves him with a feeling of failure. Shattuck aptly seizes on the irony of the situation: Itard kept Victor in an institution for deaf-mutes, yet he never once attempted to teach Victor sign language. (1)

At each step he hoped for Victor to act like his idea of a human while treating him more and more like a lab rat. The most striking series of events arrived with the onset of puberty. Itard saw the heightened sexual drive that comes with puberty as a drive toward love, the most human of all emotions, as well as toward the forming of new ties and relationships between young people. Itard had kept Victor isolated from any peers his age, but when Victor began to feel urges, the scientist set up controlled situations with females in which he expected Victor's natural urges to lead him toward some kind of more substantial communication. Without any kind of reference, Victor simply felt frustrated and gave up, not knowing what to do. Itard gave up as well, condemning Victor as an idiot. (1)

Years later, shortly after Itard's death, a fellow doctor honored his work in a statement that throws light on Itard's greatest obstacle: his own peer community. "To train an idiot," praised Dr. Bosquet, "to turn a disgusting, antisocial creature into a bearable obedient boy—this is a victory over nature, it is almost a new creation" (165). It was this way of thinking about people that caused the short-sightedness that allowed the experimentation to proceed, and a very creative young scientist to find himself walled in. (1)

Sadly, the case of Victor and Itard does not stand alone. Among many other claims of "wild" children, the most famous recent case was that of Genie. Thirteen years old when found in 1970, Genie had been deprived by her parents of all social contact for the previous ten years of her life. She was physically debilitated and unable to speak, an ability she never fully gained. As soon as she was discovered by a social worker, Genie was taken up by scientists with the same vigor as Victor had been 150 years earlier. (4)

"It's a terribly important case," claims Harlan Lane, a psycholinguist and author of a book about Victor called, The Wild Boy of Aveyron. Regarding Genie, Lane posits, "Since our morality doesn't allow us to conduct deprivation experiments with human beings, these unfortunate people are all we have to go on." (3)

Victor was abandoned by Itard after he failed to produce speech. He lived the rest of his life in isolation, not at all a deviation from his first 18 or so years. (1) Genie is still alive but struggles with all aspects of her life. She has had no more contact with the researchers assigned to her case since they were brought to court for taking advantage of her for selfish motives. (3)

Shattuck approaches the case of Victor with great curiosity and in many ways justifies the ignorance and cruelty of Itard and the other scientists of the time. In a more recent book, Forbidden Knowledge, he retracts some of that innocence and makes some heavy criticisms of our overly inquisitive society. His references are mostly mythological, and his examples range from Adam and Eve to Mary Shelley's Dr. Frankenstein. In each case temptation overrides our intellect. (5)(6) Victor, on the other hand, was unconcerned with knowledge and ego, and not tempted by power and money. His mind was freer than ours may ever be. Shattuck speaks of Victor escaping from "humanity into animality"(181). It's an attractive concept. Of course we would like to teach Victor to talk. We want to know what he knows. Itard preferred to refer to Victor's state as one of forgetfulness. (1) Is that what we want, to forget? What are our motives in the end? To learn from the dumb how not to speak, or to teach language so that Victor may live with us and suffer the way that we do? What does that make us?

References

1) Shattuck, Roger. The Forbidden Experiment. New York: Farrar Straus Giroux, 1980.

2) NOVA transcript, transcript for 'Genie' episode

3) The Civilizing of Genie, the story of Genie

4) Feral Children Website, a great resource about 'wild' children

5) Online News Hour, Shattuck interview

6) Ethical Culture Book Review, review of Forbidden Knowledge


The Runaway Brain: A Review
Name: Kate Tucke
Date: 2003-04-17 11:12:14
Link to this Comment: 5433

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Christopher Wills has written a fascinating chronicle of human evolution in a style that will keep the reader glued to the book to find out what happens next. The Runaway Brain is organized into four sections. First Wills addresses The Dilemmas, the many problems that students of evolution encounter, mainly from public perception of the subject and from the many prejudices of those involved with the work. The question of where our species first appeared is a particularly contentious one; although it is now widely accepted that the species originated in Africa, there are still those who disagree, and especially at first many dismissed an African origin out of hand. Wills' second main issue is the transition to actual "humanity" and whether it occurred once or twice. As he discusses in the chapter entitled "An Obsession with Race", those who deride people of African descent often use the multiple-origin theory to justify racism. Wills decries this abuse of the science and firmly argues against those who would use evolution to further racist propaganda. He also takes issue with those who insist on believing that all of humanity came from one Eve and one Adam, instead putting forth the theory of the "mitochondrial Eve": that all of our mitochondrial DNA descends from a single woman, but that we do not in fact descend from just two individuals.

Wills' own slant on the issue is that humans are involved in a feedback loop which he calls the "runaway brain". Wills claims that humans are unique in that they have a developed culture. Culture injects an otherwise unknown factor into the evolutionary process. Humans, Wills says, had advanced brains which allowed them to create a complex culture. The culture challenged their brains and led to more complex brains as the species evolved. This process continued to repeat and is still repeating today. This, Wills claims, is what is driving us towards our ultimate best.

The second section of the book is titled The Bones and tells the story of the archeological remains of the ancestors of humanity. Wills creates a fascinating tale as he describes the lives, feelings and desires of the people involved in finding these bones. Not only does he describe the find and its significance to the understanding of evolution, he also tells the story of the finder making the section more of a human drama than a dry telling of facts. In this section it becomes really apparent that Wills is trying to create an interesting version of evolution that everyone will be interested in reading. In the preface to the book he states that he wants to bridge the gap between scientists and the public and here he does so, mentioning that you might be reading this book while sitting by a fire with your pets. Wills is quite adept at turning the facts of evolution into a story telling process.

After tracing the various archeological finds showing our possible descent from apes, Wills goes on to discuss the genetics of evolution in The Genes. He first describes the basics of genetics, instructing the reader as to the basic nature of genes, alleles, DNA and mutations. He places a very strong emphasis in this section on the fact that mutations are usually neutral, having neither a positive nor a negative influence on the organism. Wills provides the basic genetic knowledge needed to understand how traits can be passed down from generation to generation in a population and in turn how this can alter the makeup of the population.

The final section of the book, The Brain, attempts to discover why the human brain differs from those of all other animals. As with The Genes, Wills first begins with a description of the basic functioning of the brain and then moves into a more complex analysis of why our brain structure has led to the more complex abilities we see in human culture today. Wills shows that human brains were not originally that different from those of the organisms we evolved from, but that they have become more complex over time. Here again he puts forth his theory of the "runaway brain". He postulates that human culture has created a feedback loop with the complexity of the brain, each leading to an increase in the other, back and forth, with our current brain as a result. Hence, the brain has been directed towards an ultimate goal: the complex structure that we utilize today. This is not, however, the ultimate endpoint for humanity, Wills argues. He claims that evolution is still ongoing today, although we cannot see it at work, and will continue in the future.

The Runaway Brain is a highly enjoyable read for anyone interested in evolution. Every fact and discovery is punctuated with a human interest story telling the history of the discoverer. Every important event is marked with an anecdote explaining its significance in a way that everyone should understand. Wills stated in the beginning that his "goal is to try and close the gap between the scientist's perception of how evolution works and the public's; in essence, to demystify the process of evolution"(xx). In this respect, he has succeeded admirably. The problem with the book, however, is that the sections are quite disjointed. Although the reader will learn a lot about The Bones, The Genes, and The Brain, they are not tied together in a way the reader can easily comprehend. The message of the "runaway brain" is also not clear enough. Although Wills refers to it periodically, he does not use the mountain of evidence he presents in this book to back up his theory. As an exploration of how humans evolved to their current complexity, Wills does an admirable job of chronicling but is less successful at explaining the process.

References

Wills, Christopher. The Runaway Brain: The Evolution of Human Uniqueness. New York: Basic Books, 1993.


Murderous Mommies
Name: Zunera Mir
Date: 2003-04-18 16:39:08
Link to this Comment: 5442


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


"I was looking for a way to get attention to myself, and maybe if I could just do something drastic enough, that someone would see that I needed help"-mother who tried to suffocate her child (1)

Why would any mother try to suffocate her child? Is it not true that:

"A mother's love for her child is like nothing else in the world. It knows no law, no pity, it dares all things and crushes down remorselessly all that stands in its path?" --Agatha Christie.

Mothers, in most cases, are seen as the essential "caregivers" in many societies and cultures. A novel or textbook, screenplay or script, Hallmark card or holiday has, at one point in time, celebrated "motherhood" and what it entails. The bond of mother and child is shown to be "unbreakable," and we hear stories of mothers lifting cars to save pinned children, essentially sacrificing their lives for their children's survival. Growing up, we might hear that being a mother is an "under-appreciated job" and that "all the work mothers do - whether paid or unpaid - has social and economic values" (1). Mothers can essentially be the shapers of the future society: able to raise children, and possibly even hold down a job, while still being able to cook and clean. Author Ellen Bravo stated, "Only Clark Kent had to be Superman, but every mother has to be Superwoman" (2).

Only recently (about 26 years ago) did physicians begin to disagree with Miss Christie's belief that mothers are extraordinary defenders of children. Dr. Roy Meadow coined the term Munchausen Syndrome by Proxy (MSBP) in 1977 to describe an unusual and bizarre behavior exhibited by parents of extremely "sick" children, after observing a mother who had tampered with her child's urine samples (3). In MSBP, the parent or caregiver of a child falsifies illnesses or fabricates symptoms for the child. They will "purposefully inflict pain/harm on another person, usually his/her own child" (4). Not only will the parent inflict pain, they will try to elicit "health problems" in their children through various means, usually undetectable by forensic exams (5). They will suffocate the children, poison them, inject them with feces/urine/insulin, block airways with fingers or hard objects, scrub the child's skin with cleaning solvents and bleaches, or feed children large amounts of sugar or salt (6), (3), (5). The mother (the sufferer of Munchausen's and perpetrator of MSBP in more than 95% of cases), or, very rarely, the father, might also feed the child more medication than required, tamper with medical equipment, or swap medicine (7), (8), (6).

Perpetrators can make their deceptions look like plausible and valid medical problems, which doctors will then try to treat:

"The perpetrator can make the a child's emesis, urine, or feces appear bloody by adding their own blood, or paint, or dye, or cocoa. Feeding a child large amounts of salts or sugars can create electrolyte imbalance. Scraping a child's skin with a sharp object or applying irritating solutions, such as oven cleaner, or dyes can cause rashes that will last for days, weeks, or even months. Sedatives, tranquilizers, or the injection of drugs or foreign material can induce neurological symptoms. Only the imaginations of these parent's limit the variety of believable signs and symptoms. The children not only suffer from the parents' actions, but they are also subjected to an extensive array of invasive radiological, medical, and surgical procedures that are unnecessary and painful (9).

Take the case of young Jennifer Bush. Her mother, Kathleen Bush, admitted Jennifer to a hospital on "130 separate occasions" (8). Jennifer spent "640 days in hospitals" and "underwent approximately 40 surgeries for chronic illnesses such as immune system deficiency, gastrointestinal problems and seizure disorders and needed a feeding tube to eat" (10), (8). The surgeries removed not only Jennifer's gallbladder but also her appendix and part of her intestines (9).

What did the mother get out of it? Instant fame? Former First Lady Hillary Rodham Clinton did choose Jennifer as the poster child for her campaign on healthcare reform (9), (10). Kathleen Bush was lauded and given the highest praise for her "exemplary devotion" to her little daughter (11). However, the cause could go deeper than attention seeking, especially since MSBP is an unusual form of abuse.

Why is MSBP bizarre? There are basically three unusual facts surrounding it. One, young children are suffering, even dying, from unknown causes; two, mothers are the ones behind the child's suffering or eventual death; three, the mothers are knowingly harming their children and trying to hide what they are doing from others, even from their spouses.

Young children, more vulnerable than adults and older children, can be susceptible to many diseases. However, a noted history of similar sibling illnesses, unexplained deaths or illnesses of siblings, more than one or two cases of SIDS in the family, or a doctor baffled by the patient's symptoms could all suggest that the child's illness is due to an "unlikely" alternative cause: the mother. This was concluded to be the cause of a child's "sickness" once nurses, spouses, doctors, and other family members noticed that when the mothers left the child, the signs and symptoms of the "illness" would alleviate or no longer surface. An undercover video recorded by an English doctor, David Southall, confirmed suspicions that some children were actually harmed by their parents. "In all Southall recorded 39 different children over the course of eight years. In the taping, 33 of the 39 children are seen being attacked" (12). If this is true, then one in five "cot deaths," or SIDS, is "actually a murder resulting from a mother with MSBP" (5).

This may be hard to accept for many people since, as stated before, mothers, and parents in general, are seen as the "caregivers" and "protectors" of children. For instance: "Few people ask questions, for how many people would dare to think that this wonderful, kind, caring, compassionate person who has devoted all her life to helping others is, in reality, a murderer?" (5). This denial can be linked to societal views. Mothers and parents are held in high esteem when it comes to the care of children. Therefore, why would a doctor even question the reports made by the parents of a "sick" child? "Physicians are trained to believe in the medical history given by a caregiver and may not question the facts" (13). On the other hand, if the perpetrator of MSBP is a nurse or doctor, society is taught to trust the people "in the coats and uniforms." We give people in uniform a certain degree of power. Police have the right to enforce the law, and doctors have the right to heal and save the sick. Whether or not they abuse this power cannot be controlled by what society deems the "proper" function of a job. People, given power, may use it or abuse it. As long as one does not get caught, society will never know how one is using that power. Moreover, most people trust each other. Doctors trust parents to tell the truth about symptoms and problems, and parents trust doctors and nurses to treat and not harm their children.

MSBP violates this societal norm. Mothers, or even fathers, afflicted with MSBP no longer follow the "traditional" role of caregiver. Instead, they inflict pain and irrational attacks on their own flesh and blood. They only seem to get away with it because many parents with MSBP are "highly attentive," "unable to leave the side of the child," while also being "supportive to staff," involved in the "hospital environment," and extremely "close to hospital staff" (14), (15). Thus, on the surface, such a parent seems caring and overprotective. What the medical staff, spouse, and family do not realize is the amount of care this "devoted" individual put into hiding the abuse inflicted on the child and the fabrications created around it.

Why would anyone do this? Proposed causes for MSBP are many and varied. One of the most popular explanations traces the disorder to the mother's own childhood. Most mothers with MSBP may have had an "emotionally deprived childhood with a high probability of physical abuse" (9). These mothers could have suffered at the hands of their own parents, leading them to do the same to their children. Another popular reason, related to the presence of childhood abuse, is the search for recognition, sympathy, or simply "attention," such as Kathleen Bush sought when abusing her daughter Jennifer. These women could be in an unhappy marriage, where the husband is inattentive and distant, or could suffer depression due to childhood abuse and trauma (7), (16), (4), (6), (15), (9). If the husband is distant, the mother may harm the children to "grab the husband's attention or even [use it] as a method of taking revenge against the husband" (4).

Munchausen Syndrome itself (a separate syndrome in which individuals inflict pain on themselves or falsify their own illnesses to gain entry into the hospital) could also carry over, with the sufferer going on to abuse his/her child.

A more controversial cause deals with mothers "outwitting doctors," an authority figure in society. "Some offenders might receive gratification as they fool the doctors. They derive enjoyment from knowing what is wrong with the child as medical experts remain baffled" (15). The relationship developed between the mother and doctor is strictly "sadistic": the mother harms the child to get back at the doctor (7). In some cases, the doctor could stand in as the target for the mother's hatred or ambivalence towards her own father (7). If the mother felt that her father "rejected her," leading to feelings of "psychological abandonment" and "emotional hunger" and causing her to fail in "developing a sense of self," she might redirect her feelings about her father's treatment of her onto a male doctor (7).

All of the proposed sources and causes of MSBP, I feel, can be related to power and control.

"The mother may feel hugely empowered, because she knows society is on her side- mothers don't deliberately harm their children in such a way, thus giving her the edge in a social encounter with the doctor and to a certain extent she has gained control over the thoughts and actions of a well educated doctor by 'feeding him lies' and determining the periods of acute illness in the child" (7).

Like any abusive disorder, control and power are major issues. Anorexics exert control by starving themselves. Bulimics control the amount of food after bingeing by vomiting or doing intense exercise. When children are abused, the first thing they lose is control and power; their attacker wields all of both. As we discussed in class, control means a lot to us, whether physical or mental. For some people, the fact that we cannot control certain aspects and functions of our lives is frightening or makes them feel inadequate (9). Thus, the reality behind this disorder can be seen when examining the actions the mother might take.

In order to cause symptoms in the child, mothers have to physically induce the illnesses. As stated earlier, they can achieve this in numerous ways (injecting feces, applying cleaning solutions to skin, smothering, etc.). Inducing "sickness" in a child gives the mother a great deal of power and control (7), (16). She can do whatever she likes with the child. The mother has the ultimate upper hand in how much the child will suffer before she gets caught, or before the child dies. This absolute control over another being brings secondary benefits: attention to the mother, a feeling of worth conferred by attentive hospital staff and doctors, recognition for "heroism," revenge, etc. Overall, if the mothers practicing MSBP really were affected by childhood abuse or trauma enough to feel "ignored," then control and power would be the most logical cause for the development of MSBP.

To date, we do not have enough information to pin MSBP on one cause. However, I do not think the syndrome could be explained by a single cause even with more information. Any disorder, especially a mental one, can be attributed to several factors and causes. Mothers who perpetrate MSBP without any history of childhood abuse or trauma might be led to harm their children for other reasons: to get back at husbands for disregarding them, to deal with insecurity by seeking attention and support through hurting their child, depression; the list could go on.

A MSBP victim survivor noted,

"Just as we have been called as a nation to stand together, may we survive to pull each other up to the mountaintop and shout-NO MORE. Our children deserve to be safe, to know and receive unconditional love. This will be our mission and our legacy to the next generation" MB (13).

What is important now is to realize that MSBP DOES exist, whatever the reason. The potential victims are defenseless children. By reaching out to the mothers and helping them deal with their problems, or by recognizing the "signs" of MSBP early on, we could prevent more deaths of children (9).


Bibliography

1)Mothers and More Homepage , A non-profit organization site dedicated "to improving the lives of mothers through support, education and advocacy. We address mothers' needs as individuals and members of society, and promote the value of all the work mothers do."

2) CNN.com Homepage, Article by Annelena Lobb on "the market value" of a mother's work

3)Hendrick Health System Homepage, A brief definition and description of Munchausen Syndrome

4)General Health Articles, Brief paper on Munchausen Syndrome

5) Bully Online,Site on Attention Seeking Disorders. Has a section on MSBP

6) Body Magazine Online, site on women's health and lifestyle

7) School of Medicine Papers, paper by a med student on MSBP, entitled Munchausen Syndrome by Proxy

8) Court TV Homepage, site on trials and verdicts

9)Nursing Career Site, Self-Study Module on MSBP for RNs

10)CNN online Homepage, Article on Kathleen Bush case by Susan Candiotti

11)Self Help Magazine, Paper on MSBP by Marc D. Feldman, M.D., entitled, "Parenthood Betrayed The Dilemma of Munchausen Syndrome by Proxy"

12)ABC news online, News OnLine. Article on MSBP.

13)MSBP Survivor Network Site

14)PediatricBulletin, Article on MSBP on Pediatric Bulletin Site

15)Allen Cowling Investigations Homepage, Site on False Allegations. Paper on MSBP.

16)Stranger Box site, Post-Trauma Page. Section on MSBP.


Current Research Investigations of Corollary Disch
Name: Michelle C
Date: 2003-04-22 05:40:52
Link to this Comment: 5483


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Corollary discharge helps both human and non-human animals distinguish between self-generated (internal) and external motor responses. By sending signals which report important information about movement commands and intentions, animals are able to produce motor sequences accurately, with ease and coordination. When a motor command initiates an action such as an electric organ discharge, a copy of the command transmits important information to the brain, serving as a feedback mechanism that assists with self-monitoring; this is formally defined as corollary discharge (6).

Although the corollary discharge system is one of the most important systems animals possess for the control and monitoring of motor movements, its specific neurological mapping is largely unknown. Many studies investigating the specific nature of corollary discharge focus on either auditory or visual sensory perception. Current investigations commonly involve non-human primates and humans who suffer from schizophrenia. Using non-human primates to investigate the neuronal network of the corollary discharge system allows both invasive and non-invasive methods to be explored. In addition, investigating schizophrenics who suffer from auditory hallucinations, marked by an inability to differentiate between spoken and thought speech (1), may also significantly contribute to our understanding of the corollary discharge system.

Sommer et al. proposed a neuronal pathway for corollary discharge in non-human primates, extending from the brainstem to the frontal cortex. Within this pathway the brain initiates movement and also supplies internal information which is then used by sensory systems to adjust for the resultant changes (5). These adjustments were said to occur within the peripheral receptors and motor planning systems, preparing the body for future movements (5). Corollary discharge signals in Sommer's study were identified as movement-related activity projecting upstream, away from motor neurons, transmitting information but not causing any actual movement (4). By measuring neuronal firing along the pathway from the superior colliculus in the brainstem to the frontal cortex during normal and stimulated saccadic eye movements, Sommer measured corollary discharge signals in monkeys. The results suggested that non-human primates do in fact transmit corollary discharge signals during eye saccades, consistent with a brainstem-to-frontal-cortex pathway for transmission.

While Sommer's study provided novel and interesting ideas regarding the specific pathway of corollary discharge, it focused largely on saccadic eye movements in non-human primates. Sommer's findings are consistent with human research; however, they may not be completely generalizable to all human populations. Currently there are various methods for investigating corollary discharge in human populations. Electroencephalography (EEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and single photon emission computed tomography (SPECT) have all provided rich and accurate data regarding both the specific neuronal signaling and the function of the corollary discharge system in humans (4). Some of the most promising research using these methods involves human subjects who suffer from schizophrenia, a disorder which has been suggested to result from a deficit in corollary discharge function.

Schizophrenic patients with positive symptoms commonly experience auditory hallucinations (2). Although not all schizophrenics experience these hallucinations, a modest proportion report having them (1). The inability to distinguish between covert speech (thoughts) and overt speech (talking) is said to be a distinguishing marker of auditory hallucinations (1). These deficits of perception have been thought to occur as a result of dysfunction of the corollary discharge system. Without the proper differentiation of overt and covert speech by this system, the two speech mechanisms are easily confused, which may result in severe symptoms. Normal persons who do not suffer from schizophrenia are said to be more sensitive to the signals which help distinguish overt from covert speech. With this sensitivity, they are better able to make accurate judgments about which speech mechanism they perceive.

The role of corollary discharge in schizophrenia was investigated by Ford et al. in a 2001 study using schizophrenics who had reported auditory hallucinations. Ford recorded subjects' responses to covert and overt speech using EEG and sound recordings (1). The study's findings suggested a frontal-temporal pathway for corollary discharge signaling, as well as a reduced sensitivity to this signaling in schizophrenics, providing a basis for their inability to efficiently distinguish between covert and overt speech. While these findings show only that schizophrenics devote fewer attentional resources (reduced EEG responses to an acoustic probe stimulus) to overt as opposed to covert speech, they offer an explanation of why schizophrenics might experience auditory hallucinations (1).

An additional study by Ford et al. provided a more extensive explanation of the corollary discharge deficits associated with the hallucinations of schizophrenics. The findings suggested that schizophrenics lack the ability to recognize self-generated speech as their own (2). This inability to distinguish between internal, self-generated speech and externally generated speech was suggested to result from deficiencies in the corollary discharge systems of schizophrenics who experience auditory hallucinations (2).

A recent study by Ray Li et al. explored the altered performance of schizophrenics in an auditory detection and discrimination task. While the study's findings concurred with the existing literature suggesting a corollary discharge deficit, no evidence was found to support the idea that the deficit is specific to auditory hallucinations (4). Schizophrenics with both positive and negative symptoms displayed decreased sensitivity in perceptual channels during auditory detection tasks as compared to normal subjects. Schizophrenics who experienced hallucinations, however, did not significantly differ from non-hallucinating schizophrenics in their level of perceptual sensitivity (4). These findings pose an interesting contradiction which challenges the proposed correlation between dysfunctional corollary discharge systems and auditory hallucinations in schizophrenic patients. They suggest that these hallucinations may in fact arise not from deficits in the corollary discharge system, but possibly from another underlying mechanism (4).

Both the neurological mapping of the corollary discharge system and its implications in schizophrenia remain unclear. Sommer et al. provide interesting findings suggesting a brainstem-to-frontal-cortex pathway for corollary discharge in non-human primates. These findings correlate in many ways with the frontal-temporal pathway which Ford proposed in his investigation of schizophrenic patients. While it is logical that the frontal and temporal regions of the brain contribute to the underlying mechanisms of corollary discharge, considering they contain areas such as Broca's and Wernicke's (areas commonly implicated in speech production and comprehension), the specific neural network of the corollary discharge system and its other contributing regions require much more investigation to be determined. Current research on the role of corollary discharge in schizophrenia provides useful avenues for investigating the specific nature of corollary discharge, if schizophrenia is truly a disorder in which corollary discharge is dysfunctional. While the investigations of both Ford and Ray Li support this idea, other studies propose opposing findings. Further investigation of both the specific perceptual incapacities of schizophrenics and their relationship to the corollary discharge system should be performed in an effort to better understand and interpret the mapping and functional properties of the corollary discharge system in schizophrenics and in normal persons.

References

1) Science Direct

2) Science Direct

3) Letters to Nature

4) Schizophrenia Research, PubMed

5) Science Magazine

6) University of Waterloo Cognitive-Neuropsych


Narcolepsy: Am I really that tired?
Name: Christine
Date: 2003-04-23 17:12:27
Link to this Comment: 5505


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Sleepiness, whether due to sleep apnea, heavy snoring, idiopathic hypersomnolence, narcolepsy or insomnia from any number of sleep-related disorders, threatens millions of Americans' health and economic security (1). Perhaps most concerning of these disorders are those in which sleep occurs without any control over when it happens: idiopathic hypersomnolence and narcolepsy. The two are closely related in that both cause individuals to fall asleep without such control, yet narcoleptic naps, unlike those of idiopathic hypersomnolence, typically include dreaming (2). For years, narcoleptic people have been falling asleep in corners despite their numerous attempts to stay focused and awake. But beyond the excessive fatigue that these people experience, there surely must be more that can be associated with causing such uncontrollable sleepiness. In particular, the I-function of the brain may not be involved at all, as people are not aware of when they will fall into their deep sleep.

Narcolepsy has been clinically defined as a chronic neurological disorder involving the body's central nervous system (CNS). The CNS is essentially a "highway" of nerves that carries messages from the brain to other parts of the body. In people with narcolepsy, the messages about when to sleep and when to be awake sometimes hit roadblocks or detours and arrive in the wrong place at the wrong time. This is why someone whose narcolepsy is not managed by medications may fall asleep while eating dinner or engaged in social activities, even at times when they are intently focused on staying awake.

In many cases, however, diagnosis is not made until many years after the onset of symptoms. Studies on the epidemiology of narcolepsy show an incidence of 0.2 to 1.6 per thousand in European countries, Japan and the United States, a frequency at least as large as that of multiple sclerosis (3). The delay is probably due to the fact that patients see a physician only after many years of excessive sleepiness, rather than right away, assuming that sleepiness is not indicative of any kind of disease. Most people probably assume that their need to sleep is due to something that can be easily controlled by changing their daily routines and/or lifestyle, reasoning that easily delays any visit to a physician.

Assumptions aside, recent discoveries indicate that people with narcolepsy lack a chemical in the brain called hypocretin, which normally stimulates arousal and helps regulate sleep. Researchers also discovered a reduction in the number of Hcrt cells, the neurons that secrete hypocretin (4). What this is due to is still rather uncertain. Evidently some degenerative process occurs that lowers the number of neurons able to secrete this stimulating substance, but it might also be an immune response to something else occurring in the body. The amount of serotonin in the body, for example, may also have something to do with sleep response. During REM sleep, the brain produces a vital chemical called serotonin (5). Serotonin is a neurotransmitter, involved in the transmission of nerve impulses; the body uses serotonin every day, and lack of REM sleep means that less is produced, or none at all. Serotonin is synthesized from the amino acid tryptophan, which in the brain increases the amount of serotonin and thereby improves one's sense of well-being. Serotonin helps maintain a "happy feeling," and seems to keep our moods under control by aiding sleep, calming anxiety, and relieving depression (6). Thus, if not enough serotonin is produced, mood can be affected, as can the ability to sleep. Among other possible factors, these may account for some of the symptoms we can actually see, and for the lapses in neurological pathways that we cannot.

Among the symptoms usually observed, excessive daytime sleepiness is usually the first noted. This is probably the most overwhelming and troubling symptom, encountered by people who want to stay awake during the day. The "I-function," a link between body and mind that seems to be in control of conscious actions, does not seem to play a part in narcolepsy. Instead, it seems to be forced to sit back because of the lapse in neurological connections. In a narcoleptic person, the "I-function" has been affected and thus cannot help regulate sleep within the individual's normal scope of conscious control over the body. This may lead to what is perceived to be too little sleep, as frustrated individuals tire themselves out trying to stay awake. While someone who is narcoleptic may get more sleep than most, they still feel exhausted and sleepy. At times a narcoleptic may also experience sudden excitement or be taken off guard, in which case their muscles may become weak, leading them to collapse and fall asleep; this symptom of narcolepsy is known as cataplexy. So in essence, the narcoleptic regularly obeys the body's dictates about when to sleep without ever consulting the "I-function," which results in frustration, as people have become accustomed to being in control of such an integral part of their daily lives.

Narcolepsy is also identified through other symptoms such as sleep paralysis, hypnagogic hallucinations and automatic behavior. Sleep paralysis is being unable to talk or move for a brief period when falling asleep or waking up; many people with narcolepsy will suffer short-lasting partial or complete sleep paralysis. Hypnagogic hallucinations are vivid, scary dreams and sounds reported when falling asleep. Automatic behavior is the habit of carrying out routine tasks without full awareness, or without memory of them later (4). These symptoms often aid in a physician's diagnosis of narcolepsy. However, as they are often associated with other disorders, it becomes increasingly difficult to differentiate a symptom of narcolepsy from the effects of some other disorder. There are also polysomnogram tests available that measure brain waves and body movements as well as nerve and muscle function (4). This is a big leap in helping diagnose narcolepsy, yet it is an option many people might not be able to take, whether for economic reasons or because the results may be inconclusive nonetheless.

Many treatments are currently available to those with the disorder. There is currently no permanent cure for narcolepsy; however, the available options help manage it. Stimulants are the mainstay of drug therapy for excessive daytime sleepiness and sleep attacks in narcolepsy patients. These include methylphenidate (Ritalin), modafinil, dextroamphetamine, and pemoline. Dosages of these medications are determined on a case-by-case basis, and they are generally taken in the morning and at noon. Other drugs, such as certain antidepressants and drugs still being tested in the United States, are also used to treat the predominant symptoms of narcolepsy (7). Drug treatment, however, is only one element of dealing with the symptoms of narcolepsy.

Changes in daily behavior can also help to encourage nighttime sleeping. Avoiding caffeine, nicotine and alcohol in the late afternoon or evening, and exercising regularly at least three hours before bedtime, are easy adjustments to one's lifestyle that help one get more sleep. Eating foods that are high in tryptophan, such as turkey, milk, bananas and yogurt, also promotes sleep (8). Getting enough nighttime sleep, at least eight hours, can help, as can taking regular naps lasting 20-40 minutes at a time. These power naps can really boost one's energy and alleviate the stress of not getting enough rest in one's day.

So the question remains: what really causes narcolepsy? We already know that it is a chronic neurological disorder involving the CNS, but what accounts for this lapse in the pathway? While much research has been done, diagnosing narcolepsy remains a difficult task, as many of its symptoms are often associated with other, seemingly more credible disorders, at least in the eyes of those avoiding physicians. Many options are available to try to control this disorder. The "I-function," which we have seen regularly play a huge role in controlling the body, is forced to stand aside while the body dictates its irregular sleeping pattern. The intricacies of narcolepsy remain to be further investigated, to find a means to diagnose it properly, preferably earlier on, and to treat it so that this disorder may one day no longer bring about economic or personal grief.

References


1)Sleep Apnea, Snoring, Narcolepsy, Insomnia and Other Causes of Daytime Fatigue

2)Better Sleep Now!

3)Center for Narcolepsy: Symptoms and Diagnosis

4)Living With Narcolepsy

5)Sleepnet.com Apnea Forum

6)Seratonin: The chemistry of Well-Being

7)Sleep Channel: Narcolepsy

8)Sleep: Alternative and Integral Therapies


Change Blindness
Name: Erin Fulch
Date: 2003-04-26 21:52:59
Link to this Comment: 5521

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

After investigating spatial cognition and the construction of cognitive maps in my previous paper, "Where Am I Going? Where Have I Been: Spatial Cognition and Navigation", and growing in my comprehension of the more complex elements of the nervous system, an informed discussion of human perception has become possible. The formation of cognitive maps, which serve as internal representations of the world, is dependent upon the human capacities for vision and visual perception (1). The objects introduced into the field of vision are translated into electrical messages, which activate the neurons of the retina. The resultant retinal message is organized into several forms of sensation and is transmitted to the brain so that neural representations of given surroundings may be recorded as memory (2). I suggested in my previous paper that these neural representations must be maintained and progressively updated with each successive change in environment and movement of the eye. Furthermore, I claimed that this information processing produces a constant, stable experience of a dynamic, external world (1). However, myriad studies, along with the testimony of any motorist who has had the unfortunate experience of hitting an unseen object, contradict the universality of that claim and illuminate a startling reality: human beings do not always see the objects presented in their visual field, nor alterations in an observed scene (3,4,5,6,7,8,9). The failure to consciously witness change when distracted for mere milliseconds by saccade or artificial blink events is referred to as "change blindness." In order to comprehend this phenomenon, the physical act of looking and the process of seeing must be differentiated. Through an examination of change blindness, we may confirm and attempt to explain this distinction.

The concept of change blindness has been addressed over the course of nearly half a century, with increasing focus on the subject throughout the past five years (3). Although biologists, psychologists, and philosophers have yet to resolve definitively the paradox of looking without seeing, the investigation of each theory on the matter yields deeper insight into visual perception and sight, as well as a decreasingly incorrect understanding of those components of the nervous system which are crucial for visual cognition. Under normal viewing conditions, changes produce transient signals that can draw attention. Change blindness studies are designed to eliminate or block these transient signals by inserting a visual disruption when the change occurs (3). Flicker Paradigm studies examine the occurrence of change blindness and attempt to explain the failure to see that which is directly in front of our eyes. The Flicker Paradigm demonstrates the essential role of attention in the process of seeing (4). The alternation of an object and a modified version of that same object is interrupted by millisecond flashes of blank space. Subjects are then asked to report changes in the images.
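The alternating-display procedure described above can be sketched as a simple stimulus schedule. This is only an illustrative model: the function name and the display and blank durations are assumptions of mine, not figures taken from the cited studies.

```python
# Illustrative sketch (not from the cited studies) of a flicker-paradigm
# stimulus schedule: an original and a modified image alternate, with brief
# blank frames masking the transient signal that a change would otherwise
# produce.

def flicker_schedule(display_ms=240, blank_ms=80, cycles=3):
    """Return the ordered (stimulus, duration_ms) frames for one run."""
    frames = []
    for _ in range(cycles):
        frames.append(("original", display_ms))
        frames.append(("blank", blank_ms))   # blank hides the change transient
        frames.append(("modified", display_ms))
        frames.append(("blank", blank_ms))
    return frames

schedule = flicker_schedule()
# Every transition between original and modified is separated by a blank
# frame, so no local motion transient draws attention to the changed region.
```

In an actual experiment the subject watches this alternation and reports the change; with the blank frames in place, detection can take a surprisingly large number of cycles.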

In order to understand the events leading to the failure to recognize change, comprehension of the mechanism by which change is successfully recognized is requisite. According to the traditional understanding of this process, an individual must form an internal representation of the initial display. Then, a comparison must be made between the first and second displays. Because the initial display is no longer available, the internal representation must be retained for comparison to the second display. Finally, conscious access to the results of this comparison must be available; to explicitly report the change, the observer must be acutely aware of those results. Change blindness, then, is a product of failure in at least one of these component processes (5). Although the majority of theories on change blindness focus on failed creation of representation or comparison, failed detection has been shown to occur even in the presence of an accurately formed representation if a proper comparison is not made between that representation and the second display (6). Furthermore, a comparison does not guarantee the avoidance of change blindness if any fault exists within the representation, including insufficient detail or entire absence of the changed feature (7).

This same mechanism involved in the perception of change has been investigated through the Flicker Paradigm and described in terms of memory. According to this paradigm, change perception is a highly integrated process involving several steps. First, information must be loaded into visual short-term memory (VSTM) (3). The subject must then retain that information over the duration of the blank interval, compare the stored information to the visible information in the new display, and finally unload the VSTM and shift attention to a new location. Without focused attention, representations of objects within the brain are ephemeral and no conscious detection of change can occur. Focused attention, therefore, acts as the mediator of change perception by giving objects coherence across time and space (8). Interestingly, considerable evidence exists that something even beyond anticipation of a change must be present for the observation of change. Rensink has shown that even when instructed to look for change, a task that leads subjects to actively attend to an image, change can go undetected (4).
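The load-retain-compare-unload sequence above can be caricatured in a few lines of code. The capacity figure, the item names, and the function itself are my own illustrative assumptions, not part of the cited model:

```python
# Toy model (my own illustration, not from the cited studies) of change
# detection limited by visual short-term memory: only attended items are
# loaded into VSTM, so a change outside the attended set goes unnoticed.

VSTM_CAPACITY = 4  # a commonly cited rough limit of around four items

def detect_change(first_display, second_display, attended):
    # Step 1: load the attended items (up to capacity) into VSTM.
    vstm = {k: first_display[k] for k in attended[:VSTM_CAPACITY]}
    # Steps 2-3: retain over the blank interval, then compare to the new display.
    changed = [k for k, v in vstm.items() if second_display.get(k) != v]
    # Step 4: only differences among the stored items reach awareness.
    return changed

first  = {"car": "red", "tree": "green", "sign": "stop", "door": "blue"}
second = {"car": "red", "tree": "green", "sign": "yield", "door": "grey"}

detect_change(first, second, attended=["car", "tree"])   # -> [] (change blindness)
detect_change(first, second, attended=["sign", "door"])  # -> ["sign", "door"]
```

A change to an unattended item never enters VSTM, so it cannot reach the comparison step: the model is "change blind" to it even though the change is plainly present in the display.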

The theoretical explanations for the malfunction of change perception depend upon the two common understandings of this capacity. The first of these theoretical approaches begins with the assumption that the creation of internal representations of the outside world, and the subsequent activation of those representations, allows us to engage in the experience of seeing. The other prominent approach suggests instead that the world functions as its own external representation, and sight is enabled once an organism has mastered the governing laws of sensorimotor contingency (9). Those who subscribe to the notion of the creation of an internal representation of surroundings attribute change blindness to the sparseness of this representation (4). Thus, only those environmental objects encoded as interesting are attended and seen. When a distracting factor draws attention away from an object observed without attention, a change in that object goes unnoticed because it was not properly represented in the initial internal image (3). The proponents of the second philosophy, however, contend that the structure of the rules that govern visual perception, the sensorimotor contingencies, and the knowledge of those rules mediate the explorative activity that visual perception is understood to be (9). A distinguishing contingency is the fact that when the eyes close during blinks, the stimulation changes drastically, becoming uniform: the retinal image goes blank. Scientists holding these beliefs suggest that vision requires the satisfaction of two basic conditions. First, the environment must be explored in a manner governed by sensorimotor contingencies, both those fixed by the visual apparatus and those fixed by the character of the observed objects. Second, the brain of the observer must be actively exercising mastery of the laws of sensorimotor contingencies.
These theories question the formation of an internal representation under the flicker paradigm circumstances, suggesting instead that the external environment serves as its own representation for immediate probing, during which change blindness can occur (9). Results of experiments conducted on these bases show that the very presence of a visual stimulus may obligatorily cause observers to make use of the world in this "outside memory" mode, even though it is less efficient than normal memory processing.

In both theoretical approaches, findings suggest limitations on the amount of information that can be consciously retained and compared between two views, even over short delays (6). Thus, successful change detection requires attention to be focused on an object. And yet, all details and aspects of even those objects under close attendance are often only partially retained and compared across views. Visual perception is an extraordinarily complex function, immense in its levels of processing as well as in its differential success across variable tasks. This variation in capability results from the components of perception and their respective operations. For example, short-term visual memory, which is essential for detection of change, is found to exhibit exquisite performance in memory tasks but has a decreased threshold under certain circumstances (6). Current research is valuable, not for its conclusive findings regarding the occurrence of change blindness, but for the implications it holds for perception and for the future of its study. Work is being done continuously to elucidate the relative roles of spatial location, memory, sensory and attentional mechanisms, and general psychophysical capacities in response to specific cognitive demands (3, 7, 9, 10). By exemplifying the potential for failure of a system that we assume to work flawlessly in our daily interactions, change blindness has incited uniquely directed investigations of visual consciousness and facilitated the overall comprehension of the human neurological experience.

References

1)"Where Am I Going? Where I Have Been", First Web Paper

2)Human Position Sense and Spatial Maps, a resource on spatial cognition

3)Rensink Collection, a grouping of articles by one of the creators of the Flicker Paradigm

4) Annual Review of Psychology, an article by Ronald Rensink on change detection and visual perception

5)Cognet, a site on Cognition

6) Memory for Centrally Attended Changing Objects in an Incidental Real-World Change, an article by Levin, Simons, Angelone, and Chabris

7) Scott-Brown, K.C. & Orbach, H.S. (1998) "Contrast Discrimination, Non-Uniform Patterns and Change Blindness". Proceedings of the Royal Society of London. 256 (1410): 2159-2164.

8)Max Planck Institute

9)A sensorimotor account of vision and visual consciousness , Behavioral and Brain Sciences article from 2001

10)Glasgow Caledonian University, current research in vision sciences


A Review: The Forbidden Experiment by Roger Shattu
Name: Geoff Poll
Date: 2003-04-27 02:34:01
Link to this Comment: 5522


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

It is one of the oldest unanswered questions in all of science. Though slightly more grounded in empirical science than the likes of "Where did we come from?" or "Why are we here?" the impossible Nature/Nurture dichotomy has tormented truth-bound scientists for years. Recent advances in genetics have brought forward new possibilities for those who would study the pure effects of environmental variables on animals, but we are far from allowing ourselves to manipulate other human beings in such ways for the sake of collecting data. This strong moral stance does not diminish our curiosity and so the question must be asked: What would we do if a case in which the human had already been manipulated, by no will of our own, fell into the hands of science? How far would we go?

Every couple hundred years, one of these humans, by chance or by a case of true cruelty, falls into the hands of scientists, eager to make the most of such a 'misfortune'. Roger Shattuck's The Forbidden Experiment follows one of the more prominent cases of our recent history, that of the 'Wild Boy of Aveyron.'

The book takes little time to pique the reader's curiosity with the tale of a "savage" twelve-year-old wandering out of the woods of southern France on a cold January evening in 1800. Without a known history or the ability to communicate with his captors, Victor, as he was later named, was assumed to have lived in the wild for at least six years and probably more. In the midst of an intellectually lively France, Victor wandered into immediate fame and was brought to Paris so that the most capable scientists could take advantage of studying a human raised almost completely in isolation. The story is a short one, spanning only the 28 years that Victor lived in Paris society before his death in 1828. The duration of his immediate interest to the scientific community in Paris was far shorter. (1)

To an intellectually enlightened community the contradiction should have been apparent: they detained Victor and forced him to learn how to live as a proper human being in civilized society, yet he was treated every step of the way as if he were an animal. The inconsistencies were nowhere more apparent than in the oftentimes tortured mind of the main scientist overseeing Victor's progress, the young and gifted Dr. Itard. It was Itard who subjected Victor to inhumane treatment, from keeping him on a leash during walks around the grounds to using violence to arrive at a desired level of obedience. While popular opinion had Victor as some kind of savage creature, which justified the type of treatment he received, Itard disagreed. His own radical belief that Victor was more than animal had already condemned his own methods. (1)

The span of Shattuck's narrative falls within the five years Itard spent attempting to give back to Victor human qualities we all grow up with, but which Itard believed had lain dormant for the years that Victor spent in the wild. The author's treatment of Itard makes for most of the interesting reading of the book. Shattuck admires in Itard his determination, his passionate curiosity and his faith in the innate goodness of the almost altogether unresponsive being in front of him. It is through much of Itard's own pen that we get not only the notable events of Victor's progress, but a look inside the mind of a man who needed to believe that we are made of something more than the right combination of a biological foundation and a timely social training. It was the training which Victor lacked, but in his own training Itard was searching for a third element—that which makes us human. (1)

His search was misguided, and after his original diagnosis he never freed himself from a strict idea of what success with Victor would have been. He saw his biggest challenge as that of language, and in the end his inability to provide Victor with the gift of speech left him with a feeling of failure. Shattuck aptly seizes on the irony of the situation: Itard kept Victor in an institution for deaf-mutes, yet he never once attempted to teach Victor sign language. (1)

At each step of the training he hoped for Victor to act like his idea of a human while treating him still more and more like a caged animal. The most striking series of events arrived with the onset of puberty in Victor. Itard saw the driving sexual force of puberty as a push towards love, the most human of all emotions, and towards the forming of new ties and relationships between young people. He had kept Victor isolated from peers, but when the boy began to feel sexual urges, the scientist set up controlled situations with females in which he expected Victor's natural urges to lead him towards some kind of more substantial communication. Without any kind of reference to go by, Victor felt frustrated and gave up. Itard gave in as well and condemned Victor as an idiot. (1)

Years later, shortly after his death, a younger doctor honored Itard's work in a statement that throws light on Itard's greatest obstacle—his own colleagues. "To train an idiot," praised Dr. Bosquet, "to turn a disgusting, antisocial creature into a bearable obedient boy—this is a victory over nature, it is almost a new creation."(165). It was this way of thinking about people that allowed the experimentation to proceed and a very creative young scientist to find himself walled in. (1)

Sadly, Victor and Itard's case does not stand alone. Among many other claims of "wild" children, the most famous recent case was that of Genie. Thirteen years old when found in 1970, Genie had been deprived by her parents of all social contact for the previous ten years of her life. She was physically debilitated and was not able to speak, a talent she never fully retrieved. As soon as she was discovered by a social worker, scientists apprehended Genie with the same vigor as they had Victor 150 years earlier. (2)

Harlan Lane, a psycholinguist and author of a book about Victor called The Wild Boy of Aveyron, justifies the work done on Genie, pleading, "It's a terribly important case." Lane reasons, "Since our morality doesn't allow us to conduct deprivation experiments with human beings, these unfortunate people are all we have to go on." (3)

Victor was abandoned by Itard after he failed to produce speech. He lived the remainder of his life in social isolation, not at all a deviation from his first 18 years. (1) Genie continues to live but struggles with all aspects of her life. Contact with the researchers assigned to her case was broken off after they were accused of treating her inhumanely for the gain of fame and prestige, without any hint of scientific direction. The suit was settled out of court. (3)

Shattuck approaches the case of Victor with great curiosity and in many ways justifies the ignorance and cruelty of Itard and the other scientists of the time. In a more recent book, Forbidden Knowledge, that innocence is less evident, as he makes some heavy criticisms of our natural affinity for inquisitiveness. He alludes mostly to mythology; his examples range from Adam and Eve to Mary Shelley's Dr. Frankenstein. Each classic case demonstrates an instance of temptation overriding intellect. (4),(5) Victor, on the other hand, was unconcerned with knowledge and ego. His mind was freer than ours may ever be. Shattuck speaks of Victor escaping from "humanity into animality" (181)(1). It is an attractive concept.

It makes sense that we would like Victor to speak to us. We want to know what he knows. Itard never thought of Victor's mental state in terms of knowledge, but preferred forgetfulness. (1) Is that what we want: to forget? What are our motives in these 'unfortunate' instances? Would we learn from the dumb how not to speak, how to forget? Or would we teach language and culture so that Victor may live with us and suffer as we do? What does that make us?

References

1) Shattuck, Roger. The Forbidden Experiment. New York: Farrar Straus Giroux, 1980

2)NOVA transcript, transcript for 'Genie' episode

3)The Civilizing of Genie , the story of Genie

4)Online News Hour, Shattuck interview

5)Ethical Culture Book Review, review of Forbidden Knowledge

6)Feral Children Website, a great resource about 'wild' children


Autism: A lack of the I-function?
Name: Kate Shine
Date: 2003-04-27 19:36:12
Link to this Comment: 5527


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In the words of Uta Frith, a recognized expert on autism, autistic persons lack the underpinning "special feature of the human mind: the ability to reflect on itself." ((3)) And according to our recent discussions in class, the ability to reflect on one's internal state is the job of a specific entity in the brain known as the I-function. Could it be that autism is a disease of this part of the mind, a damage or destruction to the specialized groups of neurons which make up the process we perceive as conscious thought? And if this is so, what are the implications? Are autistic persons inhuman? Which part of their brain is so different from that of "normal" people, and how did the difference arise?

The specific array of symptoms used to diagnose an individual as autistic does not appear as straightforward as Frith's simple statement. It seems hard to fathom that they could all arise from one similar defect in a certain part of the brains of all autistics. Examples of these symptoms include a preference for sameness and routine, stereotypic and/or repetitive motor movements, echolalia, an inability to pretend or understand humor ((3)), "bizarre" behavior ((4)) and use of objects ((2)), lack of spontaneity, excellent rote memory ((2)), folded, square-shaped ears ((3)), lack of facial expression, oversensitivity, lack of sensitivity, mental retardation, and savant abilities.

Obviously not all autistics exhibit all of these characteristics. Psychologists, however, often believe certain symptoms to be more indicative of the disease than others. The word autism stems from a Greek root meaning, roughly, "selfism." Autistics are described as very self-absorbed, and some academics refer to a short list of three characteristics to diagnose them: impairment of social interaction, impairment of communication without gestures, and restrictive and repetitive interests ((3)). Tests have also been designed in an attempt to diagnose the disease decisively. One of these is the Sally-Anne test, in which autistic children usually show an inability to understand the perception of another child in a scenario, or to understand that she could believe something false ((4)). Another test involves two flashing lights: autistic children will look at a light if it flashes, but while other children show a tendency to move their eyes toward a new light, autistic children will not ((3)).

Observation of the behavior of autistics makes it clear that they do interact with their world and understand certain aspects of it to a degree. However they often appear intensely focused on one perception or sensory experience and are unable to integrate multiple factors of emotion, intention, or personality in the way most people do. As a result of this inability to perceive order in all of the circumstances of their environment, they often find themselves in a world that seems very chaotic and random.

Jennifer Kuhn hypothesizes that many of the symptoms of autism are defense mechanisms stemming from a feeling of helplessness to control a situation. ((3)) When rats are forced to jump at one of two doors, randomly chosen either to open onto food or to stay closed and hurt the rat, they will always pick the same door (Lashley and Maier, 1934). Thus the repetitive behavior of autistics, their insistence on routine, and their echolalia can all be explained as attempts to calm themselves by achieving an inner order. They are effectively forsaking experimentation in an attempt to avoid failure. This might be evidence for damage to an I-function, as we have discussed that observation and experimentation are innate behaviors of humans, and that they require personal reflection.

However, many autistics do not exhibit behaviors such as echolalia all the time but rather more frequently when they are in an unfamiliar environment. ((1)) And I have observed in myself a tendency to get a nervous repetitive motion in my foot in certain situations or even a rhythmic movement of my head when I am concentrating very intensely, but I am still confident of my ability to reflect on my own thoughts.

One of the most obvious ways to find out whether autistics share a collective brain abnormality that causes their unique array of characteristic behaviors is to look into the structure of the brain itself. In studies of autistic brains using scanning images as well as autopsies, scientists have generally not been able to agree on a single consistent abnormality. One observation has been that the neurons in the limbic systems of autistic subjects are often smaller and more densely packed than those of other individuals. Monkeys whose limbic systems were removed in experiments showed weaknesses in social interaction and memory, as well as exhibiting locomotor stereotypies similar to those of autistics. ((2))

Other areas that often exhibit abnormality in autistic brains are the cerebellum and the hippocampus, where in the CA1 and CA4 fields there are fewer dendritic arbors and, in the cerebellum, fewer Purkinje and granule cells. ((2)) Purkinje cells play a role in regulating cell death, and where they are few in number, cells may crowd each other and fail to develop networks properly. This could account for autistics' tendency to be overwhelmed by stimulation. The cerebellum is also responsible for muscle movement, and other studies have found significantly fewer facial neurons (400 as compared to 9,000 in a control brain), which could also account for the absence of facial expression. ((3)) Other areas which some studies have argued are abnormal in autistic brains are the left hemisphere, which controls language ((5)), ((6)), the temporal lobes, which control memory ((2)), and the brain stem ((3)).

However, many of these findings are still considered controversial or inconclusive, and may be attributable not to autism alone but to many other developmental disorders. And even if these observations are accurate, there is still no explanation for why these differences in brain structure occur simultaneously.

One very convincing hypothesis to explain these occurrences has been proposed by Patricia M. Rodier. Inspired by the fact that relatives of autistics are much more likely than the general population to be diagnosed with the disease, Rodier became convinced that there must be some definite genetic factor contributing to it. However, there remained the fact that an identical twin of an autistic sibling will be autistic only about 60% of the time, which makes an argument for environmental influence as well.

She then came upon a study of children whose mothers had been exposed to a drug called thalidomide during pregnancy, and found that a full 5% of their children were diagnosed as autistic. By combining information about the drug with other birth defects in the ears and faces of the children, she was able to estimate that autism begins 20 to 24 days after conception, when the ears and the first neurons in the brain stem are starting to grow. When she examined the brains of previously autopsied autistics, she noticed they showed evidence of particularly short brain stems, and often a small or even absent facial nucleus and superior olive.

This information suggests that the genetic factor which causes these abnormalities in the primitive brain could then go on to cause the secondary abnormalities in the various other, more sophisticated parts of the brain already mentioned. Rodier has identified a gene known as Hoxa1, a variant of which is present in 40% of autistics but only 20% of non-autistics, and which is found more often in relatives who are autistic than in those who are not. Although this is by no means a conclusive argument that autism is genetic, there could be other variants of that allele which contribute to autism, or as-yet-undiscovered genes which decrease the risk of it, that could account more strongly for genetic determination. ((3))

So could there be specific genes which actually code for the I-function? This is possible, but there is still certain evidence for environmental factors causing some of the symptoms of autism. In-utero exposure not only to thalidomide but also to rubella, ethanol, and valproic acid has been linked to the disease. And there is a significant population of people who believe that the elimination of glutens and caseins found in milk, apple juice, wheat, and other products relieves the buildup of certain chemicals in the brain and makes autistics better able to fit into society. ((8)) A recent study of abused children has also claimed that the left hemisphere of the brain, specifically areas dealing with memory and expressing language in the limbic system, can be overexcited and damaged as a result of intense sexual or physical abuse. ((5)) Perhaps all of these environmental factors do additional damage to the different sectors of the I-function.

What does all of this say about autistic people? Do the characteristic differences in their brains necessarily indicate that they are lacking this "I-function," the awareness part of the mind that makes someone human? I do not think so. The fact that rats and monkeys can show similar symptoms when these areas of their brains are damaged destroys this argument: the self-awareness or I-function does not appear to be limited to humans. It may simply be that the I-function operates differently, or is damaged, in the minds of autistics. There are huge degrees of severity among what are called the "autism spectrum diseases," which can be defined to include dyslexia, ADHD, Asperger's syndrome, pervasive developmental disorder, and more, with autism being the most severe. Any person can have symptoms of autism without being fully "autistic." But even severe changes in the I-function system should not necessarily be considered defects.

It is entirely possible that autistic people have thoughts and self-reflections which they cannot communicate through language or in society's "normal" modes of expression. While they do seem to have a diminished capacity to reflect on as much of the world at once as other people do, it may be that what they do reflect on is just as meaningful, or more so. Autistic savants, for example, show an amazing ability in one area such as music, art, or calculation which often surpasses what it seems a "normal" person could ever achieve. And these creations are not just byproducts of the rote memory and habit that some scholars insist is all that exists in autistics. It is my personal opinion, after viewing the art of the autistic savant Richard Wawro ((9)), that the work and the artist are infused with at least as much emotion and unique perspective as any "normal" person's.

Frith also argues that autistics are "...not living in a rich inner world but instead are victims of a biological defect that makes their minds very different from those of normal individuals." ((4)) But who is it that has the authority to decide what a "normal" individual is? The defense mechanisms that autistics display are evidence that they do experience distress and anxiety, but Freud argued that non-autistic or what Frith would call "normal" individuals also display a number of common defense mechanisms as a result of the anxieties in their lives, and this view is still widely accepted. Just because these may appear more normal or subtle to us does not mean they are not important.

In a society of autistics, we might feel a great deal of incoherence at how they organize their world, and we might find ourselves the ones unable to communicate. In the movie "Molly," which concerns a fictional autistic woman who becomes normal and reflects on her experiences, there is a speech that illustrates this point beautifully: "In your world, almost everything is controlled....I think that is what I find most strange about this world, that nobody ever says how they feel. They hurt, but they don't cry out. They're happy, but they don't dance or jump around. And they're angry, but they hardly ever scream because they'd feel ashamed and nothing is worse than that. So we all walk around with our heads looking down, but never look up and see how beautiful the sky is." ((10)) If autistic people can possibly use their minds to see a beauty we cannot imagine, who are we to call them the abnormal victims? If they can only see the world in a sincere way and not understand irony or humor, is that necessarily a bad thing? We cannot truly know what autistics are aware of internally unless we have experienced their world.


References


1) Stereotypic Behaviors as a Defense Mechanism in Autism, Jennifer Kuhn, Harvard Brain website, 1999.
2) Neurobiological Insights into Infantile Autism, Amy Herman, 1996.
3) Rodier, Patricia M. "The Early Origins of Autism." Scientific American, February 2000: 56-63.
4) Frith, Uta. "Autism." Scientific American, June 1993 (reprinted 1997): 92-98.
5) Teicher, Martin H. "The Neurology of Child Abuse." Scientific American, March 2002: 68-75.
6) Autism and Savant Syndrome, web paper by Sural Shah.
7) Considerations of Individuality in the Diagnosis and Treatment of Autism, web paper by Lacey Tucker.
8) Diagnosis: Autism, Patricia S. Lemer, Mothering Magazine, 2003.
9) Paintings by Richard Wawro, an online gallery of an autistic savant's work.
10) Duigan, J. (Director). (1998). Molly [videotape]. Santa Monica, CA: Metro-Goldwyn-Mayer Pictures Inc.


Something Happens Sometimes: A sample and critique
Name: Sarah Feid
Date: 2003-04-28 17:12:51
Link to this Comment: 5537

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"Do you remember how electrical currents and 'unseen waves' were laughed at? The knowledge about man is still in its infancy." - Albert Einstein

Introduction

Perception of future events (precognition), communication through thoughts (telepathy), material manipulation without physical contact (telekinesis), sight of an object or place millions of miles away with enough accuracy to draw it (remote viewing) – these are a few cases of what is referred to as "psi phenomena," also known as parapsychological or psychic phenomena. "Psi" refers to "anomalous processes of energy or information transfer... that are currently unexplained in terms of known physical or biological mechanisms."(1) Long dismissed by scientists and other skeptics all over the world, these occurrences are often attributed to trickery, hallucination, lying, chance, and even spiritual influence. Claims of psychic ability come from many varied sources. From the friend who has premonitory dreams and the dog who knows when the master has decided to come home, to the glamorous astrologer with a 900-number and the clairvoyant with a TV show, stories of paranormal abilities range from personal and thought-provoking to distant and Hollywood-esque. Are these things really possible? What does the scientific community actually know about these phenomena? Ultimately, one must ask the question, what can the scientific community know about these phenomena?

This paper is intended to provide a small sample and critique of the available scientific research on these unexplained and often dismissed phenomena. The examples which form this review are: research on unexplained phenomena not associated with "psychic" individuals, large-scale research centering on many individuals with "psychic talent," and an investigation of the claimed abilities of a single internationally celebrated "psychic."

Despite the historical and prevalent stigma and sensationalism associated with this field, many respected educational establishments have laboratories involved in the research of psi. The Princeton Engineering Anomalies Research program, instituted in 1979 to investigate mind-matter interactions (2); the Parapsychological Association, a 1957 offshoot of the Duke Laboratory (3); the Koestler Parapsychology Unit at the University of Edinburgh (4); and Stanford University's 1946 endeavor, the Stanford Research Institute, are four of these. It should be noted that the Stanford Research Institute separated from the university in 1970 and became SRI International. (5)

Examples

Impersonal phenomena
If a person is asked to identify the color of a rectangle, and is subsequently asked to read a randomly generated color name, it is well-known that a matching color name will be called out faster than a mismatching color name. This pattern of outcome, the Stroop effect, has been used since the 1930s to measure cognitive interference. By recording the time it takes to announce the rectangle's color (T1) and comparing that to the time it takes to read the subsequent color name (T2), researchers can approximate how long it takes individuals to process these inputs. (6)

Holger Klintman, a clinical psychologist at Lund University in Sweden (7), was attempting to improve the sensitivity of these measurements when he discovered another interesting pattern: there was a correlation between T1 and T2. Specifically, if the color name and the rectangle color matched, then T1 for that trial was faster than if they mismatched. That is, if the subject was shown a red rectangle, and the randomly generated color name in the future was going to be the word 'red', the subject announced the color of the rectangle faster. This is a startling correlation: since the color name was randomly generated only after the rectangle image was taken away and T1 recorded, there was no logical way for the future word to have affected T1. And yet, it did. In fact, in five experiments designed to rule out the possibility of mechanical or design influence, he reported a surprising cumulative p-value of 10^-6. Dean Radin of the Boundary Institute revisited this phenomenon in a paper published in July of 2000, where he reported a similar correlation with a different experimental design, and a highly significant p-value of 0.001. (6)
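The comparison at the heart of Klintman's finding is simple to state in code. Below is a toy sketch of that analysis; the trial records and timings are invented for illustration, and Klintman's actual data and apparatus were of course different. The idea is to group the T1 reaction times by whether the later color word matched the rectangle's color, then compare the group means.

```python
# Toy illustration of Klintman's T1 comparison. The trials and
# timings below are invented; each record holds the rectangle's
# color, the color word generated AFTER T1 was recorded, and T1.
trials = [
    {"rect": "red",  "word": "red",   "t1": 0.48},  # match
    {"rect": "blue", "word": "blue",  "t1": 0.51},  # match
    {"rect": "red",  "word": "green", "t1": 0.55},  # mismatch
    {"rect": "blue", "word": "red",   "t1": 0.57},  # mismatch
]

def mean_t1(match):
    """Mean T1 over trials where (rect == word) equals `match`."""
    ts = [t["t1"] for t in trials if (t["rect"] == t["word"]) == match]
    return sum(ts) / len(ts)

# In Klintman's data, match trials showed faster mean T1 than
# mismatch trials, even though the word was chosen later.
print(mean_t1(True), mean_t1(False))
```

The puzzle, of course, is not the arithmetic but the fact that the grouping variable did not yet exist when T1 was measured.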

Government Research
In 1969, US intelligence sources concluded that the Russian government was investing large amounts of time and money into development of 'psychotronic' weaponry, which included psychic spies. In response, the US government began research in 1972 into the viability of remote viewing as an intelligence tool. The program, originally the SCANATE program under the Central Intelligence Agency (CIA), was maintained for 23 years under various names. In that time, several hundred projects involving thousands of remote viewing sessions were completed. Psychics from this program were made available to several intelligence branches, including the CIA, the National Security Agency (NSA), the US Army Intelligence and Security Command (INSCOM), the Army Chief-of-Staff for Intelligence (ACSI), and the Defense Intelligence Agency (DIA). (8)

The SCANATE program, eventually termed the STAR GATE program, combined active operational training in psychic intelligence gathering with intensive laboratory research. The laboratory component began at SRI in 1972. The researchers collected individuals whom they thought showed natural psychic ability, with a minimum accuracy rate of 65%; 23 remote viewers in total were involved in the STAR GATE program. One of the fruits of this program was a set of instructions developed by professed psychic Ingo Swann, which supposedly allow any human being to develop the ability to remote view. This instruction set was used in the training aspect of the program. The National Academy of Sciences' National Research Council reviewed the program unfavorably in 1984, and the American Institutes for Research released a report in 1995 stating that a statistically significant effect had been demonstrated in the program, with a 15% accuracy rate, though the review was negative overall. The CIA concluded that no useful intelligence data had ever been provided by the program, and terminated it in 1995. (8)

Professional Psychic
Uri Geller, an Israeli psychic, is an unabashed showman. He has been internationally famous since the 1970s for his purported ability to bend spoons with the lightest touch, and often no touch at all. He has been demonstrating this ability to friends since he was five years old, and has been giving paid public performances since 1969. An author, artist, inventor and controversial figure, he has been alternately praised and condemned by the media as a true psychic and a dramatic magician, respectively. He claims many abilities; telepathy, telekinesis, remote viewing, dowsing (detecting specific metals and minerals in the ground), precognition, the ability to erase digital media, and the ability to make seeds sprout in his hand within seconds are among them. He credits himself with helping certain sports teams win, with influencing the US Army, and with possibly averting World War III by mentally bombarding a Russian-American diplomatic meeting with thoughts of peace. Like other psychics, his abilities do not work under all circumstances, and while his successes support his claims, his failures often lead to wholesale dismissal of them. (9) (10)

Uri Geller was brought to the Stanford Research Institute in 1973 by respected physicists Harold Puthoff and Russell Targ (11) of the SRI Electronics and Bioengineering Laboratory. They carried out a series of blind and double-blind experiments in a controlled environment, during which Geller was asked to demonstrate extra-sensory perception in two protocols: remote viewing of selected images, and remote viewing of a die shaken in a steel box. In the first series, Geller produced drawings of images which had been sketched by people not in contact with him, and which were sealed in multiple opaque envelopes that he could not touch and often was not allowed to see. In the second series, he determined the upward face of a die shaken in a box. While Geller on occasion declined to provide a response when he was "unsure," all of the responses he did provide were without error: the die faces were correct in each of the eight trials, and the target pictures were correctly matched with Geller's sketches by two SRI scientists unassociated with the project. The probabilities of these outcomes occurring by chance were 3x10^-7 and 1x10^-5 against, respectively. While Puthoff and Targ reported also observing Geller's metal-bending abilities, they concluded that the control conditions were not sufficiently stringent to provide data supporting the paranormal claim. (12)
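As a sanity check on the order of magnitude of the die result, one can model the eight trials as independent 1-in-6 guesses; under that naive binomial model, eight correct calls have probability (1/6)^8, roughly 6x10^-7. The report's quoted figure presumably reflects the actual trial protocol (including passed trials), so the sketch below is only an approximation, not a reconstruction of SRI's statistics:

```python
from math import comb

def binom_tail(n, k, p):
    """P(at least k successes in n independent trials with success prob p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Eight die faces, all called correctly, each treated as a 1-in-6 guess:
p_die = binom_tail(8, 8, 1 / 6)
print(f"{p_die:.1e}")  # on the order of 10^-7, like the reported figure
```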

Discussion

Claims of this nature are difficult to assess. Support for them is often largely anecdotal, and full of uncertainty. A psychic can rarely perform before a group of expert magicians; it could be that they create a stressful and inhibitive environment that is by its nature detrimental to the psychical mechanism, or it could be that the psychic dare not reveal himself as a charlatan while surrounded by those who would surely catch him. While the psychics ply their trade, the skeptics seek to debunk them at every possible turn. (13) And yet, while the skeptics ply their trade, others seek to debunk their debunkings! (11) In many cases the believers document their cases just as well as the skeptics.

Even controlled research presents a fundamental problem: how does one even begin to design an experimental protocol for phenomena which defy biology and physics as we know it? The fact is, experiments designed to test the existence of psi cannot in reality be trusted to do just that. All such experiments can do is provide possible evidence about the nature of psi. Without knowing what exactly one is testing, one cannot narrow down the possible factors affecting it, and therefore one cannot effectively interpret any results.

In many ways, such research is simply running in circles. For example, Dean Radin's Stroop-based experiment yielded significant results; the same experiment does not always yield significant results. What if the mechanism he was observing was affected by the wavelength of the lights in the lab, the gravitational pull of the moon, the collected moods of those involved with the study? Without knowing the nature of the mechanism, we cannot know what is capable of influencing it, and therefore can reach no significant conclusions from either its success or failure at materializing. All we can say, from Radin's or SRI's research, is that something happens sometimes. Which, ultimately, is what Uri Geller or any psychic would say.

These examples were presented as a variety of sources both of data and interpretations. While there are fundamental limits to the ability of this data to actually describe any mechanism clearly, it does afford us the opportunity to acknowledge that there may indeed be aspects of biophysical interactions that we have seldom observed and never explained. Neither parapsychologists nor skeptics can prove that psi is real or not real. All they can do is interpret the data, whether it be data from a primary paper or data from one's own experience.

References

1) http://comp9.psych.cornell.edu/dbem/does_psi_exist.html
Does Psi Exist? Replicable Evidence for an Anomalous Process of Information Transfer, Daryl J. Bem and Charles Honorton, Psychological Bulletin, 1994, Vol. 115, No. 1, 4-18. A very clear, useful online paper concerning parapsychology.

2) http://www.princeton.edu/~pear/
The Princeton Engineering Anomalies Research: Scientific Study of Consciousness-Related Physical Phenomena website.

3) http://www.parapsych.org/
The Parapsychological Association website.

4) http://moebius.psy.ed.ac.uk/
The Koestler Parapsychology Unit at the University of Edinburgh website.

5) http://www.sri.com/
The SRI International website.

6) http://www.boundary.org/articles/tri2.pdf
Evidence for a retrocausal effect in the human nervous system. Radin, D. and E. May. One of several primary papers available on the Boundary Institute website.

7) http://www.lu.se/lu/engindex.html
The Lund University website.

8) http://www.fas.org/irp/program/collect/stargate.htm
The Federation of American Scientists' intelligence projects database's STAR GATE entry.

9) http://www.uri-geller.com/
Uri Geller's website.

10) http://www.andrewtobias.com/newcolumns/990324.html
A columnist's personal experience with Uri Geller.

11) http://www.michaelprescott.freeservers.com/FlimFlam.htm
"Flim-Flam Flummery," a critical review of James Randi's skeptical book Flim-Flam!

12) http://www.uri-geller.com/content/research/sria.htm
The text of the SRI report documenting experiments with Geller and others. Found on Geller's website.

13) http://www.randi.org/
The James Randi Educational Foundation. Instructional skeptical resource from professional magician and self-proclaimed psychic debunker "The Amazing Randi".

Fun resources:

1) http://www.psipog.net/
Psychic Students in Search of Guidance online community. Offers peer instructions, media files, and community activities.

2) http://www.fork-you.com/
Personal site of a professed fork-bender. Offers photographs, descriptions, and instructions on how to mentally channel energy into metal to render it soft enough to twist into tight spirals and interesting shapes before it rehardens.


The Genetic Basis of Addiction
Name: Neesha Pat
Date: 2003-04-29 02:31:13
Link to this Comment: 5556


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Addiction is defined as a compulsive physiological need for and use of a habit-forming substance, characterized by tolerance and by well-defined physiological symptoms upon withdrawal. More broadly, it is defined as a persistent, compulsive use of a substance known by the user to be physically, psychologically, or socially harmful (1). Addiction is a complex phenomenon with important psychological and social causes and consequences. At its core, however, it involves the repeated action of a biological agent (a drug) on a biological substrate (the brain) over time (2). Most abnormal behaviors are a consequence of aberrant brain function, which means that identifying the biological underpinnings of addiction is a tangible goal. At a NIDA meeting a decade ago, a scientist announced to a spellbound audience that he had identified some of the genes associated with drug abuse, describing the mutations in those genes that lead people to abuse marijuana, heroin, cocaine, and other drugs. His landmark discovery brought scientists a giant step closer to dramatically curbing drug abuse (5). The genetic basis of addiction encompasses two broad areas of inquiry. One is the identification of genetic variation in humans that partly determines susceptibility to addiction. The other is the use of animal models to investigate the role of specific genes in mediating the development of addiction. Whereas recent advances in this latter effort have been rewarding, a major challenge remains: to understand how the many genes implicated in rodent models interact to yield as complex a phenotype as addiction (7). In this paper I ask how and why researchers have turned to genetics to solve the problems of addiction, and I strive to give an account (using numerous experiments as examples) of why genes play the most important role in addiction and how manipulations of genes can lead to possible cures for addictive personalities.
Further, the proposition that Brain = Behavior, with its inherent ramifications, proves no more fascinating than when addressed in the context of the biological basis of addiction.

Investigators believe that there is great variability among individuals when it comes to their vulnerability to becoming addicted. "Pretty much everyone enjoys having their dopamine levels shoot up dramatically, which can happen without the use of drugs, as with the 'runner's high'. However, not everybody craves the experience so much that it consumes them. Nor does everyone experience the same changes in brain function!" says Alan Leshner, director of NIDA (5). For marijuana addicts, the non-family environment has the biggest influence, accounting for 38% of the variation, with genes accounting for 33%. Thus we see that the ability of drugs of abuse to alter the brain does indeed depend, in part, on genetic factors. Acute drug responses, as well as adaptations to repeated drug exposure, can vary markedly depending on the genetic composition of the individual. Genetic factors can also influence the brain's responses to stress, and are thus likely to contribute to stress-induced relapse as well (2).

A gene might contribute to addiction vulnerability in several ways. A mutant protein (or altered levels of a normal protein) could change the structure or functioning of specific brain circuits during development or in adulthood. Animal models have shown that the adaptations that drug exposure elicits in individual neurons alter the functioning of those neurons, which in turn alters the functioning of the neural circuits in which those neurons operate. This leads, eventually, to the complex behaviors (for example, dependence, tolerance, sensitization and craving) that characterize an addicted state (1). A critical challenge in understanding the biological basis of addiction is to account for the array of temporal processes involved. Thus, the initial event leading to addiction involves the acute action of a drug on its target protein and on neurons that express that protein (7).

A study was conducted at the Yale University Department of Psychiatry and Pharmacology on the molecular and cellular adaptations that occur gradually in specific neuronal types in response to chronic drug exposure, particularly those adaptations that have been related to the behavioral changes associated with addiction. The study focused on opiates and cocaine, not only because they are among the most prominent illicit drugs of abuse, but also because considerable insight has been gained into the adaptations that underlie their chronic actions. The results showed that the best-established molecular adaptation to chronic drug exposure is up-regulation of the adenosine 3',5'-cyclic monophosphate (cAMP) pathway. This phenomenon was first discovered in cultured neuroblastoma and glioma cells and later demonstrated in neurons in response to repeated opiate administration. Acute opiate exposure inhibits the cAMP pathway in many types of neurons in the brain, whereas chronic opiate exposure leads to a compensatory up-regulation of the cAMP pathway in a subset of these neurons. Up-regulation of the cAMP pathway would oppose acute opiate inhibition of the pathway and thereby would represent a form of physiological tolerance. Upon removal of the opiate, the up-regulated cAMP pathway would become fully functional and contribute to features of dependence and withdrawal (1).

Many researchers have attempted to identify the genetic basis of these behavioral differences through Quantitative Trait Locus (QTL) analysis. There is now direct evidence to support this model in neurons of the locus coeruleus, the major noradrenergic nucleus in the brain. These neurons generally regulate attentional states and the activity of the autonomic nervous system, and have been implicated in somatic opiate withdrawal. Up-regulation of the cAMP pathway in the locus coeruleus appears to increase the intrinsic firing rate of the neurons through the activation of a nonselective cation channel. This increased firing has been related to specific opiate withdrawal behaviors. cAMP Response Element Binding (CREB) protein, one of the major cAMP-regulated transcription factors in the brain, was knocked out to create mutant mice. These mutant mice, deficient in CREB, showed attenuated opiate withdrawal (1). Overexpression of CREB in the nucleus accumbens counters the rewarding properties of opiates and cocaine; overexpression of a dominant-negative CREB mutant has the opposite effect. These findings suggest that CREB promotes certain aspects of addiction (for example, physical dependence) while opposing others (for example, reward), and highlight that the same biochemical adaptation can have very different behavioral effects depending on the type of neuron involved (1).

A recent NIDA-funded study illustrates how genetic differences can contribute to, or help protect individuals from, drug addiction. The study shows that people with a gene variant in a particular enzyme metabolize, or break down, nicotine in the body more slowly and are significantly less likely to become addicted to nicotine than people without the variant (5). Dr. Edward Sellers of the University of Toronto examined the role that the gene for an enzyme called CYP2A6 plays in nicotine dependence and smoking behavior. CYP2A6 metabolizes nicotine, the addictive substance in tobacco products. Three different gene types, or alleles, for CYP2A6 have been identified by previous research: one fully functional allele and two inactive or defective alleles. Each person has a maternal and a paternal copy of the gene. Therefore, a person can have two active forms of the gene and normal nicotine metabolism; one active and one inactive copy, and impaired nicotine metabolism; or two inactive copies, which further impairs nicotine metabolism (6).
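The genotype logic just described (two copies of the gene, with metabolism depending on how many are active) can be sketched as a simple lookup. This is an illustrative toy, not clinical allele nomenclature; the labels and function name are my own:

```python
ACTIVE, INACTIVE = "active", "inactive"

def nicotine_metabolism(maternal, paternal):
    """Map a two-copy CYP2A6 genotype to the metabolism category
    described in the study summary above (labels are illustrative)."""
    active_copies = [maternal, paternal].count(ACTIVE)
    return {2: "normal", 1: "impaired", 0: "further impaired"}[active_copies]

print(nicotine_metabolism(ACTIVE, ACTIVE))      # normal
print(nicotine_metabolism(ACTIVE, INACTIVE))    # impaired
print(nicotine_metabolism(INACTIVE, INACTIVE))  # further impaired
```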

The study found that people in a group who had tried smoking but had never become addicted to tobacco were much more likely than the tobacco-dependent individuals in the study to carry one or two defective copies of the gene and to have impaired nicotine metabolism. The researchers theorize that the unpleasant effects experienced by people learning to smoke, such as nausea and dizziness, last longer in people whose bodies break down nicotine more slowly. These longer-lasting aversive effects would make it more difficult for new smokers to persist in smoking, thus protecting them from becoming addicted to nicotine. Generally, smokers with slower nicotine metabolism do not need to smoke as many cigarettes to maintain constant blood and brain concentrations of nicotine (5). Researchers are currently investigating whether blocking the enzyme could prevent people from becoming smokers.

Several large chromosomal regions have been implicated in addiction vulnerability, although specific genetic polymorphisms have yet to be identified by this approach. It has been established, however, that some East Asian populations carry variations in the enzymes that metabolize alcohols (for example, the alcohol and aldehyde dehydrogenases). Such variants increase sensitivity to alcohol, dramatically amplifying the side effects of acute alcohol intake. Consequently, alcoholism is exceedingly rare in individuals who are homozygous for the ALDH2 allele that encodes a less active variant of aldehyde dehydrogenase (7).

Despite the progress made in the genetic field, candidate gene approaches are limited by our rudimentary knowledge of the gene products and the complex mechanisms underlying addiction (2). As a result, more open-ended strategies are needed, such as those based on analysis of differential gene expression in certain brain regions under control and drug-treated conditions. Differential display, for example, enabled the identification of NAC-1, a transcription factor-like protein, which is induced in nucleus accumbens by chronic cocaine and is now known to modulate the locomotor effects induced by cocaine (3).

In addition to secondary targets, there are numerous neurotransmitters, their receptors and post-receptor signalling pathways that modify responses to acute and chronic drug exposure. For example, several behavioural aspects of mice lacking the serotonin 5HT1B receptor indicate enhanced responsiveness to cocaine and alcohol; notably, they self-administer both drugs at higher levels than wild-type controls. The mice also express higher levels of FosB (a Fos-like transcription factor implicated in addiction) under basal conditions. These observations point to the involvement of serotonergic mechanisms in addiction. There are many other such examples. Mice deficient in the dopamine D2 receptor or the cannabinoid CB1 receptor have a diminished rewarding response to morphine, implicating dopaminergic and endogenous cannabinoid-like systems in opiate action. The stability of the behavioural abnormalities that characterize addiction indicates a high likelihood of drug-induced changes in patterns of gene expression. One demonstration of this approach showed that FosB accumulates in the nucleus accumbens (a target of the mesolimbic dopamine system) after chronic, but not acute, exposure to several drugs of abuse, including opiates, cocaine, amphetamine, alcohol, nicotine and phencyclidine (also known as PCP or 'angel dust'). This contrasts with other Fos-like proteins, which are much less stable than FosB and are induced only transiently after acute drug administration. Consequently, FosB persists in the nucleus accumbens long after drug-taking ceases (6).

Pathological gambling (PG) is an impulse control disorder and a model behavioral addiction. Familial factors have been observed in clinical studies of pathological gamblers, and twin studies have demonstrated a genetic influence contributing to the development of PG. Molecular and genetic research has identified specific allele variants of candidate genes in the associated neurotransmitter systems. Associations have been reported between pathological gambling and allele variants of polymorphisms at dopamine receptor genes, the serotonin transporter gene, and the monoamine oxidase A gene (8).

Dr. David Russell at Yale University examined the role for Glial-Derived Neurotrophic Factor (GDNF) in adaptations to drugs of abuse. Infusion of GDNF into the ventral tegmental area (VTA), a dopaminergic brain region important for addiction, blocks certain biochemical adaptations to chronic cocaine or morphine as well as the rewarding effects of cocaine. Conversely, responses to cocaine are enhanced in rats by intra-VTA infusion of an anti-GDNF antibody and in mice heterozygous for a null mutation in the GDNF gene. Chronic morphine or cocaine exposure decreases levels of phosphoRet, the protein kinase that mediates GDNF signaling in the VTA. Together, these results suggest a feedback loop, whereby drugs of abuse decrease signaling through endogenous GDNF pathways in the VTA, which then increases the behavioral sensitivity to subsequent drug exposure (10).

In 1999, researchers funded by the NIH unearthed an unexpected connection between circadian rhythms in insects and cocaine sensitization, a behavior that occurs in both fruit flies and vertebrates and that has been linked to drug addiction in humans. The circadian clock consists of a feedback loop in which clock genes are rhythmically expressed, giving rise to cycling levels of RNA and proteins. Four of the five circadian genes identified to date influence responsiveness to freebase cocaine in the fruit fly, Drosophila melanogaster. Sensitization to repeated cocaine exposures, a phenomenon that is seen in humans and animal models and is associated with enhanced drug craving, is eliminated in flies mutant for certain genes (period, clock, cycle, and doubletime), but not in flies lacking the gene timeless. Flies that do not sensitize owing to the lack of these genes do not show the induction of tyrosine decarboxylase normally seen after cocaine exposure. These findings indicate unexpected roles for these genes in regulating cocaine sensitization and suggest that they function as regulators of tyrosine decarboxylase (9).
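The "feedback loop in which clock genes are rhythmically expressed" can be sketched with a classic Goodwin-style negative-feedback model. This toy simulation is my own illustration of that architecture, not the flies' actual clock; the variable names and parameter values are assumptions chosen only so the model oscillates:

```python
# Minimal negative-feedback loop of the kind described above: a clock
# gene's mRNA (m) is translated into protein (p), which produces a
# repressor (r) that shuts off transcription, yielding cycling levels.

def simulate(steps=20000, dt=0.05, n=12, a=1.0, deg=0.1):
    m = p = r = 0.0
    trace = []
    for _ in range(steps):  # simple Euler integration
        dm = a / (1.0 + r ** n) - deg * m   # repressible transcription
        dp = m - deg * p                    # translation
        dr = p - deg * r                    # repressor accumulation
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        trace.append(r)
    return trace

trace = simulate()
# Count local maxima of the repressor level: repeated peaks = cycling.
peaks = [i for i in range(1, len(trace) - 1)
         if trace[i - 1] < trace[i] > trace[i + 1]]
print(f"repressor level peaked {len(peaks)} times over the run")
```

With a steep enough repression term (large n), the loop never settles: each component rises, is suppressed, and rises again, which is the "cycling levels of RNA and proteins" the paragraph refers to.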

Animal models have proved pivotal to our understanding of the neurobiological mechanisms involved in the addiction process. One drawback (and one that is not limited to the field of addiction) is that sometimes a genetic mutation is found to result in a phenotype without any plausible scheme for how the mutation actually caused the corresponding phenotype (4). Various types of microarray analysis have led to the identification of large numbers of drug-regulated genes; it is typical for 1-5% of the genes on an array to show consistent changes in response to drug treatment (12). However, without a better means of evaluating this vast amount of information (other than exploring the function of single genes using traditional approaches), it is impossible to identify those genes that truly contribute to addiction (7). As we can see, the approach to studying drug addiction is not straightforward: which drugs should be used as models? What should researchers be looking for: drug abuse per se? Sensation seeking? Specific biological markers?
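One common way to pare microarray results down to the "consistent changes" mentioned above is to keep only genes whose drug-treated replicates all shift the same direction by some fold threshold. The sketch below is a toy version of that criterion; the gene names, expression values, and 1.5-fold cutoff are all made up for illustration:

```python
# Toy "consistent change" filter for expression data: a gene counts as
# drug-regulated only if every treated replicate is above (or every one
# below) a fold-change threshold relative to the mean control level.

control = {"geneA": [10, 11, 10], "geneB": [50, 48, 52], "geneC": [5, 5, 6]}
treated = {"geneA": [21, 23, 20], "geneB": [51, 47, 53], "geneC": [2, 3, 2]}

def consistently_changed(ctrl, trt, fold=1.5):
    mean_ctrl = sum(ctrl) / len(ctrl)
    up = all(x >= fold * mean_ctrl for x in trt)     # consistently induced
    down = all(x <= mean_ctrl / fold for x in trt)   # consistently repressed
    return up or down

hits = [g for g in control if consistently_changed(control[g], treated[g])]
print(hits)  # ['geneA', 'geneC'] -- geneB's changes are small and inconsistent
```

Filters like this reduce the array to a candidate list, but as the paragraph notes, deciding which survivors truly contribute to addiction still requires follow-up on individual genes.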

Fortunately, the increasing sophistication of genetic tools, together with the growing predictive value of animal models of addiction, makes it ever more feasible to fill in the missing pieces: to understand the cellular mechanisms and neural circuitry that ultimately connect molecular events with complex behavior (11). Genetic work remains the highest priority, as it will greatly inform our understanding and treatment of addictive disorders in the years to come. In addition, the genetic basis of individual differences in drug and stress responses represents a powerful model of the ways in which genetic and environmental factors combine to control brain function in general (1).


References

1) Nestler, E., Aghajanian, G. (1997). Molecular and Cellular Basis of Addiction. Science, New Series, Volume 278, Issue 5335.

2) Vulnerability to Addiction

3) Brain Buildup Causes Addiction

4) Everyone is Vulnerable to Addiction

5) Genes Can Protect Against Addiction, NIDA Notes

6) Biological Basis of Addiction Studied

7) Genes and Addiction, Nature Genetics

8) Genetics of Pathological Gambling, Dept. of Psychiatry, Alcalá University, Spain.

9) Response to Cocaine Linked to Biological Clock Genes, NIDA Notes

10) Promising Advances Towards Addiction, NIDA Notes

11) Genetic Animal Model of Alcohol and Drug Abuse, NIDA Notes

12) Dependence of the Formation of the Addictive Personality on Predisposing Factors

13) Evidence Builds that Genes Influence Cigarette Smoking


The Perfect Prescription or Just Prescribed Nonsense
Name: Lara Kalli
Date: 2003-05-01 04:52:11
Link to this Comment: 5595


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Amphetamine-dextroamphetamine, or Adderall, is one of the most commonly prescribed psychiatric medications today. It is generally used to treat attention-deficit/hyperactivity disorder (ADHD), though less frequently it is prescribed for narcolepsy, epilepsy and parkinsonism. (1) The almost cavalier manner in which the psychiatric community distributes Adderall prescriptions has led the public to view it as something of a "wonder drug", but the reality is that this drug is as harmful as it is helpful, if not more so.

It is important first to understand what exactly Adderall is and how it affects the brain. As an amphetamine, Adderall is a CNS stimulant. Its primary mental effects are increased alertness and an improved ability to concentrate on a single task - hence its efficacy in the treatment of ADHD - along with a sense of well-being, severe reduction in appetite, and possible paranoia. The physical effects of this drug include restlessness, dry mouth, increased heart rate, breathing rate and blood pressure, and pupil dilation. (2) Overdose symptoms include anxiety, panic attacks, delirium, hallucinations, and highly aggressive behavior; in extreme cases, patients may experience what is known as amphetamine psychosis, a state that is, from a clinical perspective, virtually indistinguishable from paranoid schizophrenia. Doctors traditionally start patients on a 5 to 10 mg daily regimen, increasing the dosage to as high as 40 mg as needed, keeping in mind that both physical and psychological dependence are quite likely with extended use. (3)
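The titration pattern described above (start low, step up toward a 40 mg ceiling) is just bounded arithmetic. This sketch only illustrates that pattern; the 5 mg step size and the idea of fixed increments are my own assumptions, and this is in no sense a prescribing guide:

```python
# Illustrative dose-titration arithmetic for the regimen described in
# the text: begin at a low daily dose and raise it stepwise, never
# exceeding the 40 mg ceiling. Step size is a hypothetical choice.

def titration_schedule(start_mg=5, step_mg=5, max_mg=40):
    """Return the sequence of daily doses, capped at max_mg."""
    dose = start_mg
    schedule = [dose]
    while dose < max_mg:
        dose = min(dose + step_mg, max_mg)  # never overshoot the ceiling
        schedule.append(dose)
    return schedule

print(titration_schedule())  # [5, 10, 15, 20, 25, 30, 35, 40]
```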

There can be no question that, in many cases, Adderall can be quite useful. The number of clinical studies demonstrating the efficacy of amphetamine administration in the treatment of ADHD is enormous. Furthermore, many people to whom the drug has not been prescribed have found that a one-time use of the drug is extremely helpful, most notably in the case of students doing homework (4). Small wonder that this drug is now so popular amongst physicians and the general public alike.

However, Parents Magazine reported that the rate of prescription of Adderall and similar drugs to minors has reached a record high, to the point where experts are beginning to worry that psychiatrists are encouraging a "quick fix" attitude rather than making a genuine attempt to treat the problem (5). This is a genuine concern. Like all psychiatric medication, Adderall treats only the symptoms of a disorder and should not be used in lieu of therapy. In addition, extended amphetamine use can create serious problems aside from the distinct possibility of physical and psychological dependence. Malnutrition, vitamin deficiencies, severe weight loss, skin disorders, depression, speech and thought disturbances - these are all possible results of long-term use of amphetamines (2). The current popularity of Adderall does not change the fact that it is far from an innocuous drug, and it ought not to be treated lightly by doctors.

Furthermore, the significant increase in the drug's availability allows for an increased potential for abuse, by both those to whom it has been prescribed and those to whom it has not. A study performed by Dr. Christine Poulin showed that, in a year, 14.7% of seventh, ninth, tenth and twelfth grade students to whom amphetamines had been prescribed had voluntarily given away some of their medication, 7.3% had sold some of it, 4.3% had had some of it stolen and 3% had been forced to give some of it up (6). The more Adderall is prescribed, the larger the numbers that those percentages represent will be.
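That closing point can be put in concrete numbers by applying the study's fixed percentages to a growing prescribed population. The population figures below are hypothetical; only the percentages come from the Poulin study as reported above:

```python
# The diversion rates reported above, applied to hypothetical prescribed
# populations: the percentages stay constant, so the absolute number of
# diverted prescriptions grows in direct proportion to prescribing.

rates = {"gave away": 0.147, "sold": 0.073, "stolen": 0.043, "coerced": 0.030}

for prescribed in (10_000, 100_000):
    counts = {event: round(prescribed * rate) for event, rate in rates.items()}
    print(f"{prescribed} students prescribed -> {counts}")
```

At 10,000 prescribed students the "gave away" category alone is about 1,470 students; at 100,000 it is about 14,700, which is the essay's point about scale.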

The psychiatric community's extremely lax attitude toward the dispensing of Adderall and other similar ADHD medications speaks volumes about the state of our culture, and those volumes are not filled with positive sentiment. Far too much pressure is put on performance and efficiency, to the point where it is deemed necessary and legitimate, in a rapidly increasing number of cases, to jeopardize health and well-being for their sake. To put it simply: relative to its many potential adverse effects, Adderall is very much over-prescribed, and even that is an understatement. While its usefulness cannot be denied, it is a drug whose use carries serious repercussions, and the fact that the nature of our society has made its widespread use and abuse practically normative is frightening.

References


1. RxList FAQ on Adderall
2. Erowid's amphetamine vault; a highly informative, not anti-drug resource
3. Internet Mental Health's Adderall drug monograph; contains a great deal of information for both patients and psychiatrists
4. Erowid's amphetamine experience report vault; individual accounts of experiences with amphetamines
5. Parents Magazine article abstract
6. Write-up of Dr. Christine Poulin's study


From Biblical Times to Today: What Has Changed and
Name: Patricia P
Date: 2003-05-01 10:57:25
Link to this Comment: 5598


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"Epilepsy is a brain disorder involving recurrent seizures. You can relax. It's not the end of the world." This was my neurologist's introduction to my diagnosis as an epileptic with partial petit mal seizures, including a curious, not to mention exciting, history of two grand mal seizures. As a 12-year-old girl, I remember feeling confused and greatly changed by these words, whose meaning I had yet to understand. As I grew to learn more about my condition, I realized that there are people around the globe, of every age, race, and social and economic background, who have experienced this same confusion. Collectively, we have gathered an incomplete but valuable working concept of epilepsy. Although it is one of the earliest recorded diseases, it still attracts the attention of doctors, scientists, and researchers everywhere, in search of a clear understanding of the causes of particular seizures. Different nations contribute to our ever-expanding understanding of its history, epidemiology, prognosis and mortality, along with clinical manifestations and differential diagnosis. Tracing modern diagnosis and therapies back to biblical times allows us to compare another very important aspect of epilepsy: the very similar modern and ancient perspectives on this disorder.
Our language gives clues as to the longevity of epilepsy: the term derives from the Greek word "epilambanein," which means "to take hold of" or "to seize." (1) Epilepsy is a disease with one of the longest recorded histories and an impact spanning the globe, allowing healers and physicians from a wide range of countries and time periods to study it. Worldwide studies have estimated the mean prevalence of active epilepsy (i.e., continuing seizures or the need for treatment) at approximately 8.2 per 1,000 of the general population. (2) Further research is seeking to explain why developing countries such as Colombia, Ecuador, India, Liberia, Nigeria, Panama, the United Republic of Tanzania and Venezuela show a prevalence rate of over 10 per 1,000 people. Thus, it seems that at any one time, approximately 50 million people in the world suffer from epilepsy. (3) Today, most would agree that epilepsy involves a deviation from normal brain activity through instability of neurons. The neurobiology of epilepsy is hard to describe, as different types of seizures are related to different parts of, and separate problems within, the brain. This instability causes neurons to fire in a rapid and/or excessive, synchronous and/or inconsistent manner, with excess electrical discharges within the brain resulting in a seizure. Because this involves very complicated brain activity, and "instabilities" of varying degrees or locations in the brain, there is a great difference between the appearance and treatment of acute presentations and more severe or chronic presentations.
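The 50 million figure follows directly from the prevalence estimate. A quick check of the arithmetic, assuming (my assumption, not the source's) a world population of roughly 6.1 billion around the time this was written:

```python
# Sanity-checking the prevalence arithmetic above: 8.2 cases per 1,000
# people, applied to the whole world population, should land near the
# quoted 50 million. The population figure is an illustrative early-2000s
# value, not from the cited sources.

prevalence_per_1000 = 8.2
world_population = 6.1e9  # assumed figure

affected = world_population * prevalence_per_1000 / 1000
print(f"~{affected / 1e6:.0f} million people with active epilepsy")  # ~50 million
```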
The severity of these seizures depends on several factors: whether the episode is fever-induced (febrile), can be located to one area of the brain (partial seizures and temporal lobe seizures), can be traced to places throughout the entire brain (generalized seizures), or can be categorized by the intensity and level of electrical activity in the brain (petit mal and grand mal seizures). Unfortunately, although epilepsy's history dates back to before biblical times, there are still very surprising gaps left to be filled.
Diagnosis is complicated and often specific to the type of seizures a person may suffer from. For a variety of medical reasons, a great number of people may experience only one seizure in their lifetime. This, however, does not meet the criteria for an epilepsy diagnosis. Many people are misdiagnosed, as there are several episodes a person may experience that mimic a seizure but are not one. Furthermore, many seizures go undiagnosed (most often petit mal seizures) due to their unfocused, almost "blanked-out" appearance and extremely short duration. The diagnosis of epilepsy requires a minimum of "2 or more unprovoked" seizures in a person's lifetime. (4) Although seizures are a symptom of the disease, and not always epilepsy itself, it is important to realize that what is most often "diagnosed" is the type of seizure the patient has experienced. Partial or focal seizures, for example, are most often traced to one localized part of the brain, and may not impair consciousness at all. The spread of these seizures has the potential to create generalized seizures (also known as generalized tonic-clonic or grand mal seizures). These involve electrical discharges that affect the entire brain, causing a loss of consciousness along with the popularly depicted muscle spasms or stiffness. (The image of a helpless person frothing at the mouth and shaking uncontrollably is not always an accurate depiction of this type of seizure.) Status epilepticus is the rarest and most severe form of epilepsy. A person will suffer from frequent seizures without recovery of consciousness between episodes, or one single, extremely prolonged episode. During a prolonged period without breathing, the lack of oxygen damages tissue, ultimately leading to brain damage or sudden death.
Differential diagnoses are quite common. A diagnosis of epilepsy requires several seizures, of whatever type or category, occurring in a predictable pattern. Seizures may also be induced by an injury or trauma to the head, high fever or heat stroke, diabetes (seizures can occur when blood sugar levels are too low), or the presence of a brain tumor (30 to 40 percent of patients with brain tumors also have seizures). (5) When diagnosing a patient with epilepsy, if the root of the seizure can be attributed to any one of these alternate circumstances, the seizure is regarded as a symptom of that condition rather than of epilepsy.
The unique history of our methods of diagnosing epilepsy sheds light on our ever-evolving understanding and improved treatment. During the Roman era, epilepsy was diagnosed by handing the patient a piece of jet and waiting to see if they would collapse, most likely encouraging the nickname "The Falling Sickness." Ancient Greek doctors practiced burning the horn of a goat (an animal considered prone to epileptic seizures) underneath the patient's nose. (6) Today, we have moved away from such "smell test" methods and toward studying the body and the brain. A thorough physical examination is performed, often followed by a myriad of neurological exams. Blood and urine are also studied in an attempt to identify any kidney or liver problems that may lead to a differential diagnosis, or give clues as to which antiepileptic drugs may be harmful or incompatible with the patient. The fluctuation of electrical impulses is then measured by an electroencephalograph (EEG), which records signals from nerve cells with the use of electrodes. The EEG equipment is designed to receive and amplify the fluctuations of voltage, and then transfer the information to a computer. This has become an invaluable tool for understanding the particular root or type of a person's seizure disorder. However, though it would be the exception, a person may still have a normal EEG reading and have epilepsy. Magnetic resonance imaging (MRI) is another technique used to diagnose epilepsy, as is computerized axial tomography (CAT scan), more commonly used to locate or rule out brain lesions as the cause of seizures.
Treatment is available and there are several options. Which treatment an epileptic receives is almost always contingent on the severity of their presentation, what part of the world they are being treated in, and what each individual feels most comfortable with. Before the emergence of anti-epileptic drugs, the ketogenic diet was a popular method of treatment. Although the diet closely resembles starvation in its attempt to change the body's metabolic state, it is capable of improving control of select types of seizures. (7) Today, however, up to 70% of adults and children diagnosed with epilepsy, in both developed and developing countries, can be successfully treated with anti-epileptic drugs. (8) Some of these drugs include phenobarbital, acetazolamide, carbamazepine, clonazepam, levetiracetam, phenytoin, topiramate and many others. The UK, Scotland, Germany, China, and many other countries, as is common in the treatment of most diseases, prefer different drugs. In China, for example, an epileptic would likely first consider herbal remedies, which include deadlocked silkworm, gastrodia tuber, antelope horn, centipede, and dozens of other ingredients. In the United States, diazepam (otherwise manufactured as Valium, Stesolid and Diazemuls) and phenobarbital are more common drugs used to treat more serious forms of epilepsy. Although economic and geographical boundaries play no role in who experiences epilepsy, they unfortunately play a huge role in whether or not a person will be able to receive proper treatment. Chinese herbal supplements, drug therapy, strict and specific diet restrictions, and surgery are among the common options. It is important to keep in mind that the treatment should match the presentation of the disorder, whether acute or chronic.
Prognosis for those who can afford and obtain access to all medical options is very encouraging. Children have some of the most promising prognoses, as many grow out of epilepsy before serious treatment becomes necessary. Although many presentations of epilepsy are chronic, lifelong conditions, extremely effective treatment options are available. Sadly, permanent brain damage or death can still occur as a result of a serious seizure.
There are no concrete preventative measures to avoid epilepsy. That is not to say, however, that prevention is not a key word for an epileptic. The goal is to avoid possible seizures and, often, the situations that promote them. Factors that may increase a person's risk of seizures include brain injury, a family history of seizures, and other medical problems affecting electrolytes. Remaining aware of your physical condition, as with many illnesses, can help decrease your risk of a completely unexpected incident. Exposure to certain medications and illicit drugs (such as ephedrine, the commonly debated over-the-counter drug discussed in the media today) has been known to intensify and cause seizures. Common to my experience as well as that of many other epileptics, a doctor will also recommend that the patient avoid anything and everything that may have been responsible for triggering a previous seizure. Provocative factors may include stress (emotional or physical), flashing lights (such as strobe lights, television, video games, etc.), over-hydration, fatigue, or any combination of these.
Unfortunately, epilepsy is also associated with an increased rate of mortality. This may be for a myriad of reasons: medical causes, such as an underlying brain disease (tumor or infection) that may be the source of a particular patient's seizures, or status epilepticus; other sudden and unexplained causes that result in respiratory or cardio-respiratory arrest during a seizure; drowning, burns, or head injuries resulting from a person's location at the time of the seizure (especially car accidents); and suicide. (9)
Some of the earliest writings on this disease reveal that it was once known as the "Holy Sickness," studied by the Greek physician Hippocrates, who approached epilepsy with the belief that it could be cured through an understanding of what is known today as humoral pathology (controlling body fluids, or humors). Fascinatingly, recent translations of a Babylonian tablet, dating from about 500 BC, have revealed even earlier descriptions of epilepsy. However, it was during biblical times that the most famous historical account of a seizure was given, in St. Matthew's Gospel, Ch. 17, Verses 15-17: "Lord have mercy on my son, for he is lunatick and sore vexed, for oftimes he falleth into the fire and oft into the water. And I brought him to thy disciples and they could not cure him. Then Jesus answered and said, O faithless and perverse generation how long shall I be with you? bring him hither to me. And Jesus rebuked the Devil and he departed out of him: and the child was cured from that very hour." Mark Chapter 9, Verses 17-18, confirms the initial suspicion that this is an account of epilepsy: "he has an evil spirit in him and can not talk. Whenever the spirit attacks him, it throws him to the ground, and he foams at the mouth, grits his teeth and becomes stiff all over." Today the boy's condition would most likely be diagnosed as a grand mal seizure, but at the time, traditional healers would most likely surmise that an act committed against God, along with the presence of demons, had caused this horrific episode.
If we could imagine that the young boy from this biblical story was able to tell of his experience with epilepsy, and his interaction with the healers of the time, he may recount an experience similar to this:
"I do not remember exactly what happened. I felt lightheaded upon awakening. My father has told me that evil spirits have possessed me, and we must go to Jesus. I am ashamed, and unable to imagine what sin God is punishing me for. My father has told me that demons can enter the body at birth. However, this has not stopped neighbors from fearing me. I am alone and in need of healing. I know that God would be displeased if I did not seek him first for a cure, and so I will go to Jesus, although there are other doctors who believe that my condition is due to an imbalance of humors. My father has told me that it is the casting out of demons that will cure me. Jesus prayed for me and cast out the devil, and I was told I was cured. I prayed the demons would not return to haunt me, in fear that I would be abandoned by all of my community."

Today, the initial reaction of epileptics, actions of healers, and understanding of epilepsy are completely different. Our methods of diagnosis are based on a completely separate belief system, and our treatment of the patient, as well as the disease, has undergone centuries of change.
The most accurate modern-day account I can give is my own. Although some of the details are hazy, the experience of my first seizure was like none other. I was a very academically interested and concerned student at a very young age. However, my teachers began to notice (as is common with many children suffering from petit mal seizures) that I would have spells showing "a real lack of attention and frequent disorder." My mother returned from a parent-teacher conference more than a little concerned for my health: "You are going to bed earlier! That's it! And you better start looking at your teachers in class so that you remember to focus on what they are saying!" It was not until my mom found me on the bathroom floor on February 19th of 1997 that we took my lack of focusing spells seriously. I was home from school with the flu and had just taken my temperature, which was nearing 101 degrees Fahrenheit. I was standing near the shower and began to feel lightheaded, so I attempted to catch my balance on the bar of the shower. I remain unsure of whether I fell (which may have caused the seizure) or had the seizure and then fell. However, I recall this distinction being very important to every doctor I was brought to see. The fall was quite bad; I hit my head on the tile of the shower, yet the doctors' questions about the fall were left unanswered. I do not recall regaining consciousness until I hit the cold air outside while being carried out to the ambulance on a stretcher. It was my first grand mal seizure, and it clearly warranted an ambulance.
I live on a small island, and the retelling of this story would not hold the same impact without expressing how thoroughly embarrassed I was. All the people in my tiny community were standing outside their houses. My little sister was crying and my mother was gasping, clutching her chest, and asking questions uncontrollably. Personally, the most horrific aspect of this disease is the feeling of 'I have no idea what just happened,' combined with a constant fear of: 'Next time this is going to happen in school and all the kids are going to think I need an exorcism. I will become the ultimate freak show!' I was instructed not to fall asleep in the ambulance, which I now deduce was a precaution involving the fall, attempting to avoid slipping into unconsciousness.
By the time I arrived at the hospital, I was feeling fully recovered. After the seizure, I had had very brief but shooting stomach pains, but those subsided quickly and my goal was to recover from the shock. I'll never forget one extremely kind, vivacious, and unprofessional nurse remarking, "Next time, your parents can give you milk and chocolate syrup so you can mix them some chocolate milk." I appreciated her humor, and the lighthearted, stress-free atmosphere that all of the doctors and staff had worked so hard to create. The doctor in the emergency room was less playful, as he wrote down my complete medical history, including specific pieces of history from my parents and ancestors. He was specifically interested in the details of my birth, any complications or illnesses involving my nervous system, any possibly related or suspicious childhood events, any medications I was currently taking, and some brief questions regarding drug and alcohol use. Especially because this was my first seizure, it was made clear to me that a detailed description of the events leading up to it was crucial in distinguishing my seizure type. Appointments were immediately made for an EEG, an MRI, and a CAT scan.
As I grew older, there were obstacles that my family and I needed to take into serious consideration. We made some very controversial decisions for my circumstances, decisions that I feel many epileptics struggle to make. Families often need to consider whether or not it is appropriate for their child to disclose this information on certain forms. On one hand, you place yourself and others in danger if you conceal that you are epileptic and are then placed in a situation that brings on a seizure, with no one prepared to handle it if it arises. On the other hand, it is a very real fear of epileptics that they will be discriminated against - a fear that when applying for summer programs, jobs, and other positions and opportunities that would contribute to their growth as a person and their resume for college, they will be discarded as a serious liability. For instance, we chose doctors in New York City not only because they were some of the best, but also because a New York doctor is not required by law to disclose information to New Jersey (or more specifically, the New Jersey DMV). I was adamant about not becoming "disabled" until we were clear about my condition. These are some of the more controversial aspects of epilepsy that are most difficult to address and talk about. Different doctors will advise families to take completely separate courses of action based on their specific case. But I found my interactions with doctors to be most fascinating, as they acknowledged that I might suffer from something that many people opt to hide from the world. More traumatizing than the seizure itself, the stigma of being seen as incapable is a huge concern.
I have been "seizure free" for about 3 years now. The experience has sharpened my awareness of the importance of people who dedicate their lives to professions as nurses, doctors and specialists. It has sparked my interest in the origin of this disease, the first treatments, conceptions, and fears surrounding it, and the speed at which our understanding has evolved. I recognize that my condition is minor compared to other epileptics', and that many cases of epilepsy are life altering, creating a very painful dependency on medications, friends, and family, and in severe cases, even requiring surgery. Epilepsy is unique in that it affects people from all over the world, presents itself in tremendously varying degrees, and can be treated very differently depending on a patient's geographical location. But what I find most fascinating are the similarities in experience for the person experiencing the seizure over thousands of years. The value of evaluating a person's experience with epilepsy during biblical times, in comparison to a person's experience today, is to recognize two main things. First, shame and secrecy are not exclusive to either time period's approach to handling this disease, even though epilepsy has been considered one of the most serious brain disorders in every country of the world. Secondly, we must remain committed to the search for even greater neurological understanding, and new ways for scientists, neurologists, doctors, psychiatrists, and psychologists to introduce methods of caring for this disease and its victims, while implementing what we find over a broader region of the world. I will always remember the first explanation I was given for what I was experiencing: "Epilepsy is a brain disorder involving recurrent seizures. You can relax. It's not the end of the world." It was comforting, and also extraordinarily true.

References

WWW Sources
1)Epilepsy: Across Time and Place, an interesting perspective on epilepsy's growth and change over the years.

2) Epilepsy- EPIDEMIOLOGY, ETIOLOGY AND PROGNOSIS , a sort of fact sheet about a variety of complicated aspects of epilepsy.

3) Epilepsy-EPIDEMIOLOGY, ETIOLOGY AND PROGNOSIS , same fact sheet, only focusing on mortality, prognosis and interesting facts.

4) Seizures , site explaining the diagnosis and other distinctions between seizures and common occurrences mistaken for them.

5) Seizures: Common Causes , site examining seizures, outside of epilepsy, focusing on differential diagnosis and tumors.

6) German Epilepsy Museum , a fascinating site focusing on common folklore concerning epilepsy and several descriptive stories illustrating epilepsy's role in the past, and all over the country.

7) Ketogenic Diet , a description and exploration of one very common and worldwide treatment for epilepsy, being used less often today with the introduction of drugs.

8) Surgical Treatments for Epilepsy , a personal questionnaire helping one who is epileptic to decide if surgery is right for them; packed with valuable and interesting information.

9) Seizure-Related Injuries in Epileptics , an editorial by Somsak Tiamkao looking at injuries related to epilepsy.


Flight or Fight
Name: Elizabeth
Date: 2003-05-01 14:38:35
Link to this Comment: 5601


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

In most cases, a cornered animal will either run from or attack the danger
threatening him. Most animals, including humans, will respond in a similar fashion, both
behaviorally and psychologically, when confronted with a stressful situation. This
response, commonly called "flight or fight", can, for humans, serve as a reaction to less
serious events, such as crossing a busy street or rushing to catch a plane. First discovered
by scientist Walter Cannon in the 1930s (4), the flight or fight response differs not only
from one species to another, but also between the genders. While males tend to have an
aggressive response to stress, females deal with stress in a more calm and subdued
manner. Females may exhibit the bodily symptoms associated with the flight or fight
response, but they tend not to experience the extremes of this phenomenon, unlike males.

The flight or fight response prepares our bodies to either confront danger head on,
or to run from it as quickly as possible. When facing a potentially hazardous situation,
one's heart rate, breathing, and metabolism quicken, while muscles contract and the
digestive system slows down or stops altogether. Additionally, the brain signals the
adrenal glands, which sit atop the kidneys, to release adrenalin into the bloodstream,
giving the body an extra rush of energy. This adrenalin is transported to the heart, lungs, and muscles in
order to ready these organs for a dangerous situation. Breathing becomes more rapid as
the lungs attempt to provide more oxygen, in order to ensure proper muscle functioning.
To transport this extra oxygen, the blood vessels near the heart and lungs dilate. In turn,
this dilation forces the heart to beat faster as more blood requires faster transport through
the body. At the same time, blood vessels contract in areas not needed for defense, such
as the digestive system. The contraction of blood vessels causes the familiar feeling of
having "butterflies" in one's stomach. In addition to the symptoms which ready the body
for an imminent flight or attack, the response may also trigger headaches, blurred vision,
dry mouth, a sore back and neck, heart palpitations, and increased sweating (2). While
flight and fight genetically evolved as a reaction to situations of acute stress caused by
extreme, immediate peril, the response has adapted itself to the modern atmosphere of
chronic stress. Chronic stress triggers repeated instances of the flight or fight response in
situations where one may not actually need the physical readiness provided by an
adrenalin release. In such cases, a body does not have proper time to allow a return to
normal bodily conditions between stressful situations. Chronic stress is common among
those who work stressful jobs, live in cities, or often find themselves facing dangerous
situations (1).

Over the years, the symptoms associated with the flight or fight response have
developed in order to help humans deal well with stressors, even when these stressors do
not pose a life or death consequence. In recent years, questions have arisen regarding
whether men and women experience the flight or fight response in the same manner.
Prior to 1995, only 17% of the participants in studies concerning human responses to
stress were female (3). Of course, this discrepancy led to a strong gender bias in the
results, causing an uncertain perception of how women respond to stressful situations.
While it has been found that women can also experience the flight or fight response,
studies now show that females do not react to stressful situations in the same manner
as men. A recent UCLA study of female responses to stress found that women tend to
care for others and seek help from outside sources during times of stress rather than
aggressively confront their problems. This calmer mode of coping has been termed the
"tend and befriend response". The study, conducted by Shelley E. Taylor and based on the
results of hundreds of biological and behavioral studies conducted on thousands of male
and female subjects, both human and animal, reports that women indeed experience a
different response to stress than men (6). Some components of the flight or fight stress
response seem to be linked only to males, including an increased pain inhibition and high
cortisol response (3). These aspects of the flight or fight response are triggered by male
sex hormones, which, while also present in much lower amounts in females, obviously do
not affect women to the same extent or in the same manner as they affect men. During a
stressful situation, both males and females receive a rush of stress hormones. After the
initial influx of epinephrine, norepinephrine, and cortisol, females experience a rush of
oxytocin and endorphins, aided by estrogen, the female sex hormone. Oxytocin is a
stress hormone responsible for soothing and calming the body during stress. This
hormone, which is also released during childbirth and breastfeeding, acts as a powerful
mood regulator by reducing anxiety and encouraging the female to seek friendship. Men
also produce oxytocin, but in lower quantities (6). When released in reaction to stress,
oxytocin inhibits the flight or fight response as it triggers an opposite reaction, classified
as attachment behavior, or the "tend and befriend" response. Males, however, react to
these initial stress hormones by producing more testosterone, which in turn triggers the
flight or fight response (5). Hormonal reactions such as these play an integral role in
determining which type of stress response is triggered.

The flight or fight response may be stronger in males because of genetic
inheritance. This aggressive response evolved over time as a method of protection,
which was passed down genetically to offspring. In the past, males were more likely to
encounter the life or death situations which warranted a flight or fight response, which
may be why the response is more fully developed in and utilized by males. Thus, over
the years males have developed a response to stress which values the survival of the
individual over the wellbeing of the pack, emphasizing either direct combat with an
enemy or a swift retreat. However, this self-centered response contradicts the females'
nurturing instincts, as the flight or fight response may leave more vulnerable members of
the community open to attack. In contrast to the male stress response, females, who
traditionally have played a larger role in caring for children, are more likely to experience
a more sedate response to danger which protects not only themselves, but their offspring
as well (6).

While these biological considerations must be taken into account regarding
dramatic life or death situations, it remains unclear whether the same differences exist
between male and female responses to situations involving moderate stressors. It has been
established, in various studies, that women tend to reach out to those around them during
times of stress, particularly communicating with other women (6). Men, in contrast, keep
their emotions to themselves and prefer not to share feelings which may betray any
weakness. Therefore, these distinct differences in reaction also create biological
differences in the way males and females respond bodily to stress. Men are much more
likely to experience high blood pressure, which comes from the body's elevated state
during the flight or fight response combined with the lack of an external outlet for the emotions
produced by stress. Females, on the other hand, experience an increased heart rate. This
may originally have functioned as a soothing mechanism for children being held to their
mother's chest during times of stress and crisis (5).

However, in spite of the mounting evidence which indicates that men and women
experience fundamentally different responses to stressful situations, this may not always
be the case. The research in this area is still very preliminary, and some contradictions
remain. Indeed, studies conducted on certain species of monkeys suggest that the females
exhibit aggression instead of the tend and befriend response (5). Also, not all men will
become aggressive when confronted, just as not all women will remain calm and seek
outside support. As more studies have been performed on female subjects, it has become
increasingly apparent that the flight or fight response may not be the primary
reaction to stress for females. Instead, this aggressive response may function as a
secondary reaction or a last resort. Females are much more likely to rely on the tend and
befriend response, in which they turn to an established support network of friends and
families for help. The primary goal of a female in any stressful situation is to protect her
offspring. While tend and befriend may not be as aggressive or even effective as the
male's flight or fight response, females need to protect the young in order to allow the
species to survive. The flight or fight response leaves the vulnerable open for attack and
only considers the survival of the individual rather than of the group. Considering her
genetic role as nurturer of the species, it stands to reason that the female will choose a
method of stress response which protects those who cannot protect themselves.


References

1)Flight or Fight

2)Fight or Flight

3)Do Women Differ from Men?

4)The Science Show

5)Women, Men, and Approaches to Stress

6)A Woman's Response to Stress


Fluid on the Brain: Hydrocephaly
Name: Rachel Sin
Date: 2003-05-01 19:12:01
Link to this Comment: 5602


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Last spring, I traveled to the Mutter Museum with my Developmental Biology class. This museum, located in Philadelphia, contains many examples of medical anomalies from the past. As my classmates and I walked through the top floor of the museum, we encountered multiple glass cases containing preserved facial and bodily skin lesions. Perhaps the two things which fascinated me the most on the top floor were the picture of Chang and Eng (the conjoined twins) and the Soap Lady. However, as I reached the lower level of the museum, I realized that I was about to be even more awe-stricken by what I saw, as I was instantly drawn to one glass case. This case contained multiple glass jars, filled with tiny preserved fetuses. Although all of the fetus-containing jars were disturbing, one in particular caused my jaw to drop; this fetus's head was larger than the others, and looked excessively swollen. My eyes peered down at the label under the jar: "Hydrocephaly", it read. Soon, I was joined by other shocked classmates in front of this case, and we all began to ask the same types of questions, such as: "Why was this fetus's head so large? Could it have lived into adulthood if it had been born alive and afflicted today?" The image of that fetus was so disturbing to me that it stuck in my head, and left me wanting to answer the questions posed by my classmates and myself.

Perhaps our most burning question about the hydrocephalic fetus was the question regarding the size of its head (WHY was it so large?). The enlargement of a hydrocephalic head is actually due to a buildup of cerebrospinal fluid, or CSF, in the cranium. The buildup of CSF which is seen in hydrocephaly (also known as hydrocephalus and "water on the brain") is due to a lack of balance between the absorption and production of the CSF. As a result, the ventricles get larger, and pressure is exerted on the brain tissue within the cranium. (2). Among infants who have hydrocephaly, enlargement of the head is due to the buildup of fluid in the central nervous system, which causes an expansion of the head, and a bulging out of the soft spot or fontanelle. This is due to the fact that prior to the age of 5, fusion of the bony plates which compose the skull surface has not yet occurred.

Typically, the CSF which is produced in the ventricles passes through the ventricles before encircling the brain and the spinal cord. Finally, it undergoes reabsorption over the brain's surface, into big veins which serve to transport the CSF to the heart. In hydrocephaly, this usual sequence of CSF transport is interrupted by a certain factor, leading to the buildup of CSF. In many child patients with hydrocephaly, the exact cause is unknown. However, certain potential factors leading to its development include vascular problems, infection, trauma, bleeding, structural problems and tumors. While a small number of these problems are genetic, some are present after birth, and others arise during pregnancy (as seen in the fetus at the Mutter Museum) (5).

Several possible causes have been proposed for congenital hydrocephaly. One possible cause is German measles, also known as Rubella. During pregnancy, this condition can cause hydrocephaly and other malformations in the developing fetus. A second cause is the organism Toxoplasma gondii, which can be passed on through contact with an animal with the infection, the consumption of poorly-cooked meat, or contact with soil which has been contaminated. A rare congenital cause of hydrocephaly which is genetic is X-linked hydrocephaly, which is transmitted from mother to son via the X-chromosome. One in 20 males is afflicted with this condition (the large majority of cases exist in males), which can only be inherited from the maternal X-chromosome. A final cause of congenital hydrocephaly is viral; this cause is from the herpes class of viruses, and is known as CMV, or Cytomegalovirus. The symptoms seen in this form are similar to those present in patients with a cold virus. (1).

In addition, a risk of having hydrocephaly exists in premature babies. The prematurely-born baby is more susceptible to the condition, as its development is still progressing at the time of birth. A crucial area in the brain is that which is found below the ventricles' lining; during development, activity below the ventricles is great, and the blood supply here is abundant as a result. Unfortunately, this causes fragility in the blood vessels of the area, which may rupture if the baby is extremely ill or has a large blood pressure shift. (7). Other causes of hydrocephaly include diseases which have detrimental effects on the brain, such as spina bifida and meningitis.

Hydrocephaly can exist in two different forms. The first is known as non-obstructive or communicating hydrocephaly. In this form, communication still exists between the subarachnoid space and the ventricular system, despite swelling of the ventricles. This form of hydrocephaly is usually caused by the aftereffects of hemorrhaging or infection. In the second form of hydrocephaly, there exists no communication between the subarachnoid space and the ventricular system. This second form is known as obstructive or non-communicating hydrocephaly. (6).

Although enlargement of the head is one of the symptoms which is characteristic of hydrocephaly, it is not the only symptom. In addition, varied symptoms appear at different stages in younger age groups. In infancy, early symptoms of hydrocephaly can include a bulging of the cranial soft spots (called fontanelles). This bulging can occur both in the presence or absence of head enlargement. In addition, the infant can exhibit vomiting and separated sutures. As hydrocephaly continues in the infant, he or she can display muscle spasms and a lack of temper control. Late in infancy, multiple symptoms can appear, including a delay in development, lethargy, a decrease in cognitive functioning, trouble eating, loss of bladder control, slowness in growth, a decrease in movement, high-pitched cries and sleepiness. Other symptoms appear in older infants and young children, and will differ according to the degree of pressure damage caused by the condition. These symptoms include a loss of coordinated motion, changes in vision, crossed eyes, psychosis, confusion or other mental aberrations, vomiting, a poor pattern of walking, and headache. (3). Although hydrocephaly appears most frequently in children, it can occur during adulthood as well.

Multiple methods exist for detecting hydrocephaly in both adults and children. In older children and adults, if symptoms appear which are characteristic of hydrocephaly, an MRI or CT scan can be performed. These tests will aid the doctor in finding any causes responsible for the hydrocephaly. (6). In children who are younger than 1 year old, the doctor will usually ensure that the rhythm of growth of the head and its size are normal by measuring the head in a medical checkup. This method can be employed for infants, seeing as how two of the main symptoms of hydrocephaly in infants are a bulging soft spot and increased speed of cranial growth. (4).

Looking back at the fetus in the Mutter Museum, I still recall our other burning question. We looked at the tiny fetus in the jar, and asked, "If the fetus had been born today, could it have survived until adulthood?" If the hydrocephaly were detected and treated, yes, it most definitely could have survived until adulthood. And yes, it could even live for a full lifespan. Not only do there exist many extra-sensitive ultrasound devices that can detect diseases while a child is still in the womb, but various treatment options are available for individuals with hydrocephaly. Although the condition is most often treated via surgical methods, non-surgical forms of treatment are available as well. In drug treatment, a drug which can effectively reduce cerebrospinal fluid production would be used. One option is a carbonic anhydrase inhibitor known as Acetazolamide. This drug has been documented to cause a decline in the formation of cerebrospinal fluid through the choroid plexus, and has been used successfully to treat hydrocephaly in premature infants. Another non-surgical option which exists to treat this condition is head wrapping. In this method (which was used in the past and put into use again by Epstein), bandages made of muslin are applied firmly to the head. In addition, head compression was achieved through the use of rubber or adhesive plaster bandages. The main goal of this treatment option is to augment the trans-ependymal cerebrospinal fluid absorption (or, open the CSF pathways which are blocked by increasing the pressure inside the head). However, this method has been discredited and is no longer commonly used. (6).

Even though non-surgical methods are available to help treat hydrocephaly, the condition is most frequently treated through surgical means. One method of surgery which can be used is known as endoscopic third ventriculostomy, or ETV. This method involves creating an opening at the base of the third ventricle, ensuring that the spinal fluid can flow onward to be absorbed in the basal cisterns. (1). However, although this method is starting to be used more often, the most reliable and efficacious surgical procedure to treat hydrocephaly involves the insertion of a shunt. The shunt is composed of a valve (to prevent backflow of cerebrospinal fluid and keep the speed of drainage under control) and several tubes. The device serves as a means of redirecting the cerebrospinal fluid to the bloodstream from the blocked ventricular path. Proper positioning of the shunt is crucial; the upper end should lead to the brain's ventricle, while the lower end goes either to the abdomen or the heart for drainage. In addition, the lungs' lining can be used as a drainage site. (7). Usually, if the patient with hydrocephaly (which has not been caused by tumors) is treated with a shunt insertion, there is an excellent prognosis for success in managing the condition. (1).

Unfortunately, although the shunt is the most effective means of treating hydrocephaly today, it does not come without complications. Potential complications include physical disabilities, intellectual impairment, meningitis, complications of surgery, malfunctioning of the shunt (such as tubal separation, kinking, blockage or related problems), infection of the region where the drainage of fluid occurs, and neurological damage such as a decrease in function, movement and sensation. (3). As the shunt is a foreign body inside the patient, complications are not to be wholly unexpected. In fact, 6 out of 10 children with hydrocephaly who have shunts inserted will need some form of correction at some point in their lifetime. If a shunt is not working properly, it is important that the child seek medical attention immediately. At times, a child's cerebrospinal fluid blockage causing hydrocephaly may vanish completely following the shunt's insertion. Yet when this happens, it is unnecessary to perform surgery (which may cause more harm than good) to remove the shunt. The shunt can remain inside of the body for the rest of the child's life without causing any problems.

Thinking back to the field trip to the Mutter Museum last year, I remember myself and my classmates looking at the hydrocephalic fetus, almost in horror, and asking questions. We wondered why its cranial size was so much larger than normal. We also couldn't help but ask ourselves what the fetus's prognosis would have been today, had it been born. It is true that we often fear what we do not understand. That being said, I wish I had been educated about the actual cause of hydrocephaly when I was viewing the fetus, and that I could have gained encouragement from having the knowledge that the condition is highly treatable. Perhaps if my classmates and I had known those facts, our reaction to the Mutter display would have been a little bit different.

References

1)About Hydrocephaly from Arc of King County

2)Hyrdocephaly, from the TX Dept. of Health

3)Hydrocephalus, from Medline Plus

4)Hydrocephalus, From Diagnostico.com

5)Pediatric Neurosurgery - Hydrocephalus, From Dept. of Pediatric Neurosurgery, Columbia University

6) Hydrocephalus, From Dept. of Pediatric Neurology, Adelaide University

7)What is Hydrocephalus?, From the Association for Spina Bifida and Hydrocephalus


The Origins of Depression:Environmental, Genetic,
Name: Clarissa G
Date: 2003-05-02 16:01:45
Link to this Comment: 5606


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Depression has become an increasingly prevalent behavioral illness among individuals in our society. But how does one "get" depression? According to some recently published articles, depression is not solely a case of familial, chemical or environmental deficiencies, but a combination of these three factors. In other words, the origins of depression are not found in one specific deficiency. Depression can emerge as the result of a combination of many factors, which need to be looked at as a whole in order to determine proper treatment.

Genetic predisposition is currently being researched as a key component in the makeup of major depression. A correlation has also emerged between family members. One specific study focused on mothers of children with depression, and found that the mothers of depressed children were often suffering from depression themselves. The depression experienced by children of mothers with depressive symptoms could be a learned behavior, but this is unlikely. What is more likely is that these children are members of families with genetic traits that make them more susceptible to depression.

Recently, further support of a genetic link to depression has been uncovered. Researchers have found that children who are being treated for depression often have mothers who are themselves suffering from depression or other mental illness. In a study conducted by the New York State Psychiatric Institute at Columbia University in New York City, mothers who brought their children in for a psychiatric evaluation were also given a questionnaire on the state of their own mental health. Researchers found that a significant portion of the "mothers rated their overall emotional health as being 'fair to poor'". (1) The mothers reported psychiatric disorders ranging from major depression to drug abuse. Some even reported thinking about suicide or self harm within recent weeks. Dr. Ferro of Reuters Health "said that the findings 'basically confirm what we know from family studies of depression—that there is familial transmission of depression and that the mothers of children who are depressed and coming for treatment are frequently depressed as well'". (1)

Another recent study of depression identified the possibility of a specific area on a chromosome which can contribute to depression in women. (5)
Scientists at the University of Pittsburgh have identified a link between depression in women and the gene CREB1. CREB1, which is located on chromosome 2q33-35, has been found to show significant alterations in patients who have died with major depression. The CREB gene encodes a regulatory protein, which appears to be responsible for neural plasticity as well as cognition and long-term memory. CREB interacts with estrogen receptors and may be related to irreversible dementias. The crux of the University of Pittsburgh article is that CREB does affect the occurrence of depression, specifically in women: CREB interacts with estrogen receptors in women, making them more at risk for depression.

The level of serotonin absorbed in the brain is also a factor in the onset of depression. When the brain does not absorb or produce the necessary amount of serotonin, an individual will become depressed. In a study done at the University of North Carolina at Chapel Hill, researchers determined that even when a patient is not suffering from a depressive episode, the ability of the brain to absorb an appropriate amount of serotonin to regulate emotions is hindered. (4) Damaged serotonin reuptake has significant ramifications for the treatment of depression. Some of the focus when treating depressed or formerly depressed patients should be on the functioning of the brain. Even when a patient is not currently experiencing depression, the serotonin absorption in the brain needs to be closely monitored. The study suggests that even when depression is not being experienced, the symptoms are still present and need to be treated, perhaps by elongating the time period in which a patient takes medication. This study offers further support to the chemical and biological origins of depression.

The previous studies strongly suggest genetic or biological deficiencies as the root cause of depression. These two factors are not to be overlooked when treating depression; however, they are just a predisposition, not a prediction of whether or not someone will develop depression. That being said, one should also look at environmental factors as a possible cause of depression.

Environmental triggers can also make individuals susceptible to depression. Losing a job, a death in the family, or the mental or physical illness of a loved one are just a few environmental factors that can lead to depression. The environment or life experiences can lead to a downward turn in our mood. Sometimes these experiences are so strong that they themselves can trigger a major depression.

The question asked in the beginning of this paper was: where does depression originate? This question "is rather beside the point. There is only one of you, not a separate physical or psychological you. Depression may be triggered by either physical or psychological events...both seemed to be involved" (2). Research into the development of depression helps psychologists and psychiatrists better understand treatment. Biological, genetic, and environmental causes should not be viewed as separate in determining a diagnosis, but in tandem. The origins of depression are found when examining and treating the patient's environment, genetics and biology as a whole(3).


References
1)Mothers of Depressed Offspring,short concise articles

2)HealthyPlace.com,Depression Community

3)Mood Disorders Unit ,a plethora of information

4)health research,information on seretonin

5)University of Pittsburg,depression gene in women



The Genetic Basis of Addiction
Name: Neesha Pat
Date: 2003-05-04 23:55:24
Link to this Comment: 5609


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Addiction is defined as a compulsive physiological need for and use of a habit-forming substance, characterized by tolerance and by well-defined physiological symptoms upon withdrawal. Broadly, it is defined as a persistent compulsive use of a substance known by the user to be physically, psychologically, or socially harmful (1). Addiction is a complex phenomenon with important psychological and social causes and consequences. However, at its core it involves repeated exposure to a biological agent (drug) acting on a biological substrate (brain) over time (2). Most abnormal behaviors are a consequence of aberrant brain function, which means that there is a tangible goal of identifying the biological underpinnings of addiction. The genetic basis of addiction encompasses two broad areas of inquiry. One is the identification of genetic variation in humans that partly determines susceptibility to addiction. The other is the use of animal models to investigate the role of specific genes in mediating the development of addiction. While recent advances in this latter effort have been rewarding, a major challenge remains: to understand how the many genes implicated in rodent models interact to yield as complex a phenotype as addiction (7).

Dramatic advances have been made over the past two decades, both in neuroscience and in the human genome project: complete genome sequences are providing a framework for investigating biological processes, and have revolutionized the understanding of genes and their role in addiction. Researchers have long questioned what exactly leads a person to become addicted and what causes the recurrent cravings that characterize addiction. My hypothesis was that genes play the most important role in addiction and that manipulations of such genes could lead to possible cures for addictive personalities. I questioned the mechanism by which genes contribute to addiction vulnerability and sought to answer these questions by giving a detailed description of the avenues explored by researchers and the consequent emergence of 'addiction-genetics'. Further, the proposition that Brain=Behavior, and its inherent ramifications, proves no more fascinating than when addressed in the context of the biological basis of addiction.

Investigators believe that there is great variability among individuals when it comes to their vulnerability to becoming addicted. "Pretty much everyone enjoys having their dopamine levels shoot up dramatically, which can happen without the use of drugs, as with the 'runners high'. However, not everybody craves the experience so much that it consumes them. Nor does everyone experience the same changes in brain function!" says Alan Leshner, director of NIDA (5). For marijuana addicts, the non-family environment has the biggest influence, accounting for 38% of the variability, with genes accounting for 33%. Thus we see that the ability of drugs of abuse to alter the brain does indeed depend, in part, on genetic factors. Acute drug responses, as well as adaptations to repeated drug exposure, can vary markedly depending on the genetic composition of the individual. Genetic factors can also influence the brain's responses to stress and are thus also likely to contribute to stress-induced relapse (2).

Results of a genome-wide search, or "genome scan", by a team of researchers led by Dr. George Uhl of the National Institute on Drug Abuse (NIDA) in Baltimore, Maryland, provided the first evidence that specific regions of the human genome differ between abusers of illegal drugs and non-abusers. The findings of Dr. Uhl and his colleagues were an important step toward identifying genes that affect a person's vulnerability or resistance to substance abuse, and they offer hope for identifying individuals at high risk for addiction and for matching abusers with the most effective treatments. The researchers looked for differences in the frequency of 1,494 genetic variants known as SNPs (single nucleotide polymorphisms, or "snips") between DNA samples from 667 unrelated individuals with a history of heavy drug use and 338 individuals with no significant lifetime use of any addictive substance (controls). Using the SNP markers, whose locations in the genome are known, the research team identified more than 40 regions across the genome that differ between drug abusers and controls in DNA samples from both European Americans and African Americans. Eight of these regions had previously been linked to alcohol or nicotine dependence, suggesting that genes in these regions contribute to individual vulnerability to abuse of multiple substances (11).
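The logic of such a case-control comparison can be sketched with a simple chi-square test on a 2x2 table of allele counts for a single SNP. The function and the allele counts below are invented for illustration (only the group sizes, 667 abusers and 338 controls, echo the study); this is not Dr. Uhl's actual data or analysis pipeline.

```python
# Hypothetical 2x2 allele-count comparison for one SNP between
# drug abusers (cases) and non-abusers (controls). Counts are
# invented; only the group sizes echo the study.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table:
                 allele A   allele a
        cases       a          b
        controls    c          d
    """
    n = a + b + c + d
    observed = ((a, b), (c, d))
    row_totals = (a + b, c + d)
    col_totals = (a + c, b + d)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat

# 667 cases and 338 controls contribute two alleles each
# (1334 and 676 allele observations, respectively).
stat = chi_square_2x2(420, 914, 180, 496)
print(round(stat, 2))  # ~5.05, above the 3.84 cutoff for p < .05 (1 df)
```

A SNP whose statistic clears the significance threshold (after correcting for the 1,494 markers tested) would flag its genomic region as one that differs between abusers and controls.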

A gene might contribute to addiction vulnerability in several ways. A mutant protein (or altered levels of a normal protein) could change the structure or functioning of specific brain circuits, either during development or in adulthood. Animal models have shown that the adaptations that drug exposure elicits in individual neurons alter the functioning of those neurons, which in turn alters the functioning of the neural circuits in which those neurons operate. This eventually leads to the complex behaviors (for example, dependence, tolerance, sensitization and craving) that characterize an addicted state (1). A critical challenge in understanding the biological basis of addiction is to account for the array of temporal processes involved. Thus, the initial event leading to addiction involves the acute action of a drug on its target protein and on neurons that express that protein (7).

A study was conducted at the Yale University Department of Psychiatry and Pharmacology on the molecular and cellular adaptations that occur gradually in specific neuronal types in response to chronic drug exposure, particularly those adaptations that have been related to behavioral changes associated with addiction. The study focused on opiates and cocaine, not only because they are among the most prominent illicit drugs of abuse, but also because considerable insight has been gained into the adaptations that underlie their chronic actions. The results showed that the best-established molecular adaptation to chronic drug exposure is up-regulation of the adenosine 3',5'-monophosphate (cAMP) pathway. This phenomenon was first discovered in cultured neuroblastoma and glioma cells and later demonstrated in neurons in response to repeated opiate administration. Acute opiate exposure inhibits the cAMP pathway in many types of neurons in the brain, whereas chronic opiate exposure leads to a compensatory up-regulation of the cAMP pathway in a subset of these neurons. Up-regulation of the cAMP pathway would oppose acute opiate inhibition of the pathway and thereby would represent a form of physiological tolerance. Upon removal of the opiate, the up-regulated cAMP pathway would become fully functional and contribute to features of dependence and withdrawal (1). Many researchers have attempted to identify the genetic basis of these behavioral differences through Quantitative Trait Locus (QTL) analysis. There is now direct evidence to support this model in neurons of the locus coeruleus, the major noradrenergic nucleus in the brain. These neurons regulate attentional states and the activity of the autonomic nervous system, and have been implicated in somatic opiate withdrawal. Up-regulation of the cAMP pathway in the locus coeruleus appears to increase the intrinsic firing rate of the neurons through the activation of a nonselective cation channel. This increased firing has been related to specific opiate withdrawal behaviors. cAMP Response Element Binding (CREB) protein, one of the major cAMP-regulated transcription factors in the brain, was knocked out to create mutant mice. These mutant mice, deficient in CREB, showed attenuated opiate withdrawal (1). Overexpression of CREB in the nucleus accumbens counters the rewarding properties of opiates and cocaine; overexpression of a dominant-negative CREB mutant has the opposite effect. These findings suggest that CREB promotes certain aspects of addiction (for example, physical dependence) while opposing others (for example, reward), and highlight that the same biochemical adaptation can have very different behavioral effects depending on the type of neuron involved (1).

A recent NIDA-funded study illustrates how genetic differences can contribute to, or help protect individuals from, drug addiction. The study shows that people with a gene variant in a particular enzyme metabolize, or break down, nicotine in the body more slowly and are significantly less likely to become addicted to nicotine than people without the variant (5). Dr. Edward Sellers of the University of Toronto examined the role that the gene for an enzyme called CYP2A6 plays in nicotine dependence and smoking behavior. CYP2A6 metabolizes nicotine, the addictive substance in tobacco products. Three different gene types, or alleles, for CYP2A6 have been identified by previous research: one fully functional allele and two inactive or defective alleles. Each person has a maternal and a paternal copy of the gene. Therefore, a person can have two active forms of the gene and normal nicotine metabolism; one active and one inactive copy and impaired nicotine metabolism; or two inactive copies, which would further impair nicotine metabolism (6). The study found that people in a group who had tried smoking but had never become addicted to tobacco were much more likely than tobacco-dependent individuals in the study to carry one or two defective copies of the gene and have impaired nicotine metabolism. The researchers theorize that the unpleasant effects experienced by people learning to smoke, such as nausea and dizziness, last longer in people whose bodies break down nicotine more slowly. These longer-lasting aversive effects would make it more difficult for new smokers to persist in smoking, thus protecting them from becoming addicted to nicotine. Generally, smokers with slower nicotine metabolism do not need to smoke as many cigarettes to maintain constant blood and brain concentrations of nicotine (5). Currently, researchers are looking into the possibility of blocking the enzyme, which could prevent people from becoming smokers.
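The genotype-to-metabolism rule described above is simple enough to sketch directly. The function name, allele labels, and category strings below are illustrative choices, not nomenclature from the study; only the rule itself (two active copies: normal metabolism; one: impaired; none: further impaired) comes from the text.

```python
# Toy classification of CYP2A6 genotypes, following the rule in the
# text: "A" stands for the fully functional allele, "d1" and "d2"
# for the two inactive alleles. All names are illustrative only.

ACTIVE_ALLELES = {"A"}

def nicotine_metabolism(maternal: str, paternal: str) -> str:
    """Map a maternal/paternal allele pair to a metabolism category."""
    active_copies = sum(
        allele in ACTIVE_ALLELES for allele in (maternal, paternal)
    )
    return {2: "normal", 1: "impaired", 0: "further impaired"}[active_copies]

print(nicotine_metabolism("A", "A"))    # normal
print(nicotine_metabolism("A", "d1"))   # impaired
print(nicotine_metabolism("d1", "d2"))  # further impaired
```

Because either defective allele has the same effect here, the classification depends only on how many active copies a person carries, not on which defective variant they inherited.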

Several large chromosomal regions have been implicated in addiction vulnerability, although specific genetic polymorphisms have yet to be identified by this approach. It has been established, however, that some East Asian populations carry variations in the enzymes (for example, the alcohol and aldehyde dehydrogenases) that metabolize alcohol. Such variants increase sensitivity to alcohol, dramatically ramping up the side effects of acute alcohol intake. Consequently, alcoholism is exceedingly rare in individuals who are homozygous for the ALDH2 allele that encodes a less active variant of aldehyde dehydrogenase (7).

Despite the progress made in the genetic field, candidate gene approaches are limited by our rudimentary knowledge of the gene products and the complex mechanisms underlying addiction (2). As a result, more open-ended strategies are needed, such as those based on analysis of differential gene expression in certain brain regions under control and drug-treated conditions. Differential display, for example, enabled the identification of NAC-1, a transcription factor-like protein, which is induced in the nucleus accumbens by chronic cocaine and is now known to modulate the locomotor effects induced by cocaine (3).

In addition to secondary targets, there are numerous neurotransmitters, receptors and post-receptor signaling pathways that modify responses to acute and chronic drug exposure. For example, several behavioral features of mice lacking the serotonin 5HT1B receptor indicate enhanced responsiveness to cocaine and alcohol; notably, they self-administer both drugs at higher levels than wild-type controls. The mice also express higher levels of FosB (a Fos-like transcription factor implicated in addiction) under basal conditions. These observations point to the involvement of serotonergic mechanisms in addiction. There are many other such examples. Mice deficient in the dopamine D2 receptor or the cannabinoid CB1 receptor have a diminished rewarding response to morphine, implicating dopaminergic systems and endogenous cannabinoid-like systems in opiate action. The stability of the behavioral abnormalities that characterize addiction indicates a high likelihood of drug-induced changes in patterns of gene expression. One demonstration of this approach showed that FosB accumulates in the nucleus accumbens (a target of the mesolimbic dopamine system) after chronic, but not acute, exposure to several drugs of abuse, including opiates, cocaine, amphetamine, alcohol, nicotine and phencyclidine (also known as PCP or 'angel dust'). This is in contrast with other Fos-like proteins, which are much less stable than FosB and are induced only transiently after acute drug administration. Consequently, FosB persists in the nucleus accumbens long after drug-taking ceases (6).

Dr. David Russell at Yale University examined the role of Glial-Derived Neurotrophic Factor (GDNF) in adaptations to drugs of abuse. Infusion of GDNF into the ventral tegmental area (VTA), a dopaminergic brain region important for addiction, blocks certain biochemical adaptations to chronic cocaine or morphine, as well as the rewarding effects of cocaine. Conversely, responses to cocaine are enhanced in rats by intra-VTA infusion of an anti-GDNF antibody and in mice heterozygous for a null mutation in the GDNF gene. Chronic morphine or cocaine exposure decreases levels of phosphoRet, the protein kinase that mediates GDNF signaling, in the VTA. Together, these results suggest a feedback loop whereby drugs of abuse decrease signaling through endogenous GDNF pathways in the VTA, which then increases behavioral sensitivity to subsequent drug exposure (10).

In 1999, researchers funded by the NIH unearthed an unexpected connection between circadian rhythms in insects and cocaine sensitization, a behavior that occurs in both fruit flies and vertebrates and that has been linked to drug addiction in humans. The circadian clock consists of a feedback loop in which clock genes are rhythmically expressed, giving rise to cycling levels of RNA and proteins. Four of the five circadian genes identified to date influence responsiveness to freebase cocaine in the fruit fly, Drosophila melanogaster. Sensitization to repeated cocaine exposures, a phenomenon that is seen in humans and animal models and is associated with enhanced drug craving, is eliminated in flies mutant for certain genes (period, clock, cycle, and doubletime), but not in flies lacking the gene timeless. Flies that do not sensitize owing to lack of these genes do not show the induction of tyrosine decarboxylase normally seen after cocaine exposure. These findings indicate unexpected roles for these genes in regulating cocaine sensitization and suggest that they function as regulators of tyrosine decarboxylase (9).

Pathological gambling (PG) is an impulse control disorder and a model behavioral addiction. Familial factors have been observed in clinical studies of pathological gamblers, and twin studies have demonstrated a genetic influence contributing to the development of PG. Molecular and genetic research has identified specific allele variants of candidate genes corresponding to the neurotransmitter systems involved. Associations have been reported between pathological gamblers and allele variants of polymorphisms at dopamine receptor genes, the serotonin transporter gene, and the monoamine oxidase A gene (8).

Thus we see that animal models have proved pivotal to our understanding of the neurobiological mechanisms involved in the addiction process. The experiments described provide sufficient evidence that the genetic makeup of the individual plays a very large role in drug addiction, whether the cause is a single dysfunctional or altered allele or a cascade of biological events leading to changes in gene expression patterns. One drawback (and one that is not limited to the field of addiction) is that sometimes a genetic mutation is found to result in a phenotype without any plausible scheme as to how the mutation actually caused the corresponding phenotype (4). Various types of microarray analysis have led to the identification of large numbers of drug-regulated genes; it is typical for 1–5% of the genes on an array to show consistent changes in response to drug regulation (12). However, without a better means of evaluating this vast amount of information (other than exploring the function of single genes using traditional approaches), it is impossible to identify those genes that truly contribute to addiction (7). As we can see, the method of approaching drug addiction is not straightforward: which drugs should be used as models? What should researchers be looking for: drug abuse per se? Sensation seeking? Specific biological markers?
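One simple way to operationalize "consistent changes" on an array is to keep only genes whose expression shift has the same direction in every replicate and a sufficiently large average magnitude. The filter and the log2 fold-change numbers below are invented for illustration; real microarray pipelines use more sophisticated statistics.

```python
# Minimal sketch of a consistency filter for drug-regulated genes.
# Gene names and log2 fold-changes (drug-treated vs. control) are
# invented; FosB and NAC-1 appear only because the text discusses them.

def consistent_genes(log2_changes, min_abs_mean=1.0):
    """Keep genes changed in the same direction in every replicate,
    with mean |log2 fold-change| of at least min_abs_mean."""
    hits = []
    for gene, reps in log2_changes.items():
        same_sign = all(r > 0 for r in reps) or all(r < 0 for r in reps)
        mean = sum(reps) / len(reps)
        if same_sign and abs(mean) >= min_abs_mean:
            hits.append(gene)
    return hits

data = {
    "FosB":  [1.8, 2.1, 1.6],   # consistently up across replicates
    "NAC-1": [1.2, 0.9, 1.4],   # consistently up
    "geneX": [0.4, -0.3, 0.1],  # direction flips -> filtered out
}
print(consistent_genes(data))  # ['FosB', 'NAC-1']
```

Even a crude filter like this shows why only a small fraction of array genes survive: a gene must change the same way in every replicate, which random noise rarely produces.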

Fortunately, the increasing sophistication of genetic tools, together with the growing predictive value of animal models of addiction, makes it increasingly feasible to fill in the missing pieces: to understand the cellular mechanisms and neural circuitry that ultimately connect molecular events with complex behavior (11). Genetic work remains the highest priority, as it will greatly inform our understanding and treatment of addictive disorders in the years to come. In addition, the genetic basis of individual differences in drug and stress responses represents a powerful model of the ways in which genetic and environmental factors combine to control brain function in general (1).

References


1)Nestler, E., Aghajanian, G. (1997) Molecular and Cellular Basis of Addiction, Science, New Series, Volume 278, Issue 5335.

2)Vulnerability to Addiction

3)Brain Buildup Causes Addiction

4)Everyone is Vulnerable to Addiction

5)Genes can Protect Against Addiction, NIDA Notes

6)Biological Basis of Addiction Studied

7)Genes and Addiction, Nature Genetics

8)Genetics of Pathological Gambling, Dept. of Psychiatry, Alcalá University, Spain.

9)Response to Cocaine Linked to Biological Clock Genes, NIDA Notes

10)Promising Advances Towards Addiction, NIDA Notes

11)Genetic Animal Model of Alcohol and Drug Abuse, NIDA Notes

12)Dependence of the Formation of the Addictive Personality on Predisposing Factors

13)Evidence Builds that Genes Influence Cigarette Smoking


Surreal Perceptions of the Body
Name: Ellinor Wa
Date: 2003-05-06 00:57:22
Link to this Comment: 5621


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"Do I look fat in this outfit?" Millions of men and women all over the world, in hundreds of different dialects have uttered this question. Depending on the century or decade even, they have hoped for a range of different responses. Some of them are aware of the answer but merely fishing for compliments, others understand the objectivity of the question, and the smallest percentage of these people is completely oblivious. This percentage perceives a completely different image than what shown, standing in front of the mirror. The representation does not necessarily apply to weight; it may apply to noses, lips, legs, buttocks, abdomen, spinal chord, ears, etc. Rather than regarding themselves realistically, they ingest a surreal image which plagues their thoughts and leaves them inclined to take drastic measures to fix the object of their obsession. When this perception becomes deeply skewed, altering a person's actions to an unhealthy level, the disorder associated with this loss of reality is called Body Dysmorphic Disorder.


The diagnostic criteria for Body Dysmorphic Disorder require all three of the following symptoms: 1. Preoccupation with an imagined defect in appearance; if a slight physical anomaly is present, the person's concern is markedly excessive. 2. The preoccupation causes clinically significant distress or impairment in social, occupational, or other important areas of functioning. 3. The preoccupation is not better accounted for by another mental disorder (such as dissatisfaction with body shape and size in Anorexia Nervosa). (1) Body Dysmorphic Disorder (BDD) does not usually apply to weight (as obsessive weight dissatisfaction is classified under more specific eating disorders); more often it applies to a specific imaginary defect. Once a defect has been noticed, the person suffering from BDD will often spend hours scrutinizing the flaw in front of a mirror or other reflective surface, and hours a day thinking about it. The defect will eventually take over their lives. It is common for sufferers of BDD to lose jobs, wreck marriages, and attempt suicide because of their feelings of inadequacy. (2)


One case study described a woman who had earned her PhD and then decided to have children with her husband. Once the two boys she had borne reached three and five years old, it was time for her to resume her career. A couple of weeks later, as she was cooking in the kitchen, a small piece of ceiling fell on her nose. She bled a bit and then bandaged her nose. A scar rested there for a couple of weeks, after which she was completely healed. No one could see an imperfection on her nose except her. She became obsessed with this invisible mark, staring at her nose for hours in the mirror. She felt her face had come to look grotesque. Eventually, she became housebound. At this point her husband and friends demanded she seek psychiatric help. (5)


In a number of cases, BDD was the result of a traumatic experience. In the story above, the woman who perceived her nose as deformed did not experience this misperception until it was time for her to leave her children and go back to work. In other cases, adolescent girls developed this same mental illness after having been raped. (6) It seems that BDD may serve as an escape for some people who are faced with a horrible incident or experience in life. Instead of focusing all their emotional frustrations on an undesirable event, the mind creates a flaw in appearance with the intent to distract.


This notion is reminiscent of the blind spot anomaly. When given a piece of notebook paper with a large dot and large cross, we moved the paper forward, focusing our eyes on the cross, until the dot disappeared. Instead of leaving a hole where the dot was until we could no longer see it, the mind created information to fill in the gap; where the dot was, we saw notebook paper instead. Perhaps this notion should be applied to BDD; the mind inserts an image to preoccupy itself.


In other cases, however, BDD does not develop as a repercussion of some disturbing event. The human percept becomes skewed of its own accord. How can the mind distort the eye's ability to see something in its realistic form? It seems that the image which is reflected on the back of the retina has been deformed by the brain; the brain signals false information to the eye. The brain has tremendous power over the body. Sufferers of BDD often will not only see this surreal perception, but will also feel it. When the brain signals misinformation to the eye, it also sends it to the other sensory percepts, making the image even more believable.


This convincing image often draws people toward enlisting the help of plastic surgeons. The image seems so authentic that operating on the defect, they believe, will make it disappear. However, in most cases, people with BDD undergo plastic surgery repeatedly in attempts to fix their flaw. This fact further supports the argument that the brain has sent false information, because the flaw usually will still be perceived by a disordered person even after the defect has been operated on. The same type of obsessive behavior appears in people who see themselves as larger than they are in actuality. They will resort to drastic measures to rid themselves of the extra fat they see in the mirror, becoming extremely anorexic (not eating enough to nourish their bodies) or bulimic (binging on a variety of foods and then purging their intake by vomiting or exerting themselves through exercise). Even after these obsessive food rituals are accomplished, they will still see themselves in the same way they did before.


The imaginary beliefs that are often attributed to Body Dysmorphic Disorder do not always indicate such a serious mental disorder. People view their bodies differently than they actually are all the time. For instance, a new genre of disorder has swept the nation over the past couple of years, dubbed 'tanorexia'. (9) In this case, people visit tanning booths constantly, convinced they are not as tan as they actually are. The new craze has been attributed to pop celebrities who constantly tan themselves. (8) These frequent visits can cause cancer or death from overexposure to radiation. (7)


Several ideas come to mind in terms of surrealistic versus realistic perceptions of the body as they relate to neurobiology. Does the same notion apply to Body Dysmorphic Disorder that applies to perceptions of sound? The noise a tree makes falling in the forest sounds different to different people; does one's appearance likewise look different to different observers? Perhaps the brain has a predetermined notion, based on a familiar image, of what the body looks like. Or does the brain signal different information about body image depending on its mood state? Most therapists prescribe serotonin-raising medication to those suffering from Body Dysmorphic Disorder. It would seem that both the root of and the cure for such a mental illness lie in the homeostasis of chemicals in the brain.


Another question that comes to mind in relation to Body Dysmorphic Disorder is what role the I-function plays in the imaginary perceptions seen. If we know that we can use the I-function to scare ourselves, how do we know that we can't use the I-function to make ourselves believe we look a different way than we actually do? Should we attribute skewed perceptions to the brain's misinforming the retina, or should we attribute them to the I-function's ability to take an image to the extreme?


The implications of Body Dysmorphic Disorder are neurobiologically significant in many ways. If the brain can send false information to the retina, rendering a perceived image skewed, then how can we tell which images are realistic and which are surrealistic? Body Dysmorphic Disorder specifies that its victims see only themselves as deformed, not others. But if it is the brain which signals false information, how do we know that this only applies to those it afflicts? Maybe we see morphed images of everyone. If the brain can signal wrong information to the eye, extending to the sensory percepts, then how do we know which images are real and which are false? Perhaps we all see ourselves in a way which others do not, and only those who suffer from BDD see an extremely dramatic version of this distorted perception. That could be why they stand out from the rest of us; their perceptions are so distorted that they are called to the attention of others. When the words "Do I look fat in this outfit?" are uttered, is it possible that no one knows the answer?

References

1)American Psychiatric Association. DSM-IV-TR, 2000.
2)Health Information, an interesting site about BDD
3)ABC scientific article page, an interview concerning BDD
4)BBC webpage, an interesting article about the severity of BDD
5)Mental Help Page, this site provides a tutorial for understanding and coping with BDD
6)College Health Page, this site explains BDD to college students
7)Tanning Laws, demonstrates the dangers of tanning
8)New York Times Archives, tanorexia: the new self-destructive appearance disorder
9)UWO Gazette Page, a college newspaper article about the international obsession with tanning


Brain=Behavior: Psychosomatic Disorders
Name: Stephanie
Date: 2003-05-06 14:36:28
Link to this Comment: 5625


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

We all feel stress, anxiety, fear, or worry at some point in our lives, if not on a fairly regular basis. There are varying degrees of stress; some people let stress consume them, and it ends up controlling their lives. If stress is something that originates in the brain, which then has the capacity to influence us, doesn't it make sense that brain does equal behavior? This seems particularly apparent in people suffering from psychosomatic disorders: if someone can make themselves sick as a result of their thoughts and feelings, it seems clear that brain and behavior are directly related.

Psychosomatic is a term that expresses the relationship between mind and body and "is applied to physical disorders thought to be caused by psychological factors" (1). There are, however, some differences among patients who suffer from such disorders. Just as some people fixate on the stress they feel and others ignore it, some people with psychosomatic disorders obsess over their symptoms and seek medical attention frequently, whereas others cannot distinguish the sources of their bodily pain (2).

General stress seems to be largely psychosomatic, though of a milder form. Psychosomatic responses occur through the autonomic nervous system and include activation of the sweat glands and the mucous membrane lining of the lungs, and the release of hydrochloric acid in the stomach (3). This explains why, when we are nervous, we often sweat more, experience an upset stomach, and feel an increase in pulse rate. However, the rationale for becoming nervous in the first place originates in the brain.

Although people experience anxiety in different situations, most of us would admit to feeling nervous before a job interview or before giving a presentation. This is due to the way we perceive the event. If we attribute our nerves to our fear of how the presentation will be received, or to a previous time when we were in a similar situation and made a mistake, our physical response to this neurological trigger may be heightened.

Furthermore, the same mechanism can be helpful in alleviating stress; this is why many people preach positive thinking as a way to overcome anxiety or fear. Since "anxiety [often] results from perceiving oneself as helpless and feeling unable to cope with an anticipated danger or problem," it can be beneficial to change our perception of the situation (2). Therefore, since stress can be ascribed to thought processes, it seems that the brain and the I-function have a direct bearing on how we behave.

Psychosomatic disorders are usually termed somatoform disorders. Included in this category are somatization disorder, somatoform pain disorder, and conversion disorder. Somatization disorder is generally chronic and occurs when the patient exhibits or expresses physical pain that can be explained in terms of psychological problems. The pain these patients feel tends to be located within the digestive system, the nervous system, and the reproductive system. In addition, patients tend not to accept a psychological explanation and usually insist that the pain is entirely physical or bodily (4).

Those who suffer from somatization disorder may fall under one of two categories: alexithymic or hypochondriacal. Patients diagnosed as alexithymic tend to have difficulty identifying the nature of their emotions and moods. Consequently, they usually reject the suggestion that their physical pain is psychological (4). One case involving a young woman who grew up in an abusive household illustrates this well.

Hannah was raised by her father, since her mother was frequently hospitalized soon after Hannah's birth. Her father was very strict and forbade Hannah to speak of her mother's illness outside the house. As a result, Hannah was very doting and sought to please her father. Academically, Hannah was gifted and went on to receive a Ph.D. She married twice, but both husbands abused her emotionally and physically, largely because she succumbed to the dominant male role in the relationship; as a child, she had not had a female role model to show her that being assertive is acceptable. Because she was well-educated, Hannah was articulate and could talk about her problems, but she did not show any emotion when identifying them. Once she began to complain of a backache, her psychoanalyst pinpointed this as a somatic response: "the conflict inside her between wanting to get away from [an ex-boyfriend] and not being able to do so was splitting her back in two" (5).

It seems that beginning in her early childhood, Hannah's obedience to her father negated all other emotions - throughout her life she was only concerned with the well-being of others. As a result, her own emotions were under-developed. Related to Hannah's alexithymic somatization is hypochondriacal somatization.

Patients defined as hypochondriacal are essentially hypochondriacs - that is, they are intensely convinced that they are ill most of the time. The difference between ordinary hypochondriacs and those with a somatization disorder is that the latter are entirely sure they have a disease and often become agitated when doctors try to disprove them (4). In addition, people with somatoform pain disorder are preoccupied with pain even though there is no physical evidence or explanation for it. Thus, it seems that these patients think they feel pain but, in reality, do not (4).

Finally, those who experience conversion disorder find that they lose a physical ability such as vision or movement. Symptoms of this disorder are not created by the patient intentionally, or at least do not appear to make manifest the intention of the patient; thus, these symptoms occur due to psychological stimulation (4). The best example of this condition appears in the story of Alice Greer. When admitted to a neurology clinic, Alice suffered from paralysis in both of her legs. In a consultation with one of the doctors, Alice confessed that she had had an affair with her husband's brother while he was visiting. She felt extremely guilty, but he wanted to continue the affair and threatened to tell everyone their secret when she refused. He forbade her to speak to anyone of their relationship; since she could not tell anyone how she felt, her legs became paralyzed. Interestingly, as Alice spoke with the doctor, "strength immediately returned to her legs" (6).

The nature of psychosomatic disorders seems to make a particularly strong case for the brain equals behavior discourse. The fact that our thoughts can control or dictate our behavior is an interesting and significant aspect of being human. As previously discussed, people with psychosomatic or somatoform disorders experience physical pain or disability as a result of stress. Because stress arises from the way we perceive certain situations (I can't do this versus I can handle it), and since perception is a neurological function, it is clear that in this sense the brain is directly linked to behavior.

Even if we think of behavior being the result of a motor symphony, a series of synapses in response to an input, the input may still be psychological and not merely external or environmental. Although behavior can certainly be influenced by one's environment, psychological factors still have a major role in prescribing behavior. If we find ourselves in a situation we find uncomfortable, we may think to ourselves: This makes me uncomfortable; I don't like this, therefore, I'm just going to keep to myself and mind my own business. This thought process is then what makes us act slightly aloof or solemn. Similarly, if we have an important job interview or presentation, we may say: This is really important - I hope I don't mess up - I'm really worried about this. As a result, we become nervous and find that our voice cracks or quavers when we speak or that we tremble while gesturing. Although these examples are somewhat trivial, they still exhibit part of the extent to which the brain controls behavior.

Furthermore, it seems that patients suffering from psychosomatic disorders have a distorted I-function. Their I-function appears to become somewhat muddled as a consequence of their intense psychological activity. Although these people still have a sense of themselves, they do appear to be confused about what that means. They seem to vacillate between the following thoughts: Do I feel pain because I think of something that then induces this feeling? Or: do I feel pain because I did something that actually hurts (i.e. I stubbed my toe)? In this instance, it seems that part of the behavioral mechanism in their brain is malfunctioning - particularly since they cannot recognize that their pain is due to something they inflict themselves - something as abstract as a thought.

Because thoughts are so difficult to control, it is a wonder that we do not lose control of ourselves and our behavior. Although it does seem clear to me that brain does equal behavior, it is also clear that our I-function, itself part of the brain, checks our behavior against culturally accepted norms.


References

1) The Merck Manual of Medical Information
2) Psychological Self-Help
3) Psychosomatic Disorders
4) Melmed, Raphael N. Mind, Body, and Medicine. New York: Oxford University Press, 2001.
5) Sidoli, Mara. When the Body Speaks. Philadelphia: Taylor & Francis Inc., 2000.
6) Griffith, James L. & Melissa Elliott. The Body Speaks. New York: Basic Books, 1994.


Endorphins, Depression, and the Brain of a Distanc
Name: Katherine
Date: 2003-05-06 16:37:24
Link to this Comment: 5626


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Ask a long-distance runner about the place and role of running in his or her life, and do not expect to get a simple answer. Running, and specifically distance running, goes well beyond the practical physicalities of mile splits, muscles, and orthotics. It is decidedly a sport of the mind as well as the body, and has carried this unique duality for thousands of years. We have heard stories of ancient Greek messengers who ran for hours over miles of rugged terrain in order to deliver their messages (1). Is such ability simply a case of "mind over matter?" What really is happening in the brain of a distance runner?

There are many factors behind the psychological aspects of running as a sport, both in relation to the sport itself, and to the athletes who choose to run. We must look into all of these factors when exploring the neurobiology of running. Among other effects, long-distance running has been associated with the "runner's high" and an ability to decrease depression. In assessing the validity of these claims we must look into the events that occur in the brain and the body of a distance runner.

In a Duke University study, 150 participants with depression, age 50 or more, were divided into three groups. One group was put on an exercise schedule, another group was administered Zoloft, and a third was given a combination of the two. At the end of four months, all three groups showed significantly lowered rates of depression, and those in the exercise group experienced significantly less relapse than those in either of the other two groups.(2) Positive mood changes have also been found to occur specifically as a result of long-distance running. (3)

We have all heard of endorphins, and the famously mysterious "runner's high." But is it really so mysterious? During all types of activity, chemicals are released in the brain to be used as signals and messengers. Running, in the spectrum of possible activities, is one of relatively high intensity. Therefore, it makes sense that such an activity should produce noticeable chemical changes in the body. When the body is put under stress, the brain reacts to this stress. Prolonged running is sensed by the body and brain as an aberration from normal, resting activity. It could also be associated with a need for heightened physical capabilities for survival, as the autonomic nervous system may not distinguish between running for recreation and running for another purpose. The activity of running is thus sensed as a stress, and the brain reacts accordingly. The method of reaction is the release of chemicals in response to stress, which are designed to counter this stress and enable the body to function efficiently as it continues to run.

A group of opiate proteins has been identified as naturally occurring in the brain, and these chemical neurotransmitters, known commonly as endorphins, have pain-relieving properties. Endorphins have been associated with the runner's high (4). Endorphins are so named because they are endogenous (produced inside the body) neuropeptides (proteins in the brain) which act in a way similar to morphine, a drug derived from opium, and are chemically similar to it. Endorphins therefore convey some of the same properties as morphine, but are found naturally in the body. Endogenous opioids (endorphins being a specific example) were identified in 1975 based on knowledge of the human body's response to morphine. The body is able to respond to a chemical such as morphine because of receptors in the brain that recognize the drug and trigger a response to it. Scientists realized that, since the brain has receptors that recognize morphine, the human body must also produce its own morphine-like substances; it is not likely that these receptors exist for the sole purpose of recognizing an exogenous chemical such as morphine. The newly recognized neurotransmitters were given the names endorphins and enkephalins, emphasizing their natural origin inside the body (prefix from Greek en-, meaning in). Endorphins are polypeptides containing about 30 amino acid units, and opioids are hormones similar to corticotrophin, cortisol, and the catecholamines (adrenaline and noradrenaline). They are made by the body to reduce stress and relieve pain, and are usually produced during times of extreme stress to block the pain signals sent by the nervous system. There are at least 20 known endorphins in the human body, with beta-endorphin having the strongest effect on the brain and body during exercise (5). Beta-endorphin is structurally very similar to morphine but has different chemical properties, and tyrosine is the main amino acid contributing to this peptide hormone.

Endorphin levels have been found to increase under conditions of stress in prolonged exercise such as running, cycling, cross-country skiing, and long-distance swimming (6). The physical and psychological manifestations of this increased endorphin level can be varied. A runner may frequently feel a "second wind," meaning that he or she is more energized and balanced than when the run or the race began. Such a feeling often extends beyond the sensation of increased physical energy, however, and seems to affect the runner's mental outlook; hence the mood change associated with endorphin release. The effects are individual, and thus cannot be completely quantified. However, among the positive feelings associated with distance running, an athlete may begin to feel extremely balanced, rhythmic, connected, and in sync with her surroundings. There may be a simultaneous feeling of being both very aware of and comfortable in the body, while also feeling that the body does not exist, the true reality being, instead, an unattached rhythm. The mind may seem to be both inside and outside of the body, so that, ironically, the relationship of the two is both close and detached. Also simultaneous is a feeling of being in control while also being free from any need or act of controlling. The world itself may appear more ordered and make more sense, so that things fall effortlessly into place, and the runner may feel a greater sense of her own placement in that world, with a simple yet distinct feeling of purpose. It may be a feeling of suspension, height, and calmness, in which the runner could seemingly keep going forever, in a calm, automatic, and meditative rhythm, accompanied by a heightened awareness and crispness of perception. It is, in fact, interesting to note the meditative aspects of distance running, since both running and meditation have been shown to produce similar changes in hormone levels (7). A seeming suspension of time may also occur, and these feelings of suspension come together in the spaces between each step, when the body rests for a short moment above the ground.

Pain threshold tends to increase directly following exercise, and moods are often elevated. But it is the amalgamation of all of these factors that may enable us to explain on a neurobiological level the power running may have to combat depression. The act of running produces a stress, which causes chemicals to be released in the brain. (One might here note the inherent requirement for contrast; the fact that an initial stress is necessary (running) before the cure can be imparted (endorphins and a heightened sense of consciousness and improved mood). As with many activities, difficulty necessarily precedes satisfaction, and there must be a low place before the high). These neuropeptides that are released have notable effects in combating pain and in creating noticeable sensations in mood and mental outlook.

How, though, do we determine whether these changes are due solely to the distinct release of endogenous chemicals? Indeed, human emotion is both psychological and physiological in nature, and there are numerous factors in the running experience that may contribute to a variation in mood. When running outside in nature, for example, the fresh air, sunlight, and scenery may contribute to positive mood changes. (8) The feeling of a strong body may make the athlete feel more confident, healthy, and self-sufficient. A run may be a respite from the daily bustle of life, offering a time to organize thoughts or to spend some time free from thought. The schedule and planning required for workouts, as well as the establishment and achievement of specific goals, may impart a sense of satisfying balance, accomplishment, and purpose for the runner.

There are, of course, two sides to every coin. Although running has been shown to alleviate depression in many cases, it also has the possibility to cause negative feelings and depressive states. Any activity performed at a high level will necessarily be both rewarding and difficult. It is a challenge, however, to pinpoint the true source of these sensations; and depressive feelings seem most associated with the psychological complexities associated with training, rather than solely to the release of neuropeptides per se. In this case, it is important to look at the type of people who are drawn to distance running, and to comment upon how their brains tend to react to life and to various situations. Perhaps we can find similarities between the behaviors of distance runners which would point to similarities in their brains. Indeed, there are common characteristics that are shared by many distance runners. Such recurring traits among these athletes include: an introspective and solitary nature, tendencies towards perfectionism, a desire for independence, and stubbornness in maintaining a strict training schedule. Quickness for self-critique, and the establishment of rigid and elevated goals, may contribute to depressive tendencies. Like any athletic or artistic endeavor at a high level, there is a potential for obsession. This comes from the striving to reach the extremes, and to find the highest potential; it is, by definition, at the brink of normalcy where this highest level necessarily lies. This brink is by its nature unstable. If the athlete, or the artist, is to reach it, she must also waver on the brink of instability. Hence comes the cathartic potential of an important race for the trained and tapered athlete; and yet, after this race, there is the potential for other swings of sentiment, and for the need to reestablish goals and new purpose. 
We may, perhaps, see parallels to this phenomenon in the experience of a new mother suffering from post-partum depression, or of a writer who has finally submitted a book for publication. There is, on the one hand, a sense of accomplishment and creation, and on the other, a sense of imbalance in being separated from the thing that was so long a part of each day's vision and purpose. The key then lies in re-calibration and continued motion. And yet while continuation can create a sense of purpose and structure, it also creates a type of relentlessness, an uneasy infinity. Hence, the immateriality of running provides both freedom and tension, just as a piece of music escapes our grasp the moment it is played. Running, of course, provides huge benefits to the body and to fitness; however, the hormones produced in over-training by some elite athletes have been known to cause depression. (9) Endorphins have also been linked with decreased estrogen levels in female distance runners, which can lead to lowered bone density and, eventually, osteoporosis. (10) All in all, endorphins are a type of natural drug and must be treated with respect. Like any drug, there is the potential for addiction and withdrawal, to which any runner who has missed a needed run, or has been injured, can attest.

All valuable pursuits provide both brightness and shadow, and it is perhaps by this contrast that we can recognize what is truly worthwhile. The challenges of running do not outweigh the rewards. On the contrary, distance running stands as a valuable way for many people to live fully, both mentally and physically, and to hint at bridging, in fleeting suspended moments, the eternal internal gap.

References

1) Coolrunning website: a source for runners and those interested in the sport.

2) Article on DepressioNet: a website resource for depression information and support.

3) PubMed Article: Markoff, R.A., Ryan P, Young, T. "Endorphins and mood changes in long-distance running." Medical Science Sports Exercise 14(1):11-15 (1982).

4) Website for the International Association of Mind-Body Professionals: Article by Dr. J.S. Ellis, "Runner's High."

5) PubMed Article: Wildmann, J., Kruger, A., Schmole, M., Niemann J., Matthaei, H. "Increase of circulating beta-endorphin-like immunoreactivity correlates with the change in feeling of pleasantness after running." Life Science Mar 17;38(11):997-1003 (1986).

6) PubMed Article: Henry, J.L. "Circulating opioids: possible physiological roles in central nervous function." Neuroscience and Biobehavioral Reviews 6(3):229-45 (1982).

7) PubMed Article: Solomon, E.G., Bumpus, A.K. "The running meditation response" American Journal of Psychotherapy 32(4):583-92 (Oct 1978).

8) PubMed Article: Lambert G.W., Reid, C., Kaye, D.M., Jennings G.L., Esler, M.D. "Effect of sunlight and season on serotonin turnover in the brain." Lancet 7;360(9348):1840-2 (Dec 2002).

9) PubMed Article: Lane, A.M., Lane, H., Firth, S. "Performance satisfaction and postcompetition mood among runners: moderating effects of depression" Percept Mot Skills (June 2002).

10) PubMed article: Harrison, R.L. "Ovarian impairments of female distance runners." Annals of Human Biology, Jul-Aug 1998.


Retinal Tissue Transplantation: To See or Not to S
Name: Michelle C
Date: 2003-05-08 12:01:42
Link to this Comment: 5633


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Many Americans suffer from ocular diseases that are neurodegenerative. Aside from visual loss due to aging, diseases such as glaucoma, uveitis, retinitis pigmentosa, and cataracts can lead to blindness. With no known cure for these serious diseases, partial to total blindness is a reality for many. What if there were a new and innovative way to prevent or repair the damage caused by these disorders? What if we could stimulate the growth of new cells in our eyes and essentially improve our vision? New research regarding fetal retinal cells and adult intrinsic cell transplantation provides an interesting approach to the conservation and regeneration of the essential cells of the eyes.

The movie "Minority Report" depicted a futuristic era in which eye scans are used for identification. In a drastic attempt to conceal his own identity and preserve his privacy, a character has an ocular transplant performed. In the movie it was successful; in real life such a procedure would probably be unheard of. This operation, although fictional, provides new hope and inspiration for persons suffering from ocular disorders and/or blindness. With current advancements in stem cell growth and transplantation, the reality of this fictional surgery and other options for persons with ocular neurodegenerative disorders are coming to the forefront of scientific capability.

Retinal transplant grafts have become the most promising solution for the prevention of ocular neurodegeneration. The use of adult and embryonic retinal sheets in the subretinal area has shown promising results with exceptional cell survival rates (1). Animal models such as rats have provided important information regarding the locations where retinal graft cells are most successful. Adeghate and Donah used pancreatic tissue fragments for transplantation into the anterior eye chamber in rats (1). These fragments were either implanted alone or coupled with brain tissue fragments. This study was largely performed to examine the survival and viability of intrinsic nerves detached from their original environment. Importantly, the pancreatic tissue contained intact 5-HT- and AChE-positive neurons when it was transplanted. Results displayed stable survival among the transplanted tissues. In addition, both the pancreatic and brain intrinsic tissues retained their ability to produce and store neurotransmitters and enzymes in the anterior chamber of the eye (1). These findings provide an innovative method for the reconstruction and restoration of neuronal connections and cells damaged or lost to neurodegenerative ocular diseases. The ability to successfully transplant and maintain the function of intrinsic tissue in the eye points to numerous possibilities for improving vision in the future.

While the majority of ocular cell transplantation research has been conducted in animals, a growing number of studies have used humans and human fetal cells; this is becoming the norm in this area of research. In a preliminary report by Radtke et al., the visual function of human patients was observed. Two patients (ages 23 and 72) who suffered from retinitis pigmentosa received human fetal retinal sheet transplants subretinally near the fovea of the eye. Results of these implantations revealed enhanced visual sensation in the visual field corresponding to the transplantation area (4). Both patients continued to experience these improvements in vision at 8 and 12 months after transplantation surgery (4). The success of this transplantation is encouraging because of the consistent visual improvements across widely differing ages. Although this investigation was for the most part a case study, it provides encouraging results and a technique that could potentially benefit persons who suffer from a wide range of diseases and disorders that reduce visual ability.

With the success of survival among fetal retinal grafts and the promising results of intrinsic tissue survival, it is imperative to know the functional ability of the transplanted cells and whether the necessary guidance cues are in place for them. The growth and navigation of optic axons in the eye was investigated using grafts of retinal, optic disc, optic tectum, and floor plate tissues. Each was implanted into organ-cultured embryonic chick eyes. The growth of axons into and out of each graft was monitored and cross-sectioned for study. Findings demonstrated that embryonic axons are able to grow into neonatal and adult retinal grafts, suggesting that older retina remains permissive for axonal growth (3). It was also noted that axons in retinal grafts may be rotated during growth in their peripheral-central orientation, because the retina's inherent polarity permits axon growth toward and away from the optic disc but not in a perpendicular fashion (3). In addition, it was revealed that centroperipheral growth cues operate locally rather than over long distances, and that the optic disc provides an exit for the axons from the retina (3). Transplants of the optic tectum, however, revealed minimal growth expansion (3).

Understanding the specific growth patterns of grafted retinal cells provides important information regarding the most useful location for implanting transplanted tissues. In addition, organ-culture studies such as the one above provide a stable baseline against which normal ocular development can be compared. Tracking grafted retinal tissue migration could be an incredibly useful tool not only for retinal and optic nerve restoration but also for studying the progression of specific neurodegenerative ocular diseases. By following the pattern of normal growth in cultured organs, then simulating ocular disorders within the cultured eyes, we may over time be able to speculate efficiently about the specific effects and causes of these disorders. Research in this area could be important for correct treatment and for the creation of effective pharmaceuticals for ocular diseases and disorders.

Enhanced vision, as well as the observation of retinal graft growth in culture, has provided ophthalmologists with invaluable information. However, it is also important to see both survival and normal growth patterns in vivo. In 1997, Sharma and Ehinger demonstrated successful retinal graft proliferation in non-human subjects (5). Rabbits were implanted with fragments of embryonic retinal tissue and monitored for tissue survival and efficient proliferation. The transplanted donor retinal tissue organized into rosettes after 1 day. In addition, these donor tissues were observed to continue proliferating in the host eye until they formed patterns that resembled the normally developing retina (5). It was concluded from these observations that the factors essential for normal proliferation in embryonic retinal cells were preserved during transplantation (5). It was additionally suggested that the embryonic cells followed the normal course of development, corresponding to postnatal development, when they were transplanted into a host eye (5).

A later study built on these findings and suggested that the thickness of the neuro-retinal tissue is an important growth factor. Using both partial-thickness and full-thickness grafts of adult and embryonic retinal tissue, survival in the subretinal region was observed in transplanted rabbits. It was found that 11 out of 13 full-thickness transplants were straight and laminated, with correct polarity and all normal retinal layers present (2). Overall, embryonic tissues had the highest survival rates at full thickness, while adult tissue grafts showed poor survival at full thickness (2). This research demonstrates the importance of graft thickness as one factor in successful transplantation survival (2). In addition, it suggests a possible advantage of embryonic tissue over adult retinal tissue in transplantation (2).

Although the successes of retinal grafting have been a cornerstone of ophthalmic research, the continued progress of this research is uncertain because of the specialized embryonic cells necessary for transplantation. With infant mortality at an all-time low and currently debated laws working to make late-term abortion illegal, I question whether this research will have an opportunity to develop further in the future. One alternative, however, is to use adult intrinsic tissue fragments for implantation. Adeghate et al.'s successes with both the survival and the functioning of these tissues suggest another means of restoring dead or damaged ocular tissues and nerves (1). Their use of the anterior chamber of the eye is important to note, as is their use of non-embryonic tissue. It is important to keep in mind that adult tissues did survive (2), though at a low rate compared to embryonic tissue, and that the tissue-friendly subretinal area has been claimed to be a suitable environment for the transplantation of almost any tissue fragment (2, 3 & 5). With these encouraging factors, the future of transplantation research seems promising.

While surgeons have not gone as far as the total ocular transplantation of "Minority Report," they are making major strides toward success. The successful incorporation of retinal grafts in the eye has given ophthalmic surgeons an opportunity to reverse some of the symptoms of ocular diseases once thought irreversible. By using this tool, many patients may be provided with the opportunity to regain one of their most valued senses: sight.

References


1) Brain Research Protocols

2) Science Direct: Experimental Eye Research

3) Science Direct: Developmental Biology


4) Radtke, N. D., Aramant, R. B., Seiler, M., & Petry, H. M. (1999) Preliminary report: indications of improved visual function after retinal sheet transplantation in retinitis pigmentosa patients. American Journal of Ophthalmology 128(3): 384-387.


5) Sharma, R.K. & Ehinger, B. (1997) Cell proliferation in retinal transplants. Cell Transplantation 6(2): 141-148.


What About Elizabeth Smart?
Name: Marissa Li
Date: 2003-05-08 12:27:09
Link to this Comment: 5634

What happens to abducted children when they are returned to their families? The mental state of a child is extremely variable as it is, and the added stress of abduction and eventual return to one's family is a significant life event. Is it actually possible to remedy an abduction experience, and if so, what is the healthiest way to do so? What if a member of the family committed the abduction? Does that change the form of mental rehabilitation that must be provided to children who have experienced trauma due to forced removal from their homes? Another question at hand is the idea of "Stockholm Syndrome," which holds that the abducted actually come to worship or defend their abductor(s). What are the specific effects on the brain when a child is abducted; is there a physical change that comes from possible brainwashing, and what about post-traumatic stress?

In the case of Elizabeth Smart, her father insisted that her abductors had brainwashed her. He claimed that "deprogramming" was necessary, and that though the family was more than grateful to get their child back, there was major concern for her mental state and her reaction to her experiences in the nine months in which she was missing (4). Because every case of child abduction involves different experiences and motivations, the whole process of recognizing a child's issues and attempting to sort them out is somewhat imprecise. However, there are a variety of techniques that medical and legal experts use to attempt to sort out what is going on in the mind of a child who has been abducted. In the case of abduction by a stranger, or at least by a person outside the close family or friends, there is a common inclination to suppose that the child has been brainwashed, perhaps sexually abused, and has been living in unknown conditions. This of course requires family counseling, individual and family therapy, and a variety of other techniques devised to allow the child to vocalize and come to terms with the experience. As with post-traumatic stress disorder related to war trauma or any prolonged period of intense, unfamiliar conditions, abduction can cause a child to be silent about the experience and withdraw from normal everyday interactions. The child could plausibly "re-experience the ordeal in the form of flashback episodes, memories, nightmares, or frightening thoughts, especially when they are exposed to events or objects reminiscent of the trauma" (2). People with the disorder tend to feel emotional numbness, bouts of depression, inability to sleep, anxiety, feelings of guilt, and a general fear of repeating the experience they have just been through (2).

Physical remnants of post-traumatic stress disorder include actual alterations in activity in the hippocampus, a brain structure critical to memory and emotion. These disruptions may cause the flashbacks that many abducted children deal with even weeks after they return home. Naturally, those mentally affected by the abduction may also experience changes in hormone levels and other chemical imbalances, causing them not only to relive the events but also to suffer ongoing effects on their everyday state of mind and interactions with others (2). To diagnose and treat a victim of abduction, therapy and familial support may not be the only answer. The time it takes for some cases to surface can range from hours to days to years, without any certainty that the child has come to terms with the experience. In some cases, parents would like to think that the best remedy for their child's recovery is for the child to forget the incident altogether; however, if the stress of the experience actually causes a chemical alteration, then this "forceful forgetting" may simply resurface later in life. Even without the impact of something like abduction, a young child's brain is in a constant state of absorption and chemical adjustment. The trauma that abduction causes can seriously scar a young person for life, almost like an imprint on the nervous system, a constant reminder, whether that "scar" is dormant or not.

If a member of the family is the abductor, another kind of emotional trauma comes into play. While abduction by a close family member or friend may initially feel less threatening to the child victim or to the rest of the family left behind, for some children it can be even more emotionally distressing. Because family abduction is not technically a "crime against the child," the legal system has fewer rights to interfere on behalf of the child and prosecute the parent involved. This of course depends on circumstance, such as whether the abductor was a consistent presence in the child's life or merely a peripheral figure, which then classifies the case more closely with abduction by strangers (3). Abduction by family members presents a troubling paradox: the child should feel safe with a parent or family member, but is instead put in a position that compromises their trust in a familiar face, because removal from one's normal life is distressing to any child. If children cannot trust their own family, then whom can they rely on? That in itself would seem even more frightening, and more of a hindrance to forgetting or getting over one's abduction experience, than if it had taken place among strangers.

Abduction can take more than one form. A child can fear their abductors, be sexually assaulted by them, or even grow to care for them, especially over long spans of time. Many experts define the "Stockholm Syndrome" as victims feeling cared for by their abductors and therefore coming to defend their captors (1). "The term, Stockholm Syndrome, was coined in the early 70's to describe the puzzling reactions of four bank employees to their captor. On August 23, 1973, three women and one man were taken hostage in one of the largest banks in Stockholm. They were held for six days by two ex-convicts who threatened their lives but also showed them kindness. To the world's surprise, all of the hostages strongly resisted the government's efforts to rescue them and were quite eager to defend their captors. Indeed, several months after the police saved the hostages, they still had warm feelings for the men who threatened their lives. Two of the women eventually got engaged to the captors" (1). This sort of incident is actually common among abduction cases, especially those involving sick obsessions with the need or want of children that turn into the forceful removal of a child from their home. "Stock