Bio 202, Spring 2003 Forum



Gaucher Disease: A Rarity in Three Types
Name: Rachel Sin
Date: 2003-02-21 16:24:07
Link to this Comment: 4740



Biology 202
2003 First Web Paper
On Serendip

Ethnicity can provide individuals with wonderful traditions and celebrations of one's heritage. However, for some Ashkenazi Jews, ethnicity brings them much more than they bargained for: a rare condition causing a wide array of liver, lung, spleen, bone, and blood problems. Ethnicity brings them Type I Gaucher Disease. Type II and Type III are the two other forms of this rare genetic condition, and they occur at equal frequencies in all ethnic groups. Gaucher Disease was first described in 1882 by Doctor Philippe Charles Ernest Gaucher of France (2). Type I, the most frequently seen form of the disease, can affect people of multiple ethnic backgrounds. However, its prevalence is by far the greatest in the Ashkenazi Jewish population, making it the most common genetic disease within this ethnic group.

While Type I Gaucher Disease is non-neuronopathic (not affecting the nervous system), the other two types are neuronopathic. Yet even though the three types of Gaucher produce different symptoms, all three result from the same cause: a lack of the glucocerebrosidase enzyme. Glucocerebrosidase functions to break down the compound glucocerebroside, a fatty compound which is usually stored in all cells of the body in very small amounts. In Gaucher patients, an excess of glucocerebroside builds up in the body and is stored abnormally in lysosomes, the cell's storage compartments (3). Typically, macrophages are able to aid in the degradation of glucocerebroside. However, due to the lack of glucocerebrosidase in Gaucher patients, glucocerebroside stays in the lysosomes, preventing macrophages from acting upon it. Macrophages which are enlarged and contain an abnormal buildup of glucocerebroside are known as Gaucher cells (1). In affected patients, Gaucher cells can be found in the bone marrow, liver, and spleen (2).

Each of the three types of Gaucher Disease affects many systems of the body. Type I, the mildest and most frequently seen form, is the only form of Gaucher which does not affect the nervous system. The average age of onset for Type I Gaucher is 21 years (6). Approximately 1 in 10 Ashkenazi Jews is heterozygous for Type I. Although the condition is non-neuronopathic, patients can exhibit a wide array of symptoms, ranging from increased spleen and liver volume and lung compression to a variety of bone problems (including lesions, bone tissue death, and pain), anemia, and easy bruising. Individuals with Type I Gaucher Disease typically have a life span of 6 to 80 years (5). Within families, the severity of Type I varies immensely, making it impossible to determine which family members will suffer the most severe symptoms. Gaucher Disease is different from most other autosomal recessive conditions in that one of the nonfunctional glucocerebrosidase genes (which are characteristic of Gaucher Disease) is passed on to each of an affected patient's offspring, making them all carriers. Among Ashkenazi Jews, it has been estimated that around 1 in 450 has two mutated copies of the glucocerebrosidase gene (4).
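
Those two cited figures, roughly 1 in 10 carriers and 1 in 450 affected, can be sanity-checked with a simple Hardy-Weinberg calculation. The short sketch below is illustrative only; it assumes random mating and treats all mutant alleles as a single class, neither of which comes from the cited sources.

```python
import math

# Hardy-Weinberg consistency check on the cited Ashkenazi figures.
# Illustrative assumptions: random mating, one pooled class of mutant alleles.
carrier_freq = 1 / 10   # cited heterozygote (2pq) frequency

# Solve 2q(1 - q) = carrier_freq for the mutant allele frequency q.
q = (1 - math.sqrt(1 - 2 * carrier_freq)) / 2
affected_freq = q ** 2  # expected frequency of two-mutant-copy genotypes

print(f"q = {q:.4f}, predicted affected rate = 1 in {1 / affected_freq:.0f}")
# Prints q = 0.0528 and a predicted rate of roughly 1 in 360, the same
# order of magnitude as the cited 1 in 450.
```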

While Type I Gaucher is by far the most common form of the disease, Type II is exceedingly rare; fewer than 1 in 100,000 newborns have Type II Gaucher. Its onset occurs in infancy, and unlike Type I, Type II Gaucher is highly neuronopathic, causing drastic neurological problems, usually by age one. Type II typically leads to death by the age of two because of the severity of its effects on the nervous system (3). In addition to the severe neurological defects, Type II also causes the spleen and liver enlargement seen in Type I. Unlike Type I, however, it is not concentrated in the Ashkenazi Jewish population or in any other particular ethnic group (1).

Like Type II Gaucher Disease, Type III Gaucher, also known as the subacute neuronopathic form, is exceptionally rare. It usually strikes during childhood and accounts for less than 5% of all Gaucher Disease cases. It is estimated that fewer than 1 in 100,000 live births worldwide (with the exception of Scandinavia) result in this type of the disease. While some patients afflicted with this type of Gaucher Disease die during childhood, most live to middle age and some even beyond (6). This second neuronopathic form was also once called juvenile Gaucher Disease, and it progresses the most slowly of the three types. As with Type II, Type III is not associated with a specific ethnicity, yet many Type III Gaucher sufferers have been found to be concentrated in Sweden (1). Nervous system symptoms in patients with this form include myoclonic seizures, a lack of coordination, and mental degeneration. The form is more severe than Type I; as in Type I, symptoms also include enlarged liver and spleen and bone disease. However, although Type III is neuronopathic, its effects on the nervous system are not nearly as drastic as those of Type II, rendering it the less severe of the two neuronopathic forms (4).

The three forms of Gaucher Disease all result from a lack of production of the glucocerebrosidase enzyme, and all have drastic effects on numerous body systems. Type I Gaucher is far more common than the other two, is non-neuronopathic, and is concentrated among Eastern European (Ashkenazi) Jews. Conversely, the other two types of Gaucher are much rarer, are neuronopathic and panethnic, and produce symptoms much more severe than those seen in Type I.

References

WWW Sources

1) Living With Gaucher Disease, from Massachusetts General Hospital

2) Gaucher Disease in Ashkenazic Jews

3) The NTSAD Diseases Family: Gaucher Disease, from NTSAD

4) Gaucher Disease – Rare Disorders-Medstudent, from Medstudents.com

5) Scientific American: Ask the Experts: Medicine - What is Gaucher Disease? Are there treatments?, from Scientific American

6) Health Topics A-Z: Gaucher Disease, from Ahealthyme.com


How can observed behavior differences in individua
Name: Luz Martin
Date: 2003-02-23 18:46:47
Link to this Comment: 4766



Biology 202
2003 First Web Paper
On Serendip

In the effort to understand the human brain, the use of behavior as an indicator of brain function has led researchers to focus on individuals with Williams syndrome (1). The limitations, as well as the highly developed features, in the behavior of people with this syndrome have attracted investigators. The unique behavior displayed has led many to believe that it may be possible to unravel brain organization by locating the damaged areas in individuals with Williams syndrome and connecting them to the behavior observed in these individuals (1).

Individuals with Williams syndrome are all unique in their behavior, but most share common features. When given IQ tests, they tend to receive scores that are below average (1). The most common feature is mental retardation, with about 95% having IQs in the mild to moderate range of retardation (3). Along with low IQ scores, these individuals tend to have limited writing and arithmetic skills. Some individuals display well-developed abilities to compose stories: they seem comfortable narrating and manipulating their voice to express their story (3). They are known for their musical talents and spoken language abilities, as well as their sociable tendencies (1).

Research on individuals with Williams syndrome has led to many discoveries. One of the biggest puzzle pieces uncovered is the genetic cause of the disorder (4). A small piece missing from one of the two copies of chromosome 7 was found to be the cause of Williams syndrome (4). Individuals lose about 15 or more genes contained in this missing piece. About 95% of individuals with Williams syndrome display this deletion (1). Some features of the disorder include problems with the heart, blood vessels, and kidneys, as well as dental problems (4). Connecting genes to physical brain abnormalities, and ultimately to the observed behavior, could produce an explanation of how the developing brain is organized.

When comparing "normal" children with children with Williams syndrome, behavioral differences can be observed (2). Along with the behavioral differences, brain structure differences have also been observed (2). Physically, the brain of an individual with the syndrome is about 80% of the volume of the "normal" brain (2). The Williams syndrome brain is also characterized by a reduction of cerebral gray matter and lower cell density (2).

Knowing that there are physical differences in brain structure and development, how can we assume that the brain of a child with the disorder is simply a damaged version of a normal child's brain if their genetic information is different (2)?

The difference in genetic information leads to a different development (2). There must be a difference in the growth of the brain in the womb as well as in childhood. With a different set of instructions for development, is it really surprising that the brains of individuals with Williams syndrome are different?

Williams syndrome is a disease marked by the deletion of a section of chromosome 7 that may involve several more genes (4). Individual genes do not explain the whole story leading to behavioral differences. Genes code for proteins, but not necessarily for the behavior displayed in a person (2). Recognizing that multiple genes are involved in one function, and that one gene may be part of many functions, makes it harder to claim that the deletion of a small piece of chromosome is entirely to blame for the condition (2). Keeping in mind the contribution of the genetic information, a person with Williams syndrome has a different developmental path than a person with a normal brain.

When comparing the brains of adults with Williams syndrome with those of "normal" adults, there is an assumption that the relationship between behavior and brain structure can be compared between the two. If we suppose that behavior is a product of the brain as a whole, and not of single divisions of the brain, then it could be that the brain of a person with Williams syndrome produces behaviors similar to those of the normal brain. The behavior may not be the product of the same regions as in a normal brain, because we have observed that their brain is different (2).

"In other words, researchers cannot use the end state of development to make claims about the start state (5)."

This might mean that we are dealing with a brain that is not necessarily damaged, but has simply physically developed differently. We cannot simply compare a "normal" brain with an abnormal brain when they are defined by different genetic information. The brain is much more complex. The same behavior can be produced without having the same brain.

References


1) Lenhoff, Howard M., Paul P. Wang, Frank Greenberg, and Ursula Bellugi. (1997, December). Williams Syndrome and the Brain. Scientific American, 68-73. From http://www.sciamarchive.org

2) Institute of Child Health, Crucial Differences Between Developmental Cognitive Neuroscience and Adult Neuropsychology, by Annette Karmiloff-Smith.

3) Center for Research in Language, Williams Syndrome: An Unusual Neuropsychological Profile, by Ursula Bellugi, Paul P. Wang, and Terry L. Jernigan.

4) National Center for Biotechnology Information, Genes and Disease information.

5) Guardian Unlimited newspaper, summary of a lecture by Annette Karmiloff-Smith.


Guillain-Barre Syndrome
Name: Amelia Tur
Date: 2003-02-23 21:05:08
Link to this Comment: 4771



Biology 202
2003 First Web Paper
On Serendip

Most people do not expect to become paralyzed during the course of their lives. Barring injury to the nervous system or debilitating disease, one does not expect to lose motor function. In spite of these expectations, people of all races, sexes, ages, and classes can be afflicted with a debilitating syndrome that can lead to difficulty in walking or even to temporary paralysis in the most severe cases. This syndrome is known commonly as Guillain-Barre Syndrome, or GBS.

GBS is an inflammatory disorder of the peripheral nerves. When the syndrome occurs, the body's peripheral nerves become inflamed and cease to work due to an unknown cause. (1) (3) Around 50% of the cases of GBS appear after a bacterial or viral infection. (1) The syndrome can also appear after surgery or vaccination. GBS can appear hours or days after these incidences or can even take up to three or four weeks to appear. (4) Some theories propose that GBS is caused by a mechanism of the autoimmune system that prompts antibodies and white blood cells to attack the covering and insulation of the nerve cells, which leads to abnormal sensation. GBS is considered a syndrome rather than a disease, because its description is based on a set of symptoms reported by the patient to her doctor. (5)

GBS is also known as acute inflammatory demyelinating polyneuropathy and as Landry's ascending paralysis, after Jean B. O. Landry, a French physician who described a disorder that "paralyzed the legs, arms, neck, and breathing muscles of the chest." (4) (1) GBS was named after the French physicians Georges Guillain and Jean Alexandre Barre who, along with fellow physician Andre Strohl, described the differences in the spinal fluid of those who suffered from the syndrome. (5) The syndrome affects one to two people per 100,000 in the United States, making it the most common cause of rapidly acquired paralysis in this country. (1) Some patients initially diagnosed with GBS are later diagnosed with chronic inflammatory demyelinating poly[radiculo]neuropathy, or CIDP. (Sometimes radiculo is left out of the name, hence the brackets.) CIDP was initially known as "chronic GBS," but is now widely considered a related condition. (3)

Although patients can be preliminarily diagnosed with the syndrome based on an analysis of the physical symptoms, two tests can be used to confirm the diagnosis of GBS. The first is a lumbar puncture, or spinal tap, to obtain a small amount of spinal fluid for analysis; the spinal fluid of those with GBS often contains more protein than usual. The second is an electromyogram (EMG), an electrical measure of nerve conduction and muscle activity. (3) (4)

The symptoms of GBS begin with numbness and tingling in the fingers and toes, leading to weakness in the arms, legs, face, and breathing muscles. The weakness begins in the lower portion of the body and rapidly moves upward, eventually leading to loss of sensation in the affected areas; although a number of cases are mild, temporary limb paralysis is not uncommon. In the milder cases, the numbness may cause only difficulty in walking, "requiring sticks, crutches, or a walking frame." (3) Pain is not uncommon, and abnormal sensations, such as the feeling of "pins-and-needles," can affect both sides of the body equally. Loss of reflexes, for example the knee jerk, is common. (1) (3) Most patients are at their weakest point about two weeks after the onset of the symptoms, and around 90% will have reached their weakest state three weeks after onset. (4)

The progression of the syndrome is unpredictable, especially in the early stages, and patients may be put in the hospital preemptively to be cared for. (1) In the more severe cases, patients require hospitalization and a stay in the intensive care unit in the early stages of the disease, especially if they need the assistance of a respirator, or ventilator, to breathe. In about a quarter of GBS cases, the paralysis moves up to the patient's chest, requiring this assistance from a respirator. The face and throat may also be affected by the paralysis, necessitating a feeding tube through the nose or directly into the stomach. (3) The period during which the patient is afflicted with the syndrome can be extremely long, and long-term hospital stays are not uncommon. Care is generally confined to supportive measures as the condition improves spontaneously, and efforts are made to speed recovery of nerve function, including physical therapy and hydrotherapy. Most patients eventually recover and go on to lead normal, or very near normal, lives. Some patients, however, may remain partially paralyzed and require the use of a wheelchair for a long period of time. (1) (3) Around 30% of GBS patients still feel residual weakness three years after the onset of symptoms. (4) Death due to GBS is uncommon with modern medical practices, but it does occur in about 5% of cases. (3) For those with CIDP, the course of the disease is generally longer than for those with GBS, and the recovery period can follow a recovery/relapse cycle, but these patients are less likely to suffer respiratory failure. (3)

The syndrome came to brief public attention in 1976, when a number of people vaccinated against Swine Flu were stricken with the syndrome. (1) A Minnesota doctor had reported to his local health board that a man he had inoculated for Swine Flu had developed GBS. The health board reported this to the Center for Disease Control, which was running the Swine Flu vaccination program, and the CDC began to ask doctors to report cases of newly diagnosed GBS. It did not take long for doctors to begin to associate GBS with the vaccine, even though the CDC tried not to convey this impression. As GBS is difficult to diagnose and has many symptoms similar to other neurological diseases, doctors at the time may have been more likely to diagnose muscle weakness as GBS in patients who had received the vaccine than in patients who had not. As the CDC could not say with certainty that the flu vaccine was not causing the high number of cases of GBS, it announced the cessation of the vaccination program on December 16, 1976. To this day, some people avoid flu vaccines because of this one incident of extreme side effects. (2)

In the end, a person with GBS may eventually fully recover her motor function to the level it was before her affliction with the syndrome. While she may be left with no physical reminders of the disorder, she and her family and friends will always remember her sudden incapacitation. The emotional scars from such an incident can last a lifetime.


References

1) Guillain-Barre Syndrome International - GBS: An Overview, an overview of GBS on the website of the Guillain-Barre Syndrome Foundation International, based in Wynnewood, PA.

2) Kolata, Gina. Flu: The Story of the Great Influenza Pandemic of 1918 and the Search for the Virus That Caused It. New York: Simon & Schuster. Pgs. 167-185.

3) Guillain-Barré Support Group, the homepage of the Guillain-Barre Syndrome Support Group based in the United Kingdom. The organization disseminates information to sufferers of the syndrome and their family and friends.

4) NINDS Guillain-Barre Information Page, National Institute of Neurological Disorders and Stroke information page on GBS.

5) GBS - An Overview For The Layperson, an overview of GBS written by Dr. Joel S. Steinberg, a neurologist who once suffered from GBS.


A Critical investigation of the etiology of Devel
Name: Michelle C
Date: 2003-02-23 22:17:25
Link to this Comment: 4776



Biology 202
2003 First Web Paper
On Serendip

The long-disputed debate about the primary cause of dyslexia is still very much alive in the field of psychology. Dyslexia is commonly characterized as a reading and writing impairment that affects around 5% of the global population. The disorder has frequently been hypothesized to be the result of various sensory malfunctions. For over a decade, studies have made major contributions to the disorder's etiology; however, scientists are still unclear about its specific cause. Initially, dyslexia was thought to be a reading disorder of children and adults (1). Later it was suggested to have both a visual and a writing component, characterizing it as more of a learning disability, one that keeps people of normal intelligence from performing to their fullest potential (5). In current research, cognitive and biological perspectives have often been developed independently of one another, failing to recognize their respective positions within the disorder's etiology.

The Phonological Deficit and Magnocellular theories are two of the most dominant theories in dyslexia research. Various other theories have been suggested to explain the nature and origin of dyslexia; however, they have often served as additional support for either the phonological or the magnocellular theory. The Double Deficit theory suggested that dyslexic symptoms were the result of speed-of-processing deficits (7). The Genomic theory posited that dyslexia is a highly heritable disorder that can be localized to a specific genetic component. Finally, the Cerebellar Deficit theory suggested that dyslexia is the result of an abnormal cerebellum (2). With the constant debate between the biological and the cognitive nature of the disorder's cause, scientists have had major problems explaining its etiology on the basis of any one individual theory. While it is important to locate the specific cause of the disorder's manifestations, it seems that the most effective results would be achieved by a collective approach, one encompassing the ideas put forth by both the cognitive and the neurological theories existing in current research on the etiology of dyslexia.

The phonological deficit hypothesis of dyslexia is one of the longest-standing explanations in psychological research. The theory was coined by a man known as the father of dyslexia, Pringle Morgan, in 1896. Morgan viewed reading as a process that critically involves the segmentation of text into graphemes (1). These graphemes serve as the earliest precursors to phonemes (the smallest units of language), such that grapheme-to-phoneme conversion yields the whole sound of a word (1, 5). The process is said to require that a reader both assemble and address a word's phonology. Dyslexics are said to have phonological deficits which cause difficulties with phonemic representation; they often fare worse than others when it comes to mapping sounds onto letters in the brain, and with phonemic recall (1, 5). Deficits in dyslexics' ability to retain short-term word memory, and problems with segmentation of words into phonemes (e.g., auditory discrimination), lend support to these ideas (5).

A study by Petra Georgiewa and colleagues investigated the phonological theory using advanced scientific techniques. The subjects were 17 adolescents: 9 dyslexics and 8 controls. Dyslexic subjects were diagnosed based on the discrepancy between non-verbal IQ and reading/spelling performance (1). All subjects were matched for age and intelligence level (IQ > 85), and the mean age was 13 years. During the study each child was given a series of word and non-word displays on a computer screen to read silently. The experimental condition contained words and non-words, and the control condition contained words and non-words marked with asterisks. All words were one- or three-syllable nouns drawn from a basic ten-year-old's vocabulary. Words and non-words were presented at a rate of one per 2000 milliseconds and remained on the screen for 1800 milliseconds for the completion of the task. A blocked fMRI design was used, so that the tasks consisted of 8 blocked segments: four blocks of experimental reading and four blocks of control-condition reading, alternated over the 20-minute duration of the experiment. fMRI (functional magnetic resonance imaging) and ERPs (event-related potentials) were used to compare differences between the dyslexic and control groups' performance. Additionally, behavioral observations were collected before the experiment using a similar word blocking, with subjects reading aloud; this allowed both groups to become familiar with the task and provided a baseline.

Subjects silently read similar linguistic stimuli for both the fMRI and the ERP data collection. The ERPs, however, were measured in a separate session in which the stimuli and control words were displayed in a pseudorandomized order, all presented in one session (1). Five images were taken from every 2.5-minute session, for a total of 40 images for fMRI analysis. The ERP data were collected by simultaneously recorded electrodes at time periods critical for linguistic processing (100-180 ms, 180-250 ms, 250-350 ms, 350-500 ms, and 500-600 ms after word presentation). All ERP data were analyzed over a 2000 ms timeframe.
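
To make the windowed ERP analysis concrete, here is a minimal sketch of computing mean amplitude within the time windows listed above. The 1 kHz sampling rate, the single-channel epoch, and the random placeholder data are illustrative assumptions, not details taken from the study.

```python
import numpy as np

windows_ms = [(100, 180), (180, 250), (250, 350), (350, 500), (500, 600)]
fs = 1000                       # assumed sampling rate in Hz (1 sample per ms)
epoch = np.random.randn(2000)   # placeholder 2000 ms epoch, time-locked to word onset

for start, stop in windows_ms:
    i0, i1 = start * fs // 1000, stop * fs // 1000
    mean_amp = epoch[i0:i1].mean()   # mean amplitude within the window
    print(f"{start}-{stop} ms: {mean_amp:+.3f} (arbitrary units)")
```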

Results from the fMRI data revealed significant activation in the left inferior frontal gyrus (IFG) for control subjects during the task (1). The dyslexic subjects, however, displayed three areas of activation: the left IFG, the posterior left thalamus, and part of the left caudate nucleus (1). In addition, dyslexics displayed significant hyperactivation in Broca's area, in the anterior insula, and in the lingual gyrus (right temporo-occipital region) in comparison to control subjects (1).

The data collected in Georgiewa's study provided significant support for the phonological hypothesis. Data from the behavioral study demonstrated that dyslexics had increased problems with phonological decoding (most decoded at a slower rate, which required a greater amount of effort), seen even during out-loud reading tasks. Group differences in the activation of Broca's area are said to reflect increased effort in phonological coding, due to Broca's area's involvement in phonological decoding and lexical identification, that is, piecemeal/assembled phonology (1). Lastly, the activation of three separate areas of the brain during phonological decoding is suggestive of insufficient sensory function and of possible compensatory efforts to do more efficiently a task that is generally localized to one area of the brain.

While the phonological theory of dyslexia provided a sufficient explanation of the etiology of dyslexia within a cognitive framework, the magnocellular theory provided the field with a grounded biological origin for the cognitive manifestations observed in dyslexia. This theory did not refute the findings of the phonological deficit theory; rather, it tried to validate them by providing neurological evidence linking cognitive symptomologies to brain abnormalities. Moreover, this theory suggested that dyslexics suffer from an auditory deficit, which can also be seen in the manifestations of the disorder. The magnocellular theory was coined by Stein in 1997 and was based on the idea that the visual system is divided into two neural pathways: the magnocellular and the parvocellular pathways (6). It is suggested that dyslexics suffer from abnormalities in the magnocellular pathway that cause visual and binocular difficulties (6). The impairments of these neural pathways are believed to cause auditory and visual deficits. With both auditory and visual deficits in play, the magnocellular theory suggests that dyslexics have difficulty processing the rapid temporal properties of sounds, which leads to phonological deficits (5).

A 2002 study of temporal processing by Rey, De Martino, Espesser, and Habib was done in light of both the phonological and magnocellular theories and lends support to the neurological temporal deficit in dyslexics described by the magnocellular theory. This study assessed 13 developmental dyslexic children ranging from 9.8 to 13 years of age and 10 normal readers ranging from 11.5 to 13 years. Dyslexic subjects attended a school specialized for dyslexics, while controls attended a regular junior high school. Subjects were matched for age and IQ. During the experiment, subjects were asked to complete a Temporal Order Judgment (TOJ) task based on the succession of two consonants (p/s) within a cluster; in addition, there were two sections in which the stimuli were artificially shortened or lengthened during the TOJ task (4).
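
The kind of duration manipulation described, artificially shortening or lengthening a consonant, can be sketched as naive resampling of a waveform segment. The sampling rate, segment position, and stretch factors below are illustrative assumptions; real speech stimuli would use pitch-preserving time-stretching rather than plain interpolation.

```python
import numpy as np

def stretch_segment(signal, start, stop, factor):
    """Time-stretch signal[start:stop] by `factor` using linear interpolation
    (factor < 1 shortens the segment, factor > 1 lengthens it)."""
    segment = signal[start:stop]
    new_len = max(2, int(len(segment) * factor))
    query = np.linspace(0, len(segment) - 1, num=new_len)
    stretched = np.interp(query, np.arange(len(segment)), segment)
    return np.concatenate([signal[:start], stretched, signal[stop:]])

# Toy example: lengthen a 40 ms span to 80 ms at an assumed 16 kHz rate.
fs = 16000
wave = np.random.randn(fs)   # 1 second placeholder waveform
out = stretch_segment(wave, start=1000, stop=1000 + int(0.040 * fs), factor=2.0)
print(len(wave), len(out))   # the output is 640 samples (40 ms) longer
```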

Results from the experiment supported a possible deficit in temporal functioning as a component of dyslexia. Dyslexics did significantly worse on the consonant brevity trials (shortening), performed somewhat better on the consonant lengthening, and displayed normal performance (for their age and intelligence) when given 200% lengthening of consonants (4). The absence of significant effects of consonant ordering suggested that temporal processing difficulties impair dyslexic students' ability to perform to the best of their ability. Similar findings from Bradlow's 1999 study support these deficits. Bradlow's electrophysiological manipulations (slowing or lengthening) of the presentation of each phoneme along the da-ga continuum revealed grave differences between dyslexic and normal subjects. It was found that the common deficits of slowed temporal processing were minimized when phonemic presentation time was increased (from 40 milliseconds to 80 milliseconds) (4). It may be inferred from both the Rey et al. and Bradlow findings that there is clinical evidence for a magnocellular deficit in dyslexics, one that includes phonological deficiencies but also recognizes additional temporal processing deficits.

Though both theories provide acceptable rationales for possible causes of dyslexia, it is difficult to give preference to either. While the phonological deficit theory addresses important cognitive impairments detected in the majority of adults and children who suffer from dyslexia, it fails to account for the many other legitimate symptomologies displayed in the greater population of dyslexics (7). To address this shortcoming, in 2002 Maryanne Wolf and Patricia Bowers proposed the double deficit theory. This theory proposed two subtypes of dyslexic readers: one with a single deficit (naming-speed or phonological deficits) and another with a double deficit (both naming-speed and phonological deficits) (7). Through their research they discovered that phonological deficits were just one source of reading dysfunction, and that naming-speed deficits were also a major source of problems in dyslexic readers (7). Further, it was suggested that dyslexics with double deficits were among the worst readers, due to their limited compensatory routes for reading efficiently. This research was motivated by a growing number of dyslexic children who had not been diagnosed because their symptomologies were not "just" phonological in nature. Though the phonological deficit theory provides us with a firm idea of the core manifestations of dyslexia, it should be revisited to account for the various subtypes which occur throughout the spectrum of dyslexia. Accounting for these variations would make clinical diagnosis a more flexible process that acknowledges a wider variety of co-occurring symptomologies.

Similarly, the magnocellular theory, though useful in localizing cognitive symptoms to core neurological abnormalities, neglects to address common symptoms displayed in dyslexics which are not directly the result of a visual or auditory pathway abnormality. Difficulties with handwriting, clumsiness, and trouble automating skills are among the commonly found symptoms present in over 90% of dyslexics (2). Many of these symptoms have been proposed to be the result of a cerebellar abnormality which affects dyslexics' fine motor control, balance, and ability to automate skills. From this idea, the researchers Nicolson, Fawcett, and Dean proposed the cerebellar deficit theory as an additional cause of dyslexia, and found promising results in a 2001 experiment suggesting that the core deficits of reading, writing, and spelling could all be linked to abnormal functioning of the cerebellum (2). They further proposed that cerebellar deficits encompass and address phonological deficits while paralleling magnocellular abnormalities, suggesting that some dyslexic children display either or both magnocellular and cerebellar abnormalities, which serve as a core source of the manifestations of their phonological deficits.

In light of the current research on the double deficit and cerebellar deficit theories, it is necessary to revisit the traditional theories of dyslexia. Though both the phonological and magnocellular deficit theories provide key components that are helpful in the long-standing investigation of dyslexia, both are lacking in major ways. Using the double deficit and cerebellar theories in conjunction with these dominant theoretical models, however, may be useful in obtaining a more holistic understanding of the true nature of the disorder. Accounting for the major phonological deficits with the phonological theory, for the disorder's neurological origin through both the magnocellular and cerebellar theories, and for the various subtypes of dyslexia through the double deficit theory would be the ideal strategy for an efficient investigation of the etiology of dyslexia. In conclusion, a model that incorporates these four facets of investigation could potentially advance the research and treatment of dyslexia by broadening the current diagnostic spectrum and evoking varying styles of intervention that target the multitude of dyslexic symptoms and subtypes.

References

1) Sciencedirect, Great article on brain imaging in subjects who suffer from dyslexia.

2) Sciencedirect, Great article which explains the cerebellar deficit hypothesis of dyslexia.

3) Infotrac, Article about cerebellar deficits in dyslexics which lends support to the cerebellar deficit hypothesis.

4) Sciencedirect, Article which gives an in-depth explanation of the temporal processing and phonological deficit theories of dyslexia.

5) Nature.com, Article discussing the two most prevalent theories of dyslexia: the magnocellular and phonological theories.

7) Infotrac, Article describing the double deficit hypothesis of dyslexia.

NON-WEB REFERENCES

6) Stein, J. & Walsh, V. "To see but not read; the magnocellular theory of dyslexia." TINS, v20, 1997, pages 147-152.


Asperger's Syndrome: What is it and What Happens N
Name: Marissa Li
Date: 2003-02-24 02:45:40
Link to this Comment: 4783

Typically in society today, the over-diagnosis of disorders such as Attention Deficit Disorder (ADD) and other behavior disorders is very common. One disorder that has only recently been brought to the table is Asperger's Syndrome. Only added to the Diagnostic and Statistical Manual of Mental Disorders (DSM) in 1994, Asperger's is a neurological difference that seems to be stirring up everywhere among candidates assumed to be either socially senseless or mildly autistic (3). It is a disorder characterized by different extremes in behavior, and it does not surface until around ages five to nine, if it is diagnosed at all; many times it is not. Though it is not a widely studied disorder and is still new in the "autism spectrum," it is becoming more and more commonly diagnosed or looked into, especially among engineers and computer programmers (1).

Asperger's was first recognized by the Viennese scientist Hans Asperger and the child psychiatrist Leo Kanner in 1943 and 1944. Each published papers describing patterns of behavior among children with normal intelligence and vocabulary who were somewhat awkward and unaware socially and in communication. Though the idea was dropped soon after, the study of autism and the like continued, with a consideration of "high functioning" autistics who may function in yet another realm (later to be diagnosed with Asperger's) (1).

Those with Asperger's tend to exhibit signs of social detachment; they are sensitive to their surroundings, and yet cannot interpret their own "proper body space" (1). They tend to preoccupy themselves with distinct and specific entities within an object or an idea, and they are obsessively organized, functioning within a world of obsessive uniformity. For example, the child or adult will insist on performing the same tasks daily in order for their life to run smoothly. Asperger's so far is more commonly traced in males, and there are a variety of cases where children actually appear to be "little professors" or mini-geniuses because they are so internalized, focused, and brilliant; as with an idiot savant, the brilliance tends to show up in diminutive and specific areas of intellect (1). Typical Asperger's cases have difficulty expressing empathy, and though they appear to live normal lives, they have severe social interaction issues and cannot stay within the realm of others for long periods of time without floating into another thought process. There is no clinical indication of cognitive defects or slowing, and yet people with Asperger's function in another world internally, within their own conscience, and perhaps beyond it.

Since Asperger's is somewhat new within the medical field, actual physical research on the brain and nervous system is not very extensive or proven. There are studies that suppose that the right hemisphere of the brain is naturally less functional in Asperger's cases, part of which is a decrease in motor skills and an all-around physical awkwardness (5). Asperger's cases have extensive vocabularies at times, but really no grasp of or reaction to language, caused by a malfunction in the ability to empathize with anything or grapple with another's speech patterns, leading to many interpretations of such behavior. Many Asperger's cases have "behavioral problems" for this very reason (5). Their actions are not accountable because they are not necessarily aware of their surroundings.

According to Asperger himself, "in the course of development, certain features predominate or recede, so that the problems presented change considerably. Nevertheless, the essential aspects of the problem remain unchanged. In early childhood there are the difficulties in learning simple practical skills and in social adaptation. These difficulties arise out of the same disturbance which at school age cause learning and conduct problems, in adolescence job and performance problems, and in adulthood social and marital conflicts" (3). Though the research on the brain and nervous system in Asperger's is new and insufficient, it does show signs of a mix-up in consciousness as well as an actual alteration within the brain and nervous system itself.

There are several theories out now which speculate that Asperger's is linked to many persons who are very bright and dominant in the field of computers and engineering. "It's a familiar joke in the industry that many of the hardcore programmers in IT strongholds like Intel, Adobe, and Silicon Graphics - coming to work early, leaving late, sucking down Big Gulps in their cubicles while they code for hours - are residing somewhere in Asperger's domain. Bill Gates is regularly diagnosed in the press: His single-minded focus on technical minutiae, rocking motions, and flat tone of voice are all suggestive of an adult with some trace of the disorder." "Replacing the hubbub of the traditional office with a screen and an email address inserts a controllable interface between a programmer and the chaos of everyday life. Flattened workplace hierarchies are more comfortable for those who find it hard to read social cues. A WYSIWYG world, where respect and rewards are based strictly on merit, is an Asperger's dream" (3).

So, is everyone who exhibits loner behavior and an inclination towards computer programming a candidate for Asperger's? In today's society we have so many children on Ritalin for ADD, ADHD, and various other social and behavioral problems. Perhaps they all have Asperger's, but what is wrong with that? According to UCSF neurologist Kirk Wilhelmson, "If we could eliminate the genes for things like autism, I think it would be disastrous. The healthiest state for a gene pool is maximum diversity of things that might be good" (3). The idea behind the rise of autistic qualities in certain areas is attributed to certain mating patterns: when a computer programmer marries an engineer and both exhibit Asperger's-like tendencies, it is no surprise that their offspring would do the same. Whether or not this is a terrible outcome is questionable, because the people who display these qualities are extremely beneficial to today's technological world. Perhaps the way of the world is for some to be social creatures and others to be the techies. The thought of genetic predetermination is somewhat scary, but who is to say that one is not satisfied working at a computer all day? Most likely Asperger's has always been in existence, from monks to Bill Gates. They have all made their contributions to society, so can that really be classified as a "disorder"? Research must be done to clarify the actual physical brain and nervous system in Asperger's, but on a general level, in society, it seems to be affecting many, though in ways that are not necessarily detrimental.

WWW Resources

1) Online Asperger Syndrome Information and Support, basics on Asperger's

2) Worcester Polytechnic

3) The Geek Syndrome, article on Silicon Valley and the abundance of Asperger's

4) The AQ Test, a test for Asperger's

5) Maryland Asperger Advocacy and Support Group, basic information and medical information on Asperger's


Stress, Sports and Performance
Name: Arunjot Si
Date: 2003-02-24 18:22:57
Link to this Comment: 4788



Biology 202
2003 First Web Paper
On Serendip

Actors, athletes, and students all have something in common: they all perform their tasks under varying stress levels. What is this stress that we all talk about? Stress can be defined as a physical, mental, or emotional demand which tends to disturb the homeostasis of the body. Used rather loosely, the term may relate to any kind of pressure, be it due to one's job, schoolwork, marriage, illness, or the death of a loved one. The common denominator in all of these is change. Loss of familiarity breeds this anxiety, with any change being viewed as a "threat".

The issue of anxiety is an important aspect of performance. Whether it is during the tense moments of a championship game or amidst that dreaded history exam, anxiety affects our performance via changes in the body, which can be identified by certain indicators. One misconception about performing under pressure, though, is that stress always has a negative connotation. Many times, "the stress of competition may cause a negative anxiety in one performer but positive excitement in another" (3). That is why one frequently hears how elite players thrive under pressure, when most others would crumble.

PHYSIOLOGY OF STRESS
Stress is an integral part of our lives. "It is a natural byproduct of all our activities" (4). Life is a dynamic process and thus forever changing and stressful. Our body responds to acute stress by liberating chemicals. This is known as the fight-or-flight response of the body, which is mediated by adrenaline and other stress hormones and comprises such physiologic changes as increased heart rate and blood pressure, faster breathing, muscle tension, dilated pupils, dry mouth, and increased blood sugar. In other words, stress is the state of increased arousal necessary for an organism to defend itself at a time of danger. The hormones altered in the body include not only adrenaline but also substances like testosterone and human growth hormone. Up to a certain point stress is beneficial: we perform with greater energy and increased awareness with the influx of excitatory hormones that release immediate energy (11).

STRESS OVERLOAD
As we all probably know, there is only so much tension one can take. Whether it comes as repeated episodes or as chronic stress, either can transform beneficial stress into "distress". The stress hormones, initially protective and liberated for self-preservation, may cause damage when overproduced. This has an effect on the entire metabolism, including the rate at which our cells grow and are repaired as well as the production of the cells of the immune system (1).

The hormonal surge of glucocorticoids, released to promote the utilization of glucose as well as the conversion of protein and lipids to usable glucose, can become detrimental in the long run. One clinical symptom that arises from this hormonal imbalance is an increase in appetite, which in a chronic situation may lead to obesity (2). Catecholamines also increase blood pressure, and repeated spikes of hypertension may promote the formation of atherosclerotic plaques, particularly in combination with high cholesterol and lipids, ultimately leading to heart disease and stroke.

The brain of course is a crucial target and neurons exposed to elevated glucocorticoids for long periods of time are known to be adversely affected. Tests have shown that brain cells in rats may shrivel and the dendrite branches, which are used to communicate with other neurons, wither away (2). In particular, the hippocampus, which is implicated in memory and mood, may be damaged by stress and the adrenal steroids.

Advances in medical technology like magnetic resonance imaging (MRI) enable us to develop clear images of specific parts of the brain, which in turn allow us to see where exactly stress is affecting the brain. The hippocampus, which has glucocorticoid receptors, is actually noted to be twelve percent smaller in volume in people with stress disorders. Yet is stress-linked brain damage permanent? Short-term stress damage appears to be reversible (as per rat experiments), though chronic stress may lead to neuronal loss.

DEFINING AN ATHLETE WHO PERFORMS AT HIS BEST
"Sports performance is not simply a product of physiological factors (for example, stress and fitness) and biomechanical factors (for example, technique); psychological factors also play a crucial role in determining performance" (3). However, every athlete has a certain stress level that is needed to optimize his or her game. That bar depends on factors such as past experiences, coping responses, and genetics (7). Although psychological preparation is a component that has often been neglected by athletes and coaches alike, studies have shown that mental readiness has the most significant statistical link with Olympic ranking. Athletes have frequently been quoted stating that the mental aspect is the most important part of one's performance. Arnold Palmer, a professional golfer, suggested that the game is 90% psychological. "The total time spent by the golfer actually swinging and striking the ball during those 72 holes is approximately seven minutes and 30 seconds, leaving 15 hours, 52 minutes and 30 seconds of 'thinking time'" (3). Stress during sports, as in anything else in life, may be acute, episodic, or chronic. For the most part in sports it is episodic, whether during a competitive match between friends or a championship game. While acute stress may actually act as a challenge, if not harnessed it can evolve into an episodic stressor that affects one in the long term and can also hamper one's play.

RESEARCH ON SPORTS PSYCHOLOGY
Interest in researching sports psychology has skyrocketed over the past few years. A myriad of hypotheses have been developed to attempt to clarify the relationship between stress and performance. In 1943, the drive theory was introduced, claiming that an appropriately skilled athlete will perform better if his or her drive to compete is aroused, that is, if the athlete is "psyched up". In 1962, the inverted-U hypothesis was formed on the notion that there is an optimal level of arousal at which an athlete will perform best. However, "if that level of arousal is passed then the level of performance will decrease. The same thing happens when the level of arousal is lower than the optimal level" (9). Though this hypothesis had much support for many years, it too has fallen out of favor due to its oversimplification of a subject as complex as brain and behavior. Other theories that have been proposed, like the multidimensional anxiety theory and the catastrophe theory, all make their predictions on how anxiety plays a role in one's performance level, but the results remain inconclusive (9).
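
The inverted-U idea is easiest to picture as a curve that rises to a single peak and then falls. The toy model below illustrates that shape; the Gaussian form, the optimum of 0.5, and the width parameter are illustrative assumptions, not values from the cited research.

```python
import math

def performance(arousal, optimum=0.5, width=0.2):
    """Toy inverted-U: predicted performance peaks at the optimal arousal
    level and falls off as arousal moves below or above it."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"arousal {a:.1f} -> performance {performance(a):.2f}")
# Output rises toward arousal 0.5, then falls: an inverted U. The drive
# theory, by contrast, would correspond to a curve that keeps rising.
```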

In recent research, the factor of competitive anxiety has been dissected into two segments: somatic and cognitive anxiety. Cognitive anxiety is characterized by negative expectations, lack of concentration, and images of failure. Somatic anxiety refers to physiological symptoms such as sweaty hands, tension, and other physiologic changes (4). In order to map out optimal performance, the precursors of anxiety need to be sought out. The temporal patterning of anxiety before, during, and after competition has been receiving a lot of attention in research.

COPING WITH STRESS IN SPORTS
Developing coping techniques is the most crucial element in balancing stress levels so that they optimize instead of inhibit performance level. Relaxation, visualization/imagery, self-talk, goal setting, motivation, and video review are all examples of systems that can be used by athletes (7). Self-regulation training cultivates one's self-confidence and attention control levels. Goal setting is another important system as making realistic short-term goals prevents one from getting overwhelmed, which can result in loss of focus. Having these realistic expectations is the only way one can eventually reach one's long-term goals.

Anxiety control is another technique, obtained through muscle relaxation exercises as well as mental relaxation through modalities such as meditation or listening to music. Practice for perfection is necessary, but as we talk about anxiety control, too much practice can actually lead to overpressure, which obviously leads to anxiety beyond the optimal level necessary for the given task. In addition, self-doubts regarding one's performance and a desire to impress others will create a high level of anxiety which leads to "choking," as the athlete's focus on the game is lost along with his or her physical control (6). Athletes who maintain a proper combination of honing their physical skills and developing their mental game are able to adapt to any unfamiliar situation or circumstance that they encounter. As an Olympic champion stated: "My fingers and feet were damp and freezing cold. I felt weak, my breath was short and I felt a slight constriction in my throat... I just wanted to get the whole thing done with. The waiting was agony, but my mind conditioned through long training and experience warned 'wait to warm up! Wait! Wait!'" (11). According to research, elite athletes use these "equalizing" techniques in some combination before, during, and after competition, and this gives them the greatest chance to thrive, even when the game is on the line (5).

CONCLUSION
A certain level of stress is needed for optimal performance. Too little stress expresses itself in feelings of boredom and not being challenged. "What is becoming increasingly clear... that competitive stress does not necessarily impair performance and can in certain circumstances enhance it" (3). At an optimal level of stress one gets the benefits of alertness and activation that improve performance. Even while making such statements, it is important to realize that there is currently no conclusive evidence beyond the fact that stress and anxiety do have an influence on performance.

The surge of interest devoted to this specialized arena reflects how our society breeds competition, not just in sports but in every aspect of life and at every age. How can I get that edge over the person next to me? That is the mantra by which we are motivated to improve ourselves. Though much progress has been made in this region of neurobiological science, we do not yet understand enough to come up with more than an idea of how the brain deals with stress in relation to performance. We still have so many unanswered questions regarding the brain and its behavior that we are unable to unravel many of these mysteries, though we continue to make educated guesses.


References

1) McEwen, Bruce. "The Neurobiology and Neuroendocrinology of Stress." Psychiatric Clinics of North America. June 2002.

2) New Studies of Human Brains Show Stress May Shrink Neurons, by Robert Sapolsky

3) Jones, Graham. Stress and Performance in Sport. New York: John Wiley and Sons, 1990.

4) Herbert, John. "Stress, the Brain and Mental Illness." BMJ. 30 August 1997: 530-535.

5) Performing Your Best When it Counts the Most, by Kyle Kepler

6) Choking in Big Competitions, by Kaori Araki

7) The Online Journal of Sports Psychology

8) The Mental Edge, by Sandy Dupcak

9) Competitive Anxiety, by Brian Mackenzie

10) Nelson, Charles. The Effects of Early Adversity on Neurobehavioral Development. London: Lawrence Erlbaum Associates, 2000.

11) Neufeld, Richard. Advances in the Investigation of Psychological Stress. New York: Wiley and Sons, 1989.


"Listening to Prozac" : The dangers behind the sir
Name: Neela Thir
Date: 2003-02-24 19:30:12
Link to this Comment: 4790



Biology 202
2003 First Web Paper
On Serendip

"If the human brain were simple enough for us to understand, we would be too simple to understand it" (1).

In his book Listening to Prozac, Dr. Peter Kramer thoroughly examines how Prozac has revolutionized the power of psychopharmacological medication and what it teaches us about the human self. Prozac has demonstrated the ability to transform a person's behavior, outlook, and conception of self through a neurological change of biology, thus providing more evidence that brain does indeed equal behavior. Perhaps more fascinating than the answers it provides about human neurobiology are the difficult questions, ironies, and problems its usage raises. The administration of Prozac challenges the model of healing through cognitive powers due to its purely biologic effectiveness. This success has widened the gap between the un-medicated and medicated human self. Which is the "true" reflection of a person? Do Prozac's transformations emulate an unnatural idealized social norm or release a healthy individual trapped in an unnatural state? How does this reflect or change our definitions of "illness" and "wellness"?

Dr. Kramer's discussions hinge upon the idea that the nervous system controls behavior. The case studies he provides show people who, after taking Prozac, have remarkable "transformations" of multiple facets of behavior including perceptions, motivation, emotions, sense of choice, values, and personality (defined by given temperament as well as developed character). Prozac's ability to change a person so drastically on a biological level causes much apprehension because the change does not need to be processed cognitively or even consciously. Dr. Kramer asserts that this change need not coincide with any self-knowledge because it is "evidently not necessary"(32). His comment points to a desire among many that the conscious self (I-function) has a stronger influence on behavior than biology does because we intimately connect behavior with self-identity. Relying on a foreign substance to change biology (and self) without apprising and receiving sanction from the conscious-self first seems unnatural. The utter reliance on biology without utilizing our human gift of cognition seems to be a violation of how humanity has separated itself from our own inner animal. Dr. Kramer dismisses claims that Prozac compromises our vision of humanity through changing behavior in psychobiological terms by saying, "biological models are not reductionistic but humanizing, in the sense that they restore scale and perspective and take into account the vast part of us that is not intellect" (143). Further compromising psychoanalysis in favor of pharmacology, Dr. Kramer writes, "we are symbol-centered creatures, and some of our worst stresses are symbolic. But to my mind these final causes were like the pebbles that finally turn a tenuously balanced collection of rocks into an avalanche" (141). He goes on to write that rearranging the pebbles will not stop the avalanche; only changing the structure of the rocks, our brain's biologic functions, is totally effective. The symbolic pebbles (cognition, the I-function, conscious self, memory etc.) he describes may be visible and therefore more focused upon, but as seen with Prozac patients, they are ultimately forgotten in the bigger picture of biologic determination.

After seeing the dramatic difference Prozac makes, Dr. Kramer poses a critical question: "had medication somehow removed a false self and replaced it with a true one?" (19). His patients praised the effects of the drug, saying "I feel like myself," and when they were weaned off Prozac, they complained, "I'm not myself again" (10). For people suffering from acute depression or anxiety, the medicated self ironically becomes the "true" self despite being in an altered state. One of his patients joked that she had changed her name: "I call myself Ms. Prozac" (11). Here, self is not only dependent on biology but on medication. Her cognitive I-function has literally begun referring to itself as "Prozac" rather than her realized or given name. By naming herself Ms. Prozac, the patient demonstrates how this medication has the potential of becoming and defining a new self based entirely on the pill's neural actions while dismissing the old self as false and somehow untrue. Dr. Kramer later comments that Prozac differs from other mood-brightener medications because it has "characteristics" and a "personality" which affect the human brain (257). It is interesting to note that Prozac is given a "personality". It becomes personified because of its own, almost human, ability to create a sense of self in the patient. The irony is that the concept of one's true personality becomes vague because of the medicine's effectiveness in expressing its own "personality". The long-acting nature of Prozac may cause "a very steady cushion against decompensation [that] may translate into a new continuous sense of self" (182). Therefore, the true sense of self that Prozac provides may be an effective physiological personality-high which convincingly mimics a continuous and realistic sense of self.

Dr. Kramer, however, resists the idea that Prozac imposes its own set of prescribed alterations. Rather, he says that medication merely clears the way for patients to discover their true selves, which have been masked by traumatic events and psycho-environmental occurrences. He cites evidence that "repeated stress sensitizes rats to produce higher levels of CRF," which "can lead to the changes in the cell's genetic material, the DNA and RNA". The outside environment can change biology, which in turn changes behavior and the self. Dr. Kramer parallels physical stress, like electric shock, with psychosocial stress, such as sexual abuse, because both can "cause cell death in the nerves affected by the cortisol system" (117). He states that "medication is like a revolution overthrowing a totalitarian editor and allowing the news to emerge in perspective" (221). However, this raises the question: whose "news" is emerging after the evil editor of psychosocial stress is overthrown? Is the resulting change in self the result of the medication's design, society's expectations, or is it indeed the "true self"?

Dr. Kramer freely admits that "Prozac supports social stasis by allowing people to move toward a cultural ideal," thereby indicating that Prozac not only hints at future "cosmetic psychopharmacology" but in fact represents its arrival (271). Prozac "induces pleasure in part by freeing people to enjoy activities that are social and productive...enjoyed by other normal people in their ordinary social pursuits" (265). In people with minimal depression, Prozac serves as a social disinhibitor, the equivalent of having a few drinks and becoming an ideal social mover. Merely because social forwardness is sanctioned by our society as "productive" does not license science to change people's biology so that they derive pleasure from fulfilling the cultural ideal. We risk creating a homogeneous society with modified and standardized behaviors despite a hereditary and experiential biology of difference. In what Michael McGuire defined as a "trait distribution phenomenon," a normal human temperament may be changed medicinally because the culture does not reward it, a mismatch that leads to unhappiness, unfulfillment, and depression. One might argue, however: if we can biologically cure people's minor depression, brought on by society's narrow-minded ideals and their own personal trauma, what is the harm? The harm is that by medicating such people with Prozac and then diagnosing their problem, we may create a narrower and narrower definition of "normal" and "well" by expanding what we think of as deviant. By over-prescribing Prozac, we eliminate more and more biologic temperaments by labeling them "illnesses". Society may not accommodate biology, but should we limit biology to fit society?

Listening to Prozac proves difficult because it tells us so much of what we do not want to hear. It challenges ingrained ideas about distinctions between biology, cognition, medications, and our sense of self. What we value, conscious human control, turns out to be secondary to that which supposedly demeans us, our biologic, animalistic determination. Yet Dr. Kramer frames this realization as beneficial; by accepting it, we are accepting our greater innate complexity rather than simplicity. However, it is important to recognize the dangers of falling under Prozac's spell. Happiness void of repercussions is no more contained within a Prozac capsule than within any other drug. Society's influence on the brain, behavior, and the medication of "illnesses" should not be ignored. Prozac may call to us as the savior of those with minor depression, but its overuse can drown difference in a sea of medicated sameness.

References

1)Kramer, Peter M.D. Listening to Prozac. New York: Viking, 1993.


Cocaine and the Nervous System
Name: Elizabeth
Date: 2003-02-24 20:30:05
Link to this Comment: 4795


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

All drugs have a negative effect on the nervous system, but few can match the dramatic impact of cocaine. Cocaine is one of the most potent, addictive, and unpredictable recreational drugs, and thus can cause some of the most profound and irreversible damage to the nervous system. The high risk associated with cocaine remains the same regardless of whether the drug is snorted, smoked, or injected into the user's bloodstream. In addition to the intense damage cocaine can cause to the liver, intestines, heart, and lungs, even casual use of the drug will impair the brain and cause serious damage to the central nervous system. Although cocaine use affects many components of the body, including vision and appetite, the most significant damage caused by cocaine takes place in the brain and central nervous system.

Spanish explorers first observed South American natives chewing the coca leaf, from which cocaine is derived, when they arrived on the continent in the 16th century. The South Americans chewed these coca leaves in order to stay awake for longer periods of time. Centuries after this initial discovery, Albert Niemann isolated cocaine from the coca leaf in 1860, and the extraction came to be used as an anesthetic. Over the ensuing years, cocaine use became increasingly common and was even sanctioned by doctors, who prescribed the drug to aid recovering alcoholics. Cocaine was even a key ingredient in such popular beverages as Coca-Cola. It was not until the long-term health problems associated with cocaine use emerged that the public realized that the drug was harmful and highly addictive (2).

Cocaine is a versatile drug which can be ingested in a variety of ways. In its purest form, cocaine is a white powder extracted directly from the leaves of the coca plant. However, in the modern drug market, pure cocaine is often cut with a variety of substances in order to make it more profitable for drug dealers (5). The most common way to ingest powdered cocaine is to inhale the drug through one's nasal passage, where the cocaine is absorbed into the bloodstream by way of the nasal tissues. Cocaine can also be injected directly into a vein with a syringe. Finally, cocaine smoke can be inhaled into the lungs, where it flows into the bloodstream as quickly as when injected into a vein. In 1985, crack cocaine emerged as the optimal form of cocaine for smoking (2). While most cocaine is created through a complex process requiring ether and other unstable and expensive substances, crack cocaine is processed with ammonia or baking soda. Crack cocaine has gained popularity because it is cheaper and provides a more potent immediate high than snorting cocaine (6). However, those who smoke cocaine run a higher risk of becoming addicted to the drug, as more cocaine is absorbed into the bloodstream through this method of ingestion (1).

Cocaine produces its pleasurable high by interfering with the brain's "pleasure centers," where such chemicals as dopamine are active. The drug traps an excess amount of dopamine in the brain, causing an elevated sense of well-being. Cocaine acts as a stimulant to the body: the drug causes blood vessels to constrict, increases the body's temperature, heart rate, and blood pressure, and causes the pupils to dilate (4). Cocaine also increases one's breathing rate. Cocaine causes such pleasurable effects as reduced fatigue, increased mental clarity, and a rush of energy. However, the more one takes cocaine, the less one feels its pleasurable effects, which leads the addict to take higher and higher doses in an attempt to recapture the intensity of that initial high (1). In any case, a cocaine high does not last very long. The average high from snorting cocaine lasts only 15-30 minutes; these highs are less intense, as it takes longer for the drug to be absorbed into the bloodstream when snorted. A smoking high, although more intense due to the rapidity with which the drug is absorbed into the bloodstream, lasts for an even shorter period of only about five to ten minutes (5). After the euphoric high comes the crashing low, in which the addict craves more of the drug and in larger doses (2).
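To make the "trapping" image concrete, synaptic dopamine can be caricatured as a one-compartment system: transmitter is released at some rate and cleared at some rate, and blocking the clearance mechanism means the steady-state level rises. The Python sketch below illustrates only that arithmetic; the parameter values are arbitrary stand-ins, not physiology.

```python
# Toy one-compartment model of synaptic dopamine: constant release,
# first-order clearance. Cocaine's interference with dopamine clearance
# is caricatured as a drop in the clearance rate k. All values are
# arbitrary illustrations, not measured physiology.

def simulate(k, release=1.0, dt=0.01, steps=2000):
    da = release / 0.5            # start at the drug-free steady state (k = 0.5)
    for _ in range(steps):
        da += (release - k * da) * dt   # Euler step: release in, clearance out
    return da

print("baseline dopamine level:", round(simulate(k=0.5), 2))   # stays near 2.0
print("with reduced clearance :", round(simulate(k=0.2), 2))   # climbs toward 5.0
```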

Cocaine can cause serious long-term effects to the central nervous system, including an increased chance of heart attack, stroke, and convulsions, combined with a higher likelihood of brain seizures, respiratory failure, and, ultimately, death (2). An overdose of cocaine raises blood pressure to unsafe heights, often resulting in permanent brain damage or even death. Coming down off of cocaine is highly unpleasant, as the user may feel nauseous, irritable, and paranoid. Also, in some cases, sudden death may occur, although it is impossible to predict who could be killed suddenly by cocaine ingestion. Crack cocaine in particular heightens paranoia in its users, who have more difficulty quitting the drug than other cocaine users (6).

Many studies have been done which analyze the impact of cocaine on the brain itself. By blocking the reuptake of dopamine and other neurochemicals, cocaine can cause serious and often irreversible damage to neurons within the brain. In autopsies, cocaine users had a reduced number of dopamine neurons (7). When flooded with the excess of dopamine created during a cocaine high, the brain reacts by making less dopamine, getting rid of the excess, and shutting down dopamine neurotransmission, sometimes permanently. In turn, many cocaine users feel depressed once they go off the drug, which makes cocaine highly addictive. Many addicts report that they crave the drug more than food, and laboratory animals will endure starvation and electric shocks if they can still have the drug (3).

Cocaine is one of the most dangerous drugs for the central nervous system. As a powerful stimulant, cocaine increases the likelihood of many fatal nervous system malfunctions, including stroke. The initial high keeps addicts coming back for more, and this highly addictive drug can be very difficult to quit. As dopamine neurotransmission shuts down, the user comes to need cocaine to create an artificial high. Cocaine can cause serious damage to the nervous system, destroying neurons and elevating blood pressure, heart rate, and body temperature, often for the rest of the addict's life.

References

1)Drug information: Cocaine

2)Cocaine

3)The Effects of Cocaine on the Developing Nervous System

4)The Physical Effects of Cocaine

5)As a Matter of Fact

6)Crack and Cocaine

7)Cocaine Brain Damage may be Permanent


Avian Song Control
Name: Nicole Jac
Date: 2003-02-24 21:23:25
Link to this Comment: 4796


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Bird songs continue to fascinate neurobiologists and neuroethologists because the development of song has been a popular model used to examine the role of environment on behavior. In most species, only male birds sing complex songs. Their vocalizations are the result of sexual dimorphism in the brain regions responsible for the production of song. However, this behavior is not genetically hardwired into the avian brain. Certain conditions must exist in order for male birds to successfully produce their species-specific song. Additionally, the neuronal circuitry and structure of the avian song system shows high levels of plasticity.

If the brain and behavior are indistinguishable, then the structural differences in the avian brain are responsible for behavioral differences across the sexes. Nottebohm and colleagues identified six anatomically distinct regions of the forebrain involved in the production of song, arranged into two independent pathways: the posterior pathway, which controls song production, and the anterior pathway, which controls song learning. The collective unit is typically referred to as the vocal control region (VCR) (1) (2).

Female birds rarely sing, and this behavioral difference is reflected in the anatomy of the female avian brain. There are significant differences across the sexes in the size of three neural areas involved in the production of song, and one specific area, Area X, is present in the male and absent in the female. Additionally, the incorporation of radiolabeled testosterone in certain locations differs between males and females (3) (4).

Scientists have been particularly interested in the origin of the structural differences between male and female songbirds. Research has suggested the importance of gonadal hormones, specifically testosterone, in the production of song. It was observed that castration eliminated all song production (5). Additionally, when testosterone levels are low, there is not only a decrease in the production of song, but also a decrease in the size of some nuclei involved in song production (6). Further support for the necessity of testosterone for song production was provided by Nottebohm (1980), who injected female birds with testosterone, which led to the production of song (7). This research has interesting implications regarding anatomical changes that may occur when an organism is chemically imbalanced. Disruptions in chemical equilibrium may alter brain structure and subsequently influence behavior.

Nevertheless, not all research has supported the claim that testosterone is responsible for anatomical and behavioral differences between male and female songbirds. Contradictory results have been obtained in different species of birds, and no one has successfully induced feminine bird song behavior in male birds with the introduction of hormones. The role of hormones thus remains unclear; however, the structures and circuitry involved in song production exhibit neuronal plasticity in many species, and there is a relationship between gonadal hormones and sex-specific behavior.

Assuming functional circuitry, bird vocalizations come in many varieties, and range from several seconds to several minutes (8). The complexity of the avian vocal repertoire has allowed scientists to study the development of this behavior. Bird songs are often species specific, but there are also differences in the vocalizations within a species. If bird vocalizations were merely the result of genetics there would be little variation in the complexity or variety of songs. The individuality present across males of the species raised questions about the origins of the songs and factors responsible for creativity and individuality in song.

Thorpe and colleagues (1961) removed young birds from the nest immediately after hatching and raised them in captivity. When the males began to vocalize, their songs were significantly degraded compared to those of males in the wild. The songs were similar to those produced by wild males; however, they were simpler in form, and there was little variation within or between individual birds (9). This experiment demonstrated that the variation in song observed in wild birds was not genetically programmed, but rather something that developed when an animal was in the wild and could hear the songs of other males of the species.

Marler and Tamura (1964) further examined the development of bird song by studying white-crowned sparrows. These sparrows have species-specific songs; however, the songs differ slightly depending on geographical niche. In essence, the birds have regionally specific dialects. Marler and Tamura removed young birds from the nest after hatching and found that all of them, regardless of geographical origin, developed simplified versions of their regional song. The birds were unable to learn their region-specific songs because they could not hear males from their region sing them. It was determined that the critical period for acquiring song occurs during the first 3 months of life, before a bird can itself produce song. In actuality, a template of the song forms in as little as 20 days, song acquisition is often completed in approximately 35 days, and after 60 days of rehearsal the song becomes consistent. Even males reared in isolation can be trained to sing by playing a recording of their song, as long as they are under 3 months of age. After 4 months of age, birds become unreceptive to training. This shows that there is a simple song that is processed at an early age and subject to modification during the earliest months of life. The young birds, once exposed to the song of adult males, store it until they begin to sing (10).
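The milestones reported in this paragraph sketch a rough developmental timeline, which can be encoded as a simple lookup. The Python sketch below uses only the figures cited above; the stage labels, and the reading of the ambiguous 3-4 month interval as waning receptivity, are my own glosses, not Marler and Tamura's terms.

```python
# Rough song-learning timeline encoding only the milestones cited above.
# Stage labels are invented for illustration; the 3-4 month interval is
# interpreted here as waning receptivity.

def song_stage(age_days):
    if age_days <= 20:
        return "song template forming (within ~20 days)"
    if age_days <= 35:
        return "song acquisition (often complete by ~35 days)"
    if age_days <= 60:
        return "rehearsal; song becomes consistent (~60 days)"
    if age_days <= 90:
        return "still receptive to training from recordings (< 3 months)"
    if age_days <= 120:
        return "receptivity waning (3-4 months)"
    return "unreceptive to training (> 4 months)"

for age in (10, 30, 50, 80, 110, 150):
    print(f"{age:3d} days: {song_stage(age)}")
```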

Further examination by Konishi (1965) demonstrated that a young bird deafened immediately after birth will sing, but is only capable of producing disjointed notes. The song of a deafened bird is even more severely degraded than that of a bird raised in isolation and would not be identified as the song of a particular species (11). This demonstrates that not only is auditory exposure necessary for the successful production of song, but auditory feedback is necessary as well.

Song production is a complex behavior, and the evidence suggests that an interaction of genetics and environment is necessary for birds to vocalize successfully. Further study is needed to fully understand the factors responsible for the sexual differentiation of behavior and for the neuroplasticity and sexual dimorphism seen in the avian brain. This model of animal behavior may contribute to understanding the differences between male and female brains in humans, and demonstrate how anatomical differences may be the cause of behavioral differences across the sexes.

References

1) Song circuit diagram, a basic diagram of the song control circuit in the avian brain.

2) Songbird brain circuitry , created by Heather Williams at Williams College, this is a more detailed figure of brain circuitry.

3) Sex and the Central Nervous System , the web site to accompany a developmental biology text by Scott F. Gilbert.

4) Arnold, A.P. (1980). Sexual differences in the brain. American Scientist, 68, 165-173.

5) Thorpe, W.H. (1958). The learning of song patterns by birds, with special reference to the song of the chaffinch. Ibis, 100, 535-570.

6) Nottebohm, F. (1981). A brain for all seasons: Cyclical anatomical changes in song control nuclei of the canary brain. Science, 214, 1368-1370.

7) Nottebohm, F. (1980). Testosterone triggers growth of brain vocal control nuclei in adult female canaries. Brain Research, 189, 429-436.

8) Zebra finch song, created by Heather Williams at Williams College; includes audio clips of zebra finch songs.

9) Thorpe, W.H. (1961). Bird song. New York: Cambridge University Press.

10) Marler, P. & Tamura, M. (1964). Culturally transmitted patterns of vocal behavior in sparrows. Science, 146, 1483-1486.

11) Konishi, M. (1965). The role of auditory feedback on the control of vocalization in the white-crowned sparrow. Zeitschrift für Tierpsychologie, 22, 770-783.

12) Hinde, R.A. (Ed.). (1969). Bird vocalisations. New York: Cambridge University Press.

13) An Introduction to Birdsong and the Avian Song System (HTML format), general information about birdsong and the avian song system from a special issue of the Journal of Neurobiology.


Terrorists: How different are they?
Name: Stephanie
Date: 2003-02-24 23:29:34
Link to this Comment: 4800


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Ever since September 11th, terrorism has been on virtually all of our minds. And now, some eighteen months later, as the nation perches on the brink of war with Iraq, our fears remain. The frustration that most people experience in the aftermath of extreme violence largely stems from the question why. Why would anyone want to commit so heinous a crime? How could they live with themselves? Terrorism is a widely researched topic, but it seems particularly salient now, as it hits closer to home. Are terrorists different from the rest of us? Are they different from serial killers? If brain equals behavior, then yes, they are. But perhaps that equation is only true in some cases.

Because the acts that terrorists execute are so disturbing, many people think they must be crazy - that there must be something fundamentally wrong with them or with their brains. There is an ongoing debate on this matter - especially since different research shows variations in the extent to which terrorists are perceived as "crazy." Clark McCauley, Professor of Psychology at Bryn Mawr, maintains that terrorists are not crazy. In fact, they are quite normal and their psychology is normal. According to Professor McCauley, research has found "psychopathology and personality disorder no more likely among terrorists than among non-terrorists from the same background" (1). For most, this is an unwelcome result, for not only does it mean that anyone is capable of committing acts of terror, but it also means that there is little distinction between "us" and "them" - in fact, the "us and them" distinction may not really exist, at least not on a biological or psychological level (if we are truly essentially similar, with the most obvious difference being distinctly behavioral).

It is the discovery that terrorists and non-terrorists are not as different as originally thought that is so unnerving to non-terrorists (the "us"). It is difficult for most people to accept that they could, that anyone could, hypothetically, commit mass genocide or use commercial jets as missiles. Professor McCauley admits, "terrorism would be a trivial problem if only those with some kind of psychopathology could be terrorists. Rather we have to face the fact that normal people can be terrorists, that we ourselves are capable of terrorist acts under some circumstances" (1). Of course it is upsetting to think that normal people ("normal" being defined as people who refrain from terrorist behaviors or other violent and socially unacceptable acts) could become terrorists, but this does not imply that since we are capable of terrorist behavior we will go ahead with it. If terrorists are biologically and psychologically similar to non-terrorists, then why do they kill?

The path that future terrorists follow is a gradual one, for it is almost impossible for someone unaccustomed to killing to suddenly be able to do so. The ability to kill must be nurtured over time, usually through the group dynamics of terror networks: "The terrorist has a fixation on systemic value. This means that they emotionally crave membership in the organization, group, or order to which they belong" (2). Terror groups are like families to terrorists, each member with a role, and each providing support for fellow terrorists. Terrorists live for the survival of their group and for the group's ideas and rules. The meaning of terrorists' lives "comes from being a dutiful soldier and member of the group" (2). In order for this type of blind loyalty to occur, subjects must lack any personality, individuality, or sense of self - this is the mechanism that allows terrorists to kill, for without any solid conviction of who they themselves are, they cannot understand the value of individual or collective human life: "a terrorist does not see the infinite, unique, singular value of people and therefore does not see what is wrong with killing or maiming another person" (2). In this way, terrorists are able to alienate themselves almost completely from social norms and immerse themselves in the ideology of the group.

Terrorists' motivation to kill stems from a combination of factors. They may kill for political or economic reasons, but they may also kill for the sense of power it generates. Terrorists, according to Stephen J. Morgan, "crave the ultimate power, that of power over life and death of innocent people. They believe themselves to be omnipotent, messengers and agents of God, without feeling guilt or shame for anything they do" (3). This seems logical, but if terrorists see themselves as divine or immortal, then they should have no reason to believe that other nations could affect, manipulate, or disturb them - unless, of course, they kill because they see the other nations and civilizations that populate the world as threats.

Although many terrorists may not be significantly different from non-terrorists, research shows that some traits are more common in terrorists than in non-terrorists. These shared traits include low self-esteem and a predilection for risk-taking. Research attributes low self-esteem to the terrorist mentality because terrorists "tend to place unrealistically high demands on themselves and, when confronted with failure, [tend] to raise rather than lower their aspirations" (4). A penchant for taking risks and engaging in fast-paced activities is another quality generally ascribed to terrorists, because they tend to be "stimulus hunters who are attracted to situations involving stress and who quickly become bored with inactivity" (4). Of course, these characteristics should not be thought of as warning signs - they are merely the result of research that tries to explain aspects of terrorist behavior.

There is ongoing disagreement over the nature of terrorist psychology and behavior - some researchers label terrorists "neurotics" or "psychopaths", while others emphasize the importance of examining "the social, cultural, political, and economic environment in which they operate" (3), (4). Akin to this is Khachig Tololyan's argument that, as a result of terrorists' humanness, "their behavior cannot be understood by the crude - or even by the careful - application of pseudo-scientific laws of general behavior" (5). Rather, he claims, "we need to examine the specific mediating factors that lead some societies under pressure, among many, to produce the kinds of violent acts that we call terrorism" (5). This seems a logical argument, for environment can certainly influence one's behavior. Although the acts of terrorists are hardly forgivable, they do not necessarily immediately qualify terrorists as clinically insane, for there may be external forces or circumstances (political, social, and/or economic constructs) that provoke terrorists to respond violently.

Interestingly, there seems to be a discrepancy between the psychology of terrorists and other criminals, like serial killers. Perhaps this is obvious, but is the idea that serial killers tend to stalk specific (though usually previously unknown) individuals and then kill one by one, whereas terrorists focus on governments and nations, with an end goal of having "a lot of people watching not a lot of people dead," related to their psychological disposition? (6), (7). Studies show that serial killers tend to share similar experiences such as "childhood abuse, genetics, chemical imbalances, brain injuries, exposure to traumatic events, and perceived social injustices" (6). However, as the same article explains, "a huge population has been exposed to one or more of these traumas. Is there some sort of lethal concoction that sets serial killers apart from the rest of the population?" (6) At this point, it is not clear, but it is clear that there is something, whether it is a biologically dysfunctional brain or psychological delusions that make serial killers act the way they do - but what about terrorists?

It is difficult to compare terrorists to other criminals because so few terrorists have been caught and those who have been caught are rarely willing to talk about their experiences. Therefore, it is hard to know whether terrorist motivations stem from abuse, trauma, or biological problems. However, although terrorists and serial killers are both thought of as "rational," meaning that their mental states are stable enough to allow them to organize, plan, and execute their attacks, they are certainly not sane - at least not by Western standards. In their own minds, and to their groups, they are sane - and perhaps in the future we will be able to more fully understand their rationale.

While the nation works to curb terrorism more effectively, the government has developed precautions designed to discourage terrorists. Recently, a new system known as "brain fingerprinting" has been developed for installation at airport security checkpoints. Suspects seized for interrogation may be screened this way: "A subject's head is strapped with electrodes that pick up electrical activity. He sits in front of a computer monitor as words and images flash on the screen. When he recognizes the visual stimuli, a waveform called the P300 reacts and the signal is fed into a computer where it is analyzed using a proprietary algorithm" (8). In cases involving suspected terrorists, the images used would be those "only known to a terrorist group, such as the word 'al-Qaida' written in Arabic or the instrument panel of a 757" (8). This is an interesting development; however, it may be important to note that just because a suspect shows brain activity when viewing certain images does not immediately qualify him or her as a terrorist. "Brain waves can't hand down a guilty sentence," says one researcher; another concurs: "It's [the technique of brain fingerprinting] like saying you can measure brain activity from someone's scalp and read their mind. You can see differential electrical activity, but you can't read the electrical activity as if it were words. You can say it's different, but you can't interpret it" (8). Interpretation is a loaded term - even in psychological research, we interpret the behaviors of others. In doing so, we do not always arrive at a clear conclusion. Thus, our relation to terrorists may not be as obvious as we think, or would like to think - they may be fundamentally similar to us, but are just conditioned to perform different behaviors.
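For readers curious what "analyzed using a proprietary algorithm" might involve: the core idea behind P300 detection is straightforward even though the commercial details are not public. One averages the EEG traces recorded after many presentations of a probe stimulus, then asks whether the averaged signal shows a positive deflection roughly 300 ms after the stimulus. The Python sketch below illustrates only that generic idea on synthetic data; it is not the system described in (8), and the window and threshold values are placeholders.

```python
import numpy as np

# Generic illustration of P300-style epoch averaging -- NOT the
# proprietary "brain fingerprinting" algorithm, which is not public.
# epochs: (n_trials, n_samples) array, one EEG trace per presentation
# of a probe stimulus, sampled at fs Hz, with the stimulus at t = 0.

def p300_amplitude(epochs, fs, window=(0.25, 0.40)):
    """Mean amplitude in the ~250-400 ms post-stimulus window."""
    avg = epochs.mean(axis=0)                  # average out trial-to-trial noise
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return avg[lo:hi].mean()

# Synthetic placeholder data: 40 trials, 1 s at 250 Hz, with a small
# positive deflection near 300 ms buried in noise.
rng = np.random.default_rng(0)
fs, n = 250, 250
t = np.arange(n) / fs
signal = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.002)    # bump near 300 ms
epochs = signal + rng.normal(0, 1.0, size=(40, n))

THRESHOLD = 0.5   # arbitrary placeholder; real systems calibrate per subject
print("recognition inferred:", p300_amplitude(epochs, fs) > THRESHOLD)
```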


References

1) The Psychology of Terrorism , from Social Science Research Council website.

2) How a Terrorist Thinks , from Clear Direction, Inc. website.

3) The Mind of the Terrorist Fundamentalist , site is kind of scary looking.

4) Long, David E. The Anatomy of Terrorism. New York: The Free Press, 1990.

5) Rapoport, David C. Inside Terrorist Organizations. London: Frank Cass Publishers, 2001.

6) What Makes Serial Killers Tick? , from Court TV's Crime Library website.

7) Freedman, Lawrence. Superterrorism Policy Responses. Massachusetts: Blackwell Publishing, 2002.

8) Thought Police Peek Into Brains , from Wired News website.


PANDAS: A link between strep throat and OCD
Name: Cordelia S
Date: 2003-02-24 23:50:20
Link to this Comment: 4803


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Can an ordinary streptococcal infection (strep throat) lead to obsessive-compulsive disorder (OCD)? In a small subgroup of children, a seemingly normal bacterial strep infection can turn into a severe neuropsychiatric disorder. The disorder affecting this group is known as PANDAS (Pediatric Autoimmune Neuropsychiatric Disorders Associated with Streptococcal infections), and was identified by Dr. Susan Swedo just twelve years ago (1). Though research on PANDAS is still very much a work in progress, it has already generated excitement that this disorder may lead to answers about the cause and nature of OCD (2). Similarities and differences between PANDAS patients and the majority of OCD patients, experimental treatments for PANDAS infections, and comorbidity of PANDAS with a variety of other psychiatric and neurological disorders are slowly leading to an understanding of exactly what OCD does to the brain (3).

It is not the streptococci themselves that cause OCD symptoms. Rather, strep infections seem to cause the body's immune system to build up antibodies that, for an unknown reason, begin to attack the basal ganglia in rare cases (1). The link between streptococcal infections and neurological disorders has been known for half a century. Rheumatic fever was identified in the 1950s as an autoimmune disorder correlated with strep; Sydenham chorea, a disorder of the central nervous system involving hyperactivity, loss of motor control, and occasionally psychosis, was recognized as another strep-linked disorder that could appear as a symptom of rheumatic fever or stand on its own. PANDAS seems to be a milder form of Sydenham chorea (4).

Dr. Swedo observed, tested, and interviewed fifty children with a sudden onset of OCD or tic disorders who had recently (within the past several months) been diagnosed with a group A beta-hemolytic streptococcal (GABHS) infection. These children tested negative for Sydenham chorea. Swedo discovered that the children had episodic patterns of OCD and tic symptoms. She tested for the presence of antistreptococcal antibodies in their blood and found that symptom exacerbations were twice as likely to occur in the presence of antistreptococcal antibodies (1). Brain imaging studies found that the caudate nucleus, frequently linked with OCD, became inflamed in PANDAS patients when antibody presence was high (2).
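"Twice as likely" is a statement of relative risk: the rate of exacerbations during antibody-positive periods divided by the rate during antibody-negative periods. The sketch below shows only that arithmetic, with purely hypothetical counts chosen to yield a ratio of 2; Swedo's actual counts are not reproduced here.

```python
# Relative risk = (event rate when exposed) / (event rate when unexposed).
# The counts below are HYPOTHETICAL, chosen only to illustrate the
# "twice as likely" figure; they are not Swedo's data.

def relative_risk(exposed_events, exposed_total,
                  unexposed_events, unexposed_total):
    return (exposed_events / exposed_total) / (unexposed_events / unexposed_total)

# e.g. 20 exacerbations across 50 antibody-positive observation periods vs.
#      10 exacerbations across 50 antibody-negative periods.
print(relative_risk(20, 50, 10, 50))   # -> 2.0
```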

OCD symptoms are generally very similar between children with PANDAS and other OCD patients (5). However, the onset of symptoms can be quite different. While OCD is usually first identified in adolescence, PANDAS patients are always prepubescent. This is likely to be because of the rarity of GABHS infections in teens and adults. Also, though OCD usually manifests itself gradually, in PANDAS patients it can set in overnight. Swedo and colleagues report frequently seeing children whose parents could recall the day their child became obsessive-compulsive (2). Though it is not known why, PANDAS patients overwhelmingly obsess about urination, which is not an especially dominant obsession in other OCD cases (5). The episodic pattern of symptoms is unique to PANDAS patients. While other OCD patients can go through periods where symptoms are slightly more or less exacerbated, PANDAS patients often experience complete disappearance of symptoms between episodes (1). It is unknown whether a genetic marker on B cells of the immune system known as D8/17 is specific to PANDAS patients, or common in all OCD patients (6). The structure and function of this marker is currently being identified, and may provide some clues about the heredity of PANDAS or OCD in general (2).

Thus far, studies in which penicillin was given to PANDAS-affected children as a preventative measure against strep and OCD have been inconclusive (3). However, many PANDAS patients have shown significant reduction of OCD symptoms when given plasmapheresis, a type of plasma exchange, to remove the antibodies (2). Current studies are further investigating prophylactic antibiotics, plasma exchange, and steroids as possible treatments to accompany SSRIs in treating both PANDAS and ordinary OCD.



As in most cases of OCD, other neuropsychiatric disorders are often present in PANDAS patients. Swedo and colleagues found that 40% of PANDAS patients suffered from ADHD, 42% from affective disorders, and 32% from anxiety disorders (1). There are several points of interest in discussing the comorbidity of these illnesses with PANDAS. It was found that non-OCD psychiatric symptoms in most cases followed the same cycles as OCD symptoms, and set in suddenly when antibody levels were high (1). This brings up the question of whether any additional psychiatric disorders can be triggered by strep throat or other bacterial infections. Though there is no evidence to date linking post-strep autoimmune dysfunction with any illnesses other than tic disorders, OCD, and possibly late-onset ADHD, researchers are looking into possible ties with disorders like autism, anorexia, and depression (2). The comorbidity statistics also suggest that particular areas of the brain which we know are involved in other psychiatric disorders are attacked by the post-strep antibodies, and could help lead to identifying the exact cells or proteins that are targeted. Interestingly, the putamen and globus pallidus, neighbors of the caudate nucleus, are linked to tic disorders and hyperactivity (2). This could explain the frequency of occurrence of these symptoms alongside OCD in PANDAS.

The frequency of PANDAS in the general population is unknown, but it is definitely a rare disorder. By contrast, OCD is present in one to two percent of the population (7). This may make PANDAS research appear useless in relation to research on "normal" OCD. On the contrary, the small size of the subgroup of PANDAS sufferers and the link to a disease as widely studied as strep throat could provide the key to discovering the cause of OCD and identifying exactly what genes and brain structures are involved (2). For example, if the nature of the antibody attack on the basal ganglia in PANDAS were identified, researchers could possibly target similar degradation in the basal ganglia of other OCD patients and potentially begin to look at ways to prevent this degradation. Also, research and public knowledge about PANDAS might make more people aware of the medical aspects and biological causes of mental illnesses. Perhaps this would lessen societal discrimination against the mentally ill and lead more people to understand why pharmaceuticals are often helpful or necessary in treating mental illnesses (7).

There is strong evidence of a link between streptococcal infections and obsessive-compulsive disorder in some children. Though it is not known exactly how the immune system turns against itself and causes behavioral symptoms, there is hope within the scientific community that answering questions about PANDAS will in turn lead to answers about OCD and mental illness in general. This disorder provides evidence for medical models of psychiatric illnesses, and for the idea that the brain = behavior. It is amazing and frightening that an illness that seems like a mere nuisance can lead to a severe behavioral change almost overnight. However, research and possible treatments appear promising, and this tiny disorder may contribute more to the body of neuropsychiatric knowledge than any other illness in the past.


References

1)American Journal of Psychiatry Website, First Susan Swedo article about PANDAS, defines symptoms and criteria

2)The Scientist Website , Harvey Black article discussing research and several points of view on PANDAS

3)Science Direct Website , Pilot study on use of prophylactic penicillin in treating PANDAS

4)Medscape Website, Register for Medscape, then go to Richard Barthel article "Pandas in Children - Current Approaches", overview of knowledge on PANDAS

5)JAMA Website , Joan Stephenson article discussing antibiotic treatment

6)Psychiatric News Website , Article discussing biological marker associated with OCD

7)University of Florida News , Current research being done on PANDAS and OCD


The Neurophysiology of Sleep and Dreams
Name: Alexandra
Date: 2003-02-25 00:17:42
Link to this Comment: 4804


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The ancient Babylonians thought dreams were messages from supernatural beings: good dreams came from gods, and bad dreams came from demons (1). Since then, people have sought many different explanations for the occurrence and importance of dreams. Before trying to understand the function or significance of sleep and dreams, it is important to look at when, what, where, and how dreaming and sleeping occur.

Adult humans sleep, or should sleep, for about eight hours a day, though the amount of sleep a person needs changes over the lifespan; newborns spend about twice as long sleeping (2). Circadian rhythms [the term originates from the Latin "circa diem," meaning "about a day" (3)] determine when people fall asleep. The circadian rhythm, actually twenty-five hours long, is reset daily by light (4). In 1953, scientists discovered REM (rapid-eye-movement) sleep. The physiological state of REM alternates with NREM (non-rapid-eye-movement) sleep about every ninety minutes (5), and each REM period lasts about 20-30 minutes (6).
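Taken together, these figures imply a simple architecture for an idealized night: roughly ninety-minute cycles, each ending in a REM period of about 20-30 minutes, repeated across eight hours. The Python sketch below lays out that timeline as a toy illustration only; real sleep architecture varies (REM periods lengthen across the night), and the constants are just the round numbers cited above.

```python
# Toy timeline of an idealized night's sleep, using only the round
# numbers cited above: ~8 hours of sleep, ~90-minute NREM/REM cycles,
# REM periods of roughly 20-30 minutes at the end of each cycle.
# Real nights differ (REM lengthens as the night goes on).

CYCLE_MIN = 90         # approximate NREM/REM cycle length
REM_MIN = 25           # rough midpoint of the 20-30 minute REM range
NIGHT_MIN = 8 * 60     # ~8 hours of sleep

def sleep_schedule(night=NIGHT_MIN, cycle=CYCLE_MIN, rem=REM_MIN):
    """Return (start_min, end_min, stage) tuples for an idealized night."""
    schedule, t = [], 0
    while t + cycle <= night:
        schedule.append((t, t + cycle - rem, "NREM"))
        schedule.append((t + cycle - rem, t + cycle, "REM"))
        t += cycle
    return schedule

for start, end, stage in sleep_schedule():
    print(f"{start:3d}-{end:3d} min: {stage}")
```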

Differences between NREM and REM sleep can be measured using an electroencephalogram (EEG), an electrooculogram (EOG), and an electromyogram (EMG), which measure brain waves, eye movement, and muscle tone, respectively. REM is characterized by high-frequency, low-amplitude, more irregular waves on the EEG, rapid, coordinated movement on the EOG, and a weak EMG signal. During this type of sleep, brain activation heightens, breathing and heart rates increase, and body movement is paralyzed. Because the person is highly aroused, as in waking, but also deeply asleep, REM sleep is also called paradoxical sleep (6).

Although dreams and REM are not synonymous, most dreaming occurs during REM sleep. Dreaming during REM sleep can be seen as a "state of delirium possessing qualities of hallucination, disorientation, confabulation, and amnesia" (7). While as many as 70-95% of people awakened during REM remember their dreams, only 5-10% of those awakened during NREM report dreams (5).

The release of different neurotransmitters in different areas seems to determine which type of sleep is activated. At the onset of sleep, serotonin is secreted and seems to trigger NREM (4). NREM switches to REM sleep with the release of acetylcholine in the pons, at the base of the brain; later, the re-release of noradrenaline and serotonin seems to switch off REM (5) (7) and reactivate NREM sleep. The action of these neurotransmitters as triggers of NREM and REM sleep is referred to as the reciprocal interaction/activation-synthesis model (5). With the release of acetylcholine, the pons sends signals to the thalamus, which relays them to the cortex, and also sends signals that shut off the motor neurons in the spinal cord, causing the temporary paralysis of the body.
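As described here, the reciprocal-interaction account has the shape of a small state machine: one neurochemical event flips the system into a state, another flips it back. The Python sketch below is a deliberate caricature of that logic, with invented event names standing in for the real (and far messier, continuous) neurochemistry.

```python
# Minimal state-machine caricature of the reciprocal interaction /
# activation-synthesis account described above. Event names are invented
# for illustration; real sleep regulation is continuous, not discrete.

TRANSITIONS = {
    ("wake", "serotonin_release"): "NREM",           # sleep onset
    ("NREM", "acetylcholine_in_pons"): "REM",        # pons switches REM on
    ("REM", "noradrenaline_and_serotonin"): "NREM",  # aminergic release ends REM
}

def next_state(state, event):
    # Events that don't apply in the current state leave it unchanged.
    return TRANSITIONS.get((state, event), state)

state = "wake"
for event in ("serotonin_release", "acetylcholine_in_pons",
              "noradrenaline_and_serotonin", "acetylcholine_in_pons"):
    state = next_state(state, event)
    print(f"{event:28s} -> {state}")
```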

Although the pons is responsible for REM sleep, dreams originate in areas both in the frontal lobe and at the back of the brain. In twenty-six cases of pontine damage reported in the neurological literature, REM sleep was lost in all of them, but loss of dreaming was reported in only one (5). Conversely, while damage to frontal areas of the cortex makes dreaming impossible, the REM cycle of an individual with such damage remains unaffected (5).

Because most dreaming does occur during REM sleep, however, the REM state can be thought of as one of the triggers for dreaming (5). The pons' paralyzing of the muscles is an important precondition for dreaming, since without that paralysis dreamers might act out their dreams (the effects of this problem can be seen in individuals suffering from REM Behavior Disorder). Thus, without muscle atonia, dreaming could be a very dangerous activity!

Understanding the nature of the areas of the cortex that are responsible for dreaming may help explain the reasons behind the content of dreams. The area of the frontal lobes responsible for dreaming contains the "large fiber pathway, which transmits the neurotransmitter dopamine from the middle part of the brain to the higher parts of the brain" (5). The use of dopamine stimulants, such as L-dopa, massively increases the intensity and frequency of dreams (5). Since the frontal and limbic areas of the brain are concerned with arousal, emotion, memory, and motivation (5), dreaming might be tied to these aspects of behavior based on its location in the brain. The gray cortex at the back of the brain, also responsible for dreaming, is connected to abstract thinking and is where the highest level of perceptual information is processed (5). The activation of this location seems to be related to the huge importance of visual perception in dreams.

Although the functions of REM sleep and dreams are not fully agreed upon or understood, evidence for the necessity of sleep is incontrovertible. Rats deprived of sleep die after a few days. REM sleep and dreaming also seem to be quite important, since an individual deprived of REM sleep will respond with REM rebound, meaning that the amount of time spent in the REM state increases following REM deprivation (6). Attempts at entering REM also increase, and individuals show more anxiety and irritability and have trouble concentrating during the day (8). Animals deprived of REM likewise show an increase in attempts at entering REM and display other behavioral disturbances, such as increased aggression (8). Thus, while I intend to discuss and investigate the functions of NREM and REM sleep further in a future web paper, a knowledge of the processes at hand seemed necessary before such a discussion can occur.

References

1)The Association for the Study of Dreams
2)Harvard Undergraduate Society for Neuroscience
3)Silent Partners Organization
4)Talk About Sleep
5)The American Psychoanalytic Association
6)California State University student essay
7)"Neurophysiology of dreams"
8)Slides on Physiology of Sleep and Dreaming


Neurobiofeedback: The Latest Treatment for Migrai
Name: Clare Smig
Date: 2003-02-25 00:38:53
Link to this Comment: 4805


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Headaches are among the most common health complaints today. According to the National Headache Foundation in Chicago, 45 million Americans suffer from recurring headaches—16 to 18 million of whom suffer from migraines (1). Migraines are vascular headaches because they involve the swelling of the brain's blood vessels (2). Migraine, contrary to popular belief, is a disease. If you suffer from migraines you might be used to people comparing your migraine to an ordinary headache or trying to blame these "headaches" on you and your lifestyle. However, migraines are caused by the expansion of blood vessels, whereas regular headaches are caused by the constriction of blood vessels. Although certain things such as harsh lighting, movement, or chocolate may trigger a migraine, the actual cause of this vessel swelling is unknown and may vary from person to person. Currently, there is no cure for migraine (3).

One theory as to the cause of migraines lies in excitement of the nervous system brought on by stress, anxiety, or some unknown factor (4). A more recent form of treatment known as neurobiofeedback works by allowing patients to train their brains to function in a more relaxed mental state. The success of this treatment may indicate that increased neuron activity is one of the more common causes of migraines. Neurobiofeedback has been identified as successful for migraines precipitated by PMS, food allergies, or stress. It is not clear exactly how food allergies are related to increased nerve activity. Stress, however, regardless of its type, seems to be strongly correlated with migraines, as it helps determine the severity of the headache. Neurobiofeedback goes to the root of this problem and, as a result, is one of the more preferred methods of treatment (5).

Biofeedback, in general, is a technique in which the body's responses to specific stimuli are measured in order to give patients knowledge about how they physically react to various events. In the case of headaches, patients can condition their mind or body to react differently to pre-headache symptoms and prevent a headache from occurring (1). Neurobiofeedback, or electroencephalogram (EEG) feedback, specifically measures brain wave activity and feeds patients' own brain wave patterns back to them so that they can modify these patterns through game-like computer simulations (6).

Why does this work? Brain waves are recordings of electrical changes in the brain. They originate from the cerebral cortex and are detected in the extracellular fluid of the brain in response to changes in potential among large groups of neurons. These changes are directly related to the degree of neuron activity, which is reflected in different wave types. Alpha and theta waves occur during more relaxed, meditative states; beta waves dominate during more active, stressful mental states (7). Previous studies have shown that migraines may result when neurons in the brain's cortex become overexcited (8), which would suggest the domination of beta waves. Neurobiofeedback therefore enables migraine sufferers to counteract these waves and teach their brains to function at more relaxed wavelengths. According to one neurobiofeedback specialist, each person has his or her own psychophysiological stress profile showing how much of the sympathetic and how much of the parasympathetic nervous system that person uses. Some people, even when they are idle or doing next to nothing, have their sympathetic system activated all of the time. This is the active, energy-expending branch of the nervous system, as opposed to the parasympathetic, which is the more relaxed, recovery-oriented, energy-storing system (6). It is therefore possible that migraine sufferers have their sympathetic nervous system "on" more than others do, and that this leads to the increase in beta waves observed; however, this would require further research. Furthermore, does nerve excitement lead to the swelling of the blood vessels seen in migraine sufferers? Continued research would be needed to show the correlation between these two aspects of migraines. Finally, there is still some question as to whether the things that trigger migraines—bright lights, harsh sounds, lack of sleep—increase neuron activity.
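A neurobiofeedback display has to decide, moment to moment, which wave class dominates the EEG. The conventional bands are defined by approximate frequency ranges (delta below ~4 Hz, theta ~4-8 Hz, alpha ~8-13 Hz, beta ~13-30 Hz; exact boundaries vary by source). The Python sketch below shows one minimal way to classify a signal segment by its dominant band; it illustrates the idea only, not any particular clinical system.

```python
import numpy as np

# Approximate conventional EEG bands in Hz (exact edges vary by source).
# A neurofeedback loop estimates the dominant band and rewards shifts
# toward the relaxed alpha/theta range described above.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def dominant_band(segment, fs):
    """Return the band holding the most spectral power in an EEG segment."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    power = np.abs(np.fft.rfft(segment)) ** 2
    band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
    return max(band_power, key=band_power.get)

# Synthetic test: a 10 Hz sine wave (alpha range), 2 seconds at 250 Hz.
fs = 250
t = np.arange(2 * fs) / fs
print(dominant_band(np.sin(2 * np.pi * 10 * t), fs))   # -> alpha
```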

What is clear, however, is that neurobiofeedback has been successful in treating migraines for many people (9). Many sites and articles report success with this treatment, yet the success rate is never 100%. For the remaining sufferers, then, even if the symptoms are the same, something else involved in the disease must be causing the severe headaches. It is possible that their migraines are explained by other proposed mechanisms, including instability of the vascular system, magnesium deficiency, estrogen level fluctuations, and blood platelet disorder (based on the notion that the platelets of migraine sufferers aggregate more readily than normal platelets in response to certain neurotransmitters). Hopefully, a growing understanding of this disease and the great variety of factors that seem to cause it will help those who treat it provide further relief to those who suffer from it.

References

1) Howe, Maggy. "Headaches." Country Living, March 1996, 82-84.

2)Nervous System Disorders

3)Migraines: Myth and Reality.

4)Causes of Migraine

5)Michigan Institute for Neurobiofeedback.

6)Biofeedback/Neurobiofeedback Therapy.

7)Brain Waves

8) Ferrari, Michel D. "Migraine." The Lancet. April 1998, 1043-1051.

9) EEG Spectrum International


Locked-In Syndrome: Giving Thought The Power Of Ac
Name: Priya Ragh
Date: 2003-02-25 00:53:25
Link to this Comment: 4806


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Imagine a world in which human communication is executed through the simplicity of thought. No muscle action - no nodding, smiling, slapping, pointing, speaking, or feeling...just the immobile and inconspicuous medium of thought. This is the world of a locked-in patient. In the locked-in condition, the patient's ability to move his or her limbs, neck, and even facial muscles comes to an abrupt halt. Messages issued by the brain no longer reach the muscles that would carry out its physical commands. Locked-in syndrome leaves the victim completely paralyzed, sparing only the eye muscles in most cases.

This disability is most commonly caused by lesions in the nerve centers that control muscle contraction, or by a blood clot that blocks the circulation of oxygen to the brain stem. Brain-stem strokes, accidents, severe spinal-cord injuries, and neurological diseases are other main causes of the syndrome (5). Axons that carry brain signals leave the larger motor areas on the surface of the brain and direct their signals toward the brain stem. It is here that they converge, linking with one another to form a tightly packed bundle called the motor tract. The brain-stem motor tract is extremely sensitive; thus even the slightest impact of a stroke can destroy the axon bundles, resulting in total paralysis (1). For a locked-in patient, depending on the severity of the stroke, the sensory tracts may or may not be affected. These tracts also form axon bundles and govern the perception of touch and pressure.

What is interesting is that while total paralysis of the external body is a likely outcome, the eye muscles and brain functioning remain intact and undamaged in most cases. It is rather remarkable to see the wonders of human life: a person in a locked-in state is often mistaken for being in a vegetative state, while in reality, barring very serious stroke effects, locked-in patients are regular people who hear, understand, and think as coherently as, if not better than, the next person.

The story of Jean-Dominique Bauby is not only inspirational but has motivated scientists around the world to develop technological means of broadening brain communication. Bauby, the 42-year-old editor-in-chief of a French magazine, suffered a severe brain-stem stroke in 1995. His body was instantly paralyzed, leaving him with only one functioning eye muscle. Bauby is praised to this day for his ability to invent life for himself in the most appalling of circumstances. In fact, he astounded the international community when he came out with his own book, "The Diving Bell and the Butterfly" (2).

In this book, Bauby describes the many medical staff and nurses who ignore or overlook the feelings and views of a patient whose only fault is the loss of physical and verbal communication. A strong prognosis needs to be recognized and respected if the rich mind trapped in a locked-in body is to be acknowledged. Bauby asks in his book, "How many people in this state in enclosed half-lit hospital beds are never stimulated, never recognised as people with intact minds who can still contribute if only their talents are acknowledged?" Roger Rees of Flinders University in Adelaide agrees: "Can we make the quantum leap from befuddled locked-in thinking, to behaviour which acknowledges that while their physical world may be closed forever, their inner world is full of rich memories and imaginings whose journey can still be nourished and enhanced (3)?"

Present in this world, in somewhat angelic disguise, are doctors like Birbaumer. In 1995, Birbaumer, a German scientist, received a $1.5 million Leibniz Prize and focused on training paralyzed people to write out their thoughts with their brains. How this is possible is a testament to the knowledge and dedication of scholars in the fields of science and technology. To establish a tool for communication between the brain and the scientist, one must first fill in the missing pieces of the puzzle. Under normal circumstances, a series of electrical impulses transmitted by brain cells flows along the nerves and triggers the release of chemicals, which signal the muscles to flex and contract. Locked-in muscles do not receive any of these chemical messages (2).

Today's science, or rather tomorrow's cure, has shown the significance of computer technology. By implanting glass electrodes containing growth-accelerating molecules into the brains of locked-in patients, it is now possible for brain cells to grow into the electrode, which then detects and transmits brain signals to a computer. The computer translates the signals into movements on a screen that spell out the thoughts of the patient. All of this is done by the powers of brain and mind alone; none of it involves the body (4).
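To make the translation step concrete, here is a minimal sketch, in Python, of one way a single thresholded brain signal could drive letter selection. This is an illustration only, not the actual system used in the research described above; the alphabet, threshold value, and simulated signal amplitudes are all assumptions.

# Minimal "binary speller" sketch: one brain-derived signal (faked here
# as a list of amplitudes) repeatedly halves the alphabet until a
# single letter remains. All values are illustrative.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ _"

def select_letter(amplitudes, threshold=0.5):
    # Each amplitude is one trial: above threshold keeps the first half
    # of the remaining letters, below threshold keeps the second half.
    letters = ALPHABET
    for amp in amplitudes:
        mid = len(letters) // 2
        letters = letters[:mid] if amp >= threshold else letters[mid:]
        if len(letters) == 1:
            break
    return letters[0]

# Five yes/no trials are enough to pick one of 28 symbols (2**5 = 32).
trials = [0.8, 0.2, 0.9, 0.1, 0.7]  # simulated signal shifts
print(select_letter(trials))        # prints "I"

The point of the sketch is that a patient who can reliably push one signal above or below a threshold can, however slowly, spell out anything at all.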

The above experiment is fascinating but at the same time a little frightening. Looking into the future of genetic engineering and biotechnology, we must be certain to honor our ethical responsibilities. Abusing science to satisfy one's political or personal appetite is disrespectful and dangerous to the stability of peace. Today it is about reading people's minds for them; what might tomorrow hold? As long as research is done with the intention of bringing hope and light into a person's life, I will continue to admire in awe the achievements of science today.

References


1) National Institutes of Health page, excellent for science and health information.

2) Ian Parker, "Reading Minds": The New Yorker, January 20, 2003.

3) ABC home page, good website.

4) Cleveland Clinic website, vast research material.

5) National Institutes of Health, one of my favorite websites for the field of health.


The "Mozart Effect"- Real or just a hoax?
Name: Grace Shin
Date: 2003-02-25 02:37:07
Link to this Comment: 4810


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In 1998, Zell Miller, the governor of the state of Georgia, started a new program that distributed free CDs of classical music to the parents of every newborn baby in Georgia. Why did he do this? He certainly was not just trying to be nice or to make a political statement; rather, his idea came from a new line of research suggesting a link between listening to classical music and enhanced brain development in infants. (1) So what evidence was there for this governor to make a $105,000 proposal to give classical music to newborn babies? I considered how my sister and I had taken music lessons in the Suzuki method since we were 7 or 8 years old, and how my mother always had classical music playing in the house. My mother was convinced that musical ability would not only help us become more well-rounded people but also smarter individuals. Did all those years of piano lessons really pay off? Did all that money spent on buying classical music and attending classical concerts really play a role in determining where my sister and I are in our education?

Prior to this initiative by Gov. Miller, many researchers had probed the question of whether classical music enhances intelligence. One of the initial experiments used musical selections by Wolfgang Amadeus Mozart to test for improvements in memory, and the idea thus became known as the "Mozart Effect". The original experiment was published in 1993 by scientists at the University of California at Irvine. College students were required to listen to ten minutes of Mozart's Sonata for Two Pianos in D major, a relaxation tape, or silence. Immediately after listening to these selections, they took a spatial reasoning test (from the Stanford-Binet intelligence scale). The results showed that the students' scores improved after listening to the Mozart selection. (2) However, it was soon found that these results lasted only 10-15 minutes, and more research was quickly undertaken by various groups all over the world.

However, before continuing this search into the validity of the Mozart effect, I feel there needs to be a definition of "intelligence". Since we learned in class how tricky the notions behind words can be for our minds, I took the liberty of searching for how intelligence is measured and decided to adopt the definition used by Wilfried Gruhn: intelligence is associated with cognitive and intellectual capacity, the psychobiological potential to solve problems. This capacity correlates with higher speed in neuronal signal transmission and signal processing, stronger and more efficient neuronal interconnectivity, and a high correlation between IQ and neuronal activity (r = 0.50 - 0.70). (3)

Now, if brain = behavior, as we are learning in class (4), then there should indeed be a measurable correlation between musically trained minds and their intelligence. In addition, there must be one or more parts of the brain responsible for both of these two "behaviors" of being "musically inclined" and "intelligent". And since the measure of intelligence is defined mainly in terms of brain activity, I delved into the "effects of music on the brain" idea that I had always accepted in my childhood.

Assuming that brain = behavior, I wanted to find out if music really did affect the brain itself. If musically trained people were collected what would be the difference in their brains? One line of research found evidence that music training "beefs up brain circuitry". In a study conducted by the Society for Neuroscience, it was found that several brain areas such as the primary motor cortex and the cerebellum, which are involved in movement and coordination, are larger in adult musicians than in non-musicians. Another example given was that the corpus callosum, which connects the two sides of the brain, was larger in adult musicians. A third example is the auditory cortex, which is responsible for bringing music and speech into conscious experience, was also larger. (5)

A few other studies suggest that "music alone does have a modest brain effect." (5) One study showed that listening to the intricacies of Mozart pieces did raise college students' spatial skills. Rats were also tested, and it was found that they were able to complete a maze more rapidly and with fewer errors when exposed to Mozart tunes. As the research continued, even the earlier studies seemed to show that the brain changes associated with musicians enhance mental functions such as the ability to read and write music, keep tempo, and memorize pieces, as well as functions not associated with music, like word memory tests. Of the adults tested in these memory tests, musicians scored much higher than non-musicians did.

The research continued with preschoolers, who were given piano lessons for about six months, and then tested in their puzzle-solving abilities. The experiment found that the preschoolers who had lessons performed better than those who did not have lessons. This was further supported when a group looked at the effects of music lessons on spatial reasoning and found that after 8 months, the children who took piano lessons, when tested on their ability to put puzzles together (spatial-temporal reasoning) and to recognize shapes (spatial-recognition reasoning), had improvements in the spatial-temporal test. Even when the children were tested one day after their last keyboard lesson, they still showed this improvement. (6)

Another study done with children tested second-graders who took piano lessons on their ability to play special math games. Again, it was found that they scored higher than other second-graders who took other lessons, such as English. The study continued to follow their performance into fourth grade and found that they were better able to understand concepts such as fractions, ratios, symmetry, and graphs. (5)

In contrast, many laboratories have tried to use the music of Mozart to improve memory but were unable to find similar supporting results. One group of scientists used a test in which students listened to a list of numbers and then repeated them backwards, known as a backwards digit span test. Listening to Mozart before this test had no effect on the students. Other researchers have said that the original work on the Mozart Effect was flawed because only a few students were tested, and it was possible that listening to Mozart really did not improve memory; they suggested instead that the relaxation tape and silence had impaired it. In 1999, Dr. Kenneth Steele reported that when his group used the exact procedures from the original experiment, the results did not give conclusive evidence of the Mozart effect. (7) Because there is evidence both supporting and contradicting the Mozart effect, I soon came to realize that it will be fascinating to see where this all leads. Because so many people are caught up with the idea of wanting to be intelligent, by the time I have children there will definitely be more information available to parents who want "intelligent" children!

Continuing in my search for the correlation between music and intelligence, I found an interesting direction that these studies are taking, which is the chance of musical training becoming a possible treatment of brain damage. (5) Music therapies are being used in many different clinics for various behavioral and neurological problems such as depression, autism, and aphasia.

An example of music affecting brain damage is the report by Berlin et al., who studied seven patients with aphasia using Melodic Intonation Therapy (MIT). MIT consists of speaking in a musical manner characterized by strong melodic (two notes, high and low) and temporal (two durations, long and short) components. Evaluating the effects of MIT on the brain by measuring relative cerebral blood flow with PET scanning during listening, they saw that "MIT-loaded" words recovered speech capabilities that were thought to be lost. Going back to the brain = behavior idea, this experiment further found that critical brain regions involved in language and speech, like Broca's Area in the left hemisphere, were activated by "MIT-loaded" words and not by regular words. (8)

Another example involves a study reported in 1996/7 in Lost & Found, an online journal on the autism web site, where procedures like Auditory Integration Training (AIT) are showing great potential for benefiting the growth and development of various special-needs children. In this report, the possible effects of AIT were modeled in newly born domestic chicks. After exposure to an AIT-type program for ten days, the neurochemical changes in the brain were studied. Repeatable data were collected, and the results showed that music had an incredible effect on the chicks. Four groups of chicks were exposed to the following auditory conditions: 1- musical treatment with AIT-type modulated music, 2- musical treatment without modulated music, 3- verbal treatment with the human voice, and 4- control with no treatment. After 10 days of these conditions, and two days following the termination of the treatment, chemicals in the chicks' brains were measured. Concentrations of norepinephrine (NE) and its principal metabolite MHPG were elevated in the groups with the music treatment, and more so in the group with the AIT-type modulated music. Though the relationship of these observations to humans is not yet fully known, the results are quite promising. NE is known to be related to attention processes, and these observations resemble the type of brain change thought to mediate the antidepressant effects of tricyclic drugs like desipramine. (9)

In conclusion, I do not believe there is conclusive evidence for the Mozart effect. However, even though we cannot reach a conclusion from these experiments alone, there is a vast amount of data on the web for the public to probe further for themselves. Thus far we are torn between conflicting results, and many parents are still being misled by various advertisers who say that classical music will enhance children's learning capabilities. Still, I think it is safe to say that there is some evidence that the brain is indeed affected somehow by music, and that music lessons cannot hurt the growing stages of a child. It will be very exciting to see where the research regarding treatments of brain damage is headed, despite my unwillingness to draw any conclusions as to whether these observations on the brain necessarily enhance intelligence.


References

1)CNN News web site, Article under Entertainment.

2)Entrez-PubMed, National Library of Medicine's search engine for literature.

3)Wilfried Gruhn - Brain Research - Forschung und Lehre, Article published in European Music Journal.

4)Biology 202 Lecture Notes , on the Serendip web site.

5)Music Training and the Brain, in the Society for Neuroscience web site.

6) M.I.N.D. Institute's site for research papers.

7)Music and Memory and Intelligence.

8)The Effects of Music and the Brain, web site on the value of music education.

9)Therapeutic Options: Effects of Music on the Brain, Lost & Found, Journal on Autism.


Predestined Serial Killers
Name: Annabella
Date: 2003-02-25 02:58:00
Link to this Comment: 4811


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

serial killer: a person who commits a series of murders, often with no apparent motive and usually following a similar, characteristic pattern of behavior. (8)

As participants in today's information-obsessed society, we are constantly being bombarded with the brutal actions mankind is capable of. One watches the news and hears about a murder, or reads a book about a mysterious killer. The only time the New York Daily News has ever outsold the New York Times was when its headline claimed to reveal the letters proving the 'real' identity of Jack the Ripper. As one wades through these bits and pieces of reality, one can't help but be struck by the thought: what causes a person to actively commit such horrendous acts? Many different studies have been done in hopes of finding an answer. For a crime such as serial killing there are two main schools of thought. The first idea is that serial killing is caused by an abnormality in the frontal lobe region of the brain. Another theory is that serial killers are bred by circumstance. However, I believe that with some analysis the evidence for both theories can serve to prove that serial killers are genetically different, thus demonstrating that serial killing can find its origins in genetics.

A startling number of criminals on death row have been clinically diagnosed with brain disorders. A recent study demonstrated that 20 out of 31 confessed killers were diagnosed as mentally ill; of those 20, 64% had frontal lobe abnormalities. (1) A thorough study of the profiles of many serial killers shows that many of them suffered severe head injuries (to the frontal lobe) when they were children. To discover why damage to the frontal lobe could be a cause of serial killing, one must look at the function of the frontal lobe of the brain.

The frontal lobe is located in the most anterior part of the brain hemispheres. It is considered responsible for much of the behavior that makes stable and adequate social relations possible. Self-control, planning, judgment, the balance of individual versus social needs, and many other essential functions underlying effective social intercourse are mediated by the frontal structures of the brain. (3) Antonio and Hanna Damasio, two noted Portuguese neurologists and researchers working at the University of Iowa, have spent the last decade investigating the neurological basis of psychopathy. They have shown that individuals who had undergone damage to the ventromedial frontal cortex (and who had normal personalities before the damage) developed abnormal social conduct, leading to negative personal consequences. Among other things, they showed inadequate decision-making and planning abilities, which are known to be processed by the frontal lobe of the brain. (5) For a long time now, neuroscientists have known that lesions to this part of the brain lead to severe deficits in all these behaviors. The inordinate use of prefrontal lobotomy as a therapeutic tool for many mental diseases in the 40s and 50s provided researchers with enough data to implicate the frontal brain in the genesis of dissocial and antisocial personalities. (6) From this information one can posit that the frontal lobe acts as a conscience.

An example of a serial killer who suffered severe injury to his frontal lobe is Albert Fish, better known as the Brooklyn Vampire. At the age of seven he had a severe fall from a cherry tree, causing a head injury that left him with permanent problems such as headaches and dizzy spells. (2) After his fall he began to display many violent tendencies, including an interest in sadomasochistic activities. At the age of twenty he killed his first victim, a twelve-year-old neighbor by the name of Gracie, whom he cannibalized. (2) It would appear that a damaged frontal lobe is a prime suspect in the causation of serial killing. However, if you look closer, this argument flows over to prove that genetics is the main cause of serial killing. Could it be the case that we are all genetically encoded to become serial killers, and the frontal lobe is the proverbial muzzle that stops us from acting on our tendencies? All of us have at one point or another imagined killing a person because they displayed tendencies we did not like. None of us went out and killed them in a ritualized manner.

The idea that the frontal lobe is a stop button, or acts as a conscience that keeps a 'normal' human from acting on their more violent tendencies, is not a new one. Evolutionary biologists point out that the frontal lobe evolved in tandem with the evolution of man from a beast to a purveyor of civilization. (3) The trademark of all social primates is a highly developed frontal brain, and human beings have the largest one of all. It is a simple jump in reasoning to understand that, like a dog without its muzzle, the human animal without a frontal lobe is more likely to 'bite.'

Brain damage cannot be the motivator for all serial killing: 46% of all confessed serial killers have no frontal or general brain damage. The majority admit that they were perfectly aware of what they were doing before, during, and after the crime. Some even confess that they knew what they were doing was wrong and contemplated 'giving up' after the first time. The thrill derived from murder is a temporary fix. Like any other powerful narcotic, homicidal violence satisfies the senses for a time, but the effect soon fades. And when it does, a predator goes hunting. (1) Carroll Edward Cole commented that after his first kill, "I thought my life was going to improve, I was sadly mistaken. Neither at home or at school. I was getting meaner and meaner, fighting all the time in a way to hurt or maim, and my thoughts were not the ideas of an innocent child, believe me." By the end of his career Cole had killed so many people that he could not even remember the names of his victims. At one point during his testimony he remembered a new victim: "This one is almost a complete blank," Cole said. He didn't know the woman's name, but he remembered finding pieces of her body scattered from the bathroom to the kitchen of his small apartment. "Evidently I had done some cooking the night before," he testified. "There was some meat on the stove in a frying pan and part that I hadn't eaten on a plate, on the table." (1) Carroll Edward Cole had no mental illnesses and was aware that he was committing morally abhorrent acts.

Circumstance is another proposed cause of serial killing. As the study of serial killers' profiles progresses, many similarities in their pasts and in their recurring actions become eerily apparent. As children, many suffered through traumatic childhoods, usually being physically or mentally abused; strikingly, in the reported cases the abuse committed against future serial killers was done by their mothers. Most serial killers kill either for personal gain (revenge, money, etc.) or pleasure (usually sexual). Many serial killers have above-average intelligence and are well employed in high-wage jobs (engineers, doctors, lawyers, etc.). The majority came from middle- to high-income families. They are usually handsome or beautiful, very articulate, and sexually promiscuous. (1) As killers they have a specific trait that they look for in the victims they ritually kill. In interviews, many serial killers admitted that the people they killed fulfilled some aspect of a person who had abused or taunted them. Carroll Edward Cole killed little boys who reminded him of a schoolyard bully who taunted him for having a girl's name. He later expanded his modus operandi to include young brown-haired females who looked like his abusive mother. Under oath, he told a story of childhood abuse inflicted by his sadistic, adulterous mother, giving rise to a morbid obsession with women who betrayed their husbands or lovers. "I think," he told the jury, "I've been killing her through them." (1)

Do these disquieting threads of commonality among serial killers make a strong case for the idea that serial killers are created by circumstance? It would appear so, until you take into consideration the distribution curve. If serial killers give the impression of being linked by similar circumstance, it is because the genetic traits of serial killing are all found at one end of the distribution curve (which by chance could be the same end at which a higher probability of certain circumstances occurs). The normal distribution curve is one of the most commonly observed and is the starting point for modeling many natural processes. (4) There are also cases of serial killers that break the thread of commonality. For example, one pair of killers killed as a form of income: in the 1820s William Burke and William Hare murdered over a dozen people and sold their corpses to medical schools so that medical students could learn to dissect cadavers. (1) If there is no commonality of experience between serial killers, then serial killing must be genetic.
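As a toy illustration of this tail-of-the-curve reasoning, the Python sketch below samples a hypothetical, normally distributed trait score and counts how few people land beyond an extreme cutoff. The trait, its scale, and the cutoff are invented purely for illustration; nothing here is measured data.

# Under a normal curve, only a tiny fraction of a population falls
# beyond an extreme cutoff; individuals clustered out there will share
# whatever else happens to co-occur in that tail.
import random

random.seed(0)
scores = [random.gauss(0, 1) for _ in range(1_000_000)]  # hypothetical trait
cutoff = 4.0  # four standard deviations above the mean
extreme = sum(1 for s in scores if s > cutoff)
print(f"{extreme} of {len(scores):,} simulated people exceed the cutoff")
# Under a true normal curve, roughly 3 in 100,000 exceed 4 sigma.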

An interesting idea comes forward when discussing the commonality of serial killers. Many genetic diseases are carried by a female and become active in her male offspring. For example, the most common form of color blindness affects about 7 percent of men and less than 1 percent of women; it is identified as a sex-linked hereditary characteristic, passed from mother to son. (7) Can the same be said for serial killing? It is observed that the majority of serial killers were abused by their mothers. Following this with the thought that serial killing is caused by genetics, could the mother be passing the genetic trait down to her son? This would account for serial killers being predominantly male.

In this paper the question has been raised: are serial killers genetically different? This question spawned the idea that serial killing is genetic. To argue this, the two main arguments against genetics were examined and countered. The first argument proposed that serial killing is caused by mental illness, or more specifically by trauma to the frontal lobe. Looked at closely, this argument actually turns out to support the idea that serial killing can be genetic: it generates the idea that we are all genetically encoded to become serial killers, and the frontal lobe (acting as a conscience) stops us from doing so. The argument does not refute the idea that serial killing is genetic. The next major argument for the causation of serial killing supposes that a serial killer is created by circumstance. To oppose this, one can simply say that the apparent commonality of experience among serial killers only demonstrates that, since serial killers are genetically different, they generally fall at one end of the distribution curve. And if there is no commonality of experience, then serial killing must be genetically caused. Maybe our founding fathers' claims of manifest destiny and predestination were accurate after all.

Web References

1)Crime Library, This web site houses the profiles of many serial killers; it also gives detailed case histories and modus operandi. It specializes in hard-to-find serial killer cases, especially older ones.
2)AtoZ Serial Killers, Here you will find a psychological profile of serial killers. Has photos, and sketches of the serial killers. Gives profiles of modern, and operating killers.
3)Frontal Lobe Abnormalities, This site provides a very good case for abnormalities in the frontal lobe. It specializes in brain abnormalities, and how that affects behavior.
4) Distribution Curve, Here you will find a really simple description of the distribution curve.
5)A Study of Brain Physiology ,This site focuses on the physiology of the brain.
6)History of Psychosurgery, This site looks at the different treatments for brain diseases. It is a historical look at psychosurgery and the effects that it has on the brain.
7)A History of Color Blindness, A basic history of color blindness, and its genetic origins.
8)Oxford English Dictionary, This is the site that hosts the complete Oxford English Dictionary.


Ecstasy, the Brain, and the Media
Name: Kathleen F
Date: 2003-02-25 03:26:38
Link to this Comment: 4812


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Ecstasy has been glorified by countless Brit-pop drug anthems, condemned by staunch anti-drug foundations, and even caused a controversial media debate when the post-mortem picture of eighteen-year-old Lorna Spinks was splashed across every newspaper in the United Kingdom, her Ecstasy-related death rendered in full gruesome color. The long-term effects and temporary consequences of Ecstasy have been a subject of heated debate over the past ten years as the pill has seen a surge in popularity. What exactly does Ecstasy do to the brain? What creates the euphoric effects? Why has it been used in therapy? And does the media's portrayal of Ecstasy rely on the facts of the drug, or skew the information to instill a sense of fear into citizens, parents, and teenagers?

Ecstasy (methylenedioxymethamphetamine, MDMA for short) is a synthetic, psychoactive drug with amphetamine-like and hallucinogenic properties. It shares a chemical structure with methamphetamine, mescaline, and methylenedioxyamphetamine (MDA), drugs known to cause brain damage (1). MDMA, in a simple explanation, works by interfering with the communication system between neurons. Serotonin is one of a group of neurotransmitters that carry out communication between the body and the brain. The message molecules travel from neuron to neuron, attaching to receptor sites. This communication activates signals that either allow the message to be passed on or prevent it from being sent to other cells. When MDMA enters the nervous system, however, it interferes with this system. After serotonin is released, the neurotransmitter molecules are normally retrieved into the nerve terminal, where they are recycled. MDMA hinders this process so that the serotonin is not drawn back in, allowing serotonin to accumulate in the synapse (2). This surge of serotonin creates an emotional openness in the Ecstasy user. A sense of euphoria and ecstatic delight envelops the user. Some users report thinking clearly and objectively, and often claim to come to terms with personal problems or various other skeletons in the closet (3).
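A toy difference-equation model makes the reuptake story concrete. In the Python sketch below, a fixed amount of serotonin is released into the synapse each time step and a fraction is drawn back into the terminal; shrinking that reuptake fraction, as MDMA is described as doing above, makes the synaptic level climb. All the rates are invented for illustration, not measured values.

# Toy model: each step, serotonin is released into the synapse and a
# fraction of what is there is recycled back into the nerve terminal.
def synaptic_level(release=1.0, reuptake_fraction=0.5, steps=10):
    level = 0.0
    for _ in range(steps):
        level += release                    # release into the synapse
        level -= reuptake_fraction * level  # portion drawn back in
    return level

print(f"normal reuptake:  {synaptic_level(reuptake_fraction=0.5):.2f}")   # ~1.0
print(f"blocked reuptake: {synaptic_level(reuptake_fraction=0.05):.2f}")  # ~7.6

With reuptake intact, the level settles near a modest equilibrium; with reuptake mostly blocked, serotonin piles up in the synapse, which is the accumulation described above.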

This is the reason Ecstasy resurfaced in the 1980s (after being developed in Germany in 1912 as a dieting drug, amphetamines being appetite suppressants) as a tool in experimental psychotherapy, particularly regarding relationship and marital problems (4). In 1984 the drug was declared illegal in the United States after it started being used for recreational purposes. However, in June of 1999, Swiss courts ruled that dealing Ecstasy is not a serious offence. Ecstasy remains illegal in Switzerland but is now regarded as a "soft drug" rather than a hard one. The Swiss courts stated that Ecstasy "cannot be said to pose a serious risk to physical and mental health and does not generally lead to criminal behavior." Switzerland already had quite a liberal background regarding the use of Ecstasy: between 1988 and 1993, long after psychotherapeutic Ecstasy use had been curtailed in the US, therapists in private practice were permitted to use Ecstasy within psychotherapy (5). Shockingly enough, this past November the Food and Drug Administration approved the first-ever U.S. study of Ecstasy as helpful medicine, claiming that previous testing had focused not on the drug's benefits but on issues of toxicity (6).

However, the use of Ecstasy in psychotherapy is still hazy. If MDMA merely ups serotonin levels, and serotonin is responsible for making the individual feel just plain Good, is this really solving marital problems? If brain=behavior, and the brain is regulating the behavior through an artificial source, is this really genuinely therapeutic? Or is it a trick of the brain, flooding itself with neurotransmitters and depleting the euphoric sensation as levels return to normal in the next few days?

The idea of serotonin levels returning to normal in "a few days" seems idealistic as well. What are the long-term effects of Ecstasy use? Serotonin absorption in the blood is replenished within 24 hours, but serotonin tissue levels are found to decline for weeks after Ecstasy usage. The receptors and neurons within the system are also found to degenerate over time. Animal studies reviewed by Dr. Karl Jansen concluded that MDMA is far more toxic in primates: the loss of serotonin in rats was four times less than in monkeys. Jansen declared the drug potentially hazardous for human use and drew comparisons between serotonergic axons damaged by Ecstasy and those seen in Alzheimer's disease, where the most common receptor modification is a loss of presynaptic serotonergic receptors (7).

Finally, what to make of the highly publicized deaths the media claims were "Ecstasy-caused"? In 1995 the English media championed the tragic story of teenaged Leah Betts, plastering the country with billboards claiming that "Just One E Tablet Killed Leah Betts." The country went ballistic over the story, but the coverage failed to mention that Betts had not died from the Ecstasy itself, but from brain swelling due to drinking too much water. The media opted to keep the message straightforward: Leah died from taking a single E pill. To say she died from excessive water consumption is not a good story (8).

I spent last year living in London. One day I looked out my window to see a huge billboard covered in deviant-looking pills with the caption "Which one is the killer? 1 in 100 E tabs can kill you." The figure seemed strikingly high to me, especially because studies done previously in Britain had reported well over a million regular Ecstasy users in England alone, while the number of deaths related to Ecstasy in the UK is about seven per year. The numbers didn't match up. If a million users are assumed, seven deaths a year works out to a risk of about 1 in 143,000, implying the risk is "minimal" according to the British Government Risk Classification developed by Sir Kenneth Calman, Government Chief Medical Officer. This means a citizen has a greater chance of dying in a fishing accident than from taking an E pill (9).
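The arithmetic behind that comparison is simple enough to check directly; the Python sketch below just divides the quoted user estimate by the quoted annual death count.

# Risk arithmetic using the figures quoted above: about one million
# regular users and roughly seven Ecstasy-related deaths per year.
users = 1_000_000
deaths_per_year = 7
print(f"annual risk: about 1 in {users / deaths_per_year:,.0f}")  # ~1 in 142,857

The billboard's implied rate of 1 in 100 is more than a thousand times higher than this back-of-the-envelope estimate.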

Perhaps one of the biggest problems with Ecstasy is the aura of mystery surrounding it and the media hype of "the killer pill." One ray of light surrounding the tragic E-related deaths of young people in Britain (and other countries around the globe) is the November decision by the FDA to delve into the effects of Ecstasy. The only way these deaths will not be in vain is if more education and knowledge of MDMA and Ecstasy is brought to light.

References

1) National Institute on Drug Abuse

2) Serotonin and the Brain

3) Erowid.org, the Sensory Effects of MDMA

4) Therapeutic Uses of Ecstasy

5) Switzerland and Ecstasy

6) The Return of using Ecstasy for Therapy?

7) Adverse Psychological Effects of Ecstasy use and Their Treatment

8) The Leah Betts story

9) British Government Risk Classification


Attention Deficit Hyperactivity Disorder
Name: Nupur Chau
Date: 2003-02-25 04:05:45
Link to this Comment: 4813


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


According to the National Institute of Mental Health, Attention Deficit Hyperactivity Disorder, or ADHD, is the most commonly diagnosed disorder among children (1). The disorder affects approximately 3-5 percent of school-age children (1), with each classroom in the United States having at least one child with the disorder (1). Despite its frequency in the United States, many discrepancies still surround the disorder, from its diagnosis and frequent misdiagnosis to the question of whether ADHD is an actual medical condition or just a "cultural disease" (3).

According to the NIMH, frequent symptoms of ADHD include inattention, hyperactivity, and impulsivity (1). Examples of these three patterns of behavior can be found in the Diagnostic and Statistical Manual of Mental Disorders, which, summarized by the National Institute of Mental Health, states that signs of inattention include

* Distraction by "irrelevant sights and sounds" (2)
* Failure of attention towards details, resulting in "careless mistakes" (2)
* Frequently failing to follow instructions
* Losing or forgetting things used for tasks, such as toys or pencils (2).

Similarly, signs of hyperactivity and impulsivity include

* A restless feeling, shown through frequent fidgeting or squirming
* Failure to sit or maintain quiet behavior
* "Blurting out answers before hearing the whole question" (2)
* Difficulty in waiting in line or for their turn (2).

The Diagnostic and Statistical Manual of Mental Disorders also mentions that these behaviors must appear before the age of seven (2), and continue for at least six months (2). Most importantly, these patterns of inattention, hyperactivity, and impulsivity must affect at least two areas in the person's life, whether it be school, work, home, or social settings (2).
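Purely as an illustration of the logic of these criteria, and emphatically not as a clinical instrument, the rule summarized above can be encoded as a small Python check; the function name and inputs are inventions for this sketch.

# Naive encoding of the diagnostic rule summarized above: onset before
# age 7, symptoms persisting at least six months, and impairment in at
# least two settings. Illustrative only.
def meets_adhd_rule(onset_age, duration_months, affected_settings):
    return (onset_age < 7
            and duration_months >= 6
            and len(affected_settings) >= 2)

print(meets_adhd_rule(5, 8, {"school", "home"}))  # True
print(meets_adhd_rule(9, 8, {"school", "home"}))  # False: onset too late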

However, the National Institute of Mental Health also states that during certain stages of a child's development, children are generally inattentive, hyperactive, and impulsive (2). In other words, at certain stages, children may have the symptoms of ADHD, without actually having ADHD. Similarly, many sleep disorders among children create symptoms that are similar to symptoms associated with ADHD (3).

These discrepancies result in a high risk of misdiagnosis of ADHD: "ADHD is, after all, merely a collection of behaviors. There's no virus to look for, no blood test to administer" (3). The diagnosis of ADHD is based purely on observations and anecdotes, both of which have a higher chance of being wrong than something like a blood test or CAT scan. Additionally, the interpretation of the behaviors can differ between "experts". Seeing as there is no "real" scientific process, many of the observations are open to interpretation, and thus to different diagnoses.

It is because there is no "real" scientific process for diagnosing ADHD that many people think of it as a "cultural disease" (3) rather than a medical condition. Others view it as a way to excuse rowdy kids whose parents fail to discipline them. Regardless of one's beliefs, the American Academy of Pediatrics states that between 4 and 13 percent of children aged 6 to 12 are affected by the disorder, whatever the disorder may be.


References

1)ADHD-Questions and Answers

2)ADHD

3)Parenting-Hyperactivity Hype?


The Controversy over Stem Cells and Parkinson's Disease
Name: Melissa Os
Date: 2003-02-25 04:24:45
Link to this Comment: 4814


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


Without any thought, without even noticing it happens, when one has an itch, one scratches it. The arm moves up to the face, the fingers reach down and move across the skin. This series of actions, which many of us perform every day, is something individuals with Parkinson's disease struggle with every moment of their lives. Simple movements are replaced by frozen limbs that they and their nervous systems cannot move. Described by many as a type of momentary paralysis, the disease causes gradual degeneration in patients until they are no longer able to perform the most basic bodily functions, such as swallowing or blinking.

Parkinson's disease is a neurological disorder named after "the English physician who first described it fully in 1817" (4). The disease causes disturbances in motor function, resulting in patients having trouble moving. Other characteristics, not always present in every patient, are tremors and stiffening of the limbs. All of these characteristics of the disease are caused by "degeneration of a group of nerve cells deep within the center of the brain in an area called the substantia nigra" (5). Dopamine is the neurotransmitter these cells use to signal other nerve cells. As the cluster of nerve cells fails to operate, dopamine cannot reach the areas of the brain that govern motor function (5). On average, Parkinson's patients have "less than half as much dopamine in their systems as healthy people do" (8). The problem and controversy that arise from this disease lie in the cure. Researchers have for years been attempting to unravel the mystery of what causes Parkinson's disease and how it can be treated or cured. While many medications have been developed, the most controversial treatment is the use of stem cells, which has raised debate not only in the scientific realm but in political and religious circles as well.

In the past twenty years, many drugs have been developed to treat the disease. Although the cause of Parkinson's disease is still unknown, scientists have been developing methods of treatment and therapy. The idea is to replace dopamine in the brain, which is accomplished, to some extent, with the administration of L-Dopa. Drugs given in conjunction with L-Dopa "inhibit the enzymes that break down L-dopa in the liver, thus making a greater part of it available to the brain" (5). This treatment is very successful, but it only slows the disease for a time and is by no means a cure. That leaves us with stem cells and the role they play in the treatment of Parkinson's disease.

There are many different types of stem cells, which can be implanted in patients to regenerate or replace the damaged or abnormal cells caused not only by diseases like Parkinson's but also by Alzheimer's and spinal cord injuries (2). A specific example in relation to Parkinson's is the harvesting of embryonic stem cells, which can be transplanted into the brain to replace and create dopamine neurons. The controversy is in how one obtains these stem cells. Early in human development, the embryo is a hollow ball containing cells that eventually develop into a fetus (1). Researchers discovered, as recently as 1998, that these embryonic cells can give rise to all tissue types and can therefore become any cell in the body. Thus stem cells can be transplanted into patients with diseases involving cell abnormality (1).

Many religious groups argue that stem cell research should be discontinued by the federal government because the killing of the embryos is just as heinous as abortion. Stem cells are extracted from a few different sources: surplus embryos created for infertile couples, umbilical cords, and elective abortions. For pro-lifers, the embryos used to extract the stem cells are equal to human lives being destroyed (6). The other side of the argument is that embryonic cells are not living; they do not have a soul. Much of this debate is rooted in, and overlaps with, the debate over whether abortion of any form should be lawful. Scientists have also argued that many of the embryos used would have been destroyed at some point anyway, so why not use them to further the good of the human race. Consequently, political debate over whether federal funds should be used to support further research on stem cell transplantation continues to this day. An example of the relevance and immediacy of this issue is the speech President Bush gave in the summer of 2001, in which he approved government funding for research on 64 human embryonic stem cell lines (7) (1).

These arguments are all valid and much debated; however, there are other, less explored reasons some feel stem cell research should not be pursued. Much of the time when stem cells are inserted into the brain, "they grow into every cell type and form tumor-like masses called teratomas, eventually killing their hosts" (1). The problem is not that stem cells fail to become dopamine neurons; the failure is in controlling the growth of these cells. While there has been research demonstrating successful control of stem cell growth, the success rate is small and inconsistent (1). Therefore, at this stage it is still too early to attempt these transplantations of stem cells in human patients. Nevertheless, these studies represent a ray of light and hope for Parkinson's patients, who could benefit greatly from the transplantation of stem cells into the brain and the resulting production of dopamine neurons.

There are no easy answers when it comes to a cure or a treatment for Parkinson's patients. The most promising cure lies in stem cells. The research and implantation of these cells raises debate not only with religious groups but in the political realm as well. These debates all hold merit, but the bigger, more important issue that is rarely talked about is whether stem cells can actually be effective in humans. Can researchers learn to harness the growth of these cells, and if they can, what does that mean for the future of these types of stem cell therapies? What moral and ethical implications does this process pose? While these questions are unanswerable at this point, the only way to determine whether these cells will be effective is to be able to research them, so that maybe one day Parkinson's disease can be an ailment of the past.


References

WWW Sources

1) Articles from the National Academy of Sciences, particularly an article about embryonic cells by Curt Freed

2) Encyclopedia of biological terms and articles; search under keyword "stem cells".

3) A Parkinson's disease society page that gives basic information.

4)

5) Encyclopedia of biological terms and articles; search under keywords "Parkinson's Disease".

6) Site that deals with religious views on stem cell research.

7)

8) Science news archive with articles relating to current science issues.




Sexual Differentiation and its Effects on Sexual Orientation
Name: Tiffany Li
Date: 2003-02-25 05:13:05
Link to this Comment: 4815


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


What controls a human's sexual orientation? The long-standing debate of nature versus nurture can be extended to explaining human sexual orientation. Is it biological or environmental? The biological explanation has been gaining popularity among the scientific community, although it is still based largely on speculation. It is argued that sexual orientation is linked to factors that occur during sexual differentiation. Prenatal exposure to androgens, and its effect on the development of the human brain, plays a pivotal role in sexual orientation (2). Heredity is also part of the debate. Does biology merely provide the slate of neural circuitry upon which sexual orientation is inscribed? Do biological factors directly wire the brain so that it will support a particular orientation? Or do biological factors influence sexual orientation only indirectly?

Gender is determined by the sex chromosomes: XX produces a female, and XY produces a male. Males are produced by the action of the SRY gene on the Y chromosome, which contains the code necessary to cause the indifferent gonads to develop as testes (1). In turn the testes secrete two kinds of hormones, the anti-Mullerian hormone and testosterone, which instruct the body to develop in a masculine fashion (1). The presence of androgens during the development of the embryo results in a male, while their absence results by default in a female; hence the dictum "Nature's impulse is to create a female" (1). The genetic sex (whether the individual is XX or XY) determines the gonadal sex (whether there are ovaries or testes), which through hormonal secretions determines the phenotypic sex. Sexual differentiation is driven not by the sex chromosomes directly but by the hormones secreted by the gonads (3).

Hormones are responsible for sexual dimorphism in the structure of the body and its organs. They have organizational and activational effects on the internal sex organs, genitals, and secondary sex characteristics (1). Naturally these effects influence a person's behavior not only by producing masculine or feminine bodies, but also by causing subtle differences in brain structure. Evidence suggests that prenatal exposure to androgens can affect human social behavior, anatomy, and sexual orientation.

Androgens cause masculinization and defeminization of developing embryos. Masculinization is the organizational effect of androgens that enables animals and humans to engage in male sexual behavior in adulthood; it is accomplished by stimulating the development of neural circuits controlling male sexual behavior (1). Defeminization is the organizational effect of androgens that prevents animals and humans from displaying female sexual behavior in adulthood; it is accomplished by suppressing the development of neural circuits controlling female sexual behavior (1). For example, if a female rodent is ovariectomized and given an injection of testosterone immediately after birth, she will not respond to a male rat as an adult, even when given estradiol and progesterone. This demonstrates that she has been defeminized. If the same female rodent is given testosterone in adulthood, rather than estradiol and progesterone, she will mount and attempt to copulate with receptive females (3). This, on the other hand, is an example of masculinization.

In the disorder congenital adrenal hyperplasia (CAH), for example, the adrenal glands secrete abnormal amounts of androgens. The secretion begins prenatally and causes prenatal masculinization and defeminization (1). Boys born with CAH develop normally. However, girls with CAH are born with an enlarged clitoris and fused labia. Sometimes surgical intervention is needed, and the person is given a synthetic hormone that suppresses the abnormal secretion of androgens (1). Money and his colleagues performed a study on 30 women with a history of CAH. When asked to give their sexual orientation, 48% described themselves as homosexual or bisexual. They also exhibited more masculinized behavior (4). These results suggest that abnormally high exposure to prenatal androgens affects sexual orientation.

Androgen insensitivity syndrome affects people who are insensitive to androgens. These genetic males develop as females with female external genitalia, but also with testes and no uterus or Fallopian tubes (1). If they are raised as girls, they seem to do fine and function sexually as women in adulthood. Women with this problem lead normal sex lives. There is no indication of sexual orientation toward women. Thus, the lack of androgen receptors appears to prevent both the masculinizing and defeminizing effects of androgens on a person's sexual interest (1).
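Putting these pieces together (the SRY gene, gonadal hormones, and androgen receptors), the chain of influence described above can be caricatured as a tiny Python decision function. This is a gross simplification for illustration only; real differentiation involves many interacting genes and hormones.

# Sketch of the cascade: genetic sex sets gonadal sex, and hormone
# signaling (not the chromosomes directly) sets phenotypic sex.
def phenotypic_sex(has_sry, androgen_signaling_works=True):
    gonads = "testes" if has_sry else "ovaries"
    if gonads == "testes" and androgen_signaling_works:
        return "male phenotype"
    return "female phenotype"  # the developmental default

print(phenotypic_sex(has_sry=True))                                   # male
print(phenotypic_sex(has_sry=False))                                  # female
print(phenotypic_sex(has_sry=True, androgen_signaling_works=False))   # female (AIS)

The third call mirrors androgen insensitivity syndrome: the testes and their hormones are present, but with no working receptors the body follows the female default.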

There are two additional cases which show quite a strong correlation between sexual orientation and sexual differentiation. The first example is a genetically transmitted condition, fairly common in one area of the Dominican Republic, that causes abnormal sexual differentiation in XY fetuses, which produce an inadequate amount of the enzyme required to convert testosterone to DHT in the external genitalia (3). The child is born with ambiguous genitalia: labia (with testes inside) and an enlarged clitoris, reflecting partial masculinization. These individuals are usually raised as females, but at puberty the rise in testicular androgen secretion causes the phallus and the scrotum to grow and the body to develop in a male fashion. At this age the individuals start assuming male tasks and having girlfriends (3). These individuals suggest that early testosterone masculinizes the human brain and influences sexual orientation and gender identity. The second example involves a set of identical twin boys. They were raised normally until seven months of age, when the penis of one of the boys was accidentally burnt off during circumcision. The parents decided to perform a sex change on the child, removing the testes and raising it as a girl. When the child reached adolescence, she was unhappy and felt that she was really a boy. The family admitted to her what had happened, and she got a sex change to become a boy again (1). This is another case suggesting that people's sexual identity and orientation are under the control of biological factors rather than environmental ones.

The brain is a sexually dimorphic organ. Neurologists discovered that the two hemispheres of a woman's brain appear to share functions more than those of a man's brain. Men's brains are also larger on average than women's (1). Most researchers believe that the differences in the brain arise from different levels of exposure to prenatal androgens. Several studies have examined the brains of deceased heterosexual and homosexual men and heterosexual women. The studies have found differences in the size of three subregions of the brain: the suprachiasmatic nucleus, the sexually dimorphic nucleus of the preoptic area (SDN-POA), and the anterior commissure (2). The suprachiasmatic nucleus was found to be larger in homosexual men and smaller in heterosexual men and women. The SDN-POA was found to be larger in heterosexual men and smaller in homosexual men and heterosexual women. The anterior commissure was found to be larger in homosexual men and heterosexual women and smaller in heterosexual men (2). It cannot be concluded that any of these brain regions are directly involved in people's sexual orientation, but the results do suggest that the brains of heterosexual women, heterosexual men, and homosexual men may have been exposed to different patterns of hormones prenatally, and that differences do exist.

If sexual orientation is affected by differences in exposure of the developing brain to androgens, there must be factors that cause exposure to vary. A study performed on rats showed that maternal stress decreased the secretion of androgens, causing an increased incidence of homosexual behavior and female-like play behavior in male rats (1). Prenatal stress has also been shown to reduce the size of the SDN-POA, which is normally larger in males (1).

The last biological factor that may play a role in sexual orientation is heredity. A study was performed using identical and fraternal twins. When both twins were homosexual, they were said to be concordant for the trait; if only one was homosexual, they were said to be discordant (1). There was 52% concordance of homosexuality in identical male twins versus 22% in fraternal male twins, and 48% concordance in identical female twins versus 16% in fraternal female twins (1). This study suggests that two biological factors may affect a person's sexual orientation: prenatal hormonal exposure and heredity.
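For clarity, a concordance rate is simply the fraction of affected twin pairs in which both twins share the trait. The pair counts in the Python sketch below are invented solely to reproduce the percentages quoted above.

# Concordance: among pairs with at least one affected twin, the
# fraction in which both are affected. Counts are hypothetical.
def concordance(both_affected, only_one_affected):
    return both_affected / (both_affected + only_one_affected)

print(f"identical male twins: {concordance(29, 27):.0%}")  # ~52%
print(f"fraternal male twins: {concordance(12, 42):.0%}")  # ~22%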

Sexual orientation may be influenced by prenatal exposure to androgens, as the studies strongly imply. So far, researchers have obtained evidence suggesting that the sizes of three brain regions are related to a man's sexual orientation. The case in the Dominican Republic and the twin whose penis was accidentally damaged show that the effects of early androgenization are not easily reversed by the way a child is reared. The twin studies suggest that heredity may play a role in sexual orientation. Despite all this evidence, scientists are still unable to fully assert that biological factors are responsible for sexual orientation. There are so many variations within society that could affect a person's sexuality that it is impossible to make any assertions at this point. Therefore the nature-and-nurture debate is still open.


Sources

1) Carlson, Neil R. Physiology of Behavior. 7th ed. Massachusetts: Allyn and Bacon, 2001.

2) Swaab, D.F., and Hofman, M. Sexual Differentiation of the Human Hypothalamus in Relation to Gender and Sexual Orientation. Trends in Neuroscience. 18, 264-270.

3) Breedlove, S.M. Sexual Differentiation of the Brain and Behavior. In Becker, J.B., Breedlove, S.M., and Crews, D. (Eds.), Behavioral Endocrinology. Pp. 39-70.

4) Money, J., Schwartz, M., and Lewis, V.G. Adult Erotosexual Status and Fetal Hormonal Masculinization and Demasculinization: 46,XX. Psychoneuroendocrinology, 1984, 9, 903-908.


Helen Keller: A Medical Marvel or Evidence of the
Name: Ellinor Wa
Date: 2003-02-25 06:37:21
Link to this Comment: 4816


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


Everyone cried a little inside when Helen Keller, history's most notorious deaf-blind-mute, uttered that magic word 'wa' at the end of the scientifically baffling classic true story. Her ability to overcome the limitations caused by her sensory disabilities not only brought hope for many like cases, but also raised radical scientific questions as to the depth of the brain's ability.


For those who are not familiar with the story of Helen Keller or the play 'The Miracle Worker', it recalls the life of a girl born in 1880 who fell tragically ill at the young age of two, consequently losing her ability to hear, speak, and see. Helen's frustration grew along with her age; the older she got, the more apparent it became to her parents that she was living in an invisible box rather than in the real world. Her impairments trapped her in a life that seemed unlivable. Watching, hearing, and feeling the angst Helen projected by throwing plates and screaming was enough to make her parents regret being blessed with their own senses. In hopes of a solution, the Kellers hired Anne Sullivan, an educated blind woman experienced in the field of educating those with sensory disabilities, who arrived at their Alabama home in 1887. There she worked with Helen for only a little over a month, attempting to teach her to spell and to understand the meaning of words versus the feeling of objects, before she guided Helen to the water pump and a miracle unfolded. Helen understood the juxtaposition of the touch of water and the actual word 'water' Anne spelled out on her hand. Helen suddenly began to formulate the word 'wa' and repeated it, signaling her comprehension. This incident was, without a doubt, the greatest revolution in educating those with similar problems and was labeled a medical miracle.


I cannot help but think of the incredible capacity possessed not only by Helen Keller but by the human mind in general. What enables the human brain to function with minimal inputs? How is the human sensory system able to remain dynamic and adaptive?


The sensory function of the nervous system is a complicated one, which still raises unanswered questions. While most sensory-disabled people experience a sensory malfunction in only one area, Helen Keller experienced it in two: the loss of sight and the loss of hearing, which initially rendered her unable to speak at all. The loss of sight occurs when the retina, a layer of neurons formed at the rear of the eye and the only part of the brain that can be seen from outside the skull, is no longer able to perform properly. The natural ability to hear is orchestrated by receptor neurons located in the ear, called 'hair cells', that transmit sound through vibrations. When these two senses no longer remain intact, the possibility of learning to communicate appears doubtful.


Helen Keller's transformation from a victim of her own condition to a college graduate who published several books and traveled the world to help others with disabilities similar to her own raises many questions as to how a human being can transcend such impairments. As previously noted, Helen fell ill at the age of two, long before a child learns to read, and often before one learns to speak, especially in the late 19th century. Had she been blind, deaf, and mute from birth, would she have been able to combat such ailments? Or was her previous natural sensory experience irrelevant on the grounds that she would have been too young to acknowledge its functions?


In order to properly answer these questions, a variety of facts and analyses must be considered. Primarily, we must consider the extent of the miracle at hand. Before the age of two, not only did Helen Keller not know how to read or write, but she also did not know what reading and writing were in the simplest form. The fact that she could negotiate, in her inexperienced mind, the relationship between fingers essentially drawing lines on her hand, pronunciation, and the actual object (water) itself seems implausible. Whether or not some sort of innate notion of language was instilled in her brain from birth depends primarily on the amount of awareness of the English language, or of the senses, that she acquired before she became impaired. Furthermore, even had she acquired such a concept, it would not have been enough to surpass the boundaries of her inability to communicate and understand.


Scientists Bohm and Peat have theorized about Helen Keller's aided ability to communicate. They postulated that her natural instinct and previous knowledge of the senses led her to combine the notion of 'A' – water in its natural form (encountered in a variety of ways, including from the rain, the tap, and the pail) – with 'B' – the letters Anne would write on her hand when she experienced A. Essentially, she associated one form of learning with another. However, if this theory holds true, where in it lies the final output? Helen's pronunciation of the word 'wa', meaning water, has no connection either to the word written on her hand or to the feeling of the water in all its forms.


This brings me to the conclusion that the I function must have played a key role in facilitating this process of compound understanding. Whether or not the I function exists in the literal sense, its presence began to make its way into scientific rationality long before Christopher Reeve.

References

1) jstor home page, Scientific Monthly Vol. 15, No. 3

2) originresearch home page

3) The Life of Helen Keller

4), 5) Sensory Perceptions Homepage

6) More of the Life of Helen Keller


Brain Plasticity
Name: Kat McCorm
Date: 2003-02-25 06:47:04
Link to this Comment: 4817


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Throughout the line of questioning we have been following in our efforts to get "progressively less wrong" in our class-wide model of the brain, a constant debate has been sparked on the issue of whether brain equals behavior. If we agree that brain truly equals behavior, then we can surmise that vastly differing human behavior must also translate into differing nuances in the brain. It is a widely conceded point that experience also affects behavior, and therefore experience must also affect the brain. On this point, I have been intrigued: are these differences in the brain mysterious things, as readily theorized on by a philosopher as researched by a biologist? Or can an experience actually change the physical structure of the brain? In my web research, I found a partial answer in the concept of plasticity.

According to source (1), "Plasticity refers to how circuits in the brain change--organize and reorganize--in response to experience, or sensory stimulation." There appear to be four types of stimuli to which a brain responds with change: developmental, as in the newly formed and ever-evolving brain of a child; activity-dependent, as in cases of lost senses; learning and memory, in which the brain changes in response to a particular experience; and finally injury-induced, resulting from damage to the brain, as occurs in a stroke or in the well-known case of Phineas Gage. Although the particular change in the brain depends on the type of stimulus, brain plasticity can be broadly described as an adjustment in the strength of the synaptic connections between brain cells. (1)
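
To make "adjusting the strength of synaptic connections" concrete, here is a minimal Python sketch of a Hebbian-style learning rule, the classic formal shorthand for experience strengthening a connection between co-active cells. The function name, learning rate, and numbers are invented for illustration; none of the cited sources specify this model.

```python
# Illustrative Hebbian update: repeated co-activity of a presynaptic and
# a postsynaptic cell strengthens the synapse between them.
# All numbers are arbitrary and for demonstration only.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Return the new synaptic weight after one time step."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)   # strengthen toward a ceiling of 1.0
    return weight

w = 0.2                                   # initial synaptic strength
for trial in range(10):                   # repeated paired experience
    w = hebbian_update(w, pre_active=True, post_active=True)
print(f"synaptic weight after 10 paired activations: {w:.2f}")
```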

The developmental function of brain plasticity is important not only in the world of early childhood, but also has implications for the function of an aging brain. As we age, synaptic plasticity decreases due to the increased expression of neurotoxins in astrocytes, which are responsible for cell-cell communication (2). Similarly, in youth, increased synaptic plasticity accounts for the inordinate amount of growth and learning that must occur in that stage of development.

Much research has been done on injury-induced plasticity, and continues to be done in the hope of minimizing the effects of an injury on the brain. One such case is brain injury due to stroke, wherein particular functions of the brain such as motor control, memory, or language may be affected. According to source (3), "a reorganization of brain functions may occur through 'uninjured' brain areas, allowing then-altered functions to be performed differently". If this function of brain plasticity can be enhanced and emphasized, it is perhaps possible, with further research and experimentation, to minimize the effects of brain injury such that many or all symptoms are eliminated.

In the area of activity-dependent plasticity, a study has been done comparing patients who had gone through a period of deafness and recently received cochlear implants to a control group made up of individuals with normal hearing (4). A positron emission tomography (PET) scan was done on both groups to compare which parts of the cerebral network were engaged in each. The results showed that the two groups, when confronted with the same stimuli, would in some cases use entirely different parts of their brains to receive and interpret the same sounds. According to an interpretation of the study, "These data provide evidence for altered functional specificity of the superior temporal cortex [and] flexible recruitment of brain regions located within and outside the classical language areas." (4) This study demonstrates the adaptivity of the brain to outside stimuli in such a light that it becomes irrefutable.

In reflecting on this newfound knowledge, it becomes clear that experience does indeed account for differences within the brain, modulating synapses such that a certain response is favored as a result of a certain stimulus. Coming from a family of psychologists, I imagine applying this phenomenon to a situation often encountered by my parents: a person with a phobia. Typically, psychologists attribute phobias to a long-ago encounter, sometimes remembered and sometimes long forgotten, which caused the person always to have the protective response of fear when faced with a certain stimulus. Although an explanation such as this is dismissed by the general public as psychobabble, the concept of brain plasticity may lend credence to this psychological theory. Say a child has a fear-inspiring encounter with a spider. This single experience has the power to modify the strength of the synapses within his brain such that every time the arachnophobe is faced with spiders, his fear response is much more potent than the average person's. In this way, I can see the plasticity of learning and memory relating to long-held theories about differing human behavior.

The concept of synaptic plasticity is not limited to humans, however. In one study done by Sharen McKay, et al., of Yale University, the well-known vertebrate mediators of synaptic plasticity (neurotrophins) were artificially placed in an invertebrate setting and again influenced neuronal growth and plasticity (5). Although invertebrates do not produce neurotrophins, substances with similar effects have been isolated in several invertebrate species. These results suggest that synaptic plasticity is a function widely found across many species, serving not merely a developmental role, but also the adaptation and modification of adult brains. This article again inspired my theorizing: does the suggestion that "adult plasticity [is] highly conserved across diverse phyla" (5) indicate that the ability of the brain to adapt to learning, memory, and specific experience is an evolutionary advantage?

In researching and learning about the types of brain plasticity, I found more evidence for the idea that brain equals behavior, and that the experiences which are input into the brain do directly cause changes in synaptic activity and strength. Although I have found the ideas inherent in brain plasticity intriguing, I am also afraid that they have raised more questions for me than they have answered. I find myself wondering about the evolutionary advantages of plasticity, the amount of time it takes to effect a physical change in the brain, and why, even in the face of plasticity, certain functions of the brain never seem to adapt to the demands of the modern-day world.


References

1) Brain Plasticity, on the John F. Kennedy Center for Research on Human Development web site.
2) Glia-neuron intercommunications and synaptic plasticity, on the PubMed web site.
3) A Comprehensive Functional Approach to Brain Injury Rehabilitation, on the Brain Injury Source web site.
4) Functional plasticity of language-related brain areas after cochlear implantation, on the PubMed web site.
5) Regulation of Synaptic Function by Neurotrophic Factors in Vertebrates and Invertebrates: Implications for Development and Learning, on the Learning & Memory web site.


Pathways of Pain and Possibility: The Dysfunction
Name: Danielle M
Date: 2003-02-25 07:26:15
Link to this Comment: 4818


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

As a disorder reaching nearly every culture, historic and contemporary, headache has been experienced in some form by the majority of the human population. Despite its age and prevalence, we have yet to fully ascertain its cause, its organization, or its cure, and continue to suffer everything from quotidian tension-type headache to cluster or "suicide"-type headache with little substantial relief. So just what do we know about headache? Broadly, headache is largely understood in terms of its external characteristics, that is, its symptoms and their effects on the sufferer. Headache is described as "a throbbing, pulsating or dull ache, often worsened by movement and varying in intensity [that] can be a disorder unto itself, such as migraine, or a symptom of another disorder ranging from a head injury to a brain tumor" (1). The term headache comprises a wide range of subtypes, the most common and studied of which are tension-type, migraine, and cluster headache (1). The focus of this paper is migraine headache, a disorder now afflicting between 23 and 28 million Americans, about 60% of whom go undiagnosed (1).

Historically, migraine and its symptoms have appeared in popular as well as medical literature for more than 1000 years (1). Arguments addressing the nature of migraine pain have thus long been debated, particularly as to whether it was the result of "a disorder of blood vessels (vascular) or of the brain itself (neural)" (2). As early as 1873, Edward Liveing published a theory of the link between migraine and other neurological disorders, including insomnia, epilepsy, vertigo, and vasovagal faints. He asserted with surprising accuracy what contemporary researchers are still discovering: that migraine and other neuroses are a dysfunction of the nervous system originating in the cerebral cortex of the brain (2). Now, more than a century later, researchers are engaged in the continuing search for proof of the true pathogenesis of migraine and for its cure. What has thus far been accepted is the rather vague definition of migraine headache as "a primary episodic headache disorder characterized by various combinations of neurological, gastrointestinal and autonomic changes" (1). Yet for all its pervasiveness and deleterious effects, what do we truly understand about migraine headache? How is migraine related to other neurological dysfunction? Is there a cure? If so, how can it be found?

That which is most understood about migraine is, of course, that which is most readily observed and documented--its manifest symptoms. Previously lacking objective means of discovering the underlying cause, researchers were constrained to understanding headache "by analyzing its symptoms" (3). Migraine headache has thus been divided into two types according to the accompanying symptoms: common migraine, which affects the majority of sufferers, and classical migraine, which affects about 15% of migraine patients (3). Both common and classical migraine are associated with extreme throbbing pain that is often unilateral (one-sided), nausea, and "exquisite" sensitivity to light (photophobia) and sound (sonophobia) (3). Those with the classical form of migraine, however, experience an additional symptom, "a distortion of vision that can be hallucinatory in nature" known as a migraine "aura" (3). Both classical and common migraine follow a common symptomatic pattern once triggered, lasting between 12 and 24 hours in total. The pattern of migraine behavior is divided into four stages: in classical migraine, symptom progression begins with the prodrome, leading to the aura, the headache, and then the postdrome; common migraine follows the same path without the experience of an aura (1).

The migraine prodrome occurs in the 24 hours preceding the headache stage and consists of feelings of extremes of euphoria and unbridled energy or lethargy and depression, with the possibility of "vague yawning," or distinct food cravings or aversions (2).

In classical migraine, the prodrome is followed by the aura, which arises before or during the headache, usually lasting about 30 minutes. The aura is characterized by visual sensations of "flashing lights, zigzag castellations, balls or filaments of light," or metamorphopsia, where objects appear distorted and out of proportion (2). The sufferer's hands may feel swollen, or her tongue too large for her mouth (2). These disturbances are thought to result from dysfunction in the parietal and occipital lobes of the brain; they have also been credited as the possible inspiration for "the grotesque experiences of Lewis Carroll's Alice in Wonderland," as Carroll was known to suffer from headache (2).

The next stage in both classical and common migraine is the headache itself, sometimes centered on the eye or temple, or beginning as a tightness at the back of the head and spreading slowly into a one-sided pain. The pain is described as throbbing or pulsating, but may also occur as "a non-specific dull ache" or a band of pressure. Movement or exertion worsens the discomfort, and nausea and vomiting may occur at the height of pain (2).

Finally, the headache is followed by the resolution phase, the postdrome, marked by "diminished headache intensity" in company with "a sense of relief--almost euphoria in some," though there may be "a vague bruised aching in the head and general tiredness" (2).

Triggers of migraine are much less well understood than the symptoms they cause--researchers are unable to reach a consensus on which factors consistently trigger migraines, primarily because there is in fact little such consistency. The factors most frequently blamed include hormones, insomnia, hunger, and changes in barometric pressure, though recurrent triggers vary across individual sufferers (4). Of the identified migraine triggers, two in particular could lead to new observations in the theory of migraine: hormone fluctuation and food craving/aversion. The implication of cyclic drops in estrogen levels in migraine has led to an increasing body of research and speculation about women and migraines (4). In the effort to explain the pronounced prevalence of migraine among females, researchers have noted the development of significantly higher rates of migraine in girls than in boys during puberty, the increasing difficulty of managing migraine during the premenopausal period, and the worsening effect of oral contraceptives and estrogen replacement therapy on migraine occurrence (4). That migraine can result from falling estrogen levels may indicate a close correlation between hormones and the neurotransmitters involved in migraine dysfunction. Because estrogen is involved in regulating neurotransmitter action by priming blood vessels to receive serotonin, when estrogen drops, the receptiveness of blood vessels to serotonin may drop as well, causing both migraine and depression-like symptoms.

Certain foods such as chocolate, coffee, red wine, cheese, or fish have also been identified as triggers in the past, though some researchers now believe dietary factors to be related to "suggestion and psychological conditioning" or "neighborly suggestion and pseudoallergy," rather than actual food-specific sensitivity (2). This dismissal, however, may not consider closely enough the relation between hormonal changes and the unusual attitudes toward specific foods often experienced during periods of hormonal fluctuation, such as menstruation or menopause, and seen in the prodromal stage of migraine. Such a link may be indicative of the role of particular foods and their biological effects on the brain chemicals associated with the sensation of emotional sorrow or pleasure. The unusual attitudes toward food seen in the prodromal stage of migraine, along with the co-occurring disruptions of emotion (euphoria, lethargy, depression), may be a clue not only to the relationship of migraine to neurotransmitter regulation in the blood vessels, but also to food and its ability to affect migraine-related brain chemistry. Despite such dismissals, food may well be a trigger of migraine worthy of exploration, potentially able to reveal further insight into the neurological interplay of neurotransmitters, food, and pain/pleasure in migraine headache.

Beyond what is externally manifest during migraine headache, what is known of the internal pathogenesis (cause) and pathophysiology (the changes associated with dysfunction)? At present, knowledge of headache, migraine or otherwise, is relatively limited, and what is known has been only recently discovered through new brain-imaging technology. Previously, little to no information on brain activity during migraine was available, and theories of the disorder's cause and internal manifestation were vague and unsubstantiated. The most widely held past theory took stress to be the main contributor to all forms of headache, supposing it to somehow act on the arteries of the brain, thereby constricting blood vessels and reducing oxygen flow to the brain. The vessels were said to then inexplicably contract further and irritate the nerves of surrounding arteries, which subsequently caused the sensation of headache pain (the brain itself does not feel pain) (4). More recent researchers now question the theory, reasoning, as did neurologist Neil Raskin: "If it were true, we would get a headache after running around the block," since exercise also causes blood vessels in the brain to contract (4).

How then does the brain experience pain? If the brain itself does not feel pain, why do headaches produce severe head pain? The answer (at present) lies with the structures of the head that do feel pain, which include the meninges (the three membranes that envelop the brain and spinal cord: the pia mater, dura mater, and arachnoid membrane), the arteries and veins, the paranasal sinuses, the eyes, and the cervical spine (2). The mechanisms causing head pain are the distension, inflammation, and traction of the blood vessels, ventricles, and dura mater (2). When pain-sensitive structures are irritated, the trigeminal nerve, the sensory pathway between the head and neck, acts as the pain conduit. Specifically, pain signals run from the nerves that supply pain-sensitive structures at one end to the upper three cervical nerve roots at the other, from which nerves are sent up into the face and scalp (2). The location and function of the trigeminal nerve in the brainstem could account for the location of pain around the eye or temple in migraine headache and for the sensation of unilateral throbbing pain (3). In experimental studies this theory has been supported by the parallel pain experienced when researchers irritate the blood vessels of the brain (6).

Revised notions have therefore begun to suggest that blood vessels themselves are more likely the "victims" of headache, rather than the cause, as originally argued (3). One of the first experiments to show evidence for a different headache pathogenesis involved irritating the brains of laboratory rats (whose trigeminal stimulation occurs very similarly to humans') with a chemical or probe. The result was an observed wave of electrical impulses through the brain, followed by reduced electrical activity that "spread out from the point of contact like ripples on a pond" (3). This experimental approach was next tested on human brains, this time using positron-emission tomography to observe the resulting activity. The result was the same--an electrical wave that researchers subsequently termed cortical spreading depression (CSD), seemingly sparked by the firing of abnormally excitable neurons at the back of the brain (7). These impulses were seen to spread across the cerebral cortex of the brain down to the brainstem, where important pain centers are located, and to activate afferent (sensory) neurons through the trigeminal nerve (7), (8). As the wave passes, blood flow in the brain increases, then sharply drops--the pain of headache is thought to result from the activation of the brainstem and the resulting dilation of blood vessels (8). This drop may result from signals from the hypothalamus, itself influenced by such triggering factors as seasonal patterns, diurnal and biological mood swings, fatigue, or hormonal fluctuation (6)--all of which are suspected prompters of migraine pain, and, interestingly, often related to depression as well. The stimulation of nerve cells in the brain stem by CSD thus increases blood flow to the scalp and face through a parasympathetic reflex (the trigeminovascular reflex). Here, the neurotransmitter serotonin, along with norepinephrine, is implicated as the likely chemical messenger between the neurons originating in the brain stem, telling blood vessels to dilate or contract (3). This new migraine model thereby attempts to answer the question of whether headache pain is vascular or neurological by implicating both in a cause-and-effect relationship called the "trigeminovascular reflex" (9).
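
The "ripples on a pond" image can be captured with a toy excitable-medium simulation. The grid, states, and update rule below are invented for illustration (real CSD involves ion and blood-flow dynamics not modeled here); the sketch shows only how a wave of firing, followed by depressed activity, spreads outward from a point of irritation.

```python
# Toy excitable-medium model of a wave spreading from a point of
# irritation, loosely analogous to cortical spreading depression (CSD).
# Cell states: 0 = resting, 1 = firing, 2 = depressed (no recovery modeled).
# All parameters are invented for illustration.

import copy

SIZE = 11
grid = [[0] * SIZE for _ in range(SIZE)]
grid[5][5] = 1                            # the "point of contact"

def step(g):
    new = copy.deepcopy(g)
    for r in range(SIZE):
        for c in range(SIZE):
            if g[r][c] == 1:
                new[r][c] = 2             # a firing cell falls into depression
            elif g[r][c] == 0:
                # a resting cell fires if any of its 8 neighbors is firing
                firing_neighbor = any(
                    g[r + dr][c + dc] == 1
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < SIZE and 0 <= c + dc < SIZE)
                if firing_neighbor:
                    new[r][c] = 1
    return new

for t in range(4):                        # watch the ring of activity expand
    grid = step(grid)
print(sum(row.count(2) for row in grid), "cells left depressed in the wave's wake")
```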

With a growing knowledge of what underlies head pain, what then accounts for the often implicated emotional pain of migraine symptoms? Among researchers studying migraine headache in patients, "there is a consensus that migraine sufferers have a higher risk of anxio-depressive disorders than the general population," but the nature of the link remains uncertain (10). The relationship may be a result of chance; one disorder may cause the other; the two may each be incited by common environmental factors; or there may be a common biology underlying both conditions (as some researchers think most likely) (11). Interestingly, not only is depression often seen in patients who experience migraine, but many migraine symptoms mirror features of depressive and anxiety disorders. The euphoria of the prodrome and postdrome stages, as well as the pronounced lethargy of the prodrome, echo in miniature the drastic mood shifts of bipolar depressive disorder and some personality disorders. This may be explained by the similarity of the brain activity found in depression and migraine, including activity in the hypothalamus, which is implicated in the hormonal and mood-regulatory functions of the limbic system, and the neurotransmitter serotonin, which is linked to depression and the sensation of emotional suffering or pleasure. According to researcher Fred Sheftell, depression is mediated by the same neurotransmitters, and "migraine and depression often occur in the same people...More than 70% of people with migraine have a close relative who also suffers from the disorder" (9).

The significance of the relation between migraine headache and serotonin is further evidenced by the "significant comorbidity between chronic headache and psychological distress" (5). Both "frequent headache and frequent disability are associated with depression, anxiety, and impaired quality of life," while "headache associated with anxiety or depression tends to be more severe and often requires supplementary psychological treatment in addition to headache therapy" (5). In a study published in the medical journal Cephalalgia, researchers found a significant link between the abnormal perceptual experiences of migraine aura and mood changes, particularly increased depression and irritability. The spreading depression of cortical electrical activity was concluded to be responsible for the temporal lobe and limbic system dysfunction experienced in both migraine and depressive disorders (5). Further, in a study by Guillem, Pelissolo, and Lepine examining the lifetime prevalence of major depression in migraine sufferers, 34.4% of migraineurs were found to have experienced clinical depression, in contrast to only 10.4% of those without migraine. For bipolar I disorder, the prevalence was 6.8% among people with classical migraine, versus 0.9% in those without (5). This evidence opens a new avenue of discussion and begs for more investigation. Increased understanding of the neurological ties between migraine and depression could aid the development of more comprehensive knowledge of migraine headache and how it can be treated. Beyond migraine, better comprehension of such neurological activity would also represent significant progress in understanding brain function and dysfunction in a more general sense.

Finally, the question most crucial to migraine sufferers: is there a cure to eliminate any and all future migraine pain? And if so, when will it be found? Currently, a cure for migraine headache has yet to be found, though patients can seek a variety of treatments, including psychophysiological therapy, physical therapy, and pharmacotherapy (5). Pharmacotherapy is the most widely used method; its popular drug treatments are classified as prophylactic (preventative), analgesic (pain-reducing), or abortive (ending an attack already in progress) (4). While no cure has yet been found, the discovery of CSD activity in the brain, and the added understanding of the headache process to which it has contributed, signals a significant advance in headache research. Studies of CSD could be useful in prophylactic migraine treatment by detailing how or why headache is triggered and just how "CSD activates the trigeminal nerve" (2). The recent discovery of a gene apparently linked to migraine heredity may also offer another step toward more effective treatment. In January of 2003, Italian scientists identified a gene, ATP1A2, on human chromosome 1 that was linked to severe migraines in a four-year study of six generations of a migraine-prone family. The scientists believe a mutation of the gene causes "a malfunction of the pump that shifts sodium and potassium through the cell" (10). The presence of the mutant gene was associated with migraine pain, the sensation of tingling hair, and the perception of flashing lights commonly associated with migraine aura (10). If such a gene can be definitively linked to the source of migraine pain, the development of more direct and effective preventative treatment could become a significantly nearer reality.

While much remains unknown about migraine headache, new research and experimental brain-imaging tools are actively seeking deeper and more comprehensive knowledge. Advances may be slow in the understanding of a disorder that is likely as old as man himself, but with each new discovery about the internal structures and functions of the brain, we come closer to better treatment, maybe even a cure. Because of the vastness of the brain's complexity, understanding one manifestation of dysfunction, such as migraine, may well also provide new leads to understanding other closely linked neurological disorders, such as depression or anxiety. Perhaps migraine disorder is a single feature in a chain of intertwined, co-occurring disorders, or perhaps it stems from a malformed gene passed through familial generations. With as much as remains unknown, each possibility can lead to fascinating new relationships and functions of the brain previously unimagined.

References

1) Headache, by Stephan D. Silberstein, on the Encyclopedia of Life Sciences web site.

2) Migraine, by J.M.S. Pearce, on the Encyclopedia of Life Sciences web site.

3) Brownlee, Shannon. The Anatomy of a Headache. U.S. News & World Report, 31 July 1989.

4) Migraine, by The Merck Manual of Diagnosis and Therapy, on the Merck Manual web site.

5) Identification of Patients with Headache at Risk of Psychological Distress, by D.A. Marcus, Headache, May 2000, on the Web of Science web site.

6) Intrinsic Brain Activity Triggers Trigeminal Meningeal Afferents in a Migraine Model, by Hayrunnisa Bolay, Uwe Reuter, Andrew K. Dunn, Zhihong Huang, David A. Boas, and Michael A. Moskowitz, Feb 2002, on the Nature Medicine web site.

7) New Research Could Open Doors to Better Migraine Treatment, by CNN, on CNN.com.

8) New Understanding into the Mechanics of Migraine, by Kathryn Senior, Lancet, 9 Feb 2002, on Lexis-Nexis.

9) Sheftell, Fred D. Migraine Headaches. Scientific American, Summer 1998.

10) Italian Scientists Discover Migraine Gene, by Rachel Sanderson, 21 January 2003, on MEDLINEplus.

11) Inheritance of Cluster Headache and its Possible Link to Migraine, by L. Kudrow, Headache, July 1994, on the Web of Science web site.

12) Prevalence of Frequent Headache in a Population Sample, by J. Liberman, R. B. Lipton, A. I. Scher, and W. F. Stewart, July-August 1998, on the Web of Science web site.


Of Two Minds
Name: Sarah Feid
Date: 2003-02-25 08:24:56
Link to this Comment: 4819

<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Intro
"I'm of two minds on the matter." "I can't make up my mind." "I'm having an internal argument." Our language is full of idioms that make it sound as if there were two disagreeing voices inside our heads. Often, that is indeed how it feels. But is that sensation physiologically supported? Can a brain fight with itself? Can there be multiple independent centers of consciousness in a single head? Until the 1960s, there was no way for us to test this feeling of internal disagreement. But when a surgery aimed at alleviating epileptic seizures also isolated the two hemispheres of the patient's brain, science was surprisingly afforded that opportunity.

Background
The left and right hemispheres of the brain are connected by a dense bundle of neurons called the corpus callosum. This bundle is primarily responsible for the communication of information between the two hemispheres, connecting them with approximately 200 million callosal axons in humans (1). In some cases of multifocal epilepsy, the electrical discharges that cause seizures can start in one hemisphere and spread to the other by way of the corpus callosum, greatly increasing the severity of the fit. Sometimes this condition is unresponsive to medication, at which point the spasms can only be controlled with more drastic measures (2).

In 1961, an operation which had been pioneered on animals by Drs. Ronald Myers and Roger Sperry was attempted on a human patient for the first time; Dr. Michael Gazzaniga, working with Sperry, would go on to study such patients extensively. In this procedure, called a commissurotomy, the surgeon opens the skull, lays back the brain coverings with a cerebral retractor, and cuts through the corpus callosum. While this prevents a seizure from spreading, it also prevents information from being passed between the hemispheres. Thanks to Dr. P. J. Vogel, we now know that severing the anterior ¾ of the corpus callosum can effectively stop the spread of a seizure while allowing full communication between the hemispheres to remain (3). However, the behavior of full-commissurotomy patients has been extensively documented, and it provides fascinating insight into the specialization of the hemispheres, the nature of the brain, and the nature of consciousness itself.

Results
To understand these behaviors, one must first remember that the neurological wiring of the body is, for the most part, contralateral. Signals travel from the left side of the body to the right hemisphere of the brain and back, and vice versa. For example, the left hemisphere "sees" the right half of the visual field, and moves the right hand.

Patients who have undergone a commissurotomy, "split-brain patients," experience strange acute post-operative symptoms: many have trouble speaking, or are completely mute; often they experience inter-manual conflict, where their hands cannot cooperate; when speech is possible, many remark that their left hand is behaving in a "foreign," "alien" manner, and they express surprise that it is acting so "purposefully." These symptoms fade over time. The long-term symptoms are much more difficult to distinguish in an everyday setting. Split-brain patients function normally in social settings, except for slight memory problems. Pianists can still play the piano; artists can still paint. Once in an experimental setting, however, more phenomena can be observed which point to the dramatic impact of full commissurotomy on cerebral function. (2)

Commissurotomy patients are usually tested with a tachistoscope, a device that presents to the patient a screen with a focal point in the center. The tachistoscope then flashes a word, picture, or scene in the non-overlapping portion of one visual field. This is done faster than the eye can move, to ensure that only one hemisphere of the brain receives the input (4). When a key and a ring are flashed simultaneously in the left and right fields respectively, a normal viewer reports seeing "keyring". A split-brain patient reports seeing only "ring". Even more notably, if the patient is told to use his left hand to pull the object he saw out of a bag, he pulls out a key. When asked (without looking) what it is he pulled out, the patient says, "A ring." (5)

Why the apparent duality?

The explanation requires a rethinking of the brain. Think of the hemispheres as separate, independent entities – the left hemisphere is commonly the center of language, while the right hemisphere is mute. (2) When the patient was asked what he saw, the "speaking" hemisphere replied truthfully: it saw a ring. It received no information from the right hemisphere (which saw the left visual field) because of the commissurotomy, and so had no way of knowing about the key. When the patient was asked to use his left hand to pull out what he saw, the right hemisphere, which saw a key, told the left hand to pull out a key. Subtle tactile sensations are contralaterally wired, so the left hemisphere could not feel the key; thinking all that was shown was a ring, it assumed that the left hand had pulled out a ring.
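
The routing logic just described is simple enough to state in a few lines of Python. This is a cartoon of the experiment's information flow, not a claim about neural implementation; the function and its names are invented for illustration.

```python
# Cartoon of the key/ring experiment's information flow. Each visual
# field projects to the opposite hemisphere; speech is produced by the
# left hemisphere, while the left hand is driven by the right hemisphere.

def tachistoscope(left_field, right_field, split_brain):
    right_hemi_saw = [left_field]     # left visual field -> right hemisphere
    left_hemi_saw = [right_field]     # right visual field -> left hemisphere
    if not split_brain:
        # An intact corpus callosum lets the hemispheres share their input.
        shared = right_hemi_saw + left_hemi_saw
        right_hemi_saw = left_hemi_saw = shared
    verbal_report = " and ".join(left_hemi_saw)   # only the left hemisphere speaks
    left_hand_pick = right_hemi_saw[0]            # right hemisphere guides the left hand
    return verbal_report, left_hand_pick

print(tachistoscope("key", "ring", split_brain=True))   # ('ring', 'key')
print(tachistoscope("key", "ring", split_brain=False))  # ('key and ring', 'key')
```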

These startling findings are one example of a long series of experiments conducted to pinpoint the nature of the dual-hemisphere brain. Researchers over the years have examined everything from language capability and pitch acuity to academic strengths and compensation mechanisms. Among other things, they have found that the left brain is better at language, math, and analytical processing, while the right brain is superior in visuospatial processing, music, and form. Both hemispheres can control shoulder and upper arm movement, but fine hand motion and visual/auditory processing are wired only contralaterally. (2)

Over time, the hemispheres develop compensatory "clueing" mechanisms. For example, a patient may begin speaking the names of objects out loud to get her left hand to retrieve them. If she is trying to figure out what she is holding in her left hand without looking, the left hand may dig the point of a pencil into its palm, shooting ipsilaterally-wired pain signals to both hemispheres and causing the left hemisphere to realize that it's holding something sharp.(6)

The hemispheres seem to have different opinions, too – a split brain patient named Paul awoke from surgery to show an unheard-of amount of language skill in his right hemisphere. Though it could not speak, there was not only word association, but also complete sentence comprehension. Researchers were able to ask each hemisphere the same questions and compare the written answers. When asked, "Who are you?" both hemispheres answered, "Paul." But when asked, "What do you want to be?" the right hemisphere answered, "Automobile racer," while the left replied, "Draftsman." (1)

Philosophy
So here we have a picture of a dual consciousness, a picture so powerful we refer to each hemisphere as a separate person. How accurate is this image? Do we really have two separate centers of identity, which work together until they are split? Is this a real-life Jekyll and Hyde condition, with two minds in the same head? How separate is split hemispheric consciousness in reality? Gazzaniga comes to an interesting conclusion:

After many years of fascinating research on the split brain, it appears that the
inventive and interpreting left hemisphere has a conscious experience very different
from that of the truthful, literal right brain. Although both hemispheres can be viewed
as conscious, the left brain's consciousness far surpasses that of the right. Which raises
another set of questions that should keep us busy for the next 30 years or so. (7)

Not only does he believe that each hemisphere has its own consciousness, he clearly finds the left hemisphere to be superior to the right. In contrast, Dennett, author of Consciousness Explained, states:

When Dr. Jekyll changes into Mr. Hyde, that is a strange and mysterious thing. Are
they two people taking turns in a single body? But here is something stranger:
Dr. Juggle and Mr. Boggle [standing for the left and right cerebral hemispheres of a
single body] too, take turns in one body. But they are as like as identical twins! Why
then say that they have changed into one another? Well, why not: if Dr. Jekyll can
change into a man as different as Hyde, surely it must be all the easier for Juggle to
change into Boggle, who is exactly like him. We need conflict or strong difference
to shake our natural assumption that to one body there corresponds at most one agent (6)

His argument here is that, since there are no significant differences in personality between the two hemispheres (as there are with Jekyll and Hyde), there are not two separate consciousnesses.

In his book, Dennett seeks to discredit a theory he calls the Cartesian Theater of the mind. In the Cartesian Theater theory, the mind consists of a sum of parts that feed into a central ethereal "theater," a site of synthesis, a nucleus of consciousness with input from everywhere in the body.(8) That theory has interesting parallels to the brain's being the nucleus of the nervous system, interpreting inputs from all over; but in this theory, there is a central locale where this synthesis is projected. Where is that in a split brain patient?

To apply that theory to the brain, we would see a single "consciousness," independent of the individual hemispheres, yet viewing their information in a sort of mental simulcast. Without the corpus callosum, this simulcast would have the handicap of receiving not a single synthesized feed of information from an integrated brain, but two distinct and sometimes contradictory lines of input. The mental "picture" is split, and confusing for a while, but in the end it is comprehensible by the "consciousness." This would account for the disorienting acute post-operative symptoms, and the subsequent return to normal social function. In this model, it is not the consciousness that is split, but simply the picture presented to the consciousness.

Conclusion
So which is it? Does splitting the brain split consciousness into two distinct instantiations? Or does splitting the brain merely disjoint the "theater projection" presented to the single immaterial consciousness of an individual? That is a question open to debate. Those who believe in a transcendent human soul may subscribe more easily to the Cartesian Theater application, while devoted followers of the brain=behavior philosophy might more readily agree with Gazzaniga. Perhaps the contradictions evidenced by split brain behavior point to a dramatically strange Jekyll-Hyde situation, with a foreign and mute brain-half in control of the left side of the body. Or maybe, when their hands fight each other, commissurotomy patients are simply experiencing what we all do: they can't make up their minds.

References

(1) Split Brain Consciousness An extensive educational site about the brain and split brain studies, on Macalester College's site.

(2) The Split Brain A paper by J. Bogen et al. Bogen was one of the first extensive researchers of this phenomenon. This paper on Caltech's site is an excellent comprehensive resource.

(3) Splitting the Human Brain Article by Paul Pietsch PhD on the ShuffleBrain website, a resource a la Serendip.

(4) Neuroscience for Kids A good, simple, but detailed description provided by the faculty of the University of Washington.

(5) Two Brains or One? The Split Brain Studies A history of Sperry and the early studies.

(6) Dennett on the Split Brain A philosopher's review of Dennett's book, on the Psycoloquy website.

(7) Human Neurobiology: Split Brain Research An overview on the Educational Cyber-Playground website.

(8) Stage Effects in the Cartesian Theater: A review of Daniel Dennett's Consciousness Explained Another philosophical review, on the Psyche website.


The Correlation between Drug Tolerance and the Env
Name: C. Kelvey
Date: 2003-02-25 08:36:40
Link to this Comment: 4820


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

When considering the dynamics of brain and behavior, another component enters the equation: environment. If brain equals behavior, then changes to either should result in an alteration to the other. The question that arises is whether a change in the environment produces a change in brain chemistry and, therefore, behavior. A connection between brain, behavior, and environment may be observed in the context of drug tolerance. There is a collection of questions that seem essential to consider when attempting to correlate the brain's development of an observable drug tolerance with the environment.

• Does the environment affect drug tolerance? How?

• What observations correlate environmental cues and a tolerance to drugs?

• How can an understanding of the correlation between environment and tolerance be applied to medical care and treatments?

Tolerance is caused by the brain's ability to adapt to or compensate for the presence of a chemical (1). Tolerance developed after repeated exposure to a drug is due to two possible biological processes. One involves a decrease in the concentration of the drug at the effector site, due to changes in the absorption, distribution, metabolism, or excretion of the drug. The other involves changes in sensitivity toward a drug, due to adaptive changes that diminish the drug's initial effects. The nervous system is able to adapt, and thereby reduce the initial effects of a drug, by two methods. The first involves a change in the number or properties of drug-sensitive receptors. The second provides a coordinated compensatory response that counteracts or opposes the effects of the drug (2).

In terms of drug administration, various observations indicate that the nervous system adapts through a compensatory response. A delayed recovery to the initial stimuli would be expected if the receptors were numerically or physically altered. However, after terminating a drug treatment, a delayed recovery to the initial stimuli has not been observed. Instead, the body readapts to the drug-free state (2). Furthermore, reactions almost directly opposite to the desired effect of the drug are observed. These reactions are termed withdrawal symptoms, and they appear when no drug is administered and the compensatory processes operate unopposed (2). If the nervous system provides compensatory responses, then the question arises as to whether environmental cues initiate the response.
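
As a rough numerical illustration of the compensatory-response account (not a model from the cited sources; all constants are invented), suppose the net effect of a dose is the drug's fixed effect minus a learned compensation that grows with each exposure. Tolerance then appears as a shrinking net effect, and withdrawal appears as the compensation operating unopposed when the drug is withheld:

```python
# Sketch of tolerance as a learned compensatory response.
# net effect = fixed drug effect - compensation; when the drug is
# withheld, the leftover compensation surfaces as a withdrawal
# reaction of the opposite sign. All constants are invented.

DRUG_EFFECT = 10.0        # effect of a standard dose, arbitrary units
LEARN_RATE = 0.3          # how quickly compensation builds per exposure

compensation = 0.0
for dose in range(1, 6):
    net = DRUG_EFFECT - compensation
    print(f"dose {dose}: net effect {net:.1f}")             # shrinks -> tolerance
    compensation += LEARN_RATE * (DRUG_EFFECT - compensation)

print(f"drug withheld: net effect {-compensation:.1f} (withdrawal)")
```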

Environmental cues that initiate responses were observed by the Russian physiologist Ivan Pavlov (1849-1936). Pavlov made the initial observations pertaining to 'classical conditioning'. Pavlov correlated environmental cues and physiological changes as he observed dogs salivating in response to a collection of cues that signaled feeding time. Without the stimulus of food present, there was an observable response to the anticipated stimulus, as the dogs salivated in preparation for the imminent arrival of food (3).


Siegel et al. (1982) applied Pavlov's model of 'classical conditioning' to the administration of drugs (4). Siegel et al. observed that the anticipated stimuli signaled by environmental cues provided an additional drug tolerance. An experiment was designed with three groups of rats. For thirty days, one group received injections of heroin in the Colony room (normal living conditions) and a placebo (dextrose) in the Noisy room the next day, while the second group received the placebo in the Colony room and heroin in the Noisy room. The third group received the placebo in both the Colony room and the Noisy room. After 30 days, the rats were injected with a potentially lethal dose of heroin. Of the rats that had never previously been injected with heroin, there was a 96% mortality rate. There was a 64% mortality rate among rats administered heroin in a different room from the one where they had previously been injected with the drug. Of the group that received the heroin injections in the same environment, there was only a 30% mortality rate. (5)
These observations show that while there is a tolerance to the drug itself through repeated exposure, environmental cues result in an additional tolerance.
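
Cue-specific tolerance of this kind is often formalized with associative-learning models such as Rescorla-Wagner. The sketch below is one such toy, with arbitrary parameters; it is not Siegel et al.'s analysis, only an illustration of how a context repeatedly paired with the drug acquires compensatory strength that an unfamiliar context lacks.

```python
# Toy Rescorla-Wagner model of cue-conditioned tolerance. Only the
# context paired with the drug gains associative (compensatory)
# strength, so protection is strongest in the familiar room.
# Parameters are arbitrary, not values from Siegel et al. (1982).

ALPHA_BETA = 0.2          # combined salience/learning-rate parameter
LAMBDA = 1.0              # maximum associative strength the drug supports

v = {"colony_room": 0.0, "noisy_room": 0.0}
for day in range(30):                          # 30 days of injections
    cue = "colony_room"                        # heroin always given here
    v[cue] += ALPHA_BETA * (LAMBDA - v[cue])   # delta-rule update

for room, strength in v.items():
    print(f"{room}: conditioned tolerance {strength:.2f}")
# colony_room approaches 1.0 while noisy_room stays at 0.0, mirroring
# the higher overdose mortality observed in an unfamiliar setting.
```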

As the nervous system attempts to maintain homeostasis, disturbances created by a drug trigger compensatory reactions. If environmental cues coordinate the brain to engage in specific reactions and develop a drug tolerance, then differences within the brain would be expected. The neurons in the hippocampus and amygdala are postulated to participate in a mechanism of learning and memory necessary for processing and assigning value to drug-associated cues (6). Indeed, Mitchell et al. (2000) observed that a tolerance induced by environmental cues, compared with a tolerance developed without consistent contextual pairing, involved different patterns of neuronal activity in both the amygdala and the hippocampus (6).


The conclusions drawn from the previously mentioned observations suggest an additional tolerance caused by environmental cues. Such results can be applied to medical care and treatment. Within medical terminology, "drug overdose" may be a misnomer in many cases: an overdose may not be solely the result of an alteration of the dosage. When ten drug addicts who had experienced near-death overdoses were questioned about their environment while injecting heroin, seven of the ten claimed to have been shooting up in a new and unfamiliar setting (7).

Changing the environment where a drug is administered may be advantageous for patients suffering from chronic illnesses whose treatments are limited by developing tolerances. Typically, the development of a drug tolerance results either in an increase in the dosage, which often involves toxic side effects, or in a change of prescription, if the option is available. Considering the role environment plays in drug tolerance offers an alternative: a patient may limit the progression of a developing tolerance by changing the environment. Astonishing medical professionals, a patient suffering from cancer endured a treatment that was expected to become ineffective within two years, yet it continued to be effective after almost five years (8). The lack of tolerance was attributed to the fact that the patient had moved twice during the previous years of treatment, and had therefore changed the environmental cues (8).

The treatment of patients in rehabilitation centers may also draw on the observations pertaining to environmental cues and physiological adaptations. Since an addict's response to drug-paired stimuli reflects a learned compensatory response, clinicians may weaken the response and the onset of craving through repeated, controlled exposure to the drug-paired environmental cues without administration of the full drug dose (9).

Due to the observations mentioned, the model for drug tolerance within the nervous system can be modified. Traditionally, drug tolerance was seen as a direct consequence of the prolonged and repeated input of a drug resulting in a change within the nervous system. In addition to the drug itself, however, there are numerous other inputs pertaining to the environment in which the drug is administered. This collection of additional inputs causes conditioned responses within the nervous system. The inputs provided by environmental cues contribute to the observable drug tolerance and explain the increased level of tolerance that is conditional on a consistent setting.


References

1) Tolerance Definition Information

2) The American College of Neuropsychopharmacology, a descriptive account of the adaptive processes that regulate tolerance to the behavioral effects of drugs

3) Biography of Ivan Pavlov

4) Siegel, S., Hinson, R.E., Krank, M.D. and McCully, J. Heroin "overdose" death: contribution of drug-associated environmental cues. Science 216, 436-437 (1982)

5) University of Plymouth, Department of Psychology page, a clear outline of the experiment

6) Nature Neuroscience

7) Siegel, S. Pavlovian conditioning and heroin overdose: reports by overdose victims. Bull. Psychonomic Soc. 22, 428-430

8) A Practical Application of Environmentally-Induced Tolerance Theory

9) American Psychological Association


Where am I Going? Where Have I Been: Spatial Cogn
Name: Erin Fulch
Date: 2003-02-25 08:44:34
Link to this Comment: 4821


<mytitle>

Biology 202
2003 First Web Paper
On Serendip


In the complex dissection of the human brain evolving in our course, great strides have been made on the path to comprehending thought and action. Evidence concerning the true relationship of mind, body, and behavior has been elucidated through discoveries of the neural pathways enabling the active translation of input to output. We have suggested the origins of action and discussed stimuli both internal and external, as well as concepts of self, agency, and personality, interwoven with a more accessible comprehension of physical functionality. However, I remain unable to superimpose upon the current construct of brain and behavior a compatible notion of awareness of self. What are the cognitive and neural mechanisms involved in understanding the spatial relationships between oneself and other objects in the world? How do we even become aware of space and the environment in which we live? What element of the nervous system governs the processes that enable human beings to navigate through space?

The term "spatial cognition" is used to describe those processes controlling behavior that must be directed at particular location, as well as those responses that depend on location or spatial arrangement of stimuli (1). Navigation refers to the process of strategic route planning and way finding, where way finding is defined as a dynamic step-by-step decision-making process required to negotiate a path to a destination (2). As a spatial behavior, negotiation demands a spatial representation; a neural code that distinguishes one place or spatial arrangement of stimuli from another (1). What, though, serves as such a representation in navigation and from where are these representations derived? The processes occurring within the hippocampus provide such representations.

The hippocampal mode of processing is concerned primarily with large distances and long spans of time. These processes demand a very specific form of spatial representation, one which relates locations to one another, as well as to landmarks in an environment, rather than simply to the agent of action. Spatial attention and action, which result from encoded sensory information, are controlled by the parietal neocortex (1). Information relating to a location, and the stimuli derived from that location, is encoded in the sensory cortices. Informed by this egocentric information, allocentric representations provide a basis from which one's current location and orientation can be computed from one's relationship to sensory cues in the environment. This particular set of locations is referred to as a cognitive map. Cognitive maps are those overall images or representations of the spaces and layout of a setting which ultimately function in the recognition of location (2).
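
The egocentric-to-allocentric computation described above can be written as an ordinary coordinate transform. The sketch below, with made-up numbers, shows how a cue sensed at some distance and bearing relative to the agent maps onto fixed, world-centered coordinates once the agent's own position and heading are known:

```python
# Sketch of an egocentric -> allocentric conversion: a cue sensed at a
# distance and bearing relative to the agent is placed into fixed world
# coordinates using the agent's position and heading. Numbers are made up.

import math

def egocentric_to_allocentric(agent_xy, heading_rad, cue_dist, cue_bearing_rad):
    """Return the world (x, y) of a cue given its agent-relative polar coords."""
    world_angle = heading_rad + cue_bearing_rad
    ax, ay = agent_xy
    return (ax + cue_dist * math.cos(world_angle),
            ay + cue_dist * math.sin(world_angle))

# A landmark 5 m away, 90 degrees to the agent's left, while the agent
# stands at (2, 3) facing along the positive x-axis (heading 0):
x, y = egocentric_to_allocentric((2, 3), 0.0, 5.0, math.pi / 2)
print(f"landmark stored on the cognitive map at ({x:.1f}, {y:.1f})")  # (2.0, 8.0)
```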

Cognitive maps can also be more tangibly understood as the internal representations of the world and its spatial properties stored in memory (3). This memory can include what is present in the immediate environment, what the attributes of that environment are, where a goal environment or destination is located, and how to physically progress to that location. These maps cannot be interpreted as cartographic maps existing within the brain, but rather as an assembly of incompletely integrated, discrete pieces such as landmarks, routes, and regions (4). These pieces are determined by physical, perceptual, or conceptual boundaries. Neural representations of these elements of space are maintained over time, and the brain must solve the problem of updating them each time physical location is altered and, subsequently, a receptor surface is moved (5). With every movement of the eye, we introduce new objects into our visual field, and these novel objects activate a new set of retinal neurons. Despite this constant change, we experience the world as stable, and we remain capable of movement through space. Hence, we are granted the ability to visually accept two-dimensional information, convert it into layout knowledge of the simultaneous interrelations of locations and, through the complex workings of the human brain, undertake creative navigation.

Embracing a new comprehension of the human perception of physical space, and reveling in the completion of my first web paper, I suddenly become aware of a striking truth. Before me, I see nothing more than a two-dimensional space: images on my computer screen and virtual addresses and locations, through which I have myself been traversing and navigating. And yet, I feel as though I have been utilizing the same capabilities and cognitive resources on which I depend to find my way to my next class. There remain myriad questions to answer about the human capacity for spatial cognition, and about the spatial learning developed through personal experience and internal and external stimuli.

Web Sources
1) Models of Spatial Cognition, a paper by the Institute of Cognitive Neuroscience.

2) Psycoloquy, a Golledge article of great interest.

3) Spatial Cognition, a paper by Hanspeter A. Mallot.

4) MITECS, The MIT Encyclopedia of the Cognitive Sciences.

5) The Mind's View of Space, an academic paper.


Through Different Eyes: How People with Autism Exp
Name: Alanna Alb
Date: 2003-02-25 09:07:58
Link to this Comment: 4822


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Many of us have heard of the neurological disorder called autism and have a general sense of what the term "autism" means, along with the typical behaviors that belong to its category. Yet I must question how many of us who take an interest in autism really understand how having this disorder can totally distort one's perception of what one experiences in the world. A person with autism senses things differently than we normally do, and also responds to them in other ways – what we would call "abnormal behaviors". Why is this so? According to scientists, MRI research studies have shown that the brains of autistic individuals have particular abnormalities in the cerebellum, brain stem, hippocampus, amygdala, limbic system, and frontal cortex (7). This provides substantial evidence that autistic behaviors must in some way be caused by these abnormalities. The problem is that we do not know exactly how or why these abnormalities cause someone with autism to experience the world differently than we do. This underlying issue of autism has always greatly intrigued me, and yet the topic of sensory integrative dysfunction in autism has been overlooked for many years. Articles and documents addressing this feature of autism have begun to appear only recently. While conducting research for my paper, I found it a challenge to find articles that specifically addressed the topic I so desired to learn about. Thus, the ultimate goal of my discussion is to reveal a misunderstood, hidden world: the complicated sensory dysfunctions that underlie autistic spectrum disorder.

What have we found out so far about how people with autism experience the world? All the information we do know has been pieced together from observations of autistic behaviors and, more recently, the personal accounts of high-functioning autistic persons. Dr. Temple Grandin, a professor at Colorado State University who has autism, has been able to provide us with an in-depth look into the sensory world of autism: "I pulled away when people tried to hug me, because being touched sent an overwhelming tidal wave of stimulation through my body...when noise and sensory over-stimulation became too intense, I was able to shut off my hearing and retreat into my own world" (7). Tito Mukhopadhyay, a 14-year-old boy from India with severe autism, has also been able to give us a somewhat clearer picture of what he experiences: "I am calming myself. My senses are so disconnected, I lose my body. So I flap [my hands]. If I don't do this, I feel scattered and anxious...I hardly realized that I had a body...I needed constant movement, which made me get the feeling of my body" (2).

These accounts provide a special glimpse into the sensory disorders that accompany autism; Dr. Grandin and Tito are living examples of how the autistic person perceives the world. At first glance, the two testimonies seem very much alike: both of these autistic persons' nervous systems are constantly overwhelmed by the sensory input their bodies receive. A much closer look, however, reveals key differences. Dr. Grandin is a high-functioning autistic person whose nervous system receives too much sensory input; her brain is painfully overwhelmed by the flood of information, and in response she withdraws from the source of input by "shutting off" one or more of her senses in a desperate attempt to find relief. Tito, on the other hand, is a low-functioning autistic person who is, amazingly, still able to communicate what he is feeling to the rest of the world. According to his testimony, Tito's nervous system actually receives so little input that he cannot sense a connection with his own body; his hand flapping is his attempt to calm himself and gain a sense of his body's existence. The comparison between Tito and Dr. Grandin demonstrates an unmistakable yet perplexing truth: no two autistic people are alike. Although they may share common behaviors, these behaviors appear in all sorts of combinations and vary in intensity. It is my opinion that this irregular pattern of autistic behaviors is partly what has allowed the subject to be ignored for so long; I find it unfortunate that researchers in the past probably cast autism's sensory issues aside simply because they were too baffling.

Observed autistic behaviors such as hand flapping, tapping and/or mouthing objects, toe walking, rocking back and forth, head banging, and vocalizing, along with the testimonies of various autistic individuals, have led researchers to believe that those with autism are severely over-sensitive to outside sensory stimuli, under-sensitive, or both. Autistic persons have reported visual distortions and impaired depth perception of their environment, noxious sensations, and auditory, proprioceptive, tactile, and kinesthetic impairments (1). This is evidence that their nervous systems do not process sensory information correctly, so they feel overwhelmed by the abundance, or lack, of sensory information they receive. In response to such confusing input, they exhibit abnormal behaviors in an attempt to either reduce or increase the amount of input their nervous system is receiving. Behaviors like tapping or vocalizing allow them to locate their "boundaries" in their environment, since they cannot see the world the way we do (4).

These distortions in sensory perception have been linked to certain brain abnormalities discovered in brain autopsies and MRI images of people with autism. Normally, our internal "brain maps" give us a sense of our bodies and involve the regions of the brain that deal with the senses and movement. MRI images show that autistic persons have scrambled brain maps (2): the information connections for sensory functions still exist, but they are located in the wrong parts of the brain. For example, face-recognition areas in the brains of autistic persons have been found in the frontal lobes, quite contrary to the specialized location of face recognition in the normal brain (2). Autistic brains have been found to be larger than average, and they show an extraordinary amount of electrical discharge in the hearing regions. The cortical columns contain many more cells than the norm and also make extra connections between neurons; this excess circuitry is believed to cause problems in sensory function (10). Abnormalities found in the brain stem, cerebellum, hippocampus, amygdala, and limbic system may also explain many sensory processing problems. For example, the amygdala, the emotion center of the brain, is underdeveloped, as is the hippocampus. The hippocampus controls sensory input as well as learning and memory, so immature growth in this region would certainly explain some common autistic behaviors. In the cerebellar cortex, a significant lack of Purkinje cells has been discovered; how this abnormality relates to neurodevelopment and mental function is still unclear to researchers (9).

As of today, researchers recognize the common pattern of autistic behaviors, and they have located what and where the abnormalities in the brain are. They agree that these abnormalities may contribute to the behaviors often observed in autism, but exactly how and why is still widely debated. Various researchers have come up with their own interpretations of the connections between the autistic brain and autistic behavior. In terms of visual perception, for example, researchers have debated whether autistic persons really do have severe visual distortions of what they see, even though they are by no means blind. Many have asked how and why some autistic persons rely on touching objects to identify and locate them despite their apparent ability to see. We know that the parts of the autistic brain that control vision may be under- or over-developed, but it is not understood how autistic persons may sometimes see just fine, while at other times they behave as if they were truly blind. From what I have read, some professionals question whether these visual experiences are truly caused by physiological problems in the brain or are mere hallucinations. The second argument seems highly unlikely to me when so many apparent abnormalities in the autistic brain have been detected; it only makes sense that the visual disturbances would be attributed to something physiological, not psychological.

Unfortunately, there has also been an overwhelming reluctance by professionals to rely on the testimonies of autistic persons who are capable of describing their condition. I was rather shocked to come upon this fact while conducting my research. I do not understand why they would refuse to listen to the ones who suffer from the disorder, because they are the only ones who can actually explain what it means to live in the world of autism. Do these researchers believe that the words of a mentally disabled individual are not credible? Such negative attitudes towards society's mentally disabled have only delayed the quest to solve the baffling puzzle of autism.

In conclusion, we are left with more questions than before, and no definite answers. I have explained the complications surrounding sensory integrative dysfunction in autism, hoping to make others aware of how much it affects those living with autistic spectrum disorder. Autistic people respond to a lack or abundance of sensory input by flapping their hands, shutting off certain senses, and exhibiting other abnormal behaviors. Several abnormalities have been found in the autistic brain, but researchers debate the connections between these abnormalities and autistic behavior. These debates, as well as unfavorable attitudes towards the mentally disabled, have only slowed our progress in the search for answers. I can only hope that improved research and technology, along with increased awareness and compassion in society, will improve our knowledge and understanding of sensory dysfunctions in autism.

Bibliography and Additional Links:


1) Can Foundation Page, A Case Study of Distorted Visual Perception in Autism

2) Autism Today Page, A Boy, a Mother and a Rare Map of Autism's World

3) Autism Today Page, Different Sensory Experiences/Worlds

4) Autism Today Page, Possible Visual Experiences in Autism

5) Autism Today Page, Reconstruction of the Sensory World of Autism

6) Autism Today Page, Auditory Processing Problems in Autism

7) Autism Info Page, My Experiences with Visual Thinking Sensory Problems and Communication Disorders

8) Autism Today Page, An Inside View of Autism

9) Pub Med Page, Nicotinic Receptor Abnormalities in the Cerebellar Cortex in Autism

10) Pub Med Page, Stereological Evidence of Abnormal Cortical Organization in Individuals with Autism

11) Autism and Related Conditions Page, Sensory and Motor Disorders

12) National Center for Biotechnology Information Page, Neurofunctional Mechanisms in Autism

13) Autism Today Page, Sensory Disorder


Bradykinesia
Name: Laurel Jac
Date: 2003-02-25 09:28:53
Link to this Comment: 4823


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Perception is an intangible part of every being. It cannot be explained, defined, or nailed down the way most scientists would like. In some ways, perception can be taught: a person's circumstances and background cause him or her to perceive a situation in a particular way. In other ways, perception is unpredictable and ever-changing. Even here, attempting to describe the indescribable, there are flaws in the last two sentences, because they are based on the writer's perceptions of perception. It is too subjective for a "scientific" definition. What does it mean for a person suffering from bradykinesia? If the individual understands the condition, she will realize that her perceptions are not always correct: she may perceive herself to be making a fist, or spreading her fingers, when in fact she has not accomplished this. (1) A blind and deaf person may have perceptions about the world around her, but most likely her only correct perceptions are those about herself, such as "I am moving my arm" or "I am swinging my legs." External stimuli are ineffective in this person, whereas a person with bradykinesia can react completely and at normal speed only to external stimuli; because of damage to signal pathways, the internal stimuli are ineffectively activated. (1)

Bradykinesia is a Greek term meaning "slow movement," and it is one of the constituents of Parkinson's disease (2), although it is also associated with other diseases. For patients suffering from Parkinson's disease, it is usually the most tiring and frustrating of the associated conditions. Small muscle movement is among the first areas of the body affected, so a common test is to ask the patient to tap her finger. Normal individuals tap their fingers at 4 or 5 Hz; someone afflicted with bradykinesia can usually manage no more than 1 Hz. (3) There is no cure for bradykinesia, though certain surgeries may lessen it. Hope remains for the future as researchers continue to explore different possibilities, examining causes and treatments that may lead to a cure and to more clues about Parkinson's disease, Huntington's disease, and the other conditions with which bradykinesia is associated. (4)

Bradykinesia affects not only the speed of movement but also the ability to complete a motion. While walking, the arms no longer swing but remain lax at the person's sides. (2) If a person suffering from bradykinesia is asked to make a fist without looking, she can tell that her movements are slow; she does not, however, realize that she never makes a complete fist, the fingers perhaps only slightly bent. Slowness of movement by itself is not bradykinesia: it can be associated with depression, stroke, or any kind of brain injury, but in those cases the motion, though slow, is complete. The shuffling stride of a Parkinsonian and the monotonous voice are examples of bradykinesia: not only does the person move the feet and legs slowly, she is unable to make a full stride. (1) Bradykinesia is both a motion disorder and a perception disorder, and a therapy treating both aspects has yet to be found.

Most cases of bradykinesia do not affect the entire body, but it is possible for the whole body to be afflicted. In severe cases, patients show a noticeable, unnatural stillness. While seated, they make none of the small movements normal individuals make, such as crossing and uncrossing the legs or arms, shifting the angle of the head, or tapping the fingers. Bradykinesia in the face can lead to what is called "mask face" because of the constant lack of expression. Loss of voice volume and lack of intonation are also common in those with bradykinesia. (4)

Since the most obvious symptoms of bradykinesia can apply to other diseases, and there are several causes for the condition, it is important that all bases be checked: a thorough family history, drug use, and any pre-existing conditions must all be considered. MRI generally rules out stroke and tumor. Sometimes a lumbar puncture is performed to measure metabolites and neurotransmitters in order to rule out metabolic disorders such as dopa-responsive dystonia. Drugs such as calcium-channel blockers, neuroleptics, and serotonin-reuptake inhibitors can cause bradykinesia. (4) Other causes include dementia, depression, drug-induced parkinsonism, and repeated head trauma. (5)

The problem of bradykinesia lies in the internal connections and stimuli within the nervous system; it is not a behavior that can be overcome with training. When there is an interruption of a signal pathway in the nervous system, the brain has an amazing ability to reroute the signal, creating a new pathway. The new pathway, however, does not carry the signal effectively: it takes much longer, and thus the translation of the signal into motion is much slower than before. Bradykinesia is reduced in patients who undergo therapies that attempt to restore the signal pathway by dealing with the interruption. (2) Even during such treatments, however, the patient may experience what is called the "freezing phenomenon" of Parkinson's disease. The entire body becomes frozen, and the person is literally trapped within her own body, unable to help herself. Although the specifics are not known, it is suspected that freezing occurs when there is a dopamine deficiency in a specific portion of the nervous system called the substantia nigra. Since people who suffer from bradykinesia respond better to external stimuli than internal ones, triggering an internal stimulus with an external one can end the freezing phenomenon's control over the body. (6)

One of the identifiable chemical imbalances in people suffering from Parkinson's disease is the depletion of dopamine levels within specific regions of the brain. The treatment, therefore, is L-dopa, a precursor of dopamine, which converts to dopamine within the brain and compensates for the losses. (4) This type of medication does much to slow the progression of the disease, but it has little effect on bradykinesia. Surgeries are also used to alleviate the symptoms of Parkinson's disease. The two main surgeries performed are the thalamotomy and the pallidotomy; of these, only the pallidotomy affects bradykinesia. The surgery involves placing a heat lesion in the globus pallidus internus to correct the abnormal discharge of nerve cells in that area of the brain. This treatment existed before the introduction of L-dopa in the 1960s. L-dopa became overwhelmingly popular, but because of occasionally drastic side effects, not all Parkinson's disease patients can take the medication. Furthermore, L-dopa has shown little long-term success. Despite this, it continues to be one of the main treatments for Parkinson's disease. (7)

As long as there is creativity and variation of perception, research will continue to explore and probe further into the hows and whys of diseases such as Parkinson's. While research on the larger diseases continues, new information on the symptomatic conditions will surface and help to explain both the large and small pictures. Presently, much research is taking place. Neurotrophic proteins are being researched as ways to protect nerve cells from deterioration. Neuroprotective agents, such as naturally occurring enzymes, are being investigated for their role in deactivating so-called free radicals. Explorations continue in neural tissue transplantation and genetic engineering. (8) The possibilities that research suggests are exciting: it holds the promise that future generations will only dig deeper, learn more, and question more. Bradykinesia is part of everyday life for some; others have never heard of it. Perhaps someday research will reveal something that leads to effective treatment and a better understanding of the functions within the brain.

References

1)Curing Parkinson's disease in our lifetime

2)Parkinson's disease caregiver website

3)Awakenings: the Internet focus on Parkinson's disease

4)We Move

5)General Practice Notebook

6)Freezing Phenomenon in Parkinson's disease

7)Neuroscience Clinic

8)Holistic Medicine


From Entropy to Avalanching Sand Piles: Important
Name: Geoff Poll
Date: 2003-02-25 09:30:02
Link to this Comment: 4824


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The year is 1967, and Theodore Bundy, an average American college student, has fallen in love with Stephanie, a dark-haired co-ed at the same state university. He convinces her to go on a few dates, but she quickly loses interest, later citing his lack of ambition. The rejection on his heels, Bundy shifts gears and spends the next six years of his life transforming himself into the law student of her dreams. When they meet again, Bundy holds the upper hand and Stephanie falls in love. A short time after the small wedding ceremony, Bundy abandons Stephanie during a ski vacation, and she never hears from him again. (2)

In the context of this short historical blip from the life of America's most "normal" serial killer, the ensuing killing and mutilation spree may be explained in any number of ways. Biologically, we could look for an imbalance in neurotransmitter firing or an oversized development in the frontal lobe of his brain. Sociologically, we could point to society's need to produce deviants in order to see itself more clearly. A psychoanalyst might notice that Stephanie and most of his victims bore a strange resemblance to Bundy's mother, about whose identity he was deceived until late in adolescence.

Each of these explanations provides its own compelling paradigm for looking at "abnormal" behavior, yet leaves great gaps in our understanding of our own "normally" irregular behavior. We will forever be attracted to deviance models as a way of examining that which we are not, but "normal" human behavior is also sporadic. More rigorous models are needed to take in the inconsistency, pick out the places where patterns begin to emerge, and seek there a more profound summary of observations.

With the help of these models we will find a place among us for naturally occurring Ted Bundys, but more important is the perspective we gain in looking at our own variable behavior against the backdrop of millions of years of evolution. At the end we will have few definitive answers, but many notable implications for the way we perceive our world on many different levels.

For a jumping-off point we start with the smallest example of randomness found in nature: the atom. The Second Law of Thermodynamics describes a system that begins with a large but already dissipated amount of energy, coming from nuclear reactions in the sun, and ends when that energy disperses at the level of each tiny atom, whose "random" movement is propelled by that energy. (10)

The study of entropy is the study of the amount of this energy dispersed in a given process at a certain temperature. Although the Second Law is a founding law of nature, life does not arise from nature through entropy but through the blocking of entropy. While each particle carrying potential energy will unload that energy and spread it to as many less energetic particles as possible, living systems work to harness the energy by creating boundaries and walls, both physical and chemical. (10)

The human body presents us with a tangible example of this process in action. Still thinking about tiny particles dispersing their energy to other tiny particles, entropy can be observed in just about every human biological process, as there is always energy flowing into or out of the body. The human body therefore exists as an open system in a state that is far from equilibrium. This state is marked by the disorderly movement of energy always leaving the system and by a constant need for replenishment through the metabolism of food and oxygen. We do contain some of the energy, storing it in proteins and carbohydrate molecules, but it is in the disorder that the true nature of the human body becomes apparent. (10)

If we were to map out human behavior the way we do the behavior of tiny particles, we would find the same trend towards randomness. Whether an individual or an atom, objects under the sun move in ways that we cannot predict. Chaos theory (CT) centers on this principle. (9)

So named for the chaotic behavior first observed in gaseous molecules, "chaos theory" is something of a misnomer. The bulk of the theory deals with looking at this chaos not through linear equations, as was previously done, but through three-dimensional nonlinear maps. A nonlinear mapping is one way to describe the activity of a system whose multiple pieces interact dynamically over the course of time: each piece moves without repeating itself, and without direct correlation to any other particular piece in the system. Using this mapping technique, scientists observe beautifully ordered patterns emerging from an otherwise chaotic set of data points. These data points are simple representations of the type of structures that make up most of the world's complex systems. (9)
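
To make this concrete, here is a minimal sketch in Python of one classic nonlinear system, the logistic map. The paper names no specific system, so this stand-in and its parameter values are illustrative assumptions, not anything from the text:

```python
# Logistic map: a one-dimensional nonlinear system whose successive
# values never exactly repeat, yet stay inside a bounded, patterned
# region. An illustrative stand-in; not from the paper.
def step(x, r=4.0):
    # One iteration of x -> r * x * (1 - x); chaotic for r = 4.
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-9  # two nearly identical starting points
for n in range(1, 41):
    a, b = step(a), step(b)
    if n % 10 == 0:
        print(n, round(a, 6), round(b, 6), round(abs(a - b), 6))

# The two trajectories soon disagree completely (sensitive dependence
# on initial conditions), yet each remains confined to [0, 1]:
# unpredictable point to point, ordered in the large.
```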

To better understand the idea of a dynamically acting complex system, we turn to Per Bak's sand piles and their potential to self-organize, as living systems do, to a point of criticality. The sand pile begins when grains of sand are dropped continuously onto a table. The sand forms a pile within its confines, and at some point the pile will crumble, or avalanche. Repeating the dropping of sand over time, it was found that the sand always organizes itself into a predictable pile, but the avalanches are never predictable: in a given time period there will be a certain number of small and large avalanches, but it is impossible to predict when they will come. (3)
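
The sand-pile experiment is easy to caricature in code. Below is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile, the standard idealization of the experiment described above; the grid size, toppling threshold, and number of grains dropped are illustrative assumptions, not values from the paper:

```python
import random
from collections import Counter

SIZE = 20          # grid is SIZE x SIZE; grains fall off the edges
THRESHOLD = 4      # a site holding 4 or more grains topples

grid = [[0] * SIZE for _ in range(SIZE)]
avalanche_sizes = Counter()

def drop_grain():
    """Drop one grain at a random site, relax the pile, and
    return the avalanche size (number of topplings)."""
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1
    topplings = 0
    unstable = [(x, y)] if grid[x][y] >= THRESHOLD else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= 4
        topplings += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains are lost
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    unstable.append((ni, nj))
    return topplings

for _ in range(50000):
    avalanche_sizes[drop_grain()] += 1

# Small avalanches vastly outnumber large ones (an approximately
# power-law pattern), but when any given large avalanche will occur
# is unpredictable.
for size in sorted(avalanche_sizes)[:10]:
    print(size, avalanche_sizes[size])
```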

Looking closer at the sand pile, we see many tiny grains, each alone acting randomly according to its fall and any number of variables it experiences. Together the grains form a larger, more stable structure. At this point two forces act on the pile. Entropy tells us that a structure approaching its critical point, where it has grown to maximum size and strength, is full of potential energy that will be dispersed given the right activation boost. At the same time, the pile's stability up to this point is maintained by forces that continue to hold it together. Alone, a grain of sand has no say in either process, neither building a stable pile nor causing its collapse; these results are completely system-dependent.

The data this model produces are consistent with those of larger sand piles, like mountains and their tendency to avalanche, as well as stock-market activity and other social systems. Bak's most compelling parallel is a translation of the sand pile to a living, evolving ecosystem. In this "landscape," each species acts in relation to the greater system the way a grain of sand acts in relation to its pile. The species, defined as a group of individual organisms acting at the same fitness level, is always working towards the critical state characterized by maximum fitness within the ecosystem. (6)

In the ecosystem model, the collapse of the species at its critical level is the extinction of a species or a mutation to a 'new' species. Both of these events are very regular functions of the natural evolution of the ecosystem. Again it does not make sense for a species at its critical level to mutate or become extinct, but in an ecosystem there are countless species all working at the same time for the same goal of maximizing their fitness, which produces a nonlinear network.(6)

When most of the species can stabilize at a high fitness, the system will remain in a state of stasis. While any species would like to remain in stasis, there will always be a species with a lower fitness level. Being deficient, the barrier keeping this species below its criticality will be small, and it is likely to mutate or become extinct. This mutation of a seemingly unimportant species does not command much attention in itself, but if we remember the web that makes up the ecosystem, we will quickly understand its significance. (6)

If our one species were directly connected to only two other species, its mutation or extinction would upset the immediate landscape of those two, and their fitness, previously stable, would have to be reevaluated. Each of the two species has direct connections to two others, and so it goes on through the network of animal and plant species, as well as the self-regulated temperatures and chemicals making up the ecosystem. The almost immediate result (in terms relative to natural evolution) of this small change is a major shift in a landscape to which every other species' fitness is no longer suited. (6)

The point to be taken from this model is that the resulting changes and catastrophes are a normal part of the evolution of a landscape, and that they come from within the network, not from outside influences. The environmental pressure that "causes" the avalanche is constant. The dependent variables are the fitness of a given species and the natural forces, or barrier,* keeping it at its critical state. (6)

Making a quick jump from one complex system to another, we see that the Nervous System (NS) of an individual human being, one of the main communication networks in our bodies, is 99% internal communication and relies little on outside world input. Focusing on the NS and looking to its smallest distinguishable mechanism, we find the nerve to be a self-regulating system.(7)

Along each axon carrying information from one neuron to another, there are sets of gated potassium (K+) channels. Particles of K+, following entropy, move towards dispersing their energy, but they can only move as far as the axon membrane that contains them. When the gates open and let the charged K+ through, it flows out and leaves behind a negative charge that pulls it right back in again. The system continues this way until it reaches an equilibrium, a critical state where the push from within the axon equals the pull from outside. The system, forming a potential battery, waits in stasis for an activation energy. (7)
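
The balance point described here is what physiologists call the potassium equilibrium (Nernst) potential. The paper treats it qualitatively; a back-of-the-envelope sketch shows the arithmetic, with typical textbook concentration values that are assumptions rather than figures from the paper:

```python
import math

R = 8.314      # J/(mol*K), gas constant
T = 310.0      # K, roughly body temperature
F = 96485.0    # C/mol, Faraday constant
K_out = 5.0    # mM, typical extracellular K+ (textbook value)
K_in = 140.0   # mM, typical intracellular K+ (textbook value)

# E_K = (RT/F) * ln([K+]out / [K+]in): the voltage at which the
# outward "entropic" push of the concentration gradient exactly
# balances the inward electrical pull.
E_K = (R * T / F) * math.log(K_out / K_in)
print(f"E_K = {E_K * 1000:.1f} mV")   # roughly -89 mV
```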

That battery is the beginning of an extensive internal system. At the other end, all ten billion neurons interact through 1,000 billion synapses. (1) This is not a linear system: it is complex and dynamic, with many loops for feedback and self-regulation. With a bit of imagination we can picture a landscape that is the human body, where the resting potential of the axon is the critical state of the neuron and every thought, movement, or reaction is an avalanche.

Neurons fire in all-or-nothing blasts, but the different firing patterns within networks of neurons represent climaxes of varied sizes. Just as in a natural landscape the species that last evolved is most likely to arrive with a low inhibitory barrier and fitness level and be the next to evolve again, full thoughts and muscle motions may very well be the result of cascading bursts of neuronal activity.

At the same time, on a scale more gradual than a neuron firing, we see development in the body, both physical and "mental," which makes sense in light of the constant extinction/mutation/evolution that goes on according to this model. The neurons do not change physically. Instead, what we see and perceive is always the result of the patterns of those neurons we have had since birth, and of the constantly evolving relationships among them.

The human body is always working to maximize its individual fitness and the fitness of its landscape within the larger landscape of the world. The NS is one example of this, but it is not the end of the story. Varela also includes the endocrine and immune systems in his model of human cognition and perception. His Santiago theory (developed with Maturana) states that the mind should be thought of not as an object but as a process involving all of these systems as well as the outside environment. The two scientists do not deny that a physical world exists, but they see no separation between us, our perception, and that world. (1)

It is our ability to think abstractly that makes the separation. That ability seems to come from the cortex, a subset of our NS. If all three of these systems contribute to our perception of the world, we lose something in conceptualizing abstractly. Simpler organisms are not capable of abstract reasoning, yet they still perceive the world well enough to get by. There have been interesting experiments with simpler organisms, cited by Paul Grobstein, showing an increase in "reliable input/output relations in the spinal cord" when parts of the NS are removed from an animal. (11) They perceive more efficiently, and so the question is how the rest of their perception is affected.

We can observe this phenomenon in quadriplegics, who have part of their systems cut off. After observing a reflex reaction of his or her own foot, a quadriplegic may very well claim, "I can not move my foot." Though the brain functions, its perception of the world is not accurate. (7) A quadriplegic has "consciousness," and so, observing this behavior, we see that consciousness is just a small part of our potential for perceiving the world.

Ted Bundy was asked countless times, in many different ways, whether what he did was wrong, but his answer can only possibly incriminate his consciousness. When we ask the question, we speak to a piece of him, the piece that communicates with the world through language. As we have seen, even within humans there is so much more than we are able to readily perceive or articulate.

With or without the conscious opinion of Bundy, our model provides little help in theorizing about the events leading up to any particular burst of activity or catastrophe. Each one occurs as the result of a network of activity in a landscape: multiple members working towards a state of criticality determined by their natural inclination towards entropy and by the self-regulation imposed to fight those forces and utilize the energy handed down by the sun. If we maintain this perspective, any conclusions we make will be nonmaterial and irreducible.

Our models do not offer us conclusive answers. The beauty of this way of thinking is the lack of control it gives us. Nevertheless, there are serious implications for every scientific field, and the subject deserves serious thought. Since we are all living in a landscape, my questions tend towards how this looks as a mapping of our lives in relation to each other. We like to think of ourselves as existing forever in periods of stasis, yet I look around and do not see a species that has maximized its fitness. Wars and serial killers are two quick examples of why the system, as we are running it or participating in it, deserves to be looked at.

*The probability of a species mutating is p = e^(-b/t), where b = barrier and t = mutation parameter. The time between mutations is long, but the actual mutation is fast. The higher the b, the higher the fitness of the animal and the greater its stability (the barrier equals stability when a species is at its maximum fitness); the lower the b, the easier it will be to make a change.
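
As a quick check of how strongly the barrier suppresses change, here is the footnote's expression evaluated for a few barrier heights; the numerical values are assumptions chosen for illustration, since the paper supplies none:

```python
import math

# p = e^(-b/t): probability of mutation given barrier b and
# mutation parameter t (illustrative values only).
t = 1.0
for b in (1.0, 2.0, 5.0):
    print(f"b = {b}: p = {math.exp(-b / t):.4f}")
# b = 1 -> 0.3679, b = 2 -> 0.1353, b = 5 -> 0.0067: the higher the
# barrier (the fitter and more stable the species), the rarer the change.
```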

References

Non-Web References:

1) Capra, Fritjof. Web of Life. New York: Doubleday, 1996.

2) Rule, Ann. Stranger Beside Me. New York: New American Library 1996.

3) Kauffman, Stuart. At Home in the Universe. New York: Oxford University Press, 1995.

4) Softky, W.; Holt, G. "A Physicist's Introduction to Brains and Neurons". Physics of Biological Systems. New York: Springer, 1997.

5) Bak, P.; Paczuski, M. "Mass Extinctions vs. Uniformitarianism in Biological Evolution." Physics of Biological Systems. New York: Springer, 1997.

6) Flyvbjerg, H.; Bak, P.; Jensen, M.H.; Sneppen, K. "A Self-Organized Critical Model for Evolution." Modelling the Dynamics of Biological Systems. New York: Springer, 1995.

7) Class notes. Bryn Mawr College. Biology 202, Professor Grobstein.

Web References:

8)Introduction to Chaos Theory, short but easy to understand

9)Chaos Theory, very good and comprehensive

10)Second Law of Thermodynamics, for non scientists, very easy to use

11)Variability in Brain Function and Behavior, interesting article by our prof


The Correlation between Thinking and Consciousness
Name: Shanti Mik
Date: 2003-02-25 09:44:48
Link to this Comment: 4827


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

It is inherent to a human being to have the ability to reason and think, but how exactly do we go about doing this? Is thinking merely a matter of input and output? If so, how can a person build on previous knowledge? Where is this knowledge stored? Can we "max out" our brain, or is it elastic? And is intelligence past knowledge, or something we come programmed with? We cannot hope to answer all these questions, but it is interesting to note that if these questions were asked of five different people, their responses would differ because of their previous knowledge. They have all learned differently, and have all learned to think differently. These questions also raise the general ideas we confront when we discuss the notions of thinking and learning. Because the human brain is so complex, it is not easy to peer inside and decide whether it is the firing of neurons that helps us learn or something else. Why is it that some people pick up mathematics incredibly quickly while others struggle, and some pick up philosophy while the mathematicians struggle with that?

To first understand thinking, we have to believe in the mind. Our first question, then, is whether we know that the mind exists. One philosopher to tackle this problem was René Descartes in the seventeenth century. He believed that the behavioral sign indicating the presence of the mind was language. Thinking is one distinct function of the mind, Descartes said, and thinking produces thoughts; language is what expresses these thoughts, and therefore language implies thinking (1). Thinking in turn implies the mind. Descartes then faced an interesting dilemma: if communication is evidence of a mind, do animals have minds, since they are able to communicate with each other? Descartes believed that animals were mere machines and therefore did not have minds. Yet this becomes harder to maintain once we consider evolution. If, through evolution, humans did in fact come from less complex animals, then where did the mind come from? If we wish to keep phylogenetic continuity, we have to say that animals also possess minds. With a gradual increase in complexity from animal to human, is it possible for a mind simply to pop in somewhere, or do animals have a lesser form of mind that, like the body, gradually became more complex over time? I am inclined to believe that all organisms have some form of mind. If animals do not possess a mind and do not think, how is it possible to teach a dog to fetch or roll over? The dog itself reveals that it too is learning.

There are generally three types of behaviorists, each of whom sees the brain in a different way: methodological behaviorists, radical behaviorists, and mediational behaviorists. Methodological behaviorists believe that consciousness exists but that it is private to each person's brain and cannot be studied by science; they believe that science demands we study only behavior (1). Radical behaviorists, on the other hand, hold that a person's consciousness may cause behavior: what is going on in your head will have an output or will be directly related to your actions, for example taking medication for your headache, or calling a friend because you feel sad. Radical behaviorists also resist the idea of "unseen machinery"; if the machinery cannot be seen by anyone, a radical behaviorist rejects it (1). Mediational behaviorists study human learning, find that humans learn differently than animals, and try to capture this difference through mediation: animals respond directly to stimuli, whereas humans learn by responding to their environment and can control their own responses (1).

The problem with discussing consciousness is that it is at times hard to tell whether a person is conscious. Descartes believed that the behavioral sign indicating the mind was language, and from this came all the rest of his assertions. Yet would a person who is mute also be considered unconscious? What about a person who is deaf and therefore cannot respond to the spoken language of others? Does sign language count as another language that can indicate the presence of a mind in a deaf or mute person? I would assert that a human who is born deaf or mute, while not able to use the language most humans use vocally, still shows the presence of consciousness and the ability to learn, since they are able to learn something like sign language. This ability to learn, even though a mute person cannot communicate the way the rest of us do, is evidence that if the brain cannot communicate its thoughts through the voice, it will find another way. It also demonstrates that language may not be the sole determiner of consciousness. Body language could then be seen as a form of communication and language, as could our actions, which is why we say at times that actions speak louder than words. This in effect reveals that we do not feel language alone is the evidence of a mind.

The question then becomes: regardless of whether consciousness is evidenced by language or by the body, how does thinking actually occur in the brain? To understand this, we need to understand what the brain is made of. The brain is "four pounds and several thousand miles of interconnected nerve cells (about 100 billion)" that control movement, sensation, thought, and emotion (3). These billions of interconnected neurons all transmit signals at an astonishing rate. We then have to understand how things come to be understood by the brain. It all begins with a distal stimulus, any object in the world that a human being can perceive, say, a book. That distal stimulus gives rise to a proximal stimulus, the retinal image of the book. Finally, the book becomes a percept, when your brain recognizes the object as a book (2). The percept is what you see, and in the future it becomes your interpretation of the same object: now you know what a book looks like, and if you come across a similar-looking object, your brain perceives that also as a book. As a child, this is how you come to perceive the world. Your parents hold an object in front of you, repeat its name, and try to have you make the same sounds; then they show you how it is used. It is through this form of association that a child learns what a bottle is and what they can do to get one. The same process appears in conditioning, which is one theory of how people learn.

The first question is how many different types of learning there are. Is there just instrumental and Pavlovian conditioning, or is there something else, an immediate association between stimuli and movements? Instrumental and Pavlovian conditioning, a dichotomy in itself, represent types of associative learning. Pavlovian conditioning was devised by Ivan Petrovich Pavlov (1849-1936), a physiologist. It follows the law of contiguity, which says that two ideas will become associated if they occur in the mind at the same time. From this Pavlov derived his hypothetical brain theory, which says that "stimuli set up centers of activity in the cerebrum and that centers regularly activated together will become linked, so that when one center is activated, the other will be too" (1). On this theory, the more times you encounter a stimulus paired with the same response, the more firmly you learn to associate the response with that stimulus. Yet a question arises: what if the next time a person encounters that stimulus, a different response is given? Can the brain attribute both responses to the same stimulus and also differentiate the two? I believe so. The human mind possesses great elasticity, and it is part of growing up that we discover this, for we find that sometimes we are given the same response to different stimuli: our parents tell us we cannot go somewhere, whether we asked to go to the mall or to a friend's house. A great deal of learning also comes through patterning (6), which is related to perception. We form classes of information in which similar stimuli or similar responses are categorized together. Through conditioning, the more we are exposed to in life, with all its myriad responses, the more complex our categories become.
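
The contiguity law lends itself to a simple quantitative form. The sketch below uses the Rescorla-Wagner update rule, a standard later formalization of Pavlovian association that the paper itself does not mention; the learning-rate and asymptote values are illustrative assumptions:

```python
# Rescorla-Wagner rule: association strength v grows on each paired
# trial in proportion to how much learning remains. Illustrative
# parameters; not from the paper.
ALPHA = 0.3    # learning rate (assumed)
LAMBDA = 1.0   # maximum association the pairing supports (assumed)

v = 0.0        # current stimulus-response association strength
for trial in range(1, 11):
    v += ALPHA * (LAMBDA - v)   # repeated pairings strengthen the link
    print(f"trial {trial}: association = {v:.3f}")

# Association grows quickly at first and then levels off, matching
# the observation that extra pairings of the same stimulus and
# response add less and less.
```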

Learning is not something a human can do optimally all her life, and even at the age when a human can learn the most, many factors can stunt it. Humans are known to learn more readily at 20 years of age than at 60 (5), yet within the population of 20-year-olds, not everyone learns the same way. There are many incredibly smart 20-year-olds, and there are many who cannot learn without spending hours poring over material. What causes this? Is intelligence and learning hereditary, or is it, in the words of one behaviorist, that "the essence of intelligence is the adequate response to a stimulus"? (4) We tend to believe that the greater a person's response to stimuli, the greater their intelligence. But isn't intelligence itself rather arbitrary? Perhaps it would be better to say that a person's greater response to stimuli is evidence of their ability to learn and respond to their surroundings.

As we come to a greater understanding of learning and human behavior, we realize that the two concepts are not as simple as we initially thought. What indicates that a person is learning? Is it possible to measure the amount of material a person learns every day, and does thinking indicate learning? We have been thinking about consciousness and trying to understand how closely learning and thinking are related. Our brains are in a constant state of consciousness: we are constantly alert to what is going on around us, and we are constantly observing. It would be impossible to measure the amount of information the brain absorbs, but as we realize its elastic potential, we see how great the pursuit of knowledge really is. Our brains, unlike those of animals, have an incredible ability to reason, think, and pursue ideas. Our thoughts can be independent of our actions, and we have the ability to observe the world around us critically. It is this gift we are given as humans, and it reinforces the idea that a mind is a terrible thing to waste.


References

1) Harris, Richard J.; Leahey, Thomas H. Learning and Cognition. New York: Simon & Schuster, 1997.

2) Galotti, Kathleen. Cognitive Psychology In and Out of the Laboratory. New York: Wadsworth Publishing Co., 1999.

3) Brain Source, a great site to find out about learning, brain disorders, damaged brains, and the effects of stress on the brain

4) What is Intelligence?

5) Brain Connection

6) Understanding about learning and the brain


Reality versus Fiction: The Mysterious Illness Sc
Name: Nicole Meg
Date: 2003-02-25 09:56:22
Link to this Comment: 4828


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Imagine being functional your entire childhood and teenage life. You attend class, study, work, and juggle a myriad of activities. You may have friends with whom you socialize in your free time. You are becoming more independent and learning to care for yourself. Now suppose the newscaster on television starts talking directly to you, or that someone calling with a wrong number is really a government spy, or that you are going out to lunch with the president. You lose control of your life, as you can no longer discern reality from wildly absurd fantasy. Available medical treatment is imperfect, and it is difficult to secure your compliance with it. Friends and family watch your behavior deteriorate, heartbroken, not knowing how to react. Such is the tragic web that schizophrenia weaves into the lives of its victims.

When thinking about the relationship of brain to behavior, what more dramatic condition comes to mind than schizophrenia? One percent of the population, or two million Americans alone, is affected by schizophrenia. Imagine living in a town with a population of 10,000 people: it is likely that ten of your neighbors and community members would suffer from the illness. The tragic illness is not limited to one race or culture; it is found all over the world. It affects slightly more men than women, with the first signs appearing between 16 and 25 years of age, although it has been observed among children as well. (1) Sufferers experience psychosis, or psychotic episodes, during which they are unable to distinguish between what is real and what is not.

According to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), for a person to be diagnosed schizophrenic they must exhibit two or more of the following symptoms in the active phase of the disorder: delusions; auditory, visual, olfactory, and/or tactile hallucinations; disorganized thinking and/or speech; negative symptoms (absence of normal behavior); and catatonia. Diagnosed patients fall into three types: disorganized (characterized by lack of emotion and disorganized speech), catatonic (characterized by reduced movement, rigid posture, or sometimes excessive movement), and paranoid (strong delusions or hallucinations). (2)

The greater tendency of monozygotic twins to share the illness shows that genetics plays a role, but because the concordance is not 100%, genetics is not the sole factor. Considering that only 1% of the population is affected by schizophrenia, the elevated risk among relatives (8% for non-twin siblings, 12% for a child with one schizophrenic parent, 12% for a fraternal twin, 40% for a child of two affected parents, and about 47% for identical twins) further supports the influence of genetics. A 2000 issue of Science identified a small region on chromosome 1 (1q21-q22) as genetically involved in schizophrenia, but little more about it is currently understood. (2)

Brain structure has also been implicated: imaging studies have found that many schizophrenic patients have enlarged lateral ventricles. Does this mean that all people with larger lateral ventricles have schizophrenia? Do all schizophrenic patients have large lateral ventricles? Currently, testing has suggested only that large lateral ventricles are characteristic of schizophrenic patients. Reduced size of the hippocampus, increased size of the basal ganglia, and abnormalities of the prefrontal cortex have also been associated with the illness but are not necessarily indicative of it. (2) Elevated levels of D4 receptor binding have been found during autopsies of schizophrenic patients. (4) Deth's research shows that dopamine can stimulate the methylation of membrane phospholipids through activation of the D4 dopamine receptor and, in summary, suggests that schizophrenia may result from an impairment in the ability of D4 dopamine receptors to modulate NMDA receptors at nerve synapses via phospholipid methylation.
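
For perspective, the familial risk figures quoted above can be restated as multiples of the roughly 1% population baseline; this is simple arithmetic on the paper's own numbers, not new data:

```python
# Familial risks from the text, compared with the ~1% baseline.
baseline = 0.01
risks = {
    "non-twin sibling": 0.08,
    "child of one affected parent": 0.12,
    "fraternal twin": 0.12,
    "child of two affected parents": 0.40,
    "identical twin": 0.47,
}
for relation, r in risks.items():
    print(f"{relation}: {r:.0%}, about {r / baseline:.0f}x baseline")

# Even the 47% identical-twin figure is well short of 100%, which is
# why genetics cannot be the sole factor.
```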

One factor associated with schizophrenia that is difficult to define is environment. In an article in the April 2001 Proceedings of the National Academy of Sciences, U.S. and German scientists reported that one third of newly diagnosed schizophrenics had genetic material from HERV-W retroviruses in their cerebrospinal fluid, as opposed to only 5% of chronic sufferers and 0% of healthy people. Although the origin of the virus is unknown, it seems to trigger the onset of schizophrenia. (3) According to a study in the British Journal of Psychiatry, lack of DHA (docosahexaenoic acid), a fatty acid in breast milk, during infancy increases the risk of schizophrenia in adulthood. (7) Is schizophrenia the result of one factor, all of these factors, or a select few? Although the specific combination is unknown, schizophrenia is most likely a result of many of the factors listed above; current studies, however, suggest that genetic, physiological, and biochemical influences outweigh the influence of environment alone.

How well do current medications treat schizophrenia? Although the illness is as yet incurable, antipsychotics or neuroleptics (such as clozapine, risperidone, quetiapine, and olanzapine) have been shown to control symptoms effectively in most patients. (7) Most of these medications have negative side effects, such as agranulocytosis, a loss of white blood cells, so continued development of more effective drugs with fewer side effects is important. If schizophrenia is not triggered entirely by genetics, physiology, and/or biochemistry, further study of environmental stimuli may be the key to developing lifestyle changes and effective counseling approaches that alter the environment of an at-risk or affected person. Continued research into biological causes, and into medications that treat those causes, is also critical.

How frightening it must be for an individual to ponder the likelihood of succumbing to the illness while struggling to accommodate an affected family member. What I really want to know from future research is this: if someone is predisposed to the disorder, to the extent of having a schizophrenic family member, what, if anything, can they do to prevent it from afflicting them? According to a list compiled by family members of affected patients affiliated with the World Fellowship for Schizophrenia, social withdrawal is the most common warning sign of the illness's onset, with excessive fatigue, deterioration of social relationships, inability to concentrate or cope with minor problems, apparent indifference to even highly important situations, decline in academic and athletic performance, deterioration of personal hygiene, frequent aimless moves or trips, drug or alcohol abuse, bizarre behavior, inappropriate laughter, low tolerance of irritation, and forgetfulness also noted.

References

1) NIH
2) Schizophrenia
3) National Institute of Mental Health, Genetic Link
4) Scientific American
5) Scientific American Link to Viruses
6) Schizophrenia home page
7) Yale Bulletin
8) Schizophrenia at the National Institute of Mental Health
9) World Fellowship for Schizophrenia


The Spirit Molecule (DMT): An Endogenous Psychoact
Name: Neesha Pat
Date: 2003-02-25 11:04:31
Link to this Comment: 4829

"The feeling of doing DMT is as though one had been struck by noetic lightning. The ordinary world is almost instantaneously replaced, not only with a hallucination, but a hallucination whose alien character is its utter alienness. Nothing in this world can prepare one for the impressions that fill your mind when you enter the DMT sensorium."- McKenna.

N,N-dimethyltryptamine (DMT) is a psychoactive chemical in the tryptamine family that causes intense visuals and strong psychedelic mental effects when smoked, injected, snorted, or swallowed orally (with an MAOI such as harmaline). DMT was first synthesized in 1931 and demonstrated to be hallucinogenic in 1956. It has been shown to be present in many plant genera (Acacia, Anadenanthera, Mimosa, Piptadenia, Virola) and is a major component of several hallucinogenic snuffs (cohoba, parica, yopo). It is also present in the intoxicating beverage ayahuasca, made from Banisteriopsis caapi. This drink inspired much rock art and painting on the walls of native shelters in tribal South America, what would be called "psychedelic" art today (Bindal, 1983). The mechanism of action of DMT and related compounds is still a scientific mystery; however, DMT has been identified as an endogenous psychedelic: it occurs naturally in the human body and takes part in normal brain metabolism. Twenty-five years ago, Japanese scientists discovered that the brain actively transports DMT across the blood-brain barrier into its tissues. "I know of no other psychedelic drug that the brain treats with such eagerness," said one of the scientists. What intrigued me were the questions: how and why does DMT alter our perception so drastically if it is already present in our bodies?

DMT is known as the "spirit molecule" because it elicits, with reasonable reliability, certain psychological states that are considered "spiritual." These are feelings of extraordinary joy, timelessness, and a certainty that what we are experiencing is more real than everyday reality. DMT leads us to an acceptance of the coexistence of opposites, such as life and death, good and evil; a knowledge that consciousness continues after death; a deep understanding of the basic unity of all phenomena; and a sense of wisdom or love pervading all existence. The smoked DMT experience is short but generally incredibly intense. Onset is fast and furious; hence the term "mind-blowing" used to describe the effect. It is a fully engaging and enveloping experience of visions and visuals, which vary greatly from one individual to the next. Users report visiting other worlds, talking with alien entities, profound changes in ontological perspective, fanciful dreamscapes, frightening and overwhelming forces, and complete shifts in perception and identity, followed by an abrupt return to baseline. "The effect can be like instant transportation to another universe for a timeless sojourn" (Alan Watts).

The physical and psychological effects of DMT include a powerful rushing sensation, intense open-eye visuals, radical perspective shifts, stomach discomfort, overwhelming fear, color changes, and auditory hallucinations (buzzing sounds). One of the primary physical problems encountered with smoked DMT is the harshness of the smoke, which can cause throat and lung irritation. Surprisingly, however, DMT is neither physically addictive nor likely to cause psychological dependence (Szara, 1967).

Most subjects were reported to have had memorable positive experiences; however, the possibility of unpleasant experiences is not ruled out. The writer William Burroughs tried it in London and reported on it in most negative terms. Burroughs was working at the time on a theory of neurological geography, and after trying DMT he described certain cortical areas as "heavenly" while others were "diabolical." In Burroughs' pharmacological cartography, DMT propelled the voyager into strange and decidedly unfriendly territory. The lesson, however, was clear: DMT, like the other psychedelic keys, could open an infinity of possibilities. Set, setting, suggestibility, and temperamental background were always present as filters through which the ecstatic experience could be distorted (Jacobs, 1987).

The brain, however, is where DMT exerts its most interesting effects. The brain is a highly sensitive organ, especially susceptible to toxins and metabolic imbalances. In the brain, sites rich in DMT-sensitive serotonin receptors are involved in mood, perception, and thought. Although the brain denies access to most drugs and chemicals, it takes a particular and remarkable fancy to DMT. According to one scientist, "it is not stretching the truth to suggest that the brain hungers for it." A nearly impenetrable shield, the blood-brain barrier, prevents unwelcome agents from leaving the blood and crossing the capillary walls into the brain tissue. This defense extends to keeping out the complex carbohydrates and fats that other tissues use for energy, as the brain uses only glucose and simple sugars as its energy sources. Amino acids are among the few molecules transported across the blood-brain barrier, and thus finding that the brain actively transported DMT into its tissues was astounding. DMT is part of a high-turnover system and is rapidly broken down once it enters the brain, giving it the ability to exert its effects in a short period of time. Researchers labeled DMT 'brain food,' as it was treated in a manner similar to how the brain handles glucose. In the early nineties, DMT was thought to be required by the brain in small quantities to maintain normal functioning; only when DMT levels crossed a threshold did a person undergo 'unusual experiences.' These unusual experiences involved a separation of consciousness from the body, in which psychedelic effects completely replaced the mind's normal thought processes. This hypothesis led scientists to believe that, as an endogenous psychedelic, DMT may be involved in naturally occurring psychedelic states that have nothing to do with taking drugs, but whose similarities to drug-induced conditions are striking. DMT was considered a possible 'schizotoxin' that could be linked to states such as psychosis and schizophrenia. "It may be upon endogenous DMT's wings that we experience other life-changing states of mind associated with birth, death and near-death, entity or alien contact experiences, and mystical/spiritual consciousness" (Strassman, 2001).

Hallucinogenic drugs have multiple effects on central neurotransmission. In 1997, it was found that, like LSD and psilocybin, DMT has the property of increasing the metabolic turnover of serotonin in the body. Serotonin is found in specific neurons in the brain that mediate chemical neurotransmission. Axons of serotonergic neurons project to almost every part of the brain, affecting overall communication within the brain. Early in the research on hallucinogens, it was determined that hallucinogenic drugs structurally resemble serotonin (5-HT), and thus researchers thought that DMT bound itself to serotonin receptors in the cerebral cortex (Strassman, 1990).

Further, an increase in the excretion of 5-hydroxyindoleacetic acid (5-HIAA) suggests the involvement of serotonin in DMT action, and elevated blood levels of indoleacetic acid (IAA) are seen during the time of peak effects, implying its role as a metabolite. The relationship between DMT and serotonin led researchers to become interested in the pineal gland. The pineal gland in humans regulates homeostasis of the body and body rhythms; a dysfunction could be associated with mental disorders presenting themselves as disturbances of normal sleep patterns, seasonal affective disorder, bipolar disorder, and chronic schizophrenia. Strassman proposed that the pineal gland, besides producing melatonin, is associated with 'unusual states of consciousness.' For example, it possesses the highest levels of serotonin in the body and contains the necessary building blocks to make DMT. The pineal gland also has the ability to convert serotonin to tryptamine, a critical step in DMT formation. In addition, 5-methoxy-tryptamine, which is a precursor of several hallucinogens, has been found in pineal tissue [Bosin and Beck, 1979; Pevet, 1983] and in the cerebrospinal fluid [Koslow, 1976; Prozialeck et al., 1978].

Every night pinoline (made by the pineal gland), DMT, and 5-MeO-DMT are produced in the brain, and these have been proposed as causal agents of vivid dreams. The interior dialogue produced on a DMT trip leads scientists to believe that the language centers are also affected. Of late, various experiments have been conducted that purport to show that DMT allows awareness of processes at a cellular or even atomic level. Could DMT smokers be tapping into the network of cells in the brain, or into communication among molecules themselves?

According to Grof, the major psychedelics do not produce specific pharmacologic states (i.e., toxic psychosis) but are unspecific amplifiers of mental processes (Grof, 1980). At the same time, the identification of 5-HT2 receptors as possibly being involved in the action of hallucinogens has provided a focal point for new studies. Is there a prototypic classical hallucinogen? Until we have the answers to such questions, we continue to seek out the complex relationship between humans and psychoactives.


References

Szara, S. (1967). Hallucinogenic Drugs: Curse or Blessing? American Journal of Psychiatry 123: 1513-1518.

Strassman, R. (1990). Human Hallucinogenic Drug Research: Regulatory, Clinical and Scientific Issues. Brain Research 162.

Strassman, R. (2001). DMT: The Spirit Molecule.

Jacobs, B.L. (1987). How Hallucinogenic Drugs Work. American Scientist 75: 385-92.

Bindal, M.C., Gupta, S.P., and Singh, P. (1983). QSAR Studies on Hallucinogens. Chemical Reviews 83: 633-49.

Grof, S. (1980). Realms of the Human Unconscious: Observations from LSD Research. Jeremy Tarcher Inc., Los Angeles, pp. 87-99.

www.erowid.org

http://www.drugabuse.gov/pdf/monographs/146.pdf


Anxiety Disorders
Name: Irina MOis
Date: 2003-02-25 14:33:18
Link to this Comment: 4833


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Irina Moissiu Web Paper # 1
As I got close to the Embassy Suites, where Lincoln Financial Group was holding
their interviews, I felt myself get tense. "What if people are in the lobby and they see me
in jeans? Would that make a bad impression?" After a long debate with myself, I decided
that it was nearly midnight and that people would not be awake. I walked into the lobby,
got my room key and went up. We all had our own suite so it was clear that Lincoln had
some money to spend. As I tried to fall asleep, I became more and more restless. I began
thinking about all the things that could go wrong. I couldn't sleep. 3am rolled around.
Then 6am. At 7am I got up, showered, put my suit on and walked out of the room. I
immediately turned around because I realized that I had forgotten my name tag. As I tried
to open the door with the plastic key, I realized I was trembling so badly that I could not get
the stupid key in the door. I finally managed to enter the room and get my name tag and I
proceeded to stab my finger with the safety pin of the tag. The pin kept slipping because
my palms were sweaty. I took a deep breath, cleaned myself, cursed myself for being
clumsy, and went downstairs to eat.

The elevator doors opened and I saw over 150 people in the lobby. I nearly
fainted. I felt like my lungs would not expand and for a second everything went black. I
quickly walked over to the bathroom and slapped myself a couple of times. Splashing
cold water on my face would have been out of the question given that I was wearing
mascara. I asked myself to get a grip (several times) and walk out of the bathroom. I
was so nervous that I hung my head and walked over to the food hoping to avoid any eye
contact. I looked at the food and I wanted to eat because I was hungry, but my nausea
got in the way. I finally had to look up and then I saw the rest of the name tags. "OH MY
GOD!" Cornell, University of Penn., Princeton, Yale, Columbia. I wanted to start
crying but there were too many people around. I thought "you might as well go home.
There is no way in hell you will get a job given the competition."

Time came to mingle with upper management. At this point I started sweating.
My heartbeat was so strong that I could almost feel it impair my speech. Soon enough,
my stomach and intestines began working overtime. Thankfully the bathroom was nearby.

Symptoms so far: tremors, diarrhea, sweating, palpitations, muscle tension, sleep disturbances.

I returned to Bryn Mawr where more fun times were awaiting me. Here is a brief recap of the rest of the month:

Lots of snow = lots of car trouble.

Computer crashes. Thought we saved everything on FTP. Nope! Some files got
lost. But at least I ordered another computer (which I had no money for. But hey! What
are credit cards for!!!?)

I work about 30 hours a week so I can't get to Guild to work on a computer. I fall behind in all my classes because 4½ out of 4½ classes are computer/web-based.

The new computer arrives. It turns out that the hard drive is not spinning properly and I have to wait for another new computer.

Another interview in NYC. I get there a half hour late due to traffic and I make a total jerk of myself.

Hell week is coming up and I am dorm president without a computer to keep in
close touch with people. Lots of other social issues to deal with which should not be
mentioned.

It's only downhill from here: papers are due, midterms are coming up, I have to start my thesis, and more interviews await.

Symptoms over the last month: muscle tension, easily fatigued, edgy, difficulty
concentrating, irritability, sleep disturbances, need to cry when more than 5 things go
wrong in one day.

I would say that for the most part I am stressed out because there are a lot of
things happening at the same time. I would not consider myself to have any sort of
anxiety (although I did suffer from panic attacks a couple of years ago). I expected my
senior year to be a difficult one and I think I am doing just fine. I strongly believe that many things in life can be solved through the power of suggestion. When I feel like things are getting out of control, I look in the mirror and I try to put things in perspective. Will I really die if I don't get a job with Lincoln? Absolutely not. That is why my
interviews went fine regardless of how nervous I was at first. I think it is perfectly
normal to be scared when meeting very important people for the first time; or even when
you meet new people. Nobody wants to be viewed as "socially retarded" so we all worry
about the kind of image we project out to other people. Does a person have a disease if
they are shy or nervous or irritated? I don't think so. I believe that some of these
disease/illnesses are conjured up socially and by pharmaceutical companies to get people
to think they need a pill to make all their problems go away. But being nervous or shy is
normal for the majority of the human population. So if everyone listened to the
symptoms of Generalized Anxiety Disorder (GAD), the majority of us would be taking a pill.

According to Wyeth, anxiety is "a feeling of unease and fear that may be characterized by physical symptoms such as palpitations, sweating, and feelings of stress." GAD is "a psychiatric condition in which the main symptoms are chronic and persistent apprehension and tension that are not related to any situation in particular. There may be many unspecific physical reactions, such as trembling, jitteriness, sweating, lightheadedness, and irritability." (1) The symptoms of GAD are thought to be muscle tension, easy fatigue, restlessness, difficulty concentrating, irritability, and sleep disturbance, and most people with GAD will have at least three of these symptoms. (1) They also mention that these symptoms should persist for over 6 months, but I can easily see these symptoms persisting if (for example) I were going through a painful divorce, which could take more than 6 months.
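
To see how broad this screen is, the rule just described (at least three of the six listed symptoms, persisting for six months or more) can be written out as a toy checklist in Python. The symptom names and scoring are my own illustrative stand-ins, not a clinical instrument:

# Toy sketch of the screening rule described above: at least three of
# the six listed symptoms, persisting for six months or more.
# Illustrative only -- not a diagnostic instrument.
GAD_SYMPTOMS = {
    "muscle tension", "easy fatigue", "restlessness",
    "difficulty concentrating", "irritability", "sleep disturbance",
}

def meets_gad_screen(reported_symptoms, months_persisting):
    matches = GAD_SYMPTOMS & set(reported_symptoms)
    return len(matches) >= 3 and months_persisting >= 6

# The author's month: five matching symptoms, but only about one month.
print(meets_gad_screen(
    ["muscle tension", "easy fatigue", "irritability",
     "difficulty concentrating", "sleep disturbance"], 1))  # -> False

On this sketch, the duration criterion is all that separates a stressful month from a disorder, which is exactly the distinction questioned above.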

Is it possible that we as adults are taking on too many responsibilities? Maybe we work too many hours and don't have enough time to relax, so we are always juggling a hundred things at the same time. The more things we juggle, the higher the chance that something will go wrong, and the higher the stress. The brain may be a wonderful and complex thing, but it needs to rest every now and then.

So maybe a lot of adults are suffering from GAD due to high stress levels.
However when one looks at the symptoms for Social Anxiety Disorder (SAD):
palpitations, tremors, sweating, diarrhea, confusion, and blushing, it is very clear that
these are too general and can be interpreted by some people as a sign of illness.
According to the Anxiety Disorders Association of America, "individuals with the
disorder are acutely aware of the physical signs of their anxiety and fear that others will
notice, judge them, and think poorly of them."(2) Don't most of us worry about what other people think of us? Don't most of us get nervous when we meet new people or when we have to perform? Plenty of people suffer from stage fright. It is a matter of self-control, and with enough experience most performers learn to deal with it. The more I look at it, the more it seems that we want to be perfect at everything and we want a quick, simple fix. Meditation or counseling just takes too long, and we want to see results immediately.

As if it were not enough to subject adults to these ridiculous, socially constructed illnesses, we have decided to put our children through the same traumas. SAD has a
slightly different template to go by when it is applied to children. The Cincinnati
Children's Hospital Medical Center states: "Social anxiety disorder (in children) is
characterized by fear and anxiety in social situations, extreme shyness, timidity, and
concerns about being embarrassed in front of others. Situations that trigger anxiety and
are often difficult for children with social anxiety disorder are speaking in front of the
class, talking with unfamiliar children, performing in front of others, taking tests, and
interacting with strangers." (3) How many children would not be shy to perform in
public and talk to strangers? The criteria used to diagnose this illness are too broad and
would apply to the majority of children. The symptoms for children are: avoiding eye
contact, speaking softly, trembling, fidgetiness, and nervousness. I would say that not a
lot of kids tremble. However all the other symptoms are exhibited by both shy and
hyperactive children depending on their activity.

Although I agree that there are some individuals who would benefit from the use
of medication, I believe that the majority of people need to understand that we are not
perfect and we can't expect ourselves to be perfect at all times. And if we notice our
imperfections, our first choice of treatment should not be a pill. It is true that we are all very busy, and meditation, for example, would require much time. However, what is the point of taking a medication whose side effects are just as bad as your "illness" and which can potentially be addictive (like Paxil, which is used to treat SAD)? (4)

Not all illnesses/diseases have strong medical support. Homosexuality was considered a disease until the late 1970s. It is important to understand that although the brain is complicated, there is no reason for doctors or pharmaceutical companies to create more diseases/illnesses based on social constructs rather than scientific facts.


References

1) Wyeth, source of the definitions of anxiety and GAD quoted above.

2) Anxiety Disorders Association of America.

3) Cincinnati Children's Hospital Medical Center, on social anxiety disorder in children.

4) Information on Paxil, used to treat SAD.


Orpheus unraveled? A conversation on sound and br
Name: Katherine
Date: 2003-02-25 21:20:29
Link to this Comment: 4837


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In Macedonian hills, the music of Orpheus was said to possess certain magical qualities, having powers strong enough to alter the very behavior of people and animals. Among its abilities, the notes of Orpheus' lyre were said to calm the guard-dog of Hades (1), to cause the evil Furies to cry, and to tame the deadly voices of the Sirens (2). Was this power simply a divine and magical gift with no other explanation, or can we explain more specifically the connections between music and behavior?

Sound is an important input affecting the nervous system. The brain reacts to sound because information from the outside environment travels as action potentials through the neural network into the brain. Such signals, electrical in nature, can be detected by the electroencephalogram, or EEG, which measures the electrical activity of the brain (3). Such electrical activity has been shown to correspond to different states of consciousness within an individual, as the EEG reveals different brain activity depending upon the mental state and the actions of the person being observed. Four different types of brain waves are defined—beta, alpha, theta, and delta—and each corresponds to a different state of activity and consciousness. Beta waves, vibrating at a frequency of 13-50 hertz, are those experienced during the normal and alert waking state; alpha waves at 8-13 Hz occur during relaxation; theta waves (4-7 Hz) in the "halfway" moments between waking and sleeping; and delta waves, the slowest at 0.5-4 Hz, during sleep. Interestingly, the brain has been shown to emit the slower theta waves even in the waking state, when undergoing such activities as chant and meditation (4).
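
As a rough illustration of these band definitions, the frequencies quoted above can be mapped to their named bands in a few lines of Python. The cutoffs are simply the ones given in this paragraph (which leave a small gap between theta and alpha), not a standard clinical convention:

# Map a frequency (in Hz) to the brain-wave band named above. The
# boundaries are just the ones quoted in this paragraph (note they
# leave a small gap between theta at 4-7 Hz and alpha at 8-13 Hz).
def brain_wave_band(freq_hz):
    if 0.5 <= freq_hz < 4:
        return "delta: sleep"
    if 4 <= freq_hz <= 7:
        return "theta: between waking and sleeping"
    if 8 <= freq_hz <= 13:
        return "alpha: relaxation"
    if 13 < freq_hz <= 50:
        return "beta: normal, alert waking state"
    return "outside the bands described here"

print(brain_wave_band(10))  # -> alpha: relaxation
print(brain_wave_band(2))   # -> delta: sleep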

For thousands of years, music has been regarded as possessing unique powers in affecting the human experience. Specifically, music has been associated with healing abilities, and has been used for such purposes throughout history. Traditionally, the types of sound responsible for healing are characterized by distinct rhythms, and by specific emphasis on repetition that stems from those rhythms. The existence of repetitive beat seems to aid in the achievement of meditative state. Shamans are well known for their use of drum beats to access healing powers both within themselves and for the people they wish to treat (5).

It has been suggested that in the meditative state—a state of extreme awareness and internal mental calmness—the two hemispheres of the brain become synchronized in brain wave production, rather than generating signals of varying frequencies and amplitudes. It would thus make sense that the repetitive nature of chant, and the underlying beat of music, is central in the unifying and rhythmic effect that such practices have on the brain. Specifically, we find the underlying repetitive drone, a constantly sustained base tone, in numerous types of spiritual chant, including the Hebrew, Byzantine, Arabic, Tibetan and Gregorian traditions. The Om sound is also an important tone in traditional chanting practice, which calls upon repetition and harmonics for its restorative effects.

A striking example relating rhythm, brain function, and health is found in a story from forty years ago among a group of Benedictine monks in southern France, when changes in their behavior resulted in health complications. After the Vatican II council, it was decided that the monks no longer needed to perform their typical 6-8 daily hours of chanting, and could instead use that time for other chores and activities. Interestingly, it was after this change that a vast majority of the group began to suffer excessive fatigue, no longer able to function well on the limited sleep with which they had once functioned normally. Health officials were called and advised the men to get more sleep. This was done, but still the symptoms persisted. Again a doctor was called, this time deciding that the monks were undernourished and should begin eating meat. The new regimen was followed, but to no avail, and the monks suffered yet more tiredness and lethargy. Finally, Dr. Alfred Tomatis, a French ENT specialist who acknowledged the impact of structured sound on brain function, prescribed that the monks recommence their many daily hours of chanting; and this, interestingly, is what seemed to solve the problem and bring vitality back. It was thus believed that the singing of chant, and specifically the accessing of high-frequency sounds and harmonics (6), had a way of altering brain wave activity and energizing the nervous system and brain (7). It seemed possible that vocal harmonics could actually create new neurological connections.

Numerous other examples claim such positive effects of chanting on brain function. "Omkar" recitation and chanting has been shown to increase concentration and memory, and to reduce fatigue (8). In 1993, a study performed at the University of California at Irvine and published in Nature examined the effect that listening to certain types of music might have on learning and memory (9). A group of students listened to ten minutes of each of three different selections prior to taking a spatial reasoning test: Mozart's Piano Sonata in D major, a recording of relaxation music, and silence. According to the results, the students' scores improved after listening to the Mozart recording compared to the other two conditions, a result which spawned the term "Mozart effect" for the increased performance expected after such listening. The results, however, have their limits; the so-called "Mozart effect" was found to last only 10-15 minutes, and is helpful specifically in tests of spatial reasoning, rather than tests involving the recitation of numbers (10). This suggests that music has an effect on specific neural pathways, and that those pathways are responsible for specific areas of mental functioning.

It has long been proposed by way of many religions, philosophies, and myths that sound possesses creative force. Many religions relate the creation of the universe by way of a speech-act, in which the god literally speaks the world into existence. In Mesopotamian myth, the cosmos is formed musically by the lyre of Ur. Sound, in many instances, is painted as the fundamental moving force of the universe. This idea of 'sound creating form' began to migrate into the world of the sciences as well: the scientists H. Jenny and G. Manners in the mid-20th century studied the shapes and patterns created by sound vibrations as projected onto mediums of sand and iron filings, introducing a discipline which has been termed Cymatics.

It is fascinating to think about the primacy of voice and sound in the health and existence of life on earth. Indeed, voice can be thought of as a virtual map of the human topography. It has been suggested that it is the speaking ability of the I-function which helps us to define our "true selves," and allows us to separate this internal self from the outside world. Myths have even suggested that the "Ur-sound" introduces the fundamental frequencies lying at the base of all life. These ideas offer interesting models for the physical action and effects of sound on living systems. It is clear that sound, and structured voice, does indeed have a profound effect on brain physiology—and hence affects our behavior—as the lyre of Orpheus has always shown.

References

1) Ovid, Metamorphoses, Book X, lines 25-32.

2) Homer, Odyssey, XII.90.1.

3) Neuroscience website from the University of Washington, Electricity in the body: action potential, resting potential.

4) Journal article from PubMed, A study of electroencephalogram in meditators.

5) Website of Shamanism Today, informational page about Native American shamans.

6) Website for Healing Sounds, information about chanting and harmonics.

7) A further discussion of harmonics and chanting.

8) Website for Yoga Point, publication of research on Omkar recitation.

9) Journal article in Nature, Music and spatial task performance. McLachlan, JC.

10) Neuroscience website from the University of Washington, details on tests and results.


The Placebo Effect: Redefining the Role of the Min
Name: Christine
Date: 2003-02-25 23:49:25
Link to this Comment: 4838


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

The mind has often been referred to as the organ of consciousness. Daily functions such as thinking and breathing, and nearly every task we perform, rely heavily on this precious organ. However, through the use of placebos, it is becoming clear that the mind may have an even greater influence on our daily lives, shaping our perceptions of well-being. The placebo, from the Latin for "I shall please," is a sugar pill given under the guise of being a medication thought to treat an ailment. The use of placebos has shown us that the mind has tremendous potential to induce physiological changes in the body based solely on its perceptions. For example, as we swallow a sugar pill thinking that it is Prozac, we may actually feel fewer symptoms of depression as a result of the mind's perception (1). The placebo effect (the phenomenon of perceived benefit from a mock stimulus) has recently opened further doors to our understanding of how the mind works. Once thought of as an inactive, harmless mock substance, the placebo has now been shown to induce brain activity. Therefore, the perceived benefit that once was laughed off as fooling the patient may actually be a consequence of very real physical responses created by the mind, generating a very real benefit. This paper will explore the various placebo studies that have helped us define and redefine the role of the mind.

The placebo effect is a powerful effect that can consistently induce a perceived benefit. Once the placebo was identified as a tool capable of generating a desired response, it became more widely used as a control in clinical trials. As a result, the placebo has been extensively studied throughout history and yields a significant amount of data on its clinical effect. Irving Kirsch and Guy Sapirstein used meta-analysis to analyze 19 clinical trials (1). In total, the results of 1460 patients receiving antidepressant medication and 858 receiving a placebo sugar pill were analyzed. This analysis combined the results from these 19 different studies and generated an effect size (calculated as the mean of the experimental group minus the mean of the control group, divided by the pooled standard deviation, SD). Once Kirsch and Sapirstein subtracted mean placebo response rates from mean drug response rates, they found a mean medication effect of 0.39 SDs (1). For each type of medication, the effect size for the active drug response is between 1.43 and 1.69, and the placebo response is between 74% and 76% of the active drug response. This means that roughly 75% of the effect attributed to antidepressants was due to the placebo. The perception of clinical benefit from antidepressants was largely attributable to the perceptions of the mind, and not to the actual chemical make-up of the pills patients were taking. The placebo effect shows us that the mind heavily influences our perceptions of wellness and health.
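
For concreteness, the effect-size arithmetic just described can be written out in a few lines of Python; the scores below are invented placeholders for illustration, not Kirsch and Sapirstein's data:

# Effect size as defined in the text: (mean of the experimental group
# minus mean of the control group) divided by the pooled standard
# deviation. The sample scores are invented placeholders.
from math import sqrt
from statistics import mean, stdev

def pooled_sd(x, y):
    nx, ny = len(x), len(y)
    return sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                / (nx + ny - 2))

def effect_size(experimental, control):
    return (mean(experimental) - mean(control)) / pooled_sd(experimental, control)

drug = [12, 14, 11, 15, 13]      # hypothetical improvement scores
placebo = [10, 11, 9, 12, 10]    # hypothetical improvement scores

print(round(effect_size(drug, placebo), 2))

Kirsch and Sapirstein's 75% figure is then simply the ratio of the placebo group's mean response to the drug group's mean response.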

The placebo effect is often thought of as an act of fooling the mind into perceiving a benefit that has no physical basis. This depiction of the mind as a naive and foolish organ may be incomplete and ill-representative of the mind's abilities. Indeed, the mind may orchestrate a physical response in the body based on its perceptions alone. In one study done by Luparello et al (2), 40 asthmatics were exposed to a single placebo twice, and told each time that a different substance was being administered. Each of the participants was told that they were inhaling a bronchoconstrictor as part of an industrial air pollutant study, when they were really only inhaling saline, an inert substance. Of the 40 asthmatics, 12 had extreme asthmatic attacks, and 7 others experienced a significant increase in airway resistance, although not enough to induce a full-blown attack. None of the other patients in the study had any adverse reactions, suggesting that the experience was specific to the asthmatic condition. When the same patients were told that the inhaler contained a bronchodilator, the attacks were reversed and their airways opened up. Three minutes after administration of the placebo, the mean thoracic gas volume ratio rose from 0.07 to 0.13 L/sec/cm H2O/L (normal, 0.13 to 0.35) (2). Here, the same saline solution produced adverse effects in 19 of the patients when described as an irritant, and brought about relief when administered a second time under a different description. While the mind may be labeled as an organ easily fooled by placebos, whose benefit has no physical basis, it is clear that the mind may have an even greater role in behavior. The mind's perceptions of either an irritant or a bronchodilator induced airways to physically close or open, and thereby to physically experience an attack or relief. The mind may have greater physiological capabilities, and may have a greater role in physical as well as perceived responses that we experience daily.

The concept of a placebo has previously been that of a benign, inactive sugar pill. However, a recent study by Leuchter et al (3) suggests that a placebo may actually activate a distinct and separate part of the brain, the prefrontal cortex. In this study, 51 depressed patients received one of two medications, fluoxetine (24 patients) or venlafaxine (27 patients), in a nine-week placebo-controlled study. The brain activity of each of the patients (after administration of the pills) was then analyzed through the use of quantitative electroencephalography (QEEG). Both QEEG power and cordance, which measures blood flow and energy use in the brain, were examined. Among the three arms (fluoxetine, venlafaxine, and placebo), the placebo responders were the only group showing a significant increase in brain activity. In the prefrontal region, the placebo had a treatment response of 0.98 compared to the treatment response of the fluoxetine, which was 0.06 (3). Administration of a placebo appears to be an active treatment, rather than the no-treatment comparison it has been thought to provide. In this case, the placebo not only induces brain activity, but does so in a way that is distinct from the way the experimental drugs (antidepressants) do. The mind therefore is not an organ fooled by the placebo effect into generating a standard set of brain functions specific to what it believes the placebo to be. Rather, the mind creates a response that is specific to the placebo, and distinct from what we previously called "active" substances (or experimental drugs). The mind therefore must have the ability to discern what is a placebo and what is an experimental drug, and thereafter generate a physical response accordingly. This sophisticated ability to discriminate further shows us that the mind is a complex organ that is not fooled, but creates very informed responses.

Placebos can no longer be thought of as the wool being pulled over the mind's eyes. These sugar pills induce the mind to create a very real and physical response that may be specific to the placebo; as a result, the placebo can become a very seductive treatment option for many. With the ongoing use of placebos, both as a control and potentially as a treatment alternative, several issues emerge: Is the use of placebos ethical? Furthermore, can it be guaranteed that placebos will generate a safe and effective result? While these pills may seem benign by being less active than experimental drugs, the risk of harmful and unethical consequences still exists.

Results from one recent study have brought this issue to the forefront. Freed et al conducted a study examining the outcomes of 40 patients, ages 34-75, who had severe Parkinson's disease (4). In this study, the patients were randomly assigned to undergo either neuronal transplantation surgery or sham surgery (placebo). In the patients who underwent the sham surgery, holes were drilled into their skulls but the dura (the outermost of the three meninges) was not penetrated. While all of the patients had hoped to receive the neuronal transplant, only half actually did; the rest had the placebo surgery. Freed et al found that although there was no notable effect among the older patients in either the transplantation or placebo surgeries, the younger transplantation recipients showed much improvement as compared with the placebo surgery group (4). No one from the placebo surgery group benefited from the procedure. Results were measured using the standardized scoring system of the Unified Parkinson's Disease Rating Scale (UPDRS) and the Schwab and England scale, which measure symptoms of Parkinson's disease including mentation/mood and performance in the activities of daily living, respectively. Freed et al further analyzed results looking for growth of transplants by using 18F-fluorodopa PET scans. These tests all concluded that the only group that benefited from the study was the younger transplantation group, leaving many concerned about the lack of improvement in their condition. In addition to not benefiting from the procedure, half of those in the placebo group experienced additional pain, and some experienced trauma. In this case, the mind could not be induced into generating the type of physical response that is desired from this surgery. The potential for pain as well as harm is also clear in this example. It is clear that the ethics behind placebos, given that they are active substances that can induce very real physical responses, need to be taken seriously. The mind is a complex organ that may not always respond in the way that we hope it will.

The placebo effect has shed great light on the complex functions of the mind. The mind has the remarkable ability to generate a physiological and real response to placebos. Furthermore, the mind can discern a placebo from an experimental drug, as we see through the specific activation of the prefrontal cortex by the placebo. The mind has functions and capabilities that are larger than just thinking, breathing and walking. It not only controls our perceptions of our well-being, but may control the physical realities of our well-being more extensively than was previously thought. While the placebo effect has yielded important information on the powers of the mind, we need to think more responsibly about the use of placebos, and the potential effects of these active stimuli on the brain. Given that placebos do activate the brain, we need to readdress our notion of these pills as inactive sugar pills. What if placebos could affect the mind in a way that is not positive? What if placebo pills, and furthermore surgeries, could be harmful to the patient? The ethics of placebos, and the role of the mind in responding to them, should not be underestimated as we move forward in our studies of how the mind works. Our well-being depends on it.

References

1) Kirsch, Irving, PhD and Guy Sapirstein, PhD. Listening to Prozac but Hearing Placebo: A Meta-analysis of Antidepressant Medication. Prevention & Treatment, Volume 1, June 1998.

2) Luparello, T.J., Lyons, H.A., Bleeker, E.R. & McFadden, E.R. (1968). Influences of suggestion on airway reactivity in asthmatic subjects. Psychosomatic Medicine, 30, 819-825.

3) Leuchter AF, Cook IA, Witte EA, Morgan M, & Abrams M. (2002) Changes in brain function of depressed patients during treatment with placebo. American Journal of Psychiatry, 159: 122-129.

4) Freed CR, Greene PE, Breeze RE, Tsai WY, DuMouchel W, Kao R, Dillon S, Winfield H, Culver S, Trojanowski JQ, Eidelberg D, & Fahn S. (2001) Transplantation of Embryonic Dopamine Neurons for Severe Parkinson's Disease. New England Journal of Medicine, 10:710-719.


Multiple Sclerosis
Name: Nia Turner
Date: 2003-02-25 23:54:19
Link to this Comment: 4839

Neurobiology and Behavior 202 Nia Turner
Professor Paul Grobstein 2/25/03

The primary objective of this paper is to raise fundamental questions in regards to multiple sclerosis, and to explore possibilities that attempt to answer these inquiries. Second, the prospective outcome is to provide a solid knowledge base for which my peers may begin to understand the relationship between multiple sclerosis and neurobiology and behavior. The first question to address in the general schema of this essay is: What is Multiple Sclerosis?

Multiple Sclerosis, also commonly referred to as MS, is considered an autoimmune disease that affects the central nervous system (CNS). The key to understanding MS is to recognize its relationship to the human immune system. The immune system is an intricate network of specialized cells and organs that defends the body against attack by foreign agents, also known as antigens, such as bacteria, viruses, fungi, and parasites. In multiple sclerosis, however, this relationship breaks down: the immune system identifies part of the body itself, particularly the white matter of the central nervous system, as foreign, and consequently destroys the myelin. Myelin is a fatty tissue, rich in protein and lipids, that protects and insulates the nerve fibers, which serve to carry electrical impulses. The central nervous system is made up of the brain, spinal cord, and optic nerves; thus MS can affect several areas of the human anatomy.

Multiple Sclerosis could be described as the loss of myelin in multiple locations throughout the body, which exposes the nerve fibers and leaves scarring called sclerosis. The next question to address is: What are the principal functions of myelin? In addition to protecting nerve fibers, myelin is fundamental to the conduction of electrical impulses to and from the brain and other parts of the body. The transmission of electrical impulses is hindered as a direct consequence of damaged or destroyed myelin. One may ask why the body begins to attack and destroy the myelin. This question ultimately leads to the inquiry: What causes multiple sclerosis? The response is that the exact origin of MS is unknown; scientists and researchers suspect that the damage to the myelin results from an abnormal response by the body's immune system. In other words, science cannot yet explain this phenomenon. However, research is making advances in the area of MS, and the future for those affected by multiple sclerosis appears more optimistic.

In recent years, scientists have developed a set of tools that allow them to target the genetic influences that make an individual prone to multiple sclerosis. Molecular genetics has provided a model in which these instruments are used to isolate and determine the chemical structure of genes. Since the 1980s, scientists have applied the tools of molecular genetics to human diseases attributed to defects in individual genes. Now, some scientists are convinced that one may be "susceptible to MS only if she or he inherits an unlucky combination of alterations in several genes." As an example of progress in MS research, by 1996 up to twenty locations that may contain genes contributing to MS had been identified, but no single gene had been documented to have a significant impact on susceptibility to multiple sclerosis. Once the genes that account for a predisposition to MS are identified, researchers can address the influence these genes have on the immune system and on the neurological aspects of the disease. This leads me to the following query: Who gets MS? The response is that anyone may develop multiple sclerosis, but there are some known patterns. The most widely accepted and documented epidemiological observations of MS are these: the disease is significantly more frequent at latitudes of 40° or more away from the equator than at lower latitudes closer to the equator; in the U.S., MS occurs more often in states above the 37th parallel than in states below it; an individual who is born in an area with a higher risk of developing MS and moves to an area of lower risk acquires the risk of the new residence if relocation occurs prior to adolescence; MS is more prevalent among Caucasians (particularly those of northern European ancestry) than other races; multiple sclerosis is 2-3 times as common in women as in men; and in specific populations, a genetic marker has been linked to MS. These factors are diverse in nature and could be considered environmental, immunologic, viral, and/or genetic. Approximately 400,000 Americans have been diagnosed with multiple sclerosis, and every week 200 more people are diagnosed. On a much larger scale, MS may affect 2.5 million individuals worldwide.

As people are affected either directly or indirectly, what can be done to promote awareness? The response is to become empowered by seeking education about multiple sclerosis and to share that knowledge with others. One aspect of becoming more informed about this autoimmune disease is awareness of the symptoms. The symptoms of MS are unpredictable and vary from individual to individual, but some of the most common are bladder, bowel, and sexual dysfunction; abnormal fatigue; lack of balance and muscle coordination; and cognitive, emotional, and vision problems, among several others. MS is often misdiagnosed because of this array of symptoms. The two fundamental signs for confirming multiple sclerosis are: signs of disease in different parts of the nervous system, and signs of at least two separate exacerbations of the disease. Once an individual experiences symptoms and is diagnosed with MS, what treatment options are available?

Unfortunately, there is no cure, but rather treatments that have been proven effective in slowing the progression of the disease. The medications most commonly used to treat MS are glatiramer acetate, interferon beta-1a, interferon beta-1b, and mitoxantrone. The interferon medications are proteins manufactured by a biotechnological process from one of the naturally occurring interferons. Mitoxantrone, by contrast, belongs to the general group of medicines called antineoplastics; it acts in MS by suppressing the activity of the T cells, B cells, and macrophages that are thought to lead the attack on the myelin sheath. People diagnosed and properly treated for multiple sclerosis may live a normal life expectancy and maintain an active lifestyle. Much depends upon the state of mind of the individual. My mother is a primary example that MS does not have to control one's life; to the contrary, an individual has the power to control it.

There are several important aspects of living with this disease, instead of at its mercy. First, one needs to acknowledge the existence of the disease. Second, an individual should allow time to process the idea of living with the disease. Third, one should become an active agent in fighting the disease by becoming informed. Fourth, a person should be willing to make adjustments, which may alter one's lifestyle. In conclusion an individual should not be afraid of multiple sclerosis, but dare to live a fulfilling life.

Bibliography

1)http://www.nationalmssociety.org

2)http://www.ninds.nih.gov/health_and_medical/pubs/multiple_sclerosis.htm

3)http://www.undestandingms.com/ms/articles/cognitive.asp

For further information check out

4)http://web.lexis-nexis.com/universe/document?_

5)http://www.docguide.com/news/content.nfs/NewsPrint/

6)http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?


"Why? The Neuroscience of Suicide": Further Resea
Name: Clarissa G
Date: 2003-02-26 00:53:54
Link to this Comment: 4840


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

While this writer had some rudimentary knowledge of the impact serotonin has on the brain, "Why? The Neuroscience of Suicide" by Carol Ezzell piqued my curiosity about how levels of serotonin, and the process by which it is absorbed in the brain, affect suicidal patients. This article was recently posted on the Neurobiology and Behavior website as supplemental reading for the spring semester 2003 class. In the article, the writer Carol Ezzell weaves her own personal experience with informative reporting on groundbreaking neuroscience research on suicide. Through further research I discovered various articles on a group of scientists from Columbia University studying differences in the brains of people who have attempted or died by suicide.

It is widely accepted that the level of serotonin present in the brain has a significant effect on the behavior of an individual, specifically on an individual's mood. SSRIs (Selective Serotonin Reuptake Inhibitors) are common medications that treat major depression, thus affecting the mood of an individual and, some would argue, improving the quality of life of people who suffer from clinical depression.

The amount of serotonin in the brain has an effect on an individual's behavior. "Low levels of the chemical are associated with clinical depression." (1) According to an article in Time Domestic entitled "Suicide Check," serotonin may not reach some parts of the brain in adequate amounts in suicide victims. The article cites a study by Dr. John Mann of the Columbia University College of Physicians and Surgeons in New York City. Dr. Mann's study "...focuses on a section of white matter-the orbital cortex-that sits just above the eyes and modulates impulse control. In autopsies of 20 suicide victims, Mann's group found that in almost every case, not enough serotonin had reached that key portion of the brain." (1)

Roughly seven years later, Carol Ezzell revisited Dr. Mann's research with his colleague Victoria Arango. Arango's research, presented in 2001 at a conference of the American College of Neuropsychopharmacology, had determined that "people who were depressed and died by suicide contained fewer neurons in the orbital prefrontal cortex" and that "in suicide brains, that area had one third the number of presynaptic serotonin transporters that control brains had but roughly 30 percent more postsynaptic serotonin receptors." (2) This means that the brain is trying extra hard to deliver whatever serotonin it can produce to the correct part of the brain. This system of serotonin production and absorption is known as the serotonergic system, and it is this system that Arango believes is deficient in people who attempt or commit suicide. (2)

Arango and Mann are developing a positron emission tomography (PET) test that measures the serotonergic system. The test would monitor serotonin use in areas of the brain "in patients who have the most skewed serotonin circuitry – and are therefore at highest risk of suicide." (2)

None of these articles in any way suggests that an individual with a serotonin deficiency will be a victim of suicide. Victoria Arango of the New York State Psychiatric Institute says the occurrence of suicide "starts with having an underlying biological risk," but "life experience, acute stress and psychological factors each play a part." (2) The existence of such a predisposition means that, in the future, diagnosis and prevention of suicide risk could become easier.

Today individuals are conscious of genetic traits that give them a higher chance of heart attacks or breast cancer, and they have the ability to alter their behavior (i.e., smoking and nutrition) accordingly. Would the same be applicable to suicide? Before the risk becomes a reality, sufferers of this disorder would have choices to take care of themselves accordingly, using therapy and medication as ways of avoiding disastrous consequences. New developments in this area of neuroscience will have an effect on the care given in the future to suicidal patients.

References

1) Bipolar Disorder: Complete Digest of Information

2) ScientificAmerican.com

3) ReutersHealth.com

4) Psychiatric Institute, 2000


Right Before My Very Eyes
Name: Patricia P
Date: 2003-02-26 15:22:56
Link to this Comment: 4846


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

"I'll believe it when I see it:" is one of many common catch phrases included in our every day vernacular. A person who declares this is asserting that they will not be fooled by another's assumptions or perceptions of the world. This understanding raises a great sense of security within us, concerning the things that we do see, and inversely, an unavoidable sense of insecurity in those beliefs that are not supported by vision. Do you believe in Ghosts? Angels? Out of body experiences? Would you believe if you could see them? Maybe not. But it is possible to offer those who are withholding there stamp of approval on things that exist but cannot be seen, a better summary of evidence, which could make the inability to see something an invalid criteria for belief. Could a summary of evidence be compiled that would support this: Our vision is incomplete, incorrect, and can even be as misleading as to create something within the brain that does not exist at all, shedding light on a brain that is more of a visionary, and less of a reporter.
Human beings rarely contemplate the significance of their own blind spot, the place where the processes of neurons join together and form the optic nerve; it is here that the brain receives no input from the eye about this particular part of the world. What I discovered while entertaining myself with a simple eye exam aimed at divulging the capabilities of the brain in the face of the eye's blind spot was fundamental in my exploration of the trust we place in vision, so I will explain it briefly. Our brain can ignore a dot that exists on the page and "fill" the spot with the color of its surroundings, no matter what the color. However, it is not that our brain cannot conceive of an image or of a shape to fill this place. Continuing with the experiment, you find that the brain will continue a line that is obstructed by the black dot, covering the sides of the dot in the surrounding color and transforming the image before you into a line within your brain. A line that is absolutely not there. This reveals more than just a weakness in the eye; it reveals an ability of the brain! (1)
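
Readers can reproduce the demonstration with a figure like the classic blind-spot test; the Python sketch below (with guessed sizes and spacing) draws a fixation cross and a dot. Close the left eye, fixate the cross with the right eye, and adjust your distance from the screen until the dot disappears:

import matplotlib.pyplot as plt

# Classic blind-spot figure: a fixation cross on the left and a dot on
# the right. Sizes and spacing are rough guesses. Move toward or away
# from the screen until the dot vanishes into the blind spot.
fig, ax = plt.subplots(figsize=(8, 2))
ax.text(0.2, 0.5, "+", fontsize=36, ha="center", va="center")
ax.plot(0.8, 0.5, "ko", markersize=14)
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.axis("off")
plt.show()
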
John Whitfield, in A Brain in Doubt Leaves It Out, argues that such effects are due to the brain's hemispheres and their disagreements with one another. In short, "The left hemisphere seems to suppress sensory information that conflicts with its idea of what the world should be like; the right sees the world how it really is." (2) This would explain why people with paralysis due to an injury affecting the right side of the brain might deny that they are disabled. More specifically, the parietal lobe can be held responsible for this editing of what the brain wants to see. When this area is tampered with, injured patients can watch their legs vanish before their eyes. (2) This further damages the credibility of our perceptions. It is alarming to acknowledge that the brain has the power to create, superimpose, and even remove.
Some still may not be satisfied that these manipulations of our brains, whether covering one eye to explore the blind spot or prodding the parietal lobe to cause a disappearing act, give us conclusive evidence to discard sight as a valid criterion for believing or not believing in the existence of anything. This is understandable. However, there are many instances where the brain displays misconceptions and contortions due to a preconceived notion of the world. David Whitaker and Paul V. McGraw reported in Nature Neuroscience (3) that what our brain finds unfamiliar, we tend to distort, simply because of a lack of understanding or previous model. For example, when people were asked to identify the degree of tilt of italicized letters in one display, and then in a display of its mirror image, an interesting fact about perception was revealed. Observers consistently reported the mirror-image letters as slanted about 2 1/2 degrees more than letters italicized in the commonly seen clockwise way. However, the same exaggeration of tilt was not made with shapes or characters, leaving the misinterpretation to exist in a place within the brain associated with "memory and meaning." (3) Without being provoked in any way, the brain continues to distort at a higher level of understanding.
Can a brain be stimulated to see an entire body that does not exist? Helen Pearson found that it can. Through electrode stimulation of the right angular gyrus, patients reported experiences such as this: "I see myself lying in bed, from above." (4) We have established that the mere fact that you may be able to see a body (or a phantom limb) does not mean it exists. In much the same way, not seeing another body or entity, perhaps a ghost or an angel, does not shed enough light on whether or not it actually exists. To question vision's position as the warden of reality, and to come to terms with its capacity to present us with inaccurate, distorted, and often dream-like perceptions of reality, is vital in opening the window of possibilities onto what exists past our noses.
And so we could take from this that all individuals are, to some degree, "blind." Blindness leads the brain to create. In fact, when tested in a large group, people's brains tend to "create" even without the excuse of missing information attributed to the blind spot. And if we go further and take the findings of John Whitfield and Helen Pearson into account, our brains are even capable of making our surroundings act in peculiar ways, or of causing completely new surroundings to appear. We must gain a new sense of self-awareness when we evaluate what exists beyond ourselves. The ability of our brain to ignore and create forces us to look at the fabric of what we consider reality. Beyond the existence of ghosts and angels, we must question our entire perception of the outside world as being always incomplete, leaving us more open to explore things that our eyes may not see. There may be an argument for believing something, even without "having seen it with your own eyes."

References

1)Serendip Vision Exam, A site within the courses homepage that explains and tests the boundaries of our blind spot.
2)Article, A Brain in Doubt Leaves it Out, by John Whitfield concerning the behaviors of the brain when it is unsure and the origins of those reactions.
3)Article, A Different Angle, on studies of the brain's response to degrees of change in similar but mirror image italics print.
4)Article, Electrodes Trigger Out-of-Body Experiences, just helped to add to the capabilities of the brain to create whole figures.


Attention Deficit/ Hyperactivity Disorder: the int
Name: Jennifer H
Date: 2003-02-26 20:25:26
Link to this Comment: 4851


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

One summer afternoon, while playing in my mother's office, I recall that a client's mother was beaming with such life, emotion, and energy as she proclaimed that her son was 'cured.' I was a bit baffled at the time because I had seen the boy every day for the past month, and he didn't appear to be 'broken,' at least physically. My mother, Janice Hansen, PhD, had done it: she had taken the boy off prescription medication and re-patterned his 'pathways,' thereby curing the boy who had been diagnosed with Attention Deficit/Hyperactivity Disorder (AD/HD). My mother humbly explained to me that she had just given that child a 'chance' to reach his full potential, and that his 'pathways' were like a puzzle in which she had simply inserted the missing piece. Granted, I was only 7 years old when this experience occurred, yet it has prompted me to question the intentions and effects of medications on the body and the mind. This web discussion expresses my growth and research as an intellectual exploring the intricate web of diagnostic criteria for AD/HD, and questions their relationship to the behavioral criteria for a gifted child.

Attention Deficit/ Hyperactivity Disorder is a neurobiological disorder whose origin has yet to be determined through research. Still, the research that has been conducted seems to support the notion that it is not caused by environmental issues (4). There have been studies of stages of fetal development which I believe can aid the scientific community's understanding of this disorder, because there is a connection between premature birth and consequent neurobiological damage. Neurulation is one of the later processes to occur during the development of the embryo (5); therefore, the embryo's neurological connections would be affected by premature birth. Although the origin of this disorder is important, the question addressed here is whether the disorder is being properly diagnosed and its treatments (prescription medications) properly used.

My great expectation of encountering studies that demonstrated the overprescription (misuse) of medication to treat AD/HD fueled my quest. However, according to the Journal of the American Medical Association (1), one study found "little evidence of widespread over-diagnosis or misdiagnosis of AD/HD, or of widespread overprescription of methylphenidate" (a common drug used to treat AD/HD). Yet the authors also stipulated that some cases may have been misdiagnosed due to insufficient evaluation of the child in question. Could these experimenters be implying that the medical diagnostic criteria on which the medical/psychiatric field bases its evaluations are inaccurate?

The DSM-IV diagnostic criteria for Attention Deficit/ Hyperactivity Disorder list the symptoms that the medical community utilizes to make its diagnoses (2). The AD/HD diagnosis is divided into three categories: those who exhibit symptoms of inattention, those who display symptoms of hyperactivity-impulsivity, and those who demonstrate symptoms of both the inattention and hyperactivity-impulsivity components. However, the behavioral symptoms utilized to diagnose AD/HD are not specific to this disorder alone. I have found that the symptoms used as diagnostic criteria to identify individuals with AD/HD and those used to identify 'gifted' children are similar.

Research indicates that in many cases a child is diagnosed with AD/HD when in fact the child is gifted and reacting to an inappropriate curriculum (Webb & Latimer, 1993). The key to distinguishing between the two is the pervasiveness of the "acting out" behaviors. If the acting out is specific to certain situations, the child's behavior is more likely related to giftedness; whereas if the behavior is consistent across all situations, the child's behavior is more likely related to ADHD. It is also possible for a child to be BOTH gifted and ADHD (3). The following compiled lists highlight the similarities and show where the problem in the diagnostic criteria lies; a toy sketch of the pervasiveness heuristic appears after the lists:

Characteristics of Gifted Students Who Are Bored:
* Poor attention and daydreaming when bored
* Low tolerance for persistence on tasks that seem irrelevant
* Begin many projects, see few to completion
* Development of judgment lags behind intellectual growth
* Intensity may lead to power struggles with authorities
* High activity level; may need less sleep
* Difficulty restraining desire to talk; may be disruptive
* Question rules, customs, and traditions
* Lose work, forget homework, are disorganized
* May appear careless
* Highly sensitive to criticism
* Do not exhibit problem behaviors in all situations
* More consistent levels of performance at a fairly consistent pace
(Cline, 1999; Webb & Latimer, 1993)


Characteristics of Students with ADHD:
* Poorly sustained attention
* Diminished persistence on tasks not having immediate consequences
* Often shift from one uncompleted activity to another
* Impulsivity, poor delay of gratification
* Impaired adherence to commands to regulate or inhibit behavior in social contexts
* More active, restless than other children
* Often talk excessively
* Often interrupt or intrude on others (e.g., butt into games)
* Difficulty adhering to rules and regulations
* Often lose things necessary for tasks or activities at home or school
* May appear inattentive to details
* Highly sensitive to criticism
* Problem behaviors exist in all settings, but in some are more severe
* Variability in task performance and time used to accomplish tasks.
(Barkley, 1990; Cline, 1999; Webb & Latimer, 1993)
Compiled from (3)
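
The pervasiveness heuristic can be made concrete in a small sketch. The Python below is a toy illustration only, not a diagnostic instrument: the function name, the observation format and the output labels are my own assumptions, and any real evaluation rests on extended observation by a professional.

# Toy sketch of the pervasiveness heuristic described above.
# NOT a diagnostic tool: the input format and labels are illustrative
# assumptions; actual diagnosis requires professional evaluation.

def pervasiveness_screen(settings_observed):
    """settings_observed maps a setting name (e.g. 'math class', 'home',
    'recess') to whether problem behaviors were observed there."""
    if not settings_observed:
        return "no observations"
    acting_out = sum(settings_observed.values())
    if acting_out == len(settings_observed):
        # Consistent across all situations: more suggestive of ADHD
        # (the child could still be both gifted and ADHD).
        return "pervasive: more consistent with ADHD"
    if acting_out > 0:
        # Specific to certain situations: more suggestive of a gifted
        # child reacting to an inappropriate curriculum.
        return "situational: more consistent with giftedness"
    return "no problem behaviors observed"

print(pervasiveness_screen({"math class": True, "art class": False, "home": False}))
# -> situational: more consistent with giftedness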

The diagnostic criteria for AD/HD and for gifted children are inadequate, and alternative methods for distinguishing between these similar behavioral symptoms need to be developed and perfected in order to ensure that the treatment received is appropriate to the condition at hand. The key to distinguishing between these similar behavioral symptoms is observation prior to diagnosis; however, this places a great deal of power and responsibility in the hands of the doctor/observer, and it is a highly subjective process. How safe does this make you feel?


References

1) Journal of the American Medical Association, a rich resource of articles regarding AD/HD; see the article entitled 'Little evidence found of incorrect diagnosis or over-prescription for AD/HD'.

2) Methylphenidate and ADHD webpage, DSM-IV diagnostic criteria enclosed

3) ERIC Clearinghouse on Disabilities and Gifted Education, a great resource for diagnostic criteria for gifted children

4) Deficit Disorder Organization homepage, questions on possible theories of origin

5) Wolpert, Lewis. Principles of Development, 2nd edition. Oxford University Press, 2001.


Autism: A lack of the I-function?
Name: Kate Shine
Date: 2003-02-27 03:34:55
Link to this Comment: 4858


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

In the words of Uta Frith, a recognized expert on autism, autistic persons lack the underpinning "special feature of the human mind: the ability to reflect on itself." (3) And according to our recent discussions in class, the ability to reflect on one's internal state is the job of a specific entity in the brain known as the I-function. Could it be that autism is a disease of this part of the mind, a damaging or destruction of the specialized groups of neurons that make up the process we perceive as conscious thought? And if so, what are the implications? Are autistic persons inhuman? Which part of their brain is so different from that of "normal" people, and how did the difference arise?

The specific array of symptoms used to diagnose an individual as autistic does not appear as straightforward as Frith's simple statement. It seems hard to fathom that they could all arise from one common defect in a certain part of the brains of all autistics. Examples of these symptoms include a preference for sameness and routine, stereotypic and/or repetitive motor movements, echolalia, an inability to pretend or understand humor (3), "bizarre" behavior (4) and use of objects (2), lack of spontaneity, excellent rote memory (2), folded, square-shaped ears (3), lack of facial expression, oversensitivity, lack of sensitivity, mental retardation, and savant abilities.

Obviously not all autistics exhibit all of these characteristics. Psychologists, however, often believe certain symptoms to be more indicative of the disease than others. The word autism stems from a Greek root meaning, roughly, "selfism." Autistics are described as very self-absorbed, and some academics refer to a short list of three characteristics to diagnose them: impairment of social interaction, impairment of communication without gestures, and restrictive and repetitive interests. (3) Tests have also been designed in an attempt to diagnose the disease decisively. One of these is the Sally-Anne test, in which autistic children usually show an inability to understand the perception of another child in a scenario, or to understand that she could believe something false. (4) Another test involves two flashing lights: both autistic and non-autistic children will look at a light when it flashes, but when a new light flashes, non-autistic children tend to move their eyes toward it while autistic children do not. (3)

Observation of the behavior of autistics makes it clear that they do interact with their world and understand certain aspects of it to a degree. However they often appear intensely focused on one perception or sensory experience and are unable to integrate multiple factors of emotion, intention, or personality in the way most people do. As a result of this inability to perceive order in all of the circumstances of their environment, they often find themselves in a world that seems very chaotic and random.

Jennifer Kuhn hypothesizes that many of the symptoms of autism are defense mechanisms stemming from a feeling of helplessness to control a situation. (3) Rats, when forced to jump at one of two doors, each randomly chosen either to open onto food or to stay closed and hurt the rat, will eventually always pick the same door. (Lashley and Maier, 1934) Thus the repetitive behavior of autistics, their insistence on routine, and their echolalia can all be explained as attempts to calm themselves by achieving an inner order. They are effectively forsaking experimentation in an attempt to avoid failure. This might be evidence for damage to an I-function, as we have discussed that observation and experimentation are innate behaviors of humans, and that they require personal reflection.

However, many autistics do not exhibit behaviors such as echolalia all the time but rather more frequently when they are in an unfamiliar environment. (1) And I have observed in myself a tendency to get a nervous repetitive motion in my foot in certain situations or even a rhythmic movement of my head when I am concentrating very intensely, but I am still confident of my ability to reflect on my own thoughts.

One of the most obvious ways to find out whether autistics share a collective brain abnormality that causes their unique array of characteristic behaviors is to look into the structure of the brain itself. In studies of autistic brains using scanning images as well as autopsies, scientists have generally not been able to agree on a single consistent abnormality. One observation has been that the neurons in the limbic systems of autistic subjects are often smaller and more densely packed than those of other individuals. Monkeys whose limbic systems were removed in experiments showed weaknesses in social interaction and memory, as well as locomotor stereotypies similar to those of autistics. (2)

Another area that often exhibits abnormality in autistic brains is the cerebellum, especially in the CA1 and CA4 regions, where there are fewer dendritic arbors and fewer Purkinje and granule cells. (2) Where Purkinje cells are present in small numbers, other cells may be allowed to crowd each other and fail to develop networks properly; this could account for autistics' tendency to be overwhelmed by stimulation. The cerebellum is also responsible for muscle movement, and other studies have found significantly fewer facial neurons (400 as compared to 9,000 in a control brain), which could also account for an absence of facial expression. (3) Other areas of the brain which some studies have argued are abnormal in autistic brains are the left hemisphere, which controls language (5), (6), the temporal lobes, which control memory (2), and the brain stem (3).

However, many of these findings are still considered controversial or inconclusive, and may apply not only to autism but to many developmental disorders. And even if these observations are accurate, there is still no explanation of why these differences in brain structure occur simultaneously.

One very convincing hypothesis to explain these occurrences has been proposed by Patricia M. Rodier. Inspired by the fact that relatives of autistics are much more likely than the general population to be diagnosed with the disease, Rodier became convinced that there must be some definite genetic factor contributing to it. However, there was still the fact that an identical twin of an autistic sibling will be autistic only about 60% of the time, which makes an argument for environmental influence as well.

She then came upon a study of children whose mothers were exposed to a drug called thalidomide during pregnancy, and found that a full 5% of those children were diagnosed as autistic. By combining information about the drug with other birth defects in the ears and faces of the children, she was able to estimate that autism began 20 to 24 days after conception, when the ears and the first neurons in the brain stem are starting to grow. When she examined the brains of previously autopsied autistics, she noticed they showed evidence of particularly short brain stems, and often a small or even absent facial nucleus and superior olive.

This information suggests that the genetic factor which causes these abnormalities in the primitive brain could then go on to cause the secondary abnormalities in the various more sophisticated parts of the brain already mentioned. Rodier has identified a gene known as Hoxa1, a variant of which is present in 40% of autistics but only 20% of non-autistics, and which is found more often in relatives who are autistic than in those who are not. Although this is by no means a conclusive argument that autism is genetic, there could be other variants of that allele which contribute to autism, or undiscovered genes which decrease the risk of it, that could account more strongly for genetic determination. (3)
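
A little arithmetic shows both the strength and the limits of the Hoxa1 finding. The Python sketch below is a minimal calculation assuming only the 40% and 20% carrier figures quoted above; it computes the odds ratio for the variant.

# Minimal sketch: odds ratio for the Hoxa1 variant, using only the
# 40% / 20% carrier frequencies quoted in the text. Illustrative
# arithmetic, not a reanalysis of Rodier's data.

p_autistic = 0.40   # carrier frequency among autistics (from the text)
p_control  = 0.20   # carrier frequency among non-autistics (from the text)

odds_autistic = p_autistic / (1 - p_autistic)   # about 0.67
odds_control  = p_control / (1 - p_control)     # 0.25
odds_ratio = odds_autistic / odds_control

print(f"odds ratio = {odds_ratio:.2f}")         # -> odds ratio = 2.67

An odds ratio of about 2.7 marks the variant as a risk factor, but since a fifth of non-autistics also carry it, and most carriers are not autistic, the association is far from deterministic, which is consistent with Rodier's caution above.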

So could there be specific genes which actually code for the I-function? This is possible, but there is still evidence for environmental factors causing some of the symptoms of autism. In-utero exposure not only to thalidomide but also to rubella, ethanol, and valproic acid has been linked to the disease. And there is a significant population of people who believe that the elimination of glutens and luteins found in milk, apple juice, wheat, and other products relieves the buildup of certain chemicals in the brain and makes autistics more able to fit in with society. (8) A recent study of abused children has also claimed that the left hemisphere of the brain, specifically areas dealing with memory and expressing language in the limbic system, can be overexcited and damaged as a result of intense sexual or physical abuse. (5) Perhaps all of these environmental factors do additional damage to the different sectors of the I-function.

What does all of this say about autistic people? Do the characteristic differences in their brains necessarily indicate that they are lacking this "I-function" or awareness part of the mind that makes someone human? I do not think so. The fact that rats and monkeys can show similar symptoms when these areas of their brains are damaged destroys this argument. The self-awareness or I-function does not appear to be limited to humans. And it may just be that the I-function operates differently or is damaged in the minds of autistics. There are huge degrees of severity in what are called the "autism spectrum diseases." These can be defined to include dyslexia, ADHD, Asperger's syndrome, pervasive developmental disorder, and more, with autism being the most severe. And any person can have symptoms of autism without being fully "autistic." But even severe changes in the I-function system should not necessarily be considered defects.

It is entirely possible that autistic people have thoughts and self-reflections which they cannot communicate through language or in society's "normal" modes of expression. While they do seem to have a diminished capacity to reflect on as much of the world at once as other people do, it may be that what they do reflect on is just as much or more meaningful. Autistic savants, for example, show an amazing ability in one area such as music, art, or calculation which often surpasses what it seems a "normal" person could ever achieve. And these creations are not just byproducts of the rote memory and habit that some scholars insist is all that exists in autistics. It is my personal opinion, after viewing the art of an autistic savant, Richard Wawro (9), that the work and the artist are infused with at least as much emotion and unique perspective as any "normal" person's.

Frith also argues that autistics are "...not living in a rich inner world but instead are victims of a biological defect that makes their minds very different from those of normal individuals." (4) But who is it that has the authority to decide what a "normal" individual is? The defense mechanisms that autistics display are evidence that they do experience distress and anxiety, but Freud argued that non-autistic or what Frith would call "normal" individuals also display a number of common defense mechanisms as a result of the anxieties in their lives, and this view is still widely accepted. Just because these may appear more normal or subtle to us does not mean they are not important.

In a society of autistics, we might feel a great deal of incoherence at how they might organize their world, and we might find ourselves the ones unable to communicate. In the movie "Molly," which concerns a fictional autistic woman who becomes normal and reflects on her experiences, there is a speech that illustrates this point beautifully. "In your world, almost everything is controlled....I think that is what I find most strange about this world, that nobody ever says how they feel. They hurt, but they don't cry out. They're happy, but they don't dance or jump around. And they're angry, but they hardly ever scream because they'd feel ashamed and nothing is worse than that. So we all walk around with our heads looking down, but never look up and see how beautiful the sky is." (10) If autistic people can possibly use their minds to see a beauty we cannot imagine, who are we to call them the abnormal victims? If they can only see the world in a sincere way and not understand irony or humor, is that necessarily a bad thing? We cannot truly know what autistics are aware of internally unless we have experienced their world.


References


1) Stereotypic Behaviors as a Defense Mechanism in Autism, from the Harvard Brain website, Jennifer Kuhn, 1999.

2) Neurobiological Insights into Infantile Autism, Amy Herman, 1996.

3) Rodier, Patricia M. "The Early Origins of Autism." Scientific American February 2000: 56-63.

4) Frith, Uta. "Autism." Scientific American June 1993, reprinted 1997: 92-98.

5) Teicher, Martin H. "The Neurology of Child Abuse." Scientific American March 2002: 68-75.

6) Autism and Savant Syndrome, a web paper by Sural Shah.

7) Considerations of Individuality in the Diagnosis and Treatment of Autism, a web paper by Lacey Tucker.

8) Diagnosis: Autism, Mothering Magazine, Patricia S. Lemer, 2003.

9) Paintings by Richard Wawro, an online gallery of an autistic savant's work.

10) Duigan, J. (Director). (1998) Molly. [Video-tape]. Santa Monica, CA: Metro-Goldwyn-Mayer Pictures Inc.


Bipolar Disorder: An Overview
Name: Kate Tucke
Date: 2003-02-27 10:05:08
Link to this Comment: 4861


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Bipolar disorder, also commonly known as Manic Depressive Disorder, affects about two million American adults, or roughly 1% of the population aged 18 or over. (1) One out of every five people with bipolar disorder commits suicide. (2) Obviously this is a serious illness, and yet very little is understood about it. The causes are unclear, the symptoms can be very different from person to person, and the genetic relationship is still fuzzy. There is no cure, but with proper treatment most individuals can live normal lives.

Bipolar disorder is a mood disorder, which means that those who have it experience extreme mood swings. There are two types of mood disorders, unipolar depressive disorders (what most people know as simply depression) and bipolar disorders. Bipolar disorder is characterized by at least some abnormal elevation of the mood as well as mood cycles over time.(3) There are four different types of cycles that a bipolar patient might experience. These are mania, hypomania, depression, and mixed episodes.

Mania begins with heightened energy and creativity and develops into a feeling of euphoria or of extreme irritability. People experiencing a manic episode often lack insight into their illness, deny that there is a problem, and get very angry with anyone who points out the mania. To be considered a manic episode, these conditions must last at least a week and interfere with the normal functioning of the person's life. In addition there must be at least four of the following symptoms: needing little sleep but having lots of energy, talking very fast, having racing thoughts, being easily distracted, having an inflated feeling of greatness and importance, or being reckless without considering consequences. Sometimes psychotic symptoms are also present in the form of hallucinations and delusions. (4)

Hypomania is a milder form of mania. The person experiences an elevated mood and is very productive. Hypomania feels good, so patients often stop taking their medications during a hypomanic episode. This is problematic because hypomania often escalates to mania or depression.

A major depressive episode must last at least two weeks and must make it difficult for the person to function. The patient experiences feelings of sadness and loses interest in normally enjoyable activities. S/he must also experience at least four of the following symptoms: difficulty sleeping or sleeping too much, difficulty eating or overeating, problems concentrating and making decisions, feeling slowed down or too agitated to sit still, feeling worthless, guilty or having low self-esteem, or thoughts of suicide or death. During depression, hallucinations and delusions can also occur.

A mixed episode has symptoms of both mania and depression, either simultaneously or alternating frequently within the same day. (5)
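
The episode definitions above lend themselves to a small sketch. The Python below is a toy encoding under the stated thresholds (at least one week and four accompanying symptoms for mania, at least two weeks and four symptoms for depression); the function and its inputs are my own simplifications, and it is obviously not a clinical instrument.

# Toy encoding of the episode definitions above. The duration and
# symptom-count thresholds come from the text; everything else is an
# illustrative assumption, not a clinical tool.

def classify_episode(elevated, depressed, duration_days, symptom_count):
    """elevated / depressed: whether the mood is abnormally high or low;
    duration_days: how long the mood change has lasted;
    symptom_count: accompanying symptoms from the lists in the text."""
    manic = elevated and duration_days >= 7 and symptom_count >= 4
    depressive = depressed and duration_days >= 14 and symptom_count >= 4
    if manic and depressive:
        return "mixed episode"
    if manic:
        return "mania"
    if depressive:
        return "major depressive episode"
    if elevated:
        return "possible hypomania (milder or briefer elevation)"
    return "subthreshold"

print(classify_episode(elevated=True, depressed=False,
                       duration_days=10, symptom_count=5))  # -> mania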

There are different patterns of episodes which lead to different classifications of bipolar disorder. Bipolar I Disorder consists of manic or mixed episodes, and almost always depressive episodes too. Bipolar II Disorder consists only of hypomania and depression. This type is harder to recognize because during a hypomanic episode patients just seem more productive. People frequently overlook the hypomania and simply seek treatment for depression. Anti-depressants, however, could trigger a manic episode or make cycles more frequent. Rapid-Cycling Bipolar Disorder occurs when a person experiences at least four different episodes in a year, in any combination. This accounts for five to fifteen percent of bipolar patients and is more common in women.(6)

Depending on the person, bipolar disorder can manifest itself in different ways. Some people have equal periods of mania and depression, while others experience one more frequently than the other. The average person has four episodes in the first ten years. Men are more likely to start with mania, while women are more likely to start with depression. The cycling can sometimes be seasonal. Episodes can last days, months, or years. Without treatment, mania usually lasts a few months, while depression lasts for over six months. Some people have normal periods in between episodes, while others constantly experience mild ups and downs. (7)

The treatment for bipolar disorder can be either aimed at ending the current episode or at preventing future episodes. Treatment can include medications (mood stabilizers, antidepressants or antipsychotics), education, or psychotherapy. Medications are almost always used in treatment. Mood stabilizers provide relief from acute episodes of both mania and depression and can also be used to prevent them. Lithium was the first known mood stabilizer and is still commonly prescribed. The other most common mood stabilizer is Divalproex, which is an anticonvulsant. It was originally used to treat seizures, but has been found to be effective in managing both euphoric and mixed episodes. It is particularly effective for rapid-cyclers and situations complicated by substance abuse or anxiety disorders. It is often preferred because it can be started at higher doses and is therefore faster than Lithium. Other anticonvulsants that are used as mood stabilizers are Carbamazepine, Lamotrigine, Gabapentin, and Topiramate.(8) Anti-depressants and anti-psychotics are also prescribed when those symptoms are present.

The cause of bipolar disorder is largely unknown. Neurotransmitter activity in the brains of bipolar patients differs from that of unaffected individuals, which suggests that there may be an abnormality in the genes that regulate neurotransmitters. (9) Bipolar disorder tends to run in families. A number of genes have been identified that could be linked, which implies that several different biochemical problems may be occurring at the same time. (10) Biochemical, neurophysiologic, and sleep abnormalities have been linked to bipolar disorder, but none of them is specific to the disorder. It is unknown how unipolar, bipolar I, and bipolar II are related to each other. (11) It is possible that bipolar disorder makes people more vulnerable to physical and emotional stress, which is why life events can trigger episodes, but this is not the cause of the disorder. Rather, there is an "inborn vulnerability interacting with an environmental trigger" that starts an episode. (12)

One of the large problems with bipolar disorder is that many who suffer from it go undiagnosed. In a 1999 university study, 40% of the bipolar subjects were previously misdiagnosed as unipolar. As mentioned previously, this is dangerous because the use of anti-depressants alone can trigger a manic episode or lead to rapid-cycling. It is very important to identify individuals who suffer from bipolar disorder. 12% will commit suicide, usually during a depressive episode.(13) Many others will struggle with substance abuse as a way to self-medicate. One study showed that 60% of hospitalized bipolar patients had some form of lifetime substance abuse. 48.5% used alcohol and 43.1% used drugs.(14)

Many famous people have had bipolar disorder, including Teddy Roosevelt, Robert Schumann, Vincent van Gogh, and Sylvia Plath. (15) Manic episodes can trigger great creative periods, some of which are responsible for great works of art, music and literature. Bipolar disorder is obviously a serious illness which, if left untreated, will at the very least disrupt a person's life. Fortunately, there are many ways to treat bipolar disorder, even though the causes of the illness are not understood. Treatment can be difficult, as the illness is often misdiagnosed. Even once a proper diagnosis is obtained, patients may need to try different combinations of treatment before finding success.

References

(1)Mayo Clinic, an article about treatment of Bipolar Disorder

(2)Goodwin & Jamison, Manic Depressive Illness, p. 228, referenced on a personal page with a summary of personal experiences with Bipolar Disorder

(3)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(4)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(5)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(6)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(7)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(8)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(9)Mayo Clinic, an article about treatment of Bipolar Disorder

(10)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(11)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance

(12)Bipolar Disorders Information Center, a great overview of Bipolar Disorder

(13)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance

(14)Substance Abuse, an article linking Substance Abuse and Bipolar Disorder

(15)Minnesota Medicine: Diagnosing and Treating Bipolar Disorder, an article for doctors' guidance


Depression: All in our heads?
Name: Stephanie
Date: 2003-03-02 01:45:19
Link to this Comment: 4888


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

Is depression a figment of the Western world's cultural imagination? Do we create madmen of those who cannot keep up with the go-go-go attitude that defines American culture? Even if the condition we know as depression can be traced back to a certain state of the brain, is it culture that considers this state of the brain to be a mental illness? How do culture and the brain conspire together to produce the condition known as clinical depression?

To appreciate the complexity of the current understanding of depression, one must first look at how attitudes toward depression in the Western world have changed over time. During the Renaissance, physicians believed that melancholy, a condition characterized by sullenness and fits of anger, was caused by an imbalance of fluids in the body: an improper balance of blood, phlegm, yellow bile and black bile was thought to be the cause of melancholy and a host of other mental illnesses (1). In the late 1800s, a German psychiatrist developed the current classification, defining a condition of low spirits and mental gloom separately from other mental illnesses. It was not until 1905 that the word "depression" was used to describe this collection of symptoms (2).

The early 20th century was dominated by Freud's theory that depression was a disease of the mind. Self-help proponents, like Tony Schirtzinger, who suggests that "we [modern Americans] got depressed because we were like kids in a candy store", demonstrate that this idea still exists today (3). During the mid-20th century the medical world began to consider depression not only a disease of the mind, but also a disease of the brain.

Around this time, drugs began to be used to treat depression. Non-medical treatments were also developed, including psychoanalysis that focused on forcing the patient to confront irrational beliefs, and cognitive therapy that attempted to stop the negative feelings that allegedly led to depression. Beginning in the 1980s, antidepressant drugs began to target serotonin, working by inhibiting the re-uptake of this neurotransmitter (4).

Today, the medical world sees depression as primarily a disorder of the neurotransmitters. It is thought that in depressed people serotonin and norepinephrine levels are decreased and corticotropin-releasing factor (CRF) levels are increased (5). Serotonin is a neurotransmitter that helps one to control powerful feelings such as fear and anger; in studies, animals with less serotonin were more likely to engage in impulsive, aggressive behavior (6). CRF is known as the "stress hormone": it prepares the body to deal with a dangerous situation. New research suggests that depressed people have a larger amount of this stress-inducing chemical in their spinal fluid (7).

This research might lead someone to believe that depression is strictly a neurobiological problem, but it is impossible to separate depression from culture. Culture influences every area of the disease: from diagnosis to treatment. Depression is diagnosed by a series of symptoms whose very definitions and descriptions require a cultural context. The interpretation of these symptoms is also dependent on culture. Grcbic writes that "in many parts of the world, people with symptoms relating to depression or anxiety, often do not view their problems as needing psychiatric or psychological intervention" (8). This is not because the symptoms are not real, but because "what is seen as abnormal and harmful in one culture, may be adaptive and accepted in another." (9).

The new wave of immigration in the United States is bringing with it new mood disorders, giving reason to rethink the way that mood disorders and other mental illnesses are thought about. There are quite a few culture-bound and culture-specific illnesses, most of them mental diseases. Culture-bound illnesses are defined as illnesses that are recognized in non-Western cultures but "do not have a one to one correspondence with a disorder in the west" (10). Anorexia nervosa is an example of an illness that anthropologists consider to be a culture-bound disease: the cultures of both North America and Western Europe are structured in such a way as to produce an illness that is unique to these cultures (10). This is not to say that biology is not involved. As more research is done in the area of eating disorders, scientists are suggesting that serotonin may play a key part in the development of anorexia and bulimia.

That depression exists cannot be denied, but it seems that the cutoff between sadness and depression is man-made. This theory does not go against the neurobiological evidence: if the "brain=behavior" theory is right, then what doctors would classify as normal sadness would also involve some sort of change in brain makeup. Someone has to draw the line. Why is it that in the U.S. depression is treated but not hwa-byung (Korean suppressed anger syndrome) (11)? Is it because the brain structures of Korean nationals are uniquely susceptible to hwa-byung? This might be true, but more likely it is because Western medicine doesn't recognize a particular set of symptoms as an illness, or because culture actually creates a set of circumstances that allows the brain to be changed in such a way as to create the illness.

Extending this to depression, it can be said that Western society facilitates depression by creating the circumstances for the illness to manifest itself and by giving a particular set of symptoms a name. The realization that depression is more than just a disease of neurotransmitters should not take away from the seriousness of the illness. Perhaps a more comprehensive view of depression can lead to more holistic treatments. It might also help us to take seriously the symptoms of illnesses that we might not have heard of. Treatment of depression often does involve both medication and therapy, but understanding that depression is in some ways a social construction might empower sufferers to fight it. Depression attacks its victims from more than one angle, and it needs to be fought that way.

References


1) Clinical Depression Then and Now

2) Oxford English Dictionary

3) Depression in the Culture

4) A Brief History of Depression

5) Depression

6)Serotonin and Judgement

7) Depression and Stress Hormones

8) Impact of Culture and Stigma

9) Culture-bound Syndromes, Cultural Variations, and Psychopathology

10)Glossary of Culture-bound Syndromes

11) Kersham, Sarah. "Freud Meets Buddha." The New York Times 18 Jan. 2003:B1.


The Effects and Usages of Psilocybe Mushrooms
Name: Lara Kalli
Date: 2003-03-06 07:58:21
Link to this Comment: 4969


<mytitle>

Biology 202
2003 First Web Paper
On Serendip

One of many naturally occurring hallucinogens (or "entheogens", the term preferred by those who experiment frequently with these chemicals), psilocybe mushrooms have been in use for thousands of years as both a recreational and a mind-expanding tool. Native Americans in both Central and South America regularly used them in religious ceremonies; they were believed to have divine properties. Popularized by the psychedelic movement of the 1960s, so-called "magic mushrooms" are now a staple of modern counterculture. (1)

There exists very little definitive or concrete knowledge of the full effects of psilocybe mushrooms on the brain. The primary active chemical in these mushrooms, 4-phosphoryloxy-N,N-dimethyltryptamine or psilocybin, is a 5-HT2A and 5-HT1A post-synaptic receptor agonist. (2) In other words, the chemical structure of psilocybin is similar to that of the neurotransmitter serotonin, and it appears to take effect by binding to serotonin receptors, competing with serotonin itself. (3) Once ingested, psilocybin is eventually broken down into 4-hydroxy-N,N-dimethyltryptamine or psilocin, which is approximately 1.4 times more potent by weight and a less stable compound than psilocybin. (3)

The most common way to use psilocybe mushrooms is simply to eat them, although it is also viable to drink them in a tea. Once they are consumed, the user generally begins to feel the effects about thirty to sixty minutes later. The mushroom "trip" usually lasts four to six hours. As with all entheogens, the specific effects of mushrooms vary widely from person to person, but usually the trip begins with feelings of vague unrest or anticipation and a sense that reality has been indefinably altered or augmented in some way, often accompanied by nausea or general gastrointestinal discomfort. Once the trip is fully underway, one may experience quickly changing moods (often, the user will start laughing for no reason whatsoever), confusion, increased mental and physical energy, and, most notably, intense feelings of spiritual awareness, insight or revelation. Open-eye visuals, such as the appearance of moving patterns all over one's surroundings, are common at higher doses; closed-eye visuals will occur in almost every case. There is usually a two- to six-hour comedown period in which the user is not actually tripping but continues to have a sense of reality having been altered somehow. Sleep is also very difficult during this period. (1)

Since mushrooms have historically never been the object of a widespread popular usage fad, as LSD was during the 1960s, they have received considerably less media, governmental and scientific attention; thus, mushrooms might be regarded as a less significant substance. However, some recent studies have shown that psilocybin may in fact be an extremely useful tool in research on various forms of mental pathology. A study done in a hospital in Switzerland showed that stimulation of the receptors upon which psilocybin acts may moderate the brain's excessive release of dopamine, which is characteristic of acute psychosis. (2) An even more recent set of studies has examined the effects of psilocybin and the dissociative anaesthetic ketamine on the AX-Continuous Performance Task, a test that measures the subject's ability to process auditory and visual context-dependent information, and on the generation of mismatch negativity, the part of the auditory event-related potential that is induced by infrequent changes in a repetitive sound. (5), (6) From this data, scientists could begin to distinguish and understand the neuropharmacology of some of the abnormal cognitive processes of schizophrenia. It is believed that psilocybin will prove to be extremely useful in the gradual understanding of the mechanisms of this disease and other forms of psychosis.
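
For readers unfamiliar with the AX-Continuous Performance Task, its core rule is simple: the subject should respond to the probe letter X only when the immediately preceding cue was A. The Python sketch below scores a toy version of the task; the stream format and scoring details are my own simplifications of the test named above.

# Sketch of AX-CPT trial scoring: the subject should respond to the probe
# "X" only when the preceding cue was "A". The stream format and scoring
# are illustrative simplifications of the task named in the text.

def score_ax_cpt(letters, responses):
    """letters: the presented stream; responses: True where the subject
    responded. Returns counts of hits and false alarms."""
    hits = false_alarms = 0
    for i in range(1, len(letters)):
        is_target = letters[i] == "X" and letters[i - 1] == "A"
        if responses[i] and is_target:
            hits += 1
        elif responses[i] and not is_target:
            false_alarms += 1
    return hits, false_alarms

letters   = ["A", "X", "B", "X", "A", "Y", "A", "X"]
responses = [False, True, False, True, False, False, False, True]
print(score_ax_cpt(letters, responses))   # -> (2, 1): two hits, one false alarm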

Furthermore, more experienced recreational users of psilocybe mushrooms view them as an invaluable tool for the acquisition of self-knowledge or universal understanding. It is hard to dispute that the effects of these mushrooms, and entheogens in general, have the potential to teach the user a great deal about himself, due to the deeply personal and introspective nature of the experiences they generate. In Psilocybin: The Magic Mushroom Grower's Guide, Terence McKenna expressed his belief that psilocybe mushrooms were the mechanism by which humans would eventually evolve into a higher state: "...it is the occurrence of psilocybin and psilocin in the biosynthetic pathways of my living body that opens for me and my symbiots the vision screens to many worlds. You as an individual and Homo sapiens as a species are on the brink of the formation of a symbiotic relationship with my genetic material that will eventually carry humanity and earth into the galactic mainstream of the higher civilizations." (7)

Psilocybin and psilocin are currently Schedule I substances under the Controlled Substances Act, meaning that the Drug Enforcement Administration has deemed them to be highly dangerous and without medical value. (4)

References

1. The Vaults of Erowid, an excellent and comprehensive resource on many psychoactive substances, not anti-drug use

2. The Good Drug Guide, an article on psilocybin

3. The Lycaeum, another comprehensive, not anti-drug use resource

4. Homepage of the Drug Enforcement Administration

5. an article on the relationship between the effects of psilocybin and certain cognitive defects in schizophrenia

6. information about mismatch negativity

7. The Mushroom Speaks, Terence McKenna's theories about the nature of psilocybe mushrooms


Feeling SAD? Let There Be Light!
Name: Rachel Sin
Date: 2003-04-07 22:29:26
Link to this Comment: 5305


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

It's wintertime, and you are gathered for the holidays with all of your family and friends. Everything seems like it should be perfect, yet you are feeling very distressed, lethargic and disconnected from everything and everyone around you. "Perhaps it is just the winter blues," you tell yourself as you delve into the holiday feast, aiming straight for the sugary fruitcake before collapsing from exhaustion. However, the depression and other symptoms that you feel continue to persist from the beginning of winter until the springtime, year after year without ceasing. Although you may be tempted to believe that you, like many millions of other Americans, are afflicted with a case of the winter blues, you are most likely suffering from a more severe form of seasonal depression known as Seasonal Affective Disorder, or SAD. This form of depression has been described as a unipolar or bipolar mood disorder which, unlike other forms of depression, follows a strictly seasonal pattern. (5)

During the winter, many of us suffer from "the winter blues", a less severe form of seasonal depression than SAD. Still others have an already existing condition, such as pre-menstrual syndrome or depression, which is exacerbated by the coming of winter. (2) In general, many people suffer from some form of sporadic depression during the wintertime. We may feel more tired and sad at times. We may even gain some weight or have trouble getting out of bed. Over 10 million people in America, however, may feel a more extreme form of these symptoms. They may constantly feel lethargic and depressed to an extent that social and work-related activities are negatively affected. This more extreme form of the "winter blues" is SAD. Typical SAD symptoms include sugar cravings, lethargy, depression, an increase in body weight, and a greater need for sleep (1). Onset of these symptoms usually occurs in October or November, and the symptoms disappear in early spring. Frequently, people who suffer from SAD react strongly to variations in the amount of light in their surrounding environment. Patients who suffer from SAD often note that the farther north they live, the more distinct and severe their SAD symptoms become. In addition, SAD patients note that their depressive symptoms increase in severity when the amount of light indoors decreases and the weather is cloudy. (4) Most commonly, symptoms of SAD first appear in one's late twenties or thirties, though the disorder has also, less frequently, been diagnosed in children. Of all the patients who suffer from SAD, 70-80% are female. (1)

Multiple theories exist as to the origins of SAD. The exact cause is currently unknown, yet doctors believe that stress, heredity and the chemical makeup of the body all play a role in its initiation. A strongly held belief is that a lack of sufficient sunlight can lead to the disorder. Simply being in a room without windows for an extended period of time can trigger a depressive episode in SAD patients. The body's circadian rhythms, which regulate an individual's daily sleep-wake cycles, are disturbed when sunlight is not as prevalent. This is the reason why individuals with wintertime SAD often have difficulty waking up in the morning during the longer nights of winter. (3) In addition, a disruption of the body's circadian rhythms is known to be a likely cause of depression. (6) Thus, during the wintertime months, or during time spent in a dark room, when the circadian rhythms can potentially be disrupted, a person who suffers from SAD may experience severe depressive episodes.

Scientists have also hypothesized that the disorder could be due to a decrease in serotonin, a key neurotransmitter in the brain. When the brain is deficient in serotonin, depression has been known to result. Sunlight is thought to stimulate the production of serotonin, and people with SAD who suffer from depression have been found to have lower serotonin levels in their brains. In addition, research has found that SAD could also be due to an increase in the level of melatonin, a hormone connected to sleep. (3) The higher level of this hormone in SAD patients during the lengthier periods of darkness in winter has also been thought to be linked to their depressive symptoms. Although SAD is most commonly seen during the winter, the condition can also occur during the summer months; these patients may plan trips to colder climates to relieve their depression. This rarer form of seasonal depression causes symptoms such as agitation, loss of weight, inability to sleep, anxiety and a loss of appetite.

Apparently, in patients who suffer from the more common winter form of SAD, an increase in the amount of time per day in which the sky is dark is thought to be one of the main causes of depressive symptoms. According to Dr. Daniel Kripke, a psychiatry professor at the University of California San Diego, the shorter photoperiod in winter is a result of the reduced daylight, and many mammals' seasonal responses are dominated by this photoperiod. Kripke explained that mammals whose seasonal responses were dominated by the shorter winter photoperiod experienced changes in appetite and lethargy during the winter. So naturally, it would seem reasonable that one of the main proposed treatments for SAD (aside from medication and psychotherapy, which have been proven effective) is an added source of light during the longer periods of darkness which occur during the wintertime. In fact, multiple tests conducted in Japan, the United States and Europe from 1986 through 1995 have shown that a strong light source proves highly effective in treating both nonseasonal and seasonal forms of depression. (6) In this form of SAD treatment, a light between 10 and 20 times stronger than a regular room light is placed several feet from the patient. The patient sits in front of this light in the morning (so as not to induce insomnia, which nighttime light therapy may cause) for at least half an hour. (3) Side effects are minimal, and the treatment is very simple to undergo.

Although multiple light sources can be used in the treatment of SAD, one must be careful to choose a form of light therapy that is not only effective but also safe. For example, in a 1992 study by Lam et al., tests were conducted in an effort to see which wavelengths of light were most efficacious in the treatment of SAD patients. It turned out that UV light at a wavelength of around 300 nm was not at all helpful in the treatment of seasonal depressive disorder. In addition, this short wavelength can cause harmful burns, as we all know from spending long days in the sun at the beach. Filters should be used on light therapy devices in order to prevent wavelengths below 400 nm from being emitted (5). In addition, full-spectrum light used in rooms has not been efficacious in the treatment of SAD. The light used in treatment must be of higher intensity, at least ten to twenty times stronger than a typical room light bulb. (2) Neither UV light nor ordinary full-spectrum room light should be used in the treatment of SAD; both have proven ineffective in light therapy trials.

Even though neither the full-spectrum room lights nor the 300 nm UV-emitting therapy device was effective in treating SAD patients, devices used with the special wavelength filter are safe and have proven very useful in the treatment of SAD. Most often, high-intensity light boxes which do not emit UV wavelengths have been used to treat the symptoms. Depending on the individual, smaller or larger lux (a measure of illumination) light boxes can be used. A normal light bulb provides somewhere between 200-500 lux. SAD patients may select a very high-output light box which emits 10,000 lux and requires a shorter amount of therapy time; other patients prefer a weaker box (a minimum of 2500 lux is required), though these boxes require a longer time spent in front of the light. (1)
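
The tradeoff between box intensity and session length can be made concrete if one assumes, as the figures above suggest, that the delivered light "dose" scales roughly with illuminance multiplied by exposure time. The Python sketch below works under that assumption; the proportionality is my simplification for illustration, not a prescribing rule.

# Minimal sketch: equivalent session lengths for different light boxes,
# ASSUMING dose ~ illuminance (lux) x time. That proportionality is a
# simplification for illustration, not a clinical dosing rule.

def equivalent_minutes(reference_lux, reference_minutes, box_lux):
    """Session length on box_lux that matches the reference light dose."""
    dose = reference_lux * reference_minutes       # lux-minutes
    return dose / box_lux

# Reference: a 10,000 lux box used for a 30-minute morning session,
# consistent with the figures quoted above.
for lux in (10_000, 5_000, 2_500):
    print(f"{lux:>6} lux -> {equivalent_minutes(10_000, 30, lux):.0f} minutes")
# ->  10000 lux -> 30 minutes
#      5000 lux -> 60 minutes
#      2500 lux -> 120 minutes

The 2,500-lux case coming out at two hours matches the pattern described above: weaker boxes demand proportionally longer sessions.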

In addition, light visors (which also emit high-intensity light) are available and are worn by the SAD patient. The visors are advantageous for patients who do not wish to sit idly in front of a light box for half an hour or more; treatment can be administered while the patient is in motion. Also, a special dawn simulator can be used. This very strong light is placed beside the patient's bed, and its intensity is slowly augmented so as to reach its greatest strength when the patient is set to wake up. Studies conducted in 1992-1993 by Avery et al. in Seattle (where the sky is frequently overcast in the winter) have shown the dawn simulators to be highly effective. (6) In fact, the improvement was described by the Seattle study group as significant even when the final emission was as small as 250 lux!
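
A dawn simulator's behavior can be sketched as a simple intensity schedule: darkness until a ramp window begins, then brightness climbing to its peak exactly at wake time. In the Python below, the linear ramp and the 30-minute window are illustrative assumptions, and the 250 lux peak simply echoes the smallest effective figure from the Seattle study; real devices differ.

# Sketch of a dawn-simulator schedule: light ramps from darkness up to
# peak_lux, reaching full intensity exactly at wake time. The linear ramp
# and 30-minute window are illustrative assumptions; the 250 lux peak
# echoes the smallest effective figure quoted above.

def dawn_lux(minutes_until_wake, ramp_minutes=30, peak_lux=250):
    if minutes_until_wake >= ramp_minutes:
        return 0.0                              # still fully dark
    if minutes_until_wake <= 0:
        return float(peak_lux)                  # at or past wake time
    fraction = 1 - minutes_until_wake / ramp_minutes
    return peak_lux * fraction

for m in (40, 30, 20, 10, 0):
    print(f"{m:>2} min before wake: {dawn_lux(m):6.1f} lux")
# 40 min before wake:    0.0 lux
# 30 min before wake:    0.0 lux
# 20 min before wake:   83.3 lux
# 10 min before wake:  166.7 lux
#  0 min before wake:  250.0 lux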

Side effects from these forms of light therapy are rare: eye strain, nausea, irritability and headaches have been observed, and these often subside as the patient becomes acclimated to the therapy. (4) Outdoor light, which provides just as much light as the light boxes (even when the sky is cloudy), has also been shown to improve SAD symptoms; they have been ameliorated in patients who took a daily 60-minute walk.

Many millions of Americans suffer from depression during the winter. However, a fascinating and often neglected fact is that the ten million who are afflicted with the more severe symptoms seen in SAD may suffer because of a simple deficiency in the amount of light present during the winter! Their brains may react in response, and a disruption of circadian rhythms, a decrease in serotonin, and an increase in melatonin can result. As a consequence of these reactions in the brain to the lack of light, the SAD patient can become severely depressed. So while trying to light up the lives of family members and friends with SAD through encouragement, please inform them about the many types of light therapy which are currently available; both you and the SAD patient will be glad that you did.

References

1)Seasonal Affective Disorder, from Northern County Psychiatric Associates of MD

2)Information and Frequently Asked Questions about Seasonal Affective Disorder, from PhoThera Light Products

3)Seasonal Affective Disorder, from the Mayo Clinic

4)Seasonal Affective Disorder, from the Nation's Voice on Mental Illness

5)Seasonal Affective Disorder, from mentalhealth.com

6)Light Treatment for Nonseasonal Depression, from Psychiatric Times


The Pathways of Pain
Name: Neesha Pat
Date: 2003-04-08 14:07:28
Link to this Comment: 5324


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In 1931, the French medical missionary Dr. Albert Schweitzer wrote, "Pain is a more terrible lord of mankind than even death itself." Today, pain has become the universal disorder, a serious and costly public health issue, and a challenge for family, friends, and health care providers who must give support to the individual suffering from the physical as well as the emotional consequences of pain (1).

Early humans related pain to evil, magic, and demons. Relief of pain was the responsibility of sorcerers, shamans, priests, and priestesses, who used herbs, rites, and ceremonies as their treatments. The Greeks and Romans were the first to advance a theory of sensation, the idea that the brain and nervous system have a role in producing the perception of pain. But it was not until the Middle Ages and well into the Renaissance (the 1400s and 1500s) that evidence began to accumulate in support of these theories. Leonardo da Vinci and his contemporaries came to believe that the brain was the central organ responsible for sensation. Da Vinci also developed the idea that the spinal cord transmits sensations to the brain. In the 17th and 18th centuries, the study of the body and the senses continued to be a source of wonder for the world's philosophers. In 1664, the French philosopher René Descartes described what to this day is still called a "pain pathway" (5).

What prompted me to research the various pain pathways was my grandmother's arthritis. She has suffered for many years with severe joint pain and, in the past, has been treated with corticosteroids. Currently she is taking Celebrex (a COX-2 inhibitor), a relatively new drug in the family of 'superaspirins'. What impressed me was how far medical research has come in the quest to conquer pain and interfere with the 'pain pathway'. "The philosophy that you have to learn to live with pain is one that I will never understand or advocate," says Dr. W. David Leak, Chairman & CEO of Pain Net, Inc. (4) The focus of this paper is on the numerous avenues explored by researchers, and on two methods of treatment that offer promising results.

What is pain? The International Association for the Study of Pain defines it as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage" (1). It is useful to distinguish between two basic types of pain, acute and chronic, as they differ greatly. (7)

Acute pain, for the most part, results from disease, inflammation, or injury to tissues. This type of pain generally comes on suddenly, for example, after trauma or surgery, and may be accompanied by anxiety or emotional distress. The cause of acute pain can usually be diagnosed and treated, and the pain is self-limiting, that is, it is confined to a given period of time and severity. In some rare instances, it can become chronic (1).

Chronic pain is widely believed to represent disease itself. It can be made much worse by environmental and psychological factors. Chronic pain persists over a longer period of time than acute pain and is resistant to most medical treatments. It can, and often does, cause severe problems for patients. There may have been an initial mishap such as a sprained back or serious infection, or there may be an ongoing cause of pain such as arthritis, cancer, or an ear infection, but some people suffer chronic pain in the absence of any past injury or evidence of body damage (1). Many chronic pain conditions affect older adults. Common chronic pain complaints include headache, back pain, cancer pain, arthritis pain, neurogenic pain (pain resulting from damage to the peripheral nerves or to the central nervous system itself) and psychogenic pain (pain not due to past disease or injury or any visible sign of damage inside or outside the nervous system). (2)

Pain is a complicated process that involves an intricate interplay between a number of important chemicals found naturally in the brain and spinal cord. In general, these chemicals, called neurotransmitters, transmit nerve impulses from one cell to another (5).

The body's chemicals act in the transmission of pain messages by stimulating neurotransmitter receptors found on the surface of cells; each receptor has a corresponding neurotransmitter. Receptors function much like gates or ports and enable pain messages to pass through and on to neighboring cells (10). One brain chemical of special interest to neuroscientists is glutamate. During experiments, mice with blocked glutamate receptors show a reduction in their responses to pain. Other important receptors in pain transmission are opiate-like receptors. Morphine and other opioid drugs work by locking on to these opioid receptors, switching on pain-inhibiting pathways or circuits, and thereby blocking pain (8).

Another type of receptor that responds to painful stimuli is called a nociceptor. Nociceptors are thin nerve fibers in the skin, muscle, and other body tissues, that, when stimulated, carry pain signals to the spinal cord and brain. Normally, nociceptors only respond to strong stimuli such as a pinch. However, when tissues become injured or inflamed, as with a sunburn or infection, they release chemicals that make nociceptors much more sensitive and cause them to transmit pain signals in response to even gentle stimuli such as breeze or a caress. This condition is called allodynia -a state in which pain is produced by innocuous stimuli . (1)

Scientists are working to develop potent pain-killing drugs that act on receptors for the chemical acetylcholine. For example, a type of frog native to Ecuador has been found to have a chemical in its skin called 'epibatidine', derived from the frog's scientific name, Epipedobates tricolor. Although highly toxic, epibatidine is a potent analgesic and, surprisingly, resembles the chemical nicotine found in cigarettes. Also under development are other less toxic compounds that act on acetylcholine receptors and may prove to be more potent than morphine but without its addictive properties(1) .
T
he idea of using receptors as gateways for pain drugs is a novel idea, supported by experiments involving substance 'P'. Investigators have been able to isolate a tiny population of neurons located in the spinal cord, that together form a major portion of the pathway responsible for carrying persistent pain signals to the brain. When animals were given injections of a lethal cocktail containing substance P linked to the chemical saporin, this group of cells, whose sole function is to communicate pain, were killed. Receptors for substance P served as a portal or point of entry for the compound. Within days of the injections, the targeted neurons, located in the outer layer of the spinal cord along its entire length, absorbed the compound and were neutralized. The animals' behavior was completely normal; they no longer exhibited signs of pain following injury or had an exaggerated pain response. Importantly, the animals still responded to acute, that is, normal, pain. This is a critical finding as it is important to retain the body's ability to detect potentially injurious stimuli. The protective, early warning signal that pain provides is essential for normal functioning. If such work could be translated clinically, humans might be able to benefit from similar compounds introduced, for example, through lumbar (spinal) puncture (2).

Another promising area of research using the body's natural pain-killing abilities is the transplantation of chromaffin cells into the spinal cords of animals bred experimentally to develop arthritis. Chromaffin cells produce several of the body's pain-killing substances and are part of the adrenal medulla, which sits on top of the kidney. Within a week or so, rats receiving these transplants cease to exhibit telltale signs of pain. Scientists believe the transplants help the animals recover from pain-related cellular damage. Extensive animal studies will be required to learn if this technique might be of value to humans with severe pain (8).

One way to control pain outside of the brain, that is, peripherally, is by inhibiting hormones called prostaglandins. Prostaglandins stimulate nerves at the site of injury and cause inflammation and fever. Certain drugs, including NSAIDs (nonsteroidal anti-inflammatory drugs), act against such hormones by acting on the blood vessels. Blood vessel walls stretch or dilate during a migraine attack and it is thought that serotonin plays a complicated role in this process. For example, before a migraine headache, serotonin levels fall. Drugs for migraine include the triptans: sumatriptan (Imitrix), naratriptan (Amerge), and zolmitriptan (Zomig). They are called serotonin agonists because they mimic the action of endogenous (natural) serotonin and bind to specific subtypes of serotonin receptors (8).

The explosion of knowledge about human genetics is helping scientists who work in the field of drug development. We know, for example, that the pain-killing properties of codeine rely heavily on a liver enzyme, CYP2D6, which helps convert codeine into morphine. A small number of people genetically lack the enzyme CYP2D6; when given codeine, these individuals do not get pain relief. CYP2D6 also helps break down certain other drugs. People who genetically lack CYP2D6 may not be able to cleanse their systems of these drugs and may be vulnerable to drug toxicity. CYP2D6 is currently under investigation for its role in pain (5).

The link between the nervous and immune systems is an important one. Cytokines, a type of protein found in the nervous system, are also part of the body's immune system, the body's shield for fighting off disease. Cytokines can trigger pain by promoting inflammation, even in the absence of injury or damage. Certain types of cytokines have been linked to nervous system injury. After trauma, cytokine levels rise in the brain and spinal cord and at the site in the peripheral nervous system where the injury occurred. Improvements in our understanding of the precise role of cytokines in producing pain, especially pain resulting from injury, may lead to new classes of drugs that can block the action of these substances (1).

Medications, acupuncture, local electrical stimulation, and brain stimulation, as well as surgery, are some treatments for chronic pain. Some physicians use placebos, which in some cases has resulted in a lessening or elimination of pain. Psychotherapy, relaxation and medication therapies, biofeedback, and behavior modification may also be employed to treat chronic pain (3). Some of the latest developments include COX-2 inhibitors and chemonucleolysis (7).

COX-2 inhibitors, ("superaspirins") are thought to be particularly effective for individuals with arthritis. For many years scientists have wanted to develop the ultimate drug-a drug that works as well as morphine but without its negative side effects. Nonsteroidal anti-inflammatory drugs (NSAIDs) work by blocking two enzymes, cyclooxygenase-1 and cyclooxygenase-2, both of which promote production of hormones called prostaglandins, which in turn cause inflammation, fever, and pain. Newer drugs, called COX-2 inhibitors, primarily block cyclooxygenase-2 and are less likely to have the gastrointestinal side effects sometimes produced by NSAIDs. In 1999, the Food and Drug Administration approved two COX-2 inhibitors- rofecoxib (Vioxx) and celecoxib (Celebrex). Although the long-term effects of COX-2 inhibitors are still being evaluated, they appear to be safe. In addition, patients may be able to take COX-2 inhibitors in larger doses than aspirin and other drugs that have irritating side effects, earning them the nickname "superaspirins. (7)

Chemonucleolysis is a treatment in which an enzyme, chymopapain, is injected directly into a herniated lumbar disc in an effort to dissolve material around the disc, thus reducing pressure and pain. The procedure's use is extremely limited, in part because some patients may have a life-threatening allergic reaction to chymopapain. (3)

Thus, we see that over the centuries, science has provided us with a remarkable ability to understand and control pain with medications, surgery, and other treatments. (3) Today, scientists understand a great deal about the causes and mechanisms of pain, and research has produced dramatic improvements in the diagnosis and treatment of a number of painful disorders. (9) For people who fight every day against the limitations imposed by pain, the work of scientists holds the promise of an even greater understanding of pain in the coming years. Their research offers a powerful weapon in the battle to prolong and improve the lives of people with pain: hope (1) .


References

1)National Institute of Neurological Disorders and Stroke
2)American Pain Society
3)American Academy of Pain Management
4)PainNet.Inc
5)International Association for the Study of Pain
6)MayDay Pain Project, The.
7)Pain Treatment: Janssen-Cilag Pharm.
8)American Chronic Pain Organization
9)Rest Ministries Chronic Illness
10)Worldwide Congress on Pain


The Pathways of Pain

Accomplishments of Medica
Name: Neesha Pat
Date: 2003-04-08 14:18:27
Link to this Comment: 5325



<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In 1931, the French medical missionary Dr. Albert Schweitzer wrote, "Pain is a more terrible lord of mankind than even death itself." Today, pain has become the universal disorder, a serious and costly public health issue, and a challenge for family, friends, and health care providers who must give support to the individual suffering from the physical as well as the emotional consequences of pain (1).

Early humans related pain to evil, magic, and demons. Relief of pain was the responsibility of sorcerers, shamans, priests, and priestesses, who used herbs, rites, and ceremonies as their treatments. The Greeks and Romans were the first to advance a theory of sensation, the idea that the brain and nervous system have a role in producing the perception of pain. But it was not until the Middle Ages and well into the Renaissance (the 1400s and 1500s) that evidence began to accumulate in support of these theories. Leonardo da Vinci and his contemporaries came to believe that the brain was the central organ responsible for sensation. Da Vinci also developed the idea that the spinal cord transmits sensations to the brain. In the 17th and 18th centuries, the study of the body and the senses continued to be a source of wonder for the world's philosophers. In 1664, the French philosopher René Descartes described what to this day is still called a "pain pathway" (5).

What prompted me to research the various pain pathways was my grandmother's arthritis. She has suffered for many years with severe joint pain and, in the past, has been treated with corticosteroids. Currently, she is taking Celebrex (a COX-2 inhibitor), a relatively new drug in the family of 'superaspirins'. What impressed me was how far medical research has come in the quest to conquer pain and interfere with the 'pain pathway'. "The philosophy that you have to learn to live with pain is one that I will never understand or advocate," says Dr. W. David Leak, Chairman & CEO of Pain Net, Inc. (4). The focus of this paper is on the numerous avenues explored by researchers and two methods of treatment that offer promising results.

What is pain? The International Association for the Study of Pain defines it as "an unpleasant sensory and emotional experience associated with actual or potential tissue damage or described in terms of such damage" (1). It is useful to distinguish between two basic types of pain, acute and chronic, as they differ greatly (7).

Acute pain, for the most part, results from disease, inflammation, or injury to tissues. This type of pain generally comes on suddenly, for example, after trauma or surgery, and may be accompanied by anxiety or emotional distress. The cause of acute pain can usually be diagnosed and treated, and the pain is self-limiting, that is, it is confined to a given period of time and severity. In some rare instances, it can become chronic (1).

Chronic pain is widely believed to represent disease itself. It can be made much worse by environmental and psychological factors. Chronic pain persists over a longer period of time than acute pain and is resistant to most medical treatments. It can, and often does, cause severe problems for patients. There may have been an initial mishap such as a sprained back or serious infection, or there may be an ongoing cause of pain such as arthritis, cancer, or an ear infection, but some people suffer chronic pain in the absence of any past injury or evidence of body damage (1). Many chronic pain conditions affect older adults. Common chronic pain complaints include headache, back pain, cancer pain, arthritis pain, neurogenic pain (pain resulting from damage to the peripheral nerves or to the central nervous system itself), and psychogenic pain (pain not due to past disease or injury or any visible sign of damage inside or outside the nervous system) (2).

Pain is a complicated process that involves an intricate interplay between a number of important chemicals found naturally in the brain and spinal cord. In general, these chemicals, called neurotransmitters, transmit nerve impulses from one cell to another (5).

The body's chemicals act in the transmission of pain messages by stimulating neurotransmitter receptors found on the surface of cells; each receptor has a corresponding neurotransmitter. Receptors function much like gates or ports and enable pain messages to pass through and on to neighboring cells (10). One brain chemical of special interest to neuroscientists is glutamate. During experiments, mice with blocked glutamate receptors show a reduction in their responses to pain. Other important receptors in pain transmission are opiate-like receptors. Morphine and other opioid drugs work by locking on to these opioid receptors, switching on pain-inhibiting pathways or circuits, and thereby blocking pain (8).

Another type of receptor that responds to painful stimuli is called a nociceptor. Nociceptors are thin nerve fibers in the skin, muscle, and other body tissues that, when stimulated, carry pain signals to the spinal cord and brain. Normally, nociceptors only respond to strong stimuli such as a pinch. However, when tissues become injured or inflamed, as with a sunburn or infection, they release chemicals that make nociceptors much more sensitive and cause them to transmit pain signals in response to even gentle stimuli such as a breeze or a caress. This condition is called allodynia, a state in which pain is produced by innocuous stimuli (1).
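
The threshold idea in this paragraph lends itself to a toy calculation. Below is a minimal, illustrative Python sketch (my own, not from the paper; the thresholds and stimulus intensities are invented numbers) showing how lowering a nociceptor's firing threshold makes innocuous stimuli register as painful:

```python
# Toy model of nociceptor sensitization; all numbers are invented for
# illustration and carry no physiological units.

NORMAL_THRESHOLD = 8.0      # stimulus intensity a healthy nociceptor needs to fire
SENSITIZED_THRESHOLD = 0.5  # lowered threshold after inflammatory chemicals act

def nociceptor_fires(stimulus: float, inflamed: bool) -> bool:
    """Return True if the nociceptor sends a pain signal to the spinal cord."""
    threshold = SENSITIZED_THRESHOLD if inflamed else NORMAL_THRESHOLD
    return stimulus >= threshold

for name, intensity in {"breeze": 1.0, "caress": 2.0, "pinch": 10.0}.items():
    print(f"{name:7s} healthy={nociceptor_fires(intensity, False)} "
          f"inflamed={nociceptor_fires(intensity, True)}")
# Healthy tissue: only the pinch registers. Inflamed tissue: even the breeze
# and the caress fire the nociceptor -- the allodynia described above.
```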

Scientists are working to develop potent pain-killing drugs that act on receptors for the chemical acetylcholine. For example, a type of frog native to Ecuador has been found to have a chemical in its skin called epibatidine, derived from the frog's scientific name, Epipedobates tricolor. Although highly toxic, epibatidine is a potent analgesic and, surprisingly, resembles the chemical nicotine found in cigarettes. Also under development are other less toxic compounds that act on acetylcholine receptors and may prove to be more potent than morphine but without its addictive properties (1).

Using receptors as gateways for pain drugs is a novel idea, supported by experiments involving substance P. Investigators have been able to isolate a tiny population of neurons, located in the spinal cord, that together form a major portion of the pathway responsible for carrying persistent pain signals to the brain. When animals were given injections of a lethal cocktail containing substance P linked to the chemical saporin, this group of cells, whose sole function is to communicate pain, was killed. Receptors for substance P served as a portal, or point of entry, for the compound. Within days of the injections, the targeted neurons, located in the outer layer of the spinal cord along its entire length, absorbed the compound and were neutralized. The animals' behavior was completely normal; they no longer exhibited signs of pain following injury or had an exaggerated pain response. Importantly, the animals still responded to acute, that is, normal, pain. This is a critical finding, as it is important to retain the body's ability to detect potentially injurious stimuli. The protective, early warning signal that pain provides is essential for normal functioning. If such work could be translated clinically, humans might be able to benefit from similar compounds introduced, for example, through lumbar (spinal) puncture (2).

Another promising area of research using the body's natural pain-killing abilities is the transplantation of chromaffin cells into the spinal cords of animals bred experimentally to develop arthritis. Chromaffin cells produce several of the body's pain-killing substances and are part of the adrenal medulla, which sits on top of the kidney. Within a week or so, rats receiving these transplants cease to exhibit telltale signs of pain. Scientists believe the transplants help the animals recover from pain-related cellular damage. Extensive animal studies will be required to learn if this technique might be of value to humans with severe pain (8).

One way to control pain outside of the brain, that is, peripherally, is by inhibiting hormones called prostaglandins. Prostaglandins stimulate nerves at the site of injury and cause inflammation and fever. Certain drugs, including NSAIDs (nonsteroidal anti-inflammatory drugs), act against such hormones by blocking the enzymes that produce them. Blood vessels also play a role in pain: blood vessel walls stretch or dilate during a migraine attack, and it is thought that serotonin plays a complicated role in this process. For example, before a migraine headache, serotonin levels fall. Drugs for migraine include the triptans: sumatriptan (Imitrex), naratriptan (Amerge), and zolmitriptan (Zomig). They are called serotonin agonists because they mimic the action of endogenous (natural) serotonin and bind to specific subtypes of serotonin receptors (8).

The explosion of knowledge about human genetics is helping scientists who work in the field of drug development. We know, for example, that the pain-killing properties of codeine rely heavily on a liver enzyme, CYP2D6, which helps convert codeine into morphine. A small number of people genetically lack the enzyme CYP2D6; when given codeine, these individuals do not get pain relief. CYP2D6 also helps break down certain other drugs. People who genetically lack CYP2D6 may not be able to cleanse their systems of these drugs and may be vulnerable to drug toxicity. CYP2D6 is currently under investigation for its role in pain (5).
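
As a rough illustration of why the enzyme matters, here is a sketch of my own (not the paper's); the 10% conversion fraction and the 60 mg dose are assumed, illustrative values, not clinical figures:

```python
# Toy sketch of codeine activation by CYP2D6. Codeine itself gives little
# relief; the enzyme converts a fraction of it to morphine, the active drug.

def morphine_yield(codeine_mg: float, has_cyp2d6: bool,
                   conversion_fraction: float = 0.10) -> float:
    """Morphine produced from a codeine dose; essentially none without CYP2D6."""
    return codeine_mg * conversion_fraction if has_cyp2d6 else 0.0

dose = 60.0  # hypothetical codeine dose in mg
print("typical metabolizer:", morphine_yield(dose, has_cyp2d6=True), "mg morphine")
print("CYP2D6-deficient:   ", morphine_yield(dose, has_cyp2d6=False), "mg morphine")
# The enzyme-deficient patient produces no morphine, which is why codeine
# gives these individuals no pain relief.
```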

The link between the nervous and immune systems is an important one. Cytokines, a type of protein found in the nervous system, are also part of the immune system, the body's shield for fighting off disease. Cytokines can trigger pain by promoting inflammation, even in the absence of injury or damage. Certain types of cytokines have been linked to nervous system injury. After trauma, cytokine levels rise in the brain and spinal cord and at the site in the peripheral nervous system where the injury occurred. Improvements in our understanding of the precise role of cytokines in producing pain, especially pain resulting from injury, may lead to new classes of drugs that can block the action of these substances (1).

Medications, acupuncture, local electrical stimulation, and brain stimulation, as well as surgery, are some treatments for chronic pain. Some physicians use placebos, which in some cases have resulted in a lessening or elimination of pain. Psychotherapy, relaxation and meditation therapies, biofeedback, and behavior modification may also be employed to treat chronic pain (3). Some of the latest developments include COX-2 inhibitors and chemonucleolysis (7).

COX-2 inhibitors ("superaspirins") are thought to be particularly effective for individuals with arthritis. For many years scientists have wanted to develop the ultimate drug: a drug that works as well as morphine but without its negative side effects. Nonsteroidal anti-inflammatory drugs (NSAIDs) work by blocking two enzymes, cyclooxygenase-1 and cyclooxygenase-2, both of which promote production of hormones called prostaglandins, which in turn cause inflammation, fever, and pain. Newer drugs, called COX-2 inhibitors, primarily block cyclooxygenase-2 and are less likely to have the gastrointestinal side effects sometimes produced by NSAIDs. In 1999, the Food and Drug Administration approved two COX-2 inhibitors, rofecoxib (Vioxx) and celecoxib (Celebrex). Although the long-term effects of COX-2 inhibitors are still being evaluated, they appear to be safe. In addition, patients may be able to take COX-2 inhibitors in larger doses than aspirin and other drugs that have irritating side effects, earning them the nickname "superaspirins" (7).

Chemonucleolysis is a treatment in which an enzyme, chymopapain, is injected directly into a herniated lumbar disc in an effort to dissolve material around the disc, thus reducing pressure and pain. The procedure's use is extremely limited, in part because some patients may have a life-threatening allergic reaction to chymopapain. (3)

Thus, we see that over the centuries, science has provided us with a remarkable ability to understand and control pain with medications, surgery, and other treatments (3). Today, scientists understand a great deal about the causes and mechanisms of pain, and research has produced dramatic improvements in the diagnosis and treatment of a number of painful disorders (9). For people who fight every day against the limitations imposed by pain, the work of scientists holds the promise of an even greater understanding of pain in the coming years. Their research offers a powerful weapon in the battle to prolong and improve the lives of people with pain: hope (1).


References

1) National Institute of Neurological Disorders and Stroke

2) American Pain Society

3) American Academy of Pain Management

4) PainNet, Inc.

5) International Association for the Study of Pain

6) The MayDay Pain Project

7) Pain Treatment: Janssen-Cilag Pharm.

8) American Chronic Pain Organization

9) Rest Ministries Chronic Illness

10) Worldwide Congress on Pain


A Hidden Compulsion: Obsessive-Compulsive Disorder
Name: Amelia Tur
Date: 2003-04-12 21:47:44
Link to this Comment: 5357


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Obsessive-compulsive disorder, commonly known as OCD, is a type of anxiety disorder and was one of the three original neuroses as defined by Freud. It is characterized by "recurrent, persistent, unwanted, and unpleasant thoughts (obsessions) or repetitive, purposeful ritualistic behaviors that the person feels driven to perform (compulsions)." (1) The prime feature that differentiates OCD from other obsessive or compulsive disorders is that the sufferer understands the irrationality or excess of the obsessions and compulsions, but is unable to stop them. What differentiates people with OCD from otherwise healthy people with milder forms of obsession and compulsion is that the obsessions and compulsions interfere with the person's life to the point where they are extremely distressed, take up a large proportion of their time, and disrupt their routine, functioning on the job, normal social activities, and relationships with others. (1) (3)

Some of the typical compulsions that someone with OCD may exhibit include an uncontrollable urge to wash (especially the hands) or clean, to check doors repeatedly to make sure that they are locked, to confirm multiple times that appliances are switched off, "to touch, to repeat, to count, to arrange, or to save." (1) Obsessions that one with OCD may display include fixation on dirt and contamination, the fear that one may act upon destructive or violent urges, an overdeveloped sense of responsibility for the welfare of others, objectionable religious (blasphemous) or sexual thoughts, other socially unacceptable behavior, and an overbearing concern with the arrangement and symmetry of items. Obsessions may be present along with compulsions, or compulsions may be present singly; oftentimes the compulsions are designed to reduce the anxiety associated with the obsession. (1) (2) (5)

The most common presentation of OCD is washing. The person is compelled to wash their hands many times a day and has constant thoughts of dirt, germs, and contamination. The person may spend up to several hours each day washing their hands or showering, and generally attempts to avoid items that they perceive as sources of contamination. (1)

A second subtype of the disorder is extreme doubt joined with compulsive checking; while some sufferers are preoccupied with symmetry, most people exhibiting this subtype are concerned with the safety of others. While checking something, such as the status of an appliance, would alleviate the doubt of a typical person, when a person with OCD checks something, their doubt is often heightened, leading to even more checking. (1)

Those with OCD are usually quite aware of their irrational or extreme fears and behaviors, but are unable to control them. Because such behavior seen in OCD sufferers is often considered "crazy," otherwise normal people who suffer from OCD are often driven to hide their symptoms. Many are able to do so with remarkable success, as they are normal in all other aspects of their lives. This tendency to hide the behavior may explain why earlier estimates put the incidence rate at 0.05%, while recent epidemiological studies show a rate of around 2%. The Zoloft informational site on OCD states that as many as one in fifty Americans, up to five million people, have OCD at some point in their lives. Around a third of the cases of OCD in the United States begin in adolescence, a second third begin in young adulthood, and the final third begin later in life. While more boys show evidence of OCD early in life than girls, men and women exhibit OCD in equal numbers in adulthood. (1) (5)
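
As a quick check on these figures, here is a small side calculation of my own (not from the paper); the U.S. population of roughly 280 million around 2003 is my assumption:

```python
# Sanity-checking the quoted prevalence figures.

population = 280_000_000  # assumed U.S. population circa 2003
rate = 1 / 50             # "one in fifty Americans"
print(f"incidence rate: {rate:.1%}")          # 2.0% -- matches the ~2% estimate
print(f"affected: {population * rate:,.0f}")  # 5,600,000 -- on the order of the
                                              # "up to five million people" quoted
```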

OCD is largely resistant to typical psychotherapy, which tries to get at the source of the conflict by going back to early childhood. However, OCD is more receptive to behavioral therapy, wherein the person with OCD is presented with the feared or triggering situation but is prevented from carrying out the accompanying compulsion. OCD is generally resistant to drugs used in the treatment of anxiety, depression, and psychosis. However, the symptoms generally ease with medications that influence the brain's serotonergic system. (1)

Serotonin is a neurotransmitter that must be cleared from the synapse by reuptake before a signal can be sent again. Clomipramine, fluvoxamine, and fluoxetine are psychoactive drugs that block the reuptake of serotonin, and all are effective in the treatment of OCD; this leads doctors to believe that the cause of OCD is closely related to the handling of serotonin in the brain. It is thought that OCD may be closely related to tic disorders, such as Tourette's syndrome; it has been suggested that OCD and Tourette's share a common genetic basis. The examination of the brains of those with OCD with PET and MRI scans has shown that these people have abnormalities in the cortex and basal ganglia coupled with decreased caudate volume. More simply put, it is thought that in both OCD and tic disorders, there is a defect in a circuit that runs from the frontal lobe to the basal ganglia to the thalamus and back to the frontal lobe. Defects in this circuit may have a genetic basis or may arise from immunological factors, as suggested by the appearance of OCD and tic disorders after a Streptococcus infection. (1) (2)
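
The reuptake mechanism can be illustrated with a toy calculation (a sketch of my own, not from the paper; the release and clearance rates are arbitrary, illustrative numbers):

```python
# Toy model of synaptic serotonin with and without reuptake blockade.
# Each step the neuron releases serotonin into the synapse, and transporters
# then clear a fixed fraction of whatever is there.

def steady_level(release: float, reuptake_rate: float, steps: int = 200) -> float:
    level = 0.0
    for _ in range(steps):
        level = (level + release) * (1.0 - reuptake_rate)  # release, then clearance
    return level

print("normal reuptake   :", round(steady_level(1.0, 0.50), 2))  # -> 1.0
print("reuptake inhibited:", round(steady_level(1.0, 0.10), 2))  # -> 9.0
# Blocking reuptake leaves far more serotonin in the synapse -- the shared
# mechanism of clomipramine, fluvoxamine, and fluoxetine.
```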

Scientists have found that mice will exhibit OCD-like symptoms when their Hoxb8 gene is manipulated. In a study done at the University of Utah in Salt Lake City, one of the two Hoxb8 genes was manipulated in utero in mice from an inbred line, so that the altered mice would be identical to others from the line except for the manipulated gene. The mice with the manipulated gene were shown to groom themselves compulsively, often leading to bald patches of skin and open sores. Furthermore, the mice would also compulsively groom other mice in their cages just as forcefully. The scientists feel that this could lead to some interesting findings about OCD, as well as about trichotillomania, a rare disorder in which people pull out their hair, as the Hox genes are largely similar in all vertebrates and could have a large influence on behavior. The Hoxb8 gene has also been shown to be active in brain areas that control animal grooming and that have been linked to human OCD symptoms. This research helps to show that OCD most likely has a biological basis as well as a psychological one, rather than only a psychological one. (4)

While both the Access Science articles and the information from the Zoloft website stressed that the causes of OCD most likely have much to do with problems in the nervous system, there were some differences in how the information was presented. In the Access Science articles, there seemed to be a sense of distancing from those with OCD; the main article on OCD made sure to differentiate between people with "normal" obsessions and compulsions and those people who have OCD. There was a clinical air to the articles, as if they were aimed at people who were researching OCD for a project or just for general information, rather than at people who might have, or know someone who has, OCD. The Zoloft site, on the other hand, presented its information in a way that conveyed the sense that anyone could have OCD, and that those with OCD are not unusual for having it. The information was geared to those with OCD, their family members, and people who could possibly have it. (1) (2) (3) (5)


References

1) Obsessive-Compulsive Disorder, An article on obsessive-compulsive disorder by Joseph Zohar on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

2) Anxiety Disorders, A section of an article on anxiety disorders by Daniel S. Pine on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

3) Neurotic Disorders, An article on neurotic disorders by Marshal Mandelkern on McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

4) Ancient Gene Takes Grooming in Hand, An article by Bruce Bower found through McGraw-Hill's Access Science site, an online encyclopedia of science and technology.

5) Understanding Obsessive Compulsive Disorder (OCD), An informational site about OCD, from the makers of Zoloft, which is used in the treatment of OCD and other anxiety disorders.


Love in the Brain
Name: Clare Smig
Date: 2003-04-14 15:45:42
Link to this Comment: 5367


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Does brain equal behavior? Some people have argued that they have difficulty saying it does because they find it hard to believe that our individual, tangible brain controls emotions that many consider to be intangible, such as being in love. This paper will discuss the role that the brain actually plays in love: why we are attracted to certain people, why we feel the way we do when we are around them, and whether or not this is enough to say that, in the case of love, brain does equal behavior.

The first stage of romantic love begins with attraction. Whether you have been best friends for a long time or you just met the person, you begin your romantic relationship when there is that feeling of attraction. But why are we attracted to some people and not to others? Some research and experimentation suggests that pheromones play a role in attraction ((1), (2), (3), (4)). Although the existence of pheromones in humans and the method by which individuals detect them is still under debate and requires further research, a study by Stern and McClintock on pheromones in women's underarm secretion gives the most solid evidence for the existence of human pheromones ((5)). It has been hypothesized that the brain detects these pheromones through an organ known as the vomeronasal organ (VNO), by receptors, or by the terminal nerve in the nostrils ((5)). Despite the fact that pheromones and how they are detected in humans is controversial, it has been suggested that selectivity for certain pheromones might explain why we are only attracted to certain people ((6)).

Researchers agree, however, that even if pheromones exist, they are not the only reason we are attracted to an individual. Other factors such as social and environmental influences, genetic information, and past experience contribute to who we are and who we find attractive physically and emotionally ((5), (7)). In addition, an experiment by McClintock showed that women were attracted to the smell of a man who was genetically similar, but not too similar, to their fathers ((1)). Therefore, our genetic information might play a role in whether or not someone is desirable, in order to avoid inbreeding or, on the other end of the spectrum, to avoid the loss of desirable gene combinations. Inevitably, however, it is our brain that processes another individual's appearance, lifestyle, how they relate to past individuals we have met, and, possibly, their pheromones. Then, based on this information, we decide, within our brain, whether or not this person is worth getting to know.

It is uncontroversial, however, that when someone experiences an attraction to someone else, their brain almost immediately triggers the release of certain chemicals. These adrenaline-like chemicals include phenylethylamine (PEA), which speeds up the flow of information between nerve cells, as well as dopamine and norepinephrine, both of which are similar to amphetamines. Dopamine makes you feel good and norepinephrine stimulates the production of adrenaline. Together, these chemicals explain why, when we are around someone we are attracted to, we feel a "rush" and our heart beats faster ((8)). However, if you have ever been in love, you know that these feelings somewhat subside as you become more comfortable with someone and move from that attraction and "lust" stage to love.

But what is the role of the brain in the stage of love? One chemical, oxytocin, plays an important role in romantic love as a sexual arousal hormone and makes women and men calmer and more sensitive to the feelings of others. Physical and emotional cues, processed through the brain, trigger the release of oxytocin. For example, a partner's voice, look or even a sexual thought can trigger its release. Attachment to someone has been linked to chemicals released from the brain known as endorphins that produce feelings of tranquility, reduced anxiety, and comfort. These chemicals are not as exciting as those released during the attraction stage, but they are more addictive and are part of what makes us want to keep being around that person we are in love with. In fact, the absence of these chemicals when we lose a loved one plays a part in why we feel so sad ((8), (9)). But is that it? Are chemical releases triggered by the brain when we think of or are in the presence of our partner all there really is behind those "I love you's"?

Other research has shown that there are certain areas of the brain linked with being in love with someone. It is possible that our feelings for our partner are somehow stored in our brain. Researchers have found that when individuals are shown pictures of their loved ones, areas of the brain with a high concentration of receptors for dopamine are activated. Moreover, MRI images of the brains of these individuals showed that the brain pattern for romantic love overlapped the patterns for sexual arousal, feelings of happiness, and cocaine-induced euphoria. This overlapping and, at the same time, unique pattern indicates the complexity of the emotions that comprise romantic love ((6), (10)). These results did not occur when the individuals were shown pictures of non-romantic loved ones. Another similar experiment showed activity in the medial insula and the anterior cingulate of the brain. The former is a part of the brain associated with "gut feelings" and the latter is associated with feelings of euphoria ((11)).

Other areas of the brain that have been associated with love include the septal area, which has been associated with pleasure, and the frontal lobe, the most highly evolved part of our brain, which has been associated with higher mental functions such as trust, respect, and the desire for companionship ((12)). Finally, the amygdala, which has direct and extensive connections with all the sensory systems of the brain and with the hypothalamus, is considered to be the emotional center of the brain. Therefore, it most likely also plays a role in the emotions surrounding love ((13), (14)). Consequently, it is highly likely that as we become more attached to someone through experience and time together, our love for them is processed and stored in our brain.

So if everyone in love is experiencing the same chemicals and activating the same areas of the brain, what makes love such a special experience? What makes your feelings any different from anyone else's? It is the person that you have fallen in love with. It is them and only them who can do or say the right things and touch you the right way so that those chemicals are released and those areas of the brain are activated. Finally, is love just a function of our brains? As shown by the imaging experiments described above, the pattern that formed within the brain when individuals saw their loved ones was complex, as are our brains. Although it is your partner's brain that enables them to act or say the things that trigger your brain to respond with those chemicals of attraction and attachment, everyone's brain is individual, making up an individual "you" and the unique and special experience that we call love. Although this does not rule out other areas that many believe play a role in love, such as the soul, it shows that the brain plays a vital, if not the ultimate, role in all aspects of love, and that this role is extremely complex and unique.


References


1) Gupta, Sanjay. Chemistry of Love: Do pheromones and smelly t-shirts really have the power to trigger sexual attraction? Here's a primer. Time. Feb 2002: 78.

2) Herman, Steve. Main Attraction: The search for human pheromones continues. Global Cosmetic Industry. Global Cosmetic Industry. Dec 2000: 54.

3) Cutler, Winnifred B. et al. Pheromonal influences on sociosexual behavior in men. Archives of Sexual Behavior. Feb 1998: 13.

4) Smell and Attraction

5) Ben-Ari, Elia. Pheromones: what's in a name? Bioscience. July 1998: 505-511.

6) Love Chemistry: New studies analyze love's effects

7) Mating and Temperament

8) What is chemistry and chemicals in love relationships

9) Chemicals

10) Love in the Brain

11) BBC News- Health- How the brain registers love

12) My search for love and wisdom in the brain by Marian Diamond

13) Bower, Bruce. Brain faces up to fear, social signs. Science News. Dec 1994: 406.

14) Biology of Love


Touched With Fire
Name: Elizabeth
Date: 2003-04-14 16:24:17
Link to this Comment: 5368


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In Touched with Fire: Manic Depressive Illness and the Artistic Temperament, Kay Redfield Jamison explores the compelling connection between mental disorders and artistic creativity. Artists have long been considered different from the general population, and one often hears tales of authors, painters, and composers who both struggle with and are inspired by their "madness". Jamison's text explores these stereotypes in a medical context, attributing some artists' irrational behaviors to mental disorders, particularly manic-depressive illness. In order to establish this link, Jamison presents an impressive collection of artists who have suffered from mental illness, whether diagnosed correctly during their lifetime or discovered in hindsight. Well organized and interesting, the book provides an ideal introduction to this still evolving idea, offering the reader as many thought-provoking questions as answers and leaving the door open for further study.

Jamison begins with a brief explanation of manic-depressive illness and its effects on human behavior. The term "manic-depressive illness" refers to a variety of mental disorders which share similar symptoms but range greatly in severity. These disorders alter one's mood and behaviors, disrupt established sleep and sexual patterns, and cause fluctuations in energy level. Manic-depressive illness causes cycles of manic, energized highs followed by debilitating, lethargic lows. Such disorders usually develop early in life and intensify over time. The manic energy associated with these disorders may cause a person to require less sleep while raising energy levels and increasing one's rate of thinking. These symptoms can stimulate creativity and lead to an elevated level of productivity. Conversely, during the attendant lows, the afflicted person experiences lethargy and hopelessness. Artists, in particular, often experience a creative block during their depressive periods, resulting in intense frustration with their decreased productivity. In turn, this frustration may drive an artist to substance abuse, or even suicide. Depression is not the only cause of detrimental and possibly dangerous changes in behavior. Mania does not simply produce more creative energy: during a manic period, one tends to lose one's grasp on reality, which can prompt irrational impatience, excessive spending, and impulsive sexual relations. Both manic and depressive periods alter behavior significantly and pose a threat to the patient's life.

After outlining the effects of manic-depressive illness on human behavior, Jamison presents profiles of some of the numerous artists who have suffered from some sort of mental disturbance. Among the most notable names are Picasso, van Gogh, Hemingway, Fitzgerald, and Poe. Additionally, Jamison discusses a number of lesser known artists who were affected by manic-depressive illness. Indeed, one cannot help but wonder if these men and women were prevented from reaching greater career heights by their mental disorders. Many studies concerning the link between creativity and mental illness have been conducted in recent years, with compelling results. For instance, a study by Dr. Colin Martindale of eminent English and French poets found that over half of these artists had suffered nervous breakdowns, been institutionalized, or suffered from hallucinations and delusions. Likewise, a study by Dr. Arnold Ludwig found the highest rates of "mania psychosis and psychiatric hospitalization [were] in poets" (1). Poets, of course, are not the only artists to suffer from manic-depressive illness. Indeed, Jamison lists a staggering number of painters, sculptors, and composers who also lived and created with severe mood disorders. Additionally, Jamison presents a detailed account of the madness of George Gordon, Lord Byron. Included are shorter, but equally compelling, case studies of artists describing their mental illness and tracing the progression of their disease through family bloodlines.

However compelling the link between creativity and mental illness is, there are some flaws to consider. Most obviously, not all creative and productive artists have a mental disorder. Some writers and painters lead perfectly normal, healthy lives, finding their inspiration and creativity in other sources. One must not imply that creativity stems from madness, although many of the artists discussed by Jamison attribute their particular genius to the manic drive associated with mental illness. One could also argue that madness hinders creativity rather than encouraging it. Many artists lose months of time to the stifling depression that follows manic periods, spending these potentially productive months unable to create. This depression, severe enough on its own, can increase greatly when an artist is frustrated with his or her inability to produce work at the manic level they once enjoyed. Indeed, although manic-depressive illness is more common among the artistic community, the majority of artists do not suffer from any sort of mental illness. This fact is easy to overlook when one considers the caliber of artists who have suffered from mental illness. Many prominent artists have been very upfront about their illnesses, and some even attribute their abilities to "madness". This gives the general public the impression that all truly great artists suffer from some form of mental illness, which, of course, is a false assumption.

Jamison ends her book with a short discussion of how the medical community can deal with manic-depressive illness through medication. This is a complicated issue, as many artists feel heavy medication will deprive them of their inspiration and interfere with their creativity. It is a thorny, and relatively new, question, and Jamison merely outlines the controversy without offering an opinion on what should be done, leaving the door open for further research. Mental illness in artists is a fascinating subject, and Jamison does an excellent job of providing a thorough portrait of many artists who have grappled with manic-depressive disorder, in addition to exploring how these disorders affect creativity and productivity. Jamison also maintains an awareness of the objections to her attempts to draw a correlation between mental illness and the artistic community, and addresses these issues accordingly.


References

1) Jamison, Kay Redfield. Touched with Fire: Manic-Depressive Illness and the Artistic Temperament. Ontario: Free Press, 1993.


The Brains of Violent Males: The homicidal & suici
Name: Nicole Jac
Date: 2003-04-14 22:06:07
Link to this Comment: 5374


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"It becomes increasingly evident that some of the destruction which curses the earth is self-destruction; the extraordinary propensity of the human being to join hands with external forces in an attack upon his own existence is one of the most remarkable of biological phenomena."
-Karl Menninger (1).

Violence is everywhere in our society: in movies, television programs, video games, and professional sports such as boxing and wrestling. In 2000, 28,663 deaths were related to firearms; 58% were reported as suicides and 39% as homicides (2). The objective of this paper is to qualitatively evaluate and compare the brains of male murderers and male suicide victims. Even though more females attempt suicide, males are used for comparison because males are four times more likely to die from a suicide attempt (3). Male suicidal individuals have a higher success rate because they are more likely to kill themselves in a violent manner (i.e., using a gun).
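
For concreteness, the percentages can be converted into approximate counts (a side calculation of my own, not from the paper; reading the remainder as accidents and undetermined deaths is my assumption):

```python
# Converting the quoted percentages into approximate counts.

firearm_deaths_2000 = 28_663
suicides = round(firearm_deaths_2000 * 0.58)   # ~16,625
homicides = round(firearm_deaths_2000 * 0.39)  # ~11,179
remainder = firearm_deaths_2000 - suicides - homicides
print(suicides, homicides, remainder)  # the ~3% remainder presumably covers
                                       # accidents and undetermined deaths
```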

At first glance, most people would argue that homicide and suicide are opposite behaviors, yet the relationship may not be that straightforward. If it is assumed that the brain dictates behavior and that suicide and homicide are independent behaviors, one would expect that researchers would find differences between the brains of suicide victims and murderers. At the other extreme, suicide and homicide can be considered similar behaviors because in both cases an individual engages in killing someone; the only thing that differs is where the killing impulse is directed. Homicide is directed towards the external world, whereas suicide is aggression turned inward. When the cause of unhappiness can be attributed to an external source, the extreme response is rage and homicide. In the absence of an external source, the extreme response is likely to be depression and suicide (4). The definition of suicide and homicide as extreme responses to unhappiness is not new. It implies that these behaviors are similar, and therefore one would expect similarities in the brain, and the possibility that one individual can be suicidal and homicidal simultaneously. Freud first introduced the notion that suicide and homicide were not opposite behaviors. Freud believed that every aspect of human behavior carries within it its very opposite. In this sense, the desire to kill others is also the desire to kill oneself.

Current research supports the notion that homicide and suicide are not unique behaviors characterized by distinctive brain anatomy and chemistry. It should be noted that homicide is the only crime that regularly results in the offender taking his own life after committing the crime (5). In the data collected on suicide victims and murderers, there are comparable deficiencies in the prefrontal cortex and the serotonergic system. Mechanistically, the dysfunction of the prefrontal cortex in suicidal individuals and murderers is believed to be due to reduced levels of circulating serotonin.

Serotonin is a neurotransmitter involved in the regulation and inhibition of impulsive behaviors. The effects of serotonin are predominantly inhibitory, and considerable research indicates that serotonin is essential for self-control. Furthermore, serotonin regulates mood, arousal, aggression, impulse control, and sexual activity. Low levels of serotonin coupled with testosterone can often lead to violence and aggression. In a study of violent prisoners, it was found that the offenders had significantly lower levels of serotonin in their brains (6). Research has shown that 95 percent of the brains of suicidal individuals show an altered serotonergic system, which is characterized by reduced serotonin activity. Decreases in pre-synaptic serotonin nerve terminal binding sites have been observed in the suicidal brain and there are more postsynaptic serotonin receptors in the prefrontal cortex of suicide victims, which suggests that the body is trying to compensate for low levels of serotonin (7).

The serotonergic neurons of the raphe nucleus have long projections that terminate in the prefrontal cortex. It is logical that a reduced level of serotonin would have effects on a neuron's target site. Research has confirmed that the prefrontal cortex of murderers and suicide victims is different from the prefrontal cortex of a normal individual. The prefrontal cortex is often referred to as the "executive" region, since it is where humans think, imagine, and make informed decisions (8). Damage to the prefrontal cortex manifests itself in the form of impulsivity, loss of self-control, immaturity, and altered emotionality. Although correlation does not indicate causation, researchers have occasionally used prefrontal cortex injury as an indicator of the likelihood of engaging in aggressive acts. In normal individuals, the frontal lobe is very active, whereas it can be quite inactive in the brains of murderers (9, 12). Similarly, suicide victims often have fewer neurons in the prefrontal cortex than normal subjects (10). These findings indicate that the prefrontal cortex may be involved in the regulation of a restraint mechanism that is sub-optimal in suicidal and homicidal individuals.

One noteworthy example of the involvement of the serotonergic system in homicide and suicide is the case of Donald Schell. Schell had been taking Paxil, an antidepressant in a class of prescription drugs called selective serotonin reuptake inhibitors (SSRIs), for only two days when he shot and killed his wife, his daughter, and his granddaughter, and then took his own life. There appeared to be no motivation for the murder-suicide (11). Numerous examples of such murder-suicides are reported in the media and in some psychological literature; however, there is little neurobiological research that provides firm evidence linking antidepressant use with murder and suicide. The manner in which suicidal or homicidal ideation develops when an individual is taking antidepressants remains unclear.

The problem with comparing the homicidal and the suicidal brain is that we will never know the cause of these brains' deviations from "normal". Did the killing cause a change in the chemical make-up of the brain, or is the abnormal brain responsible for producing the killing behavior? The brain of a murderer can be examined at any time; however, researchers rely on postmortem studies to reveal clues about the chemistry of the suicidal brain. The trauma of suicide may influence the chemical activity of the brain in the moments after death. One might expect differences between the brains of murderers and individuals who engage in suicidal ideation, but no differences between the brains of murderers and suicide victims, since in the latter case the individual has killed. Yet a person who attempted suicide and failed by chance should have a brain similar to that of an individual who succeeded in their suicide attempt, because the intent was the same.

Current research shows that there are several shared features between the suicidal and homicidal brain; however, future research may challenge such claims. Perhaps researchers are examining the wrong part of the brain or attending to the wrong neurotransmitter system. Additionally, methodological flaws in experimental design, such as a faulty selection of comparison groups, may lead to poor data. Problems could also be related to a lack of comparability across individuals due to age, educational level, and past history. Some studies used very small sample sizes, which may undermine the reliability of their results. A more carefully controlled, systematic study is necessary to fully understand the similarities and differences between the suicidal and homicidal brain.

References


1) Menninger, Karl. (1938) Man Against Himself. New York: Harcourt Brace.
2) A fact sheet on suicide and homicide.
3) Suicide Prevention Fact Sheet, issued by the CDC (National Center for Injury Prevention and Control).
4) Suicide Facts.
5) Suicide Information & Education Centre.
6) Relative Murder, a web page by the Discovery Channel.
7) An article written by J. John Mann, a researcher investigating the neurobiology of suicide.
8) Decision-making processes following damage to the prefrontal cortex, article from Brain (2002).
9) "What's different about a Killer's Brain?", Whitley Strieber's web page.
10) Why? The neuroscience of suicide: Physical clues to suicide, Scientific American article.
11) Paxil & murder/suicide, a story about how the maker of Paxil was held liable in a murder/suicide case.
12) The mind of a killer, ABC News web page which contains pictures of a normal brain and a murderer's brain.

Other interesting sites
A legal history of the "serotonin defense".
Into the Mind of a Killer, a page with good information and several pictures.
Violent Brains, includes some pictures.


The Insanity Defense- Should it be judged legally
Name: Priya Ragh
Date: 2003-04-14 23:17:13
Link to this Comment: 5378


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Former U.S. president Ronald Reagan was shot by a man named John Hinckley in 1981. The president, along with members of his entourage, survived the shooting despite serious internal and external injuries. The Hinckley case is a classic example of a 'not guilty by reason of insanity' (NGRI) case. The criminal justice system under which all men and women are tried holds a concept called mens rea, a Latin phrase that means "guilty mind". According to this concept, Hinckley committed his crime oblivious of the wrongfulness of his action. A mentally challenged person, including one with mental retardation, who cannot distinguish between right and wrong is protected and exempted by the court of law from being unfairly punished for his/her crime. (1)
What is "insanity" and why is this subject of much controversy? Although I do not have a clear definition of insanity, most socially recognized authorities such as psychiatrists, medical doctors, and lawyers agree that it is a brain disease. However, in assuming it is a brain disease, should we link insanity with other brain diseases like strokes and Parkinsonism? Unlike the latter two, whose causes can be medically accounted for through a behavioral deficit such as paralysis, and weakness, how can one explain the behavior of crimes done by people like Hinckley? (2)
Much of my skepticism over the insanity defense concerns how this question has shifted from being a medical matter to coming under legal governance. The word "insane" is now a legal term. A neurological illness described by doctors and psychiatrists to a jury may explain a person's reasoning and behavior; it seldom excuses it, however. The most widely known rule in the insanity defense is the M'Naghten rule, which arose in 1843 during the trial of Daniel M'Naghten, who pleaded that he was not responsible for his murder because he suffered from delusions at the time of the crime.
The rule states, "A defendant may be excused from criminal responsibility if at the time of the commission of the act, the party accused was laboring under such a defect of reason, from a disease of mind, as not to know the nature and the quality of the act he was doing..." (3)

The problem with this defense is that insanity is examined either from a legal angle or from a psychoanalytical one, which involves talking to people and having them take tests. There is, however, no scientific proof confirming a causal relationship between mental illness and criminal behavior grounded in the deeper neurological workings of the brain. The psychiatrist finds himself/herself in a double bind: with no clear medical definition of mental illness, he/she must answer questions of legal insanity - questions about human rationality and free will - rather than base the answers on more concrete scientific facts. Let me use a case study to elaborate my argument that law in this country continues to regard insanity as a moral and legal matter rather than one grounded in scientific analysis.
The insanity defense of Andrea Yates: the country was appalled when it heard that Yates, a mother of five, had killed each of her children in a horrific family slaughter. Feelings about the case were extremely polarized: sympathy (concluding that there must clearly have been some uncontrollable behavior at work) versus retribution, punishment versus treatment. What causes a human being to behave and act in such a manner? The criminal justice system and modern science approach the question from interestingly different perspectives. The M'Naghten rule allows little room for negotiating the details of the crime: essentially, the defendant is declared either sane or insane, and a defect of reason from a brain disease makes him or her either not responsible or responsible. The dichotomy left no shades of gray until the 20th century, when advances in neuroscience indicated that certain mental diseases were caused in part by factors outside the control of the afflicted individual, and the discovery that medication could successfully alter one's behavior marked a scientific breakthrough. (4)
The legal system, feeling quite insufficient on its part, developed newer standards such as the "irresistible impulse" test, which covered diseases like kleptomania and pyromania, and the Durham test of 1954, which stated that a defendant is insane only if his/her criminal act was the product of a mental disease or defect. The legal system readily removed the latter rule from practice after noticing that mental health experts had too large a role in determining what caused the defendant to commit the crime, yet seldom answered what actually produced the action. Lawyers felt that, left to the doctors and psychiatrists, the very notions of criminal charges and responsibility would be softened, because all such behavior would then be categorized as mental illness.
It was the John Hinckley case that re-popularized the tough M'Naghten rule, creating an ambiguous relationship between law, psychiatry, and neuroscience. Insanity, at the end of the day, is a legal determination. The fact that the legal system has had the authority to create and terminate the various defenses shows that it is a very malleable system of practice.
"Contradictions inevitably emerge where the laws of man are confused with the laws of nature: The former can be broken; the latter cannot. These contradictions are what make the "insanity defense" in a trial like that of John Hinckley Jr. appear to be an affront to justice and common sense." - Donald E. Watson (5)

Nonetheless, in today's insanity cases, mental health experts, doctors, and scientists have important roles to play. They can inform the jury of the nature of the defendant's mental illness, the likelihood that the crime might be repeated, and whether the defendant may bring harm upon himself/herself. However, as in any court case, there will always be divided opinions among the mental health experts regarding the outcome of the case, depending on whether they testify for or against the defendant.
The key to rebalancing "insanity" from a legal standpoint toward a more medical one is a breakthrough in neuroscience in which well-organized causal connections between mental illness and criminal acts are established and made mandatory in court. There has been progress, with medical experts introducing new concepts such as Battered Woman Syndrome, which recognizes that abusive conduct against women can result in acts of self-defense.
One might also wonder whether criminals use the insanity defense simply to escape punishment.
Perhaps this skepticism about relying on legal methods to determine an "insane" person's fate overlooks the fact that there are extremely few insanity cases, and even fewer successful ones, in reality. For instance, in the 1970s, a study in Wyoming revealed that the insanity defense was used in only 0.47% of all criminal cases (Melton, 1997, p. 187). (6)
Moreover, the same study observed that only one person among those who used the defense was acquitted. In the legal realm, insanity and sanity stand in a reversible state: when the defendant no longer tests positive on legal tests, an insane person miraculously becomes sane. Unfortunately, the law does not account for or recognize the physical, emotional, or psychological states that may or may not be reversible.


References

1) All About the Insanity Defense, Mark Godo

2) Does Insanity "Cause" Crime?: Thomas Szasz, M.D., The Myth of Mental Illness (1960)

3) M'Naghten Rule

4) The Yates Case: Commentary for United Press International; Susan Crump is a former prosecutor for Houston

5) Donald E. Watson, MD, taught and did research in neuropsychology and teaches at UC Irvine Medical School

6) Statistics


Vasovagal Syncope
Name: Laurel Jac
Date: 2003-04-14 23:33:27
Link to this Comment: 5379

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

My best friend "Dirk" can easily be picked out of a crowd. His 6'7" stature, impressive muscle mass, very blond hair, big blue eyes, and booming voice cause many people to stare at him - once, in Europe, a Japanese couple asked if they could take a picture of him. Addicted to weight lifting and athletics, my friend does not always enjoy admitting that he is a computer engineer - yes, my 22-year-old buddy is still afraid of the geek label. There is something else to which Dirk will not readily admit: he faints at the sight of blood. In fact, many things can trigger his fainting spells: blood, vomit, overheating, etc.

Dirk lives next door to my parents; we grew up together. Recently, he and my sister ran over from his house to ours, a distance of about 50 feet. My sister had not worn shoes; when they got to our house, they walked through two rooms before Dirk got dizzy. My sister had cut her foot, and the blood that had spread over the tile floor made Dirk turn his head away and sit down. My mother ran to the rescue - Dirk's, not my sister's. She helped him breathe deeply, and luckily he avoided fainting.

A few Christmases ago, Dirk caught a stomach virus. He made it to the bathroom just in time, but seconds after vomiting, he fell to the floor and blocked the door. His parents frantically tried to open the door and screamed to revive him for probably five minutes, which seemed like an eternity to them at the time. Eventually they revived him.

The summer before that Christmas, Dirk was golfing with his high school's golf team on a hot July afternoon. At the end of the course, he and his coach walked to the parking lot. All of a sudden, Dirk toppled like a tree onto the pavement, suffering a concussion on top of fainting. Dirk's condition is called vasovagal syncope. Stubborn as he is, he often gets angry with his mother, a nurse, for fussing over him. But his mother knows from 22 years of experience that whether he faces a particularly hot and humid day or a vaccination, Dirk will pass out unless he takes the proper precautions: resting, breathing deeply, and staying hydrated.

Vasovagal syncope, also known as fainting, neurocardiogenic syncope, and neurally mediated syncope, is a very common condition, occurring in roughly half of all people at least once in their lives; three percent of the population develops it repeatedly. It is not a serious condition.(2) A vasovagal response involves a decrease in the volume of blood returned to the heart, which stimulates the baroreceptors(2) of the sympathetic nervous system to increase the force of each contraction of the heart. Consequently, the opposing parasympathetic nervous system is alerted to slow the heart rate and dilate the surrounding veins and arteries. These responses of the nervous system cause the blood pressure to drop very low, causing syncope (loss of consciousness).(1) Most patients are young and healthy, although vasovagal syncope can also occur in elderly people with preexisting cardiac problems. Extremely hot weather and elevated blood-alcohol levels are typical triggers. Some patients suffer frequent attacks, while others experience them only sporadically.(3)

While one is standing, blood tends to settle in the legs. Maintaining the position for a long time can decrease blood pressure, which means that the brain may not receive a proper blood supply; lack of sufficient oxygen and nutrients can lead to syncope. In general, people do not pass out simply from standing upright, because mechanisms within the body control blood pressure. In a condition such as vasovagal syncope, these mechanisms do not perform their functions appropriately: they misfire, over-fire, or under-fire.(3) The mechanisms operate in everyone as we adjust to a new posture, but those with vasovagal syncope have an abnormal reflex to this information. The messages from the baroreceptors flood in, and in overcompensating the brain halts its messages to constrict vessels and communicates the reverse: vessels dilate, less blood reaches the brain, and fainting ensues.(2)

In general, physicians use the tilt table test to determine whether a diagnosis is warranted. The patient is placed in a quiet room on either a hydraulic lift or a swinging bed that can rotate between 60° and 90°, moving the patient from a supine to a head-up position. Heart rate and blood pressure are monitored throughout the test. Although technique varies somewhat among practicing physicians, the patient is usually monitored in the supine position for five minutes while baseline heart rate and blood pressure measurements are taken. Next, the patient is moved into the head-up tilt position, and changes in blood pressure and heart rate, along with the appearance of symptoms, are noted every three to five minutes. If the patient experiences syncope, the patient is returned to the supine position and is considered diagnosable. If no symptoms of vasovagal syncope appear after at least ten minutes, the tilt test is often attempted again, this time giving the patient isoproterenol, which often shortens the time before syncope occurs. When isoproterenol is used, physicians usually require a loss of consciousness in the patient in order to support a diagnosis of vasovagal syncope.(4)
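
To make the protocol concrete, here is a minimal toy sketch, in Python, of the decision flow just described. Everything in it - the fake monitor, the pressure threshold, the simulated values - is an invented illustration of the procedure as summarized above, not clinical software or real physiological data.

from dataclasses import dataclass
import random

@dataclass
class Reading:
    heart_rate: float
    systolic_bp: float

    @property
    def syncope(self) -> bool:
        # crude stand-in: a very low systolic pressure counts as fainting
        return self.systolic_bp < 70

def monitor(tilted: bool, on_isoproterenol: bool) -> Reading:
    # fake monitor: tilting, and the drug, make a pressure drop more likely
    drop = (40 if tilted else 0) + (25 if on_isoproterenol else 0)
    return Reading(heart_rate=70 + random.gauss(0, 5),
                   systolic_bp=120 - drop * random.random())

def tilt_table_test() -> bool:
    baseline = monitor(tilted=False, on_isoproterenol=False)  # 5 min supine
    # plain head-up tilt first; escalate to isoproterenol if symptom-free
    for drug in (False, True):
        elapsed = 0
        while elapsed < 10:                # at least ten symptom-free minutes
            elapsed += 3                   # readings every three minutes
            if monitor(tilted=True, on_isoproterenol=drug).syncope:
                return True                # patient returned to supine here
    return False                           # test does not support a diagnosis

print(tilt_table_test())

The sketch captures only the control flow: a baseline supine measurement, repeated readings during tilt, and escalation with the drug after at least ten symptom-free minutes.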

Medicinal treatments include beta-blockers, fludrocortisone, midodrine, and SSRIs (selective serotonin reuptake inhibitors). Beta-blockers block the adrenaline system, preventing the abnormal reflex of the sympathetic nervous system that precedes the decrease in blood pressure. Fludrocortisone signals the kidneys to retain more salt and water, thus increasing blood pressure. Midodrine tightens blood vessels. SSRIs are usually used in the treatment of depression or anxiety, but they also block the communication in the brain that triggers the blood vessels to dilate further.(2)

Instead of subjecting themselves to medications, patients with vasovagal syncope tend to choose lifestyle changes, and in most cases this is all that is necessary to control the condition. Patients must both identify their personal triggers and learn to recognize when an episode is about to occur; an increase in water and salt intake can prevent attacks. Blood is composed primarily of water and salt, so increasing the amount of each may raise blood pressure, possibly preventing syncope. More of both are needed especially in hot weather or during vigorous exercise.(2)

Patients who seek treatment outside of modern medicine often turn to licorice root, also known as sweet root, one of whose active ingredients is glycyrrhizic acid. It has been used worldwide for thousands of years to treat a variety of ailments, and recent studies have shown that it is useful in the treatment of heart disease. When consumed in large quantities, licorice root can raise the level of aldosterone in the blood. Although this would be considered harmful in most individuals, the increase in aldosterone can raise blood pressure, which is helpful to those with vasovagal syncope. If blood pressure increases around the time of a vasovagal episode, the resulting vessel constriction might counteract the activity of the parasympathetic nervous system, reducing the chance of syncope.(5)

Although vasovagal syncope is not a serious medical condition, those who suffer from its effects, like Dirk, cannot write off its impact on their lives. Tough guy to the core, he believes that taking beta-blockers or any other medication would be admitting that he is incapable of controlling himself. As I researched vasovagal syncope for this paper, I tried to explain to him the physiological processes in the body that have nothing to do with personal choice, and although he was interested, he still maintains that he can control the episodes. I hope that this paper proves to everyone else without such a hard head that vasovagal syncope is not a matter of choice, apart from the choice of exposure to triggers. I have asserted throughout this paper that vasovagal syncope is not a serious condition; it does, however, provide an interesting platform for research. I assumed when I began researching that I would find evidence of scientific inquiry beyond merely describing the process of syncope. Instead, much of the knowledge seems to have been gleaned from observing drug effects, so no treatment has been specifically designed for vasovagal syncope. Perhaps more research will lead to more conclusive knowledge about the condition.

References

1)Med Help International, This website offers a forum for those with medical questions, allowing them to ask the advice of a physician.

2)London Cardiac Institute, This organization provides information to patients on several conditions. The patients are referred to the pages by their physicians.

3)Karen Yontz Women's Cardiac Awareness Center, Health Wise Physician's Corner provides information about several medical conditions.

4)Tilt Table Test

5)Health and Age


Yellow Voices and Orange-Foam Squid: Questions abo
Name: Danielle M
Date: 2003-04-14 23:37:02
Link to this Comment: 5381


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"What's first strikes me is the color of someone's voice. [V--] has a crumbly, yellow voice, like a flame with protruding fibers. Sometimes I get so interested in the voice, I can't understand what's being said." --From Synesthesia: A Union of the Senses by Robert E. Cytowic

What would you make of the preceding account? Would you think the speaker was...crazy?...on drugs?...making a play for attention? Would you be skeptical if the speaker told you it was her natural way of perceiving the world? In truth, it is an example of the way in which about one in every 25,000 people observes the world (1). The term (I hesitate to use "scientific term" for reasons I'll discuss further on) given to the condition--synesthesia--derives from the Greek roots syn, meaning together, and aesthesis, to perceive, and conveys the principal features of the synesthete's perceptual state (2). In synesthetic perception, stimuli activate not just one sense, but several. An oral stimulus isn't a taste alone--it may also be a shape, a color, a movement (1). For example, a synesthete might explain that the taste of "squid produces a large glob of bright orange foam, about four feet away, directly in front of me" (3). Such joint perceptions are automatic and involuntary, just as usual perceptual experience is, and, unlike imaginary images or ideas, synesthetic perception is not only vividly real, but "often outside the body, instead of imagined in the mind's eye" (2).

Though accounts of synesthetic experience are receiving increased study and documentation, many in the scientific community remain partially unconvinced, if not wholly dismissive. Lacking sufficient empirical, objective data depicting the synesthetic experience, synesthetes and researchers of the condition have had to combat doubt, disregard, and ridicule in defense of the condition's reality and validity. The question raised by synesthesia then becomes: why does science discount first-person evidence to such an extent? If a condition has little to no "objective" or empirical "proof," does that mean it can't exist? If researchers can produce no computer read-out, no resonance imaging, no technologically generated chart, should the scientific community turn up its nose?

The existence of synesthesia has been questioned and discussed for nearly 300 years, and it received its most enthusiastic investigation between 1869 and 1930 (12). At the time, many were fascinated by the unconscious mind and its possible links to synesthesia; yet interest began to wane when researchers were unable to reach any definite conclusions (4). A decade later, the advent of the behaviorist movement caused synesthesia to fall "off scientists' radar" entirely (3). The principles of behaviorism treated science as a strictly empirical field of study in which "anything meaningful" (5) had to be quantifiable or measurable by "objective" machines (4). Humans became scientific "subjects," individuality of experience was discounted, and "subjective experience, such as synesthesia, was deemed inappropriate for scientific study" (4). Studying and measuring behavior was held to be the only sufficiently reliable means of gathering information about the human experience; consciousness and subjective experience were irrelevant, or worse, simply wrong (5). Influenced by these popular principles of behaviorism, modern scientists likely view reports of synesthesia as fine and well, but ultimately worthless as testable, provable scientific information. What use has "hard science" for whimsical metaphors about squid and orange foam?

These behaviorist-influenced attitudes make substantiation of synesthesia troublesome, for while the phenomenology of synesthesia makes it apparent that the condition is a conscious experience, that experience has yet to be fully captured by technological data (4). Because of the subjective nature of perception, the only true understanding researchers have gleaned has typically come from first-person accounts--and therein lies the trouble. Validation of the synesthetic experience "is largely aesthetic" and fundamentally ineffable, "a phenomenon whose quality must be experienced first-hand," and is thus difficult for science to accept (4). Luciano da Costa, in an article critiquing synesthetic research, exemplifies such scientific skepticism:

"The very features considered for its [synesthesia's] diagnosis...rely heavily on phenomenological evidence, which, of course, is subjective. There is no doubt that as such, i.e., without cogent physiological or anatomical substantiation, synesthesia is destined to be treated with understandable scientific caution...[E]ven if thousands of documented cases were available, that would not be enough to qualify synesthesia as a real physical phenomenon" (6).

Da Costa's sentiments echo the contemporary scientific reluctance to accept synesthesia as scientifically valid, and further illustrate the field's "refusal to take any subjective reports seriously" (4). So then, if something can't be objectively verified, can we believe in its existence? I'd assert that yes, we can--and we should. When dealing with human experience, subjective data is an inevitable part of research, and offers "substantially more than nothing" to the effort of making sense of human behavior (5).

What are the reasons for this refusal to entertain subjective validity? In addition to holding to behaviorist principles, the scientific community may be leery of synesthesia for other reasons, including its close relation to typical conceptions of metaphor and artistic expression. Creative language abounds with metaphors resembling synesthetic perceptions: sharp cheese, loud shirts, blue music. Hence the resulting "classic fallacy" of shelving synesthesia as "mere metaphor" (7)--that is, as the product of an active or overly fanciful imagination. This reaction is somewhat understandable, after all, because it is so difficult for synesthetes to portray exactly what synesthetic perception is like. Faced with such ambiguity, the best external observers can do is approximate it to what they already know. The closest approximation most scientists are familiar with is metaphor and imaginative language, as it's regularly used in common discourse "to describe everything from food and wine to art and music" (1). Thus, it's understandable that scientists, without any other point of reference, would associate synesthetic expression with their own concept of metaphor. And if we conceive of our metaphoric descriptions with deliberate effort, wouldn't synesthetes be doing so as well? We don't experience cross-modal sensations, but we can create the idea of them--couldn't synesthetes just be convincing themselves that these creations are real? When one synesthete told others of her music/shape synesthesia, they could only assume she was "making it up," "had an overactive imagination," or "was spoiled and wanted attention" (8). Without experiential reference, it's quite difficult just to conceive of such radically different perception, and how can you accept what you can hardly imagine? As one synesthete explained, "Synesthesia isn't easy to fathom. People who don't have it have a hard time understanding...what it's like" (9). While understandable, dismissing synesthesia on the basis that it seems impossible within one's personal conception of reality needlessly constricts the possibility of human experience and hinders progress into new and challenging realms of inquiry.

Another reason is the belief that subjective reports make for problematic evidence. This is true in part: the subjectivity and individual experience implicated in perception not only make synesthesia difficult to describe, but also difficult to compare across synesthetes (10). Because different synesthetes--even those with the same sensory pairings--usually don't report identical responses, that variability has been interpreted as proof that synesthesia isn't real (8). Moreover, because synesthetes aren't data-reading machines, they don't "have access to [information about] the neural and cognitive processes" underlying their synesthesia, which necessarily limits the scope of information they can provide (10). These problems should not, however, result in the outright rejection of subjective reports. As there are significant aspects of synesthesia that can't be described or captured using third-person methods, first-person accounts of synesthesia are essential to its understanding. The subjective contributions of synesthetes are therefore quite valuable as research tools and could offer significant contributions to knowledge about the condition (10).

Finally, scientists may further doubt synesthesia's reality because of the oft-held idea that if a person's experience of the world diverges dramatically from the "norm," they must be mentally ill, or at least notably distressed. Yet synesthetes are neither. Very rarely, for those who experience stimuli across many senses, synesthesia becomes so overpowering as to be uncomfortable or even unpleasant; but for the majority, synesthesia is quite normal, even pleasurable (8). Clinically, the mental state of the majority of synesthetes is balanced, and the results of standard neurological exams prove normal (4). But "how is it that these perfectly healthy and otherwise neurologically normal people can experience color when there is not color there to be perceived?" (9). One synesthete encountered such doubts when she mentioned her color/word perceptions to a teacher, and was promptly reported as a schizophrenic (8). The teacher (like many scientists) must have assumed that no one could have such a radically altered idea of reality and be mentally stable. Abnormality is most often thought of as an obstacle, a deficit, a problem--synesthesia is not. As such, it challenges long-held conceptions of normal versus abnormal, and typical beliefs about reality and unreality. If a person experiences the world in a way radically different from us, aren't they necessarily crazy? Can someone be so different and yet be "normal"? This last question (to which I would answer yes) is potentially the most frightening of the questions raised by science's treatment of synesthesia. It implies that someone could perceive reality in a way wholly distinct from our own idea of reality and not be "crazy." This then begs the question: what is crazy? If it's not always what we thought, could we be crazy? Small wonder scientists prefer to distance themselves from synesthesia and its "whiff of mysticism" (8).

Thus, while skepticism toward first-person accounts has merit, an uncompromising insistence on empirical data sacrifices a profound source of information to an excessive demand for externality and objectivity. Rather than swing to one extreme (pure objectivism) or the other (pure subjectivism), perhaps there is a compromise in which each approach can be seen as mutually edifying. First-person reports can not only lay a basic framework for experimentation and the generation of new hypotheses, they can also be used to compare objective results to actual subjective experience. First-person data could also help to better explain experimental inconsistencies by providing insight into "individual differences between synesthetes" (10). Without this sort of real-world check of empirical data, the results would be substantially less valuable and applicable outside the laboratory. In the end, ignoring first-person accounts would not be "a victory for objective science," but a victory for "an intolerantly narrow vision of science" (5).

If something can't be imagined with our own personal experience, does that then mean it can't exist in another? Is our notion of reality the only truth? I don't have the ultimate answers to such sticky questions, but I do think that to dismiss synesthesia out of an unspoken fear of raising these queries and challenging the sense of complacency they offer would be both craven and detrimental. We shouldn't capitulate to fear or forego discovery in order to maintain a sense of stability. Synesthesia should be explored as far as possible, allowing first-person accounts to generate new theories and support what information researchers discover. In the end, conclusions about the reality of divergent human perception shouldn't rest exclusively with inflexible (and possibly unattainable) standards of objectivity. Scientists would be foolish to insist on it as they have in dealing with synesthesia; such disregard would deaden the nature of inquiry, of stretching beyond previous bounds and moving in new circles of thought and conception. We shouldn't allow the pursuit of knowledge to be so enslaved by technological, objective information that we doubt our own knowledge of the world around us. To do so would be to further distance human experience from the ability to know or posit truth.

References

1)Synesthesia, an interview with Richard Cytowic, on the ABC Radio National Transcripts site.

2)Synesthesia and the Synesthetic Experience, a site by the Massachusetts Institute of Technology.

3)Everyday Fantasia: The World of Synesthesia, a journal article by Siri Carpenter on the APA site.

4), 5) Synesthesia and Method, a journal article by Kevin B. Korb, on the Psyche site.

6) Synesthesia - A Real Phenomenon? Or Real Phenomena?, a journal article by Luciano da Costa, on the Psyche site.

7) Hubbard, E.M. and V.S. Ramachandran. "Psychophysical Investigations into the Neural Basis of Synesthesia." The Royal Society 268 (2001): 979-981.

8)Cytowic, Robert E. Synesthesia: A Union of the Senses. 2nd ed. Cambridge: Massachusetts Institute of Technology, 2002.

9)"Blue Cats and Chartreuse Kittens" by Patricia Lynne Duffy, a review of Duffy's book by Alison Motluk, on the salon.com site.

10) Towards a Synergetic Understanding of Synesthesia: Combining Current Experimental Findings with Synesthetes' Subjective Descriptions, a journal article by Daniel Smilek and Mike J. Dixon, on the Psyche site.


Implications of Post-Traumatic Stress Disorder for
Name: Stephanie
Date: 2003-04-14 23:42:15
Link to this Comment: 5382


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


War is a complex concept that is increasingly difficult to understand, particularly in an age that allows live images of combat to be beamed around the world. Many war films depict the brutalities of war and the effects war has on participants, but it seems that these representations merely skim the surface. The 20th century saw a significant amount of military action: World Wars I and II, the Cold War, Vietnam, and the Gulf War - millions of men fought, and some survived and live among us today. Unfortunately, the war experience for many veterans is traumatizing, and as a result, many have been diagnosed with Post-Traumatic Stress Disorder (PTSD). This disorder is often quite mentally debilitating; this, then, begs the question of the social implications of the disorder, as well as whether it has any bearing on the necessity of war.

At a minimum, PTSD is a branch of emotion that stems from stress or anxiety. Stress is not uncommon among humans, as it can be caused by something as simple as gridlock or an argument. When we feel stressed, our body is attuned to exhibit the fight-or-flight response, during which "the body releases chemicals that make it tense, alert, and ready for action" (1). PTSD, however, is a specialized form of stress, for it occurs after traumatic events; these may include car accidents, earthquakes, rape, or military combat. People suffering from PTSD experience paranoia and flashbacks and generally have difficulty engaging in normal daily activities (2). One Vietnam veteran diagnosed with the disorder explains that he often has extreme emotional outbursts: "'I developed a nasty temper, became very nervous, and have bad dreams that take me back into the war, like it's happening all over'" (3). The flashbacks that many veterans experience suggest that PTSD is largely related to the way they remember the event.

In order to create a memory, the brain releases chemicals "that etch these events into its memory bank with special codes" (4). However, when one experiences a traumatic event, the memory becomes vivid because the context surrounding the event is so significantly different from anything the victim has ever experienced before. For instance, many Americans may find that they can remember what they ate for breakfast on the morning of Tuesday, September 11, but cannot recall what they ate on any other Tuesday. It seems, then, that during a traumatic event, our senses are heightened - this may be due to the fight-or-flight response that readies us for action. Since we sense (or know) that something is amiss, our brain releases more chemicals that allow us to be more alert; this in turn may be the mechanism that helps us to remember traumatic events so well.

Although many of us may have vivid recollections of 9/11, we may not necessarily feel traumatized by it - unless, of course, it directly affected our lives in some way. Veterans, however, are often completely traumatized by war because it is such an unnatural (though increasingly common) experience. The majority of men thrust into combat are in their 20s and are still developing as people. Consequently, when they are plucked from their homes and placed in an extremely foreign environment, they are forced to shed their identity, thereby necessitating "radical breaks ... between self-systems as a consequence of participation in distinct social systems" (5). This results in the construction of a "fractured" self that often leads to confusion upon veterans' return home, because they feel different and people say they seem different, yet they cannot recover who they were before the war. This confusion is mainly due to the brutality that war entails: killing - the taking of another human life - is an act that is taught in basic training as necessary for survival. One veteran describes his experience in basic training:

'In basic training, I felt wonderful. I felt physically perfect, mentally alert ...
Naturally there was a certain amount of brainwashing that goes through basic
[training]. They would constantly pound it into you that you have to kill to
survive. You know, you are going to Vietnam and you are going to fight a war
and all you are going to do is kill, kill, kill, kill'(5).

This is a particularly compelling story because it juxtaposes the military's image as a highly organized and efficient unit with the reality of war as chaotic, terrifying, and anything but meticulously executed. Because young recruits who enter basic training may come to feel "physically perfect," or like machines, they are unprepared for the terror they encounter in hand-to-hand combat, which becomes an act of stalking - particularly when soldiers come face to face with the enemy. Not only must they face another person whom they have been instructed to kill, they must also face their own mortality as well as the enemy's. At precisely this instant, two people must decide who will blink first, fire, and save their own life at the expense of another who happens to be in the same position. In this way, combat appears to be extremely traumatizing due to psychological fragmentation.

Furthermore, it is also the repetition of killing that seems to have the potential to create a physical memory. One generally becomes inured and desensitized to a particular act after executing it many times, and killing is no exception; yet even though there may be less hesitation, the soldier still experiences the rush of adrenaline and then the emotional crash that follows the incident. Here, a veteran describes this phenomenon:

'We just kind of stumbled on the Vietcong and fortunately for us they had their
weapons laying on the ground. We had ours on our person. This one gook
reached for his automatic weapon and I shot him ... The first instant that it
happened was like a flow of adrenalin or whatever, I felt great. It gradually sank
in and made me sick. And then the next day I found out the kid was only 14 and
that did not help' (5).

This is similar to what many other veterans have told researchers about their experiences in combat. It seems that the "high" of killing may be in direct response to soldiers' survival mentality - once the enemy is eliminated, you are safe because you have diminished the threat against your own life. One veteran states that "Getting out safe was the only thing on [his] mind" and that "[He] only thought about survival. Every time somebody got killed [he] was glad it was not [him] lying there" (5). Another echoes the sentiment expressed by the previous veteran regarding his experience with killing which he says "felt great" but when he fully realized what had happened "the effect was one of depression, one of a nonfuture, discontent" (5).

It is these feelings of failure and wrongdoing that often make re-assimilation into civil society so difficult for veterans. This seems related to the idea of self-control: soldiers do not have complete control over their actions while in combat, since they are merely carrying out orders. Of course, they can make conscious choices, like whether or not to fire their weapon, but because survival has become so ingrained in their minds, true self-control is all but obliterated from their being. Upon returning home, veterans may find the contrast between civilization and war overwhelming: in the war situation they lost control over their actions, whereas at home they reclaim it, but feel out of practice in controlling themselves. This harkens back to the idea of memory and how veterans perceive and remember the trauma they experienced.

Research shows that many veterans with PTSD exhibit significant changes in the make-up of the hippocampus and the medial prefrontal cortex, parts of the brain that govern learning and memory as well as our response to fear and stress (6). Thus, because PTSD veterans' brains are different, they may also have trouble learning new material due to the neurological effects of PTSD.

There are myriad social implications of PTSD, which include difficulty feeling any strong emotion (particularly love), feeling disconnected from reality, and feeling guilty (7). All of these reactions impact veterans' relationships and their ability to function in society. If the effects of war are so debilitating, then why is war necessary? Even though wars are fought against so-called "evil," is there anything more evil than completely annihilating humans' sense of self - aside from the havoc that war wreaks on the area in which it is fought? It seems that the social implications of PTSD in combat veterans call into question the justification of war - for while war does occur between countries, it is carried out by people, by fellow human beings who should never have to bear witness to such extreme horrors.


References

1) Stress info
2) American Psychiatric Association
3)Kulka, Richard A., et al. Trauma and the Vietnam War Generation. New York: Brunner/Mazel Publishers, 1990.
4) Post-traumatic Stress Disorder, Wars, and Terrorism
5)Wilson, John P., et al, eds. Human Adaptation to Extreme Stress. New York: Plenum Press, 1988.
6)The Invisible Epidemic: Post-Traumatic Stress Disorder, Memory and the Brain
7) Post-Traumatic Stress Disorder, Understanding the Pain


False Memory Syndrome And The Brain
Name: Kathleen F
Date: 2003-04-15 00:37:07
Link to this Comment: 5384


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

In the mid-nineties, a sniper's hammering shots echoed through an American playground. Several children were killed and many injured. A 1998 study of the 133 children who attended the school, by psychologists Dr. Robert Pynoos and Dr. Karim Nader, experts on Post-Traumatic Stress Disorder among children, yielded a very bizarre discovery. Some of the children who were not on the school's grounds that day obstinately swore they had very vivid personal recollections of the attack happening (1). The children were not exaggerating or playing make-believe. They were adamant that they were indeed there and that they saw the attack as it was occurring. Why would these children remember something so harrowing if they didn't actually experience it? What kind of trick was their brain playing on them? Why did it happen?

False Memory Syndrome (FMS) is a condition in which a person's identity and interpersonal relationships are centered on a memory of a traumatic experience that is actually false, but of which the person is strongly convinced (2). When considering FMS, it's best to remember that all individuals are prone to creating false memories. A common experiment in Introduction to Psychology courses includes a test similar to this one:
Look at this list of words and try to memorize them:

sharp thread sting eye pinch sew thin mend

After a few seconds, the students are asked to recall these words and are asked the following questions: Was the word "needle" on the list? Was it near the top? The majority of the class will vehemently agree that "needle" was, in fact, on the list - and not only that, it was actually quite close to being the first word. Some will attest to having vivid recollections of seeing the word "needle" on the page. These students have created a false memory. Due to exposure to words similar or related to "needle," they have very genuine memories of actually seeing the word on the list. Like these students, the children who were absent from school on the day of the sniper attack had false memories stimulated by exposure to the stories of those who actually underwent the trauma. Our brain uses three distinct procedures to receive, store, and access information. The first is sensory information storage, which acts like a very small holding tank, briefly storing information upon impact. The second is short-term memory, in which the brain accounts for what has just happened, also based mainly on the senses; this has a bit more durability than sensory information storage because the brain can interpret the information it is receiving. Finally, there is long-term memory, the process by which the brain stores away significant or enduring information for retrieval at a later date (3).
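
Since the demonstration above is essentially a small experimental protocol, here is a toy Python version of its scoring step. The study list and the lure word "needle" come from the text; the function and the sample responses are invented for illustration - the false recall itself, of course, happens in the student's head, not in the code.

# Toy scorer for the word-list demonstration described above.
STUDY_LIST = ["sharp", "thread", "sting", "eye", "pinch", "sew", "thin", "mend"]
LURE = "needle"   # related to every study word, but never actually shown

def score_recall(recalled):
    # split a student's report into true recalls and intrusions
    correct = [w for w in recalled if w in STUDY_LIST]
    intrusions = [w for w in recalled if w not in STUDY_LIST]
    return correct, intrusions

# a typical report: mostly correct, plus a vivid "memory" of the lure
correct, intrusions = score_recall(["thread", "sew", "needle", "sharp"])
print("actually on the list:", correct)   # ['thread', 'sew', 'sharp']
print("false memories:", intrusions)      # ['needle'] - never presented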

So where exactly is a false memory like that of the sniper attack being stored? Because the brains of young children are not as fully developed as those of adults, it is interesting to consider that Jean Piaget, the well-known child psychologist, asserted that his earliest memory was of a botched kidnapping at the age of 2. He distinctly remembered details of watching his nurse try to fend off the kidnapper as he sat in his stroller, and of the policeman's uniform as he chased the kidnapper away. Thirteen years after the alleged attack, the nurse admitted to Piaget's parents that she had fabricated the story. However, the story, told repeatedly by the nurse, crept into Piaget's psyche and expanded until it took on a life of its own. Piaget later wrote: "I therefore must have heard, as a child, the account of this story...and projected it into the past in the form of a visual memory, which was a memory of a memory, but false" (4).

Due to the way our brains work, a memory of a kidnapping incident at the age of 2 could be nothing other than false. The left inferior prefrontal lobe is not yet developed in infants (4), and it is this lobe that is necessary for long-term memory; the complicated, sophisticated encoding necessary for remembering such an event could not occur in an infant's brain. The adult brain, however, works in a far different way. Valerie Jenks, a woman living in Idaho, was raped at the age of 14. After a few seemingly trauma-free years, she became depressed shortly after her marriage and the birth of her first child. In therapy, Dr. Mark Stephenson convinced her to try hypnotherapy, and after her very first session, Jenks came to believe that she'd been sexually abused by her family and friends (5). In Freud's theory of "repression," the mind involuntarily expels traumatic events from memory to avoid overpowering anxiety and trauma (6). Aided by the memory of her rape at age 14, Jenks created a false memory - an elaborately fabricated memory of rape and molestation by her father and other family members. These memories were "repressed memories," said Stephenson. Further, Stephenson said she answered "yes" to many of his questions, not verbally, but by tapping the index finger of her left hand. These tappings were "body memories," claimed Stephenson. According to him, some patients have tried to explain their physical distress as coming from repressed "body memories" of incest. Therapists have told patients that "the body remembers what the mind forgets," and that many of the physical sensations they experience during therapy (like Jenks' finger tapping) are symptoms of forgotten childhood sexual mistreatment. These memories, said Stephenson, are documented in cellular DNA (7). However, absolutely no scientific proof supports this notion of a "body memory." Memories are encoded and stored by the three processes of the brain discussed earlier; a physical pain or sensation is not evidence that abuse occurred (5). Jenks no longer sees Dr. Stephenson and is seeking compensation for the horrific trauma she endured because of his treatment.

With organizations such as the False Memory Syndrome Foundation popping up around the globe, awareness of FMS is spreading. False memories, in their most fundamental form, are very real and present in our world, as shown in the aforementioned psychology experiment. However, it is only when therapists, armed with the notion of Freudian "repressed memories" and bizarre concepts like "body memories," implant unhealthy and false ideas into the brains of their patients that havoc ensues.

References

1)Recovered Memory Therapy and False Memory Syndrome, Recent Legal and Investigative Trends by Dr. John Hochman, M.D.

2) Memory and Reality: Website of the False Memory Syndrome Foundation

3) BodytalkMagazine.com How Memory Works

4) The Skeptic's Dictionary False Memory

5) Salon.com Health and Body - The Story of Valerie Jenks

6) How Memory Really Works Freud's Notion of Repressed Memory

7) FAQ for the False Memory Syndrome Foundation


Anosognosia for Hemiplegia: A Window into Self-Awareness
Name: Cordelia S
Date: 2003-04-15 01:23:16
Link to this Comment: 5385


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

You wake up in a hospital bed, scared, confused, and attached to a network of tubes and beeping equipment. After doctors assault you with a barrage of questions and tests, your family emerges from the sea of unfamiliar faces surrounding you and explains what has happened; you have had a stroke in the right half of your brain, and you are at least temporarily paralyzed on your left side. You wiggle your left toes to test yourself; everything seems normal. You lift your left arm to show your family that you are obviously not paralyzed. However, this demonstration does not elicit the happy response you expect; it only causes your children to exchange worried glances with the doctors. No matter how many times you attempt to demonstrate movement in the left half of your body, the roomful of people insists that you are paralyzed. And you are, you just do not know it. How is this possible? You are suffering from anosognosia, a condition in which an ill patient is unaware of her own illness or the deficits resulting from her illness (1).

Anosognosia occurs at least temporarily in over 50% of stroke victims who suffer from paralysis on the side of the body opposite the stroke, a condition known as hemiplegia (1). Patients with anosognosia for hemiplegia insist they can do things like lift both legs, touch their doctor's nose with a finger on their paralyzed side, and walk normally (2). These patients are much less likely to regain independence after their stroke than patients without anosognosia, primarily because they overestimate their own abilities in unsafe situations (3). However, the implications of the illness go far beyond those for patients who suffer from it; anosognosia brings questions of the origin of self-awareness to the forefront. How can someone lose the ability to know when she is or is not moving? Is this some type of elaborate Freudian defense mechanism, or is this person entirely unaware of her illness? How is self-awareness represented in the brain, and is this representation isolated from or attached to awareness of others? Though none of these questions are fully answerable at this time, research into anosognosia has provided scientists and philosophers with insight into some of these ancient questions of human consciousness.

The question of "denial" versus "unawareness" is at the heart of debate between psychologists and neurologists about the origin of anosognosia (3). Proponents of a psychological explanation for the disorder insist that patients are aware on some level of their paralysis, but deny this information, as it would be traumatizing to the image of the self to admit to a lack of ability to control one's own body (4). However, this theory is countered by the fact that anosognosia in stroke patients almost always occurs after a stroke in the right hemisphere of the brain; though a stroke in the left hemisphere is no less devastating to the body, patients with left hemisphere strokes nearly always fully recognize the impact of their strokes on their bodies (5).



Another explanation of anosognosia draws on the fact that this disorder and hemiplegia are nearly always accompanied by hemispatial neglect, in which the patient does not recognize or attend to visual information on the side of the visual field contralateral to the brain damage (6). Some researchers believe that since right-brain stroke patients can be inattentive to visual information on the left side, they may simply be displaying the same inattention to the left half of their bodies when they have anosognosia (4). In other words, if one pays no attention to the left arm, one would not notice if the left arm is doing something odd, like not moving.

However, one of the premier experts on anosognosia, Dr. Vilayanur Ramachandran, has pointed out a key flaw in this theory: though hemispatial neglect patients with right brain damage acknowledge seeing stimuli presented to the left visual field if the stimuli are brought to their attention, for example by being moved or set on fire, no amount of attention drawn to an anosognosiac's immobile limbs will make her acknowledge that her limb is paralyzed (4). Usually the anosognosiac will insist that her limb is moving. If pressed, she will cite her arthritis or a lack of motivation as a reason for her immobility. When forced, the patient may even venture completely out of the realm of reality in defending her ability to move, stating that the immobile limb belongs to someone else, or is not a limb at all. Dr. Edoardo Bisiach, another expert in the field, once saw a patient who claimed that his paralyzed hand belonged to Bisiach himself. When Bisiach held his own two hands together with the patient's immobile hand and asked how it was possible that he had three hands, the patient calmly replied, "A hand is the extremity of an arm. Since you have three arms, it follows that you must have three hands" (4). Ramachandran insists that this type of unrealistic rationalization is particular to patients with anosognosia; a patient suffering only from hemispatial neglect will not justify her beliefs with peculiar stories, but will accept a doctor's diagnosis (4).

Ramachandran favors an explanation of anosognosia that depends on both psychology and neurology. He maintains that due to the vast amount of sensory information the brain regularly receives, it must have a filter of some sort that lets it process only necessary information (4). Ramachandran's idea is that the left hemisphere of the brain contains a schema of the body in its entirety, which is updated as needed by a section of the right hemisphere. This right-hemisphere function compares incoming sensory information to the left-brain schema and decides which discrepancies are worth informing the left brain about (4). For example, while a few sneezes can be brushed off, a fever will lead the right brain to inform the left brain that one is sick. Not all discrepancies in information change the left brain's schematic representation of the body, but the most important or startling ones do. Ramachandran believes that the right brain's ability to detect these discrepancies is damaged in patients with anosognosia (1). Thus, the left brain receives no information about a change in the body's ability to move, and the current representation of the body as fully mobile is maintained (7).
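
Ramachandran's proposal is essentially an architecture - a stable left-hemisphere body schema plus a right-hemisphere comparator that gates updates - and a toy sketch can make the logic explicit. All names, numbers, and thresholds below are invented for illustration; this is a cartoon of the idea as described above, not a model drawn from his work.

# Cartoon of the schema-plus-comparator idea described above.
body_schema = {"left_arm": "mobile", "right_arm": "mobile"}   # left-brain model

def comparator(schema, evidence, threshold=0.5, damaged=False):
    # right-brain function: pass only large discrepancies through to the schema
    if damaged:
        return schema              # anosognosia: the schema is never updated
    updated = dict(schema)
    for limb, (state, discrepancy) in evidence.items():
        if discrepancy >= threshold:   # startling enough to force an update
            updated[limb] = state
    return updated

# sensory evidence after a right-hemisphere stroke: the left arm will not move
evidence = {"left_arm": ("paralyzed", 0.9)}

print(comparator(body_schema, evidence))                # intact: schema updated
print(comparator(body_schema, evidence, damaged=True))  # damaged: still "mobile"

In the damaged case the large discrepancy never reaches the schema, which is the sketch's stand-in for the patient continuing to report that she can move her paralyzed arm.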



Recent experiments have shown that if any information about a change in the body's abilities is present in anosognosiacs, it is extraordinarily inaccessible to the I-function. Ramachandran asked three hemiplegic anosognosiacs and two stroke victims with hemiplegia but no anosognosia to choose between winning a small prize for completing a task involving one hand (e.g., stacking blocks) and winning a large prize for completing a task involving two hands (e.g., tying a bow) (4). The hemiplegics with no anosognosia consistently opted for the smaller prize. However, the hemiplegic anosognosiacs chose repeatedly to attempt the two-handed task, never learning from their failures, and never recognizing their limitations (4).

Amazingly, many anosognosiacs also seem unable to recognize their own limitations in other people (7). In a recent experiment, Ramachandran found that two-thirds of tested hemiplegic anosognosiacs were unable to recognize paralysis in another person (4). He suggests that this is because we have a schema for the bodies of others as well as for our own, and that the two are represented in close proximity in our brains (7). This idea is supported by recent research with monkeys, which showed that the same brain areas that were active when a monkey completed a certain task were also active when it watched another monkey complete the same task (7). This suggests that self-awareness is crucial to awareness of others. However, this research is in its early stages and has not yet been used in the treatment of anosognosiacs.

Two methods of treatment, one primitive and one modern, are currently used to bring a sense of awareness of failure to anosognosiacs. The first method, invented by Bisiach, involves pouring cold water into the ear on the side of the paralysis (4). Since nerves in the ear contribute information about the body's balance to the brain, Bisiach figured that by shocking these nerves he might startle the part of the brain responsible for updating the body schema with new information (4). This appears to work astonishingly well; patients undergoing this treatment often fully realize their paralysis for several hours. The second method of treatment uses virtual reality programming to give patients repeated feedback about their failures in a safe setting (3). This type of program helped I.S., a man with anosognosia for hemispatial neglect without hemiplegia. I.S. was determined to drive, and saw no reason why he should not, until he was treated with a virtual reality program simulating street crossing. Since I.S. did not pay attention to cars coming on his left side, he consistently had "accidents," which caused the program to make crashing noises and flash warnings. This type of confrontation with his limitations seemed to cause I.S. to begin trusting his doctors over his own sense of self (3). Presumably, the shock of knowing that if he followed the information given to his I-function by the rest of his brain he would die caused him to realize that he needed to learn new ways to perceive himself and the world around him, perhaps even by trusting others over himself.



The issue of trust stands out as key in anosognosia, a disorder in which the patient can no longer trust her own information about herself. This seems almost unthinkable, hence the reason I chose to open this essay by addressing you, the reader. Could you possibly believe someone else's information about your body over your own? And, if you ever learned to survive as someone who could no longer trust your brain (and, thus, yourself), could you ever again have any type of free will? Could you be creative or original without fully believing in your own mind? The fact that free will is so hard to imagine without an intact sense of self makes me appreciate the seemingly ridiculous lengths to which anosognosiacs will go to defend their perceptions. For if there were an element of choice involved, I might rather believe that a man could have three hands than believe I had lost the ability to perceive myself. Perhaps, in anosognosia, ignorance about one's own ignorance is bliss.


References

1)Some Selected Aspects of Motor Cortex Damage in Man, Lecture notes explaining hemiplegia, hemineglect, and anosognosia

2)Neuropsychology and Neuropathology, Brief summaries of discussions on several neuropsychological disorders, including anosognosia and prosopagnosia

3)Unawareness and/or Denial of Disability: Implications for Occupational Therapy Intervention, Article discussing unawareness as obstacle in treatment, gives several case studies (download as pdf file)

4)The Brain that Misplaced its Body, Article discussing Ramachandran's research and several specific case examples (search in archive for article)

5)Unilateral Hemineglect and Anosognosia of Hemiparesis and Hemiplegia, Extremely comprehensive graduate student project on said topics

6)Memory: A Neurosurgeon's Perspective, Only briefly touches on anosognosia, but interesting regardless

7)Mind Over Body, Update on research and speculation on impact of anosognosia on general self-awareness


Video Games: A Cause of Violence and Aggression
Name: Grace Shin
Date: 2003-04-15 01:31:09
Link to this Comment: 5386



Biology 202
2003 Second Web Paper
On Serendip

There is huge hype surrounding the launch of every new game system - Game Cube, XBox, and Sony Playstation 2 being just a few of the latest. Reaching players from 4-year-old children to 45-year-old adults, these video games have raised concern in our society over issues such as addiction, depression, and even aggression related to their playing. A recent study of children in their early teens found that almost a third played video games daily, and that 7% played for at least 30 hours a week. (1) What is more, some of the games being played, like Mortal Kombat, Marvel Vs. Capcom, and Doom, are very interactive in the violence of slaughtering the opponent. The video game industry even puts warnings like "Real-life violence" and "Violence level - not recommended for children under age of 12" on box covers, arcade fronts, and even on the game CDs themselves.

In the modern popular game Goldeneye 007, bad guys no longer disappear in a cloud of smoke when killed. Instead they perform an elaborate death maneuver: those shot in the neck, for example, fall to their knees and then onto their faces while clutching at their throats. Other games such as Unreal Tournament and Half-Life are gorier. In these games, when characters get shot, a large spray of blood covers the walls and floor near the character, and on the occasions when explosives are used, the characters burst into small but recognizable body parts. In spite of the violence, these are among the more popular games on the market. (2) When video games first came out they were indeed addictive; now, however, there seems to be a strong correlation between the violent nature of today's games and aggressive tendencies in game players.

On April 20, 1999, Eric Harris and Dylan Klebold launched an assault on Columbine High School in Littleton, Colorado, murdering 13 and wounding 23 before turning the guns on themselves. Although nothing is for certain as to why these boys did what they did, we do know that Harris and Klebold both enjoyed playing the bloody, shoot-'em-up video game Doom, a game licensed by the U.S. military to train soldiers to effectively kill. The Simon Wiesenthal Center, which tracks Internet hate groups, found in its archives a copy of Harris' web site with a version of Doom. He had customized it so that there were two shooters, each with extra weapons and unlimited ammunition, and the other people in the game could not fight back. For a class project, Harris and Klebold made a videotape that was similar to their customized version of Doom. In the video, Harris and Klebold were dressed in trench coats, carried guns, and killed school athletes. They acted out their videotaped performance in real life less than a year later... (3)

Everyone deals with stress and frustration differently. However, when action is taken upon that frustration and stress in anger and aggression, the results may be very harmful - mentally, emotionally, and even physically - to both the aggressor and the person being aggressed against. Aggression is action taken with intent to harm: attacking a person or a group. It can be a verbal attack - insults, threats, sarcasm, or attributing nasty motives - or a physical punishment or restriction. Direct behavioral signs include being overly critical, fault-finding, name-calling, accusing someone of having immoral or despicable traits or motives, nagging, whining, sarcasm, prejudice, and/or flashes of temper. (4) The crime and abuse rate in the United States has soared in the past decade, and more children are being treated for anger problems than ever before. One can't help but wonder whether violent video games play even a slight part in these statistics. I believe they do.

Calvert and Tan (5) compared the effects of playing versus observing violent video games on young adults' arousal levels, hostile feelings, and aggressive thoughts. Results indicated that college students who had played a violent virtual reality game had a higher heart rate, reported more dizziness and nausea, and exhibited more aggressive thoughts in a posttest than those who had played a nonviolent game. A study by Irwin and Gross (6) sought to identify the effects of playing an "aggressive" versus "nonaggressive" video game on second-grade boys identified as impulsive or reflective. Boys who had played the aggressive game, compared to those who had played the nonaggressive game, displayed more verbal and physical aggression toward inanimate objects and playmates during a subsequent free-play session. Moreover, these differences were not related to the boys' impulsive or reflective traits. Thirdly, Kirsh (7) also investigated the effects of playing a violent versus a nonviolent video game. After playing these games, third- and fourth-graders were asked questions about a hypothetical story. On three of six questions, the children who had played the violent game responded more negatively about the harmful actions of a story character than did the other children. These results suggest that playing violent video games may make children more likely to attribute hostile intentions to others.

In another study, by Karen E. Dill, Ph.D., and Craig A. Anderson, Ph.D., violent video games were considered more harmful in increasing aggression than violent movies or television shows, due to their interactive and engrossing nature. (8) The two studies showed that aggressive young men were especially vulnerable to violent games and that even brief exposure to violent games can temporarily increase aggressive behavior in all types of participants.
The first study was conducted with 227 college students who had records of past aggressive behavior and who completed a measure of trait aggressiveness. They also reported on their video game playing habits. It was found that students who reported playing more violent video games in junior high and high school engaged in more aggressive behavior. In addition, time spent playing video games in the past was associated with lower academic grades in college, which is a source of frustration for many students and a potential cause of anger and aggression, as discussed in the previous paragraph.

In the second study, 210 college students played either Wolfenstein 3D, an extremely violent game, or Myst, a nonviolent game. After a short time, it was found that the students who played the violent game punished an opponent for a longer period of time than the students who played the nonviolent game. Dr. Anderson concluded by saying, "Violent video games provide a forum for learning and practicing aggressive solutions to conflict situations. In the short run, playing a violent video game appears to affect aggression by priming aggressive thoughts." Although this study measured only a short-term effect, longer-term effects are likely as the player learns and practices new aggression-related scripts that become more and more accessible when real-life conflicts arise. (9)

The U.S. Surgeon General C. Everett Koop once claimed that arcade and home video games are among the top three causes of family violence. Although some studies have found video game violence to have little negative effect on players, many others have found a positive correlation between video and computer game violence and negative behavior such as aggression. Thus, in order to fully assess the effects of game violence on its users, the limiting conditions under which there are effects - including age, gender, and class/level of education - must be taken into account. (10) Still, as the studies above show, violent games do affect children, especially young teens, and I feel that there needs to be stricter regulation of the availability of these games to young children.

References

1) BBC News Web site in UK.

2) Game Research Website, covering the art, the business, and the science of computer games.

3) American Psychological Association, Article on the main study discussed in this paper.

4) Mental Help Net, Psychological Self-Help. This site has a lot of interesting links to mental illnesses and just understanding personalities.

5) Calvert, Sandra L., & Tan, Siu-Lan. (1994). Impact of virtual reality on young adults' physiological arousal and aggressive thoughts: Interaction versus observation. Journal of Applied Developmental Psychology, 15(1), 125-139. PS 527 971.

6) Irwin, A. Roland, & Gross, Alan M. (1995). Cognitive tempo, violent video games, and aggressive behavior in young boys. Journal of Family Violence, 10(3), 337-350.

7) Kirsh, Steven J. (1997, April). Seeing the world through "Mortal Kombat" colored glasses: Violent video games and hostile attribution bias. Poster presented at the biennial meeting of the Society for Research in Child Development, Washington, DC.

8) SelfhelpMagazine. Article under teen help. It is a great library of various mental disorders and personal growth topics!

9) American Psychological Association.

10) Internet Impact, This paper is a collaborative essay consisting of research and policy recommendations on the impact of the Internet on society.


Inner Vision: An Exploration of Art and the Brain
Name: Alanna Alb
Date: 2003-04-15 01:45:11
Link to this Comment: 5387



Biology 202
2003 Second Web Paper
On Serendip

Is artistic expression intertwined with the inner workings of the brain more than we would ever have imagined? Author and cognitive neuroscientist Semir Zeki certainly thinks so. Zeki is a leading authority on research into the "visual brain". In his book Inner Vision, he ventures to explain to the reader how our brain actually perceives different works of art, and seeks to provide a biological basis for the theory of aesthetics. With careful attention to detail and organization, he manages to explain the brain anatomy and physiology involved in viewing different works of art without sounding impossibly complicated – a definite plus for scientists and non-scientists alike who are interested in the topic of art and the brain. Throughout the book, Zeki supports his arguments by presenting various research experiments, brain image scans, and plenty of relevant artwork to clarify everything described in the text. By focusing on masterpieces ranging from Vermeer and Michelangelo to Mondrian and to kinetic, abstract, and representational art, he convincingly explains how the color, motion, boundaries, and shapes in these works are each received by specific pathways and systems in the brain specially designed to interpret that particular aspect of the art, rather than by a single pathway interpreting all of the visual input.

The subject matter Zeki approaches here is no easy topic to explain clearly to others, especially since a great deal remains to be discovered in the field itself. Yet Zeki does a superb job of explaining. In my neurobiology class, I recently learned that if we bang our arm or rub our hands together, it is not really the body that is feeling the pain of banging or the sensation of rubbing, but rather the brain itself. After hearing this idea, I was very surprised and excited to see that Zeki had actually devoted all of chapter 3 to dispelling the "myth of the seeing eye". Here he specifically points out that a painter does not paint with her eye; she paints with her brain (13). I was especially pleased that Zeki made this point, since we so easily forget that the eye is merely the organ through which the brain receives filtered input from the outside world. While it is true that our ability to see depends on the eye and the brain working hand in hand, and damage in either one will ultimately affect the other, only the brain is capable of transforming the necessary input so that we are able to see the painting before us. The eye only serves to transmit these signals from the outside world to the brain. We tend to think that people see with their eyes because of well-meaning but incorrect figures of speech commonly spoken in society (13-14). For example, we may hear something like "she has a good eye for painting ocean scenes" or "she eyes the angle of the Golden Gate Bridge just right – look how well she can sketch every line and curve on paper". Hearing statements such as these often enough will eventually mislead us into thinking that it is only with the eye that one sees.

Another hotly debated issue within our class is whether the brain really does equal behavior. Up until I read this book, I had always been skeptical of that equation; but now, after reading it, I have come to believe that the brain does equal behavior. The reason is Zeki's belief that "the function of art and the function of the visual brain are one and the same" and that "art is an extension of the functions of the brain" (1). It is not the statement itself that has changed my opinion, but rather the way in which he goes about proving the statement in every chapter.

I was very surprised to discover that the retina of the eye is not connected to all of the cerebral cortex; rather, it is connected to only a portion of the cortex that is considered the "vision center", where images received by the brain are processed. However, it does not end there – that is only the beginning. This vision center is made up of smaller vision centers, each intricately connected to the others in some way. Each center is specialized to process a certain aspect of a visual image. The center Zeki labels "V1", for example, is considered the cortical retina or "seeing eye" of the brain, since it selectively redistributes information received from an outside image to more specialized centers for processing: information about color and wavelength would be sent to the area known as "V4" to be interpreted by the brain.
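
The division of labor described above can be pictured as a routing table. The toy sketch below (Python) is illustrative only and not from Zeki's book: V1 acts as a distributor that sends each attribute of an image to its own specialized center. The color-to-V4 assignment comes from the text; the other assignments are assumptions added for the example.

    # Toy sketch of V1 distributing image attributes to specialized centers.
    # "color" -> V4 is from the text above; "motion" -> V5 and "form" -> V3
    # are assumptions added for illustration.
    specialized_centers = {
        "color": "V4",
        "motion": "V5",
        "form": "V3",
    }

    def v1_distribute(image_attributes):
        """Route each attribute to its own center, rather than sending
        the whole image down a single pathway."""
        return {attr: specialized_centers.get(attr, "unmapped")
                for attr in image_attributes}

    print(v1_distribute(["color", "motion"]))   # {'color': 'V4', 'motion': 'V5'}

The point of the sketch is simply that the input is split by attribute at V1, so damage to one downstream center knocks out one attribute while leaving the others intact.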

It should be noted that while damage to V1 would most likely result in total blindness, damage to one of the more specialized centers would result in the inability of the brain to process the specific input relating to the damaged area. Zeki relates one experiment in which a sponge was presented to a woman who had damage in the "association" center located next to V1, but no actual damage in V1 itself. The woman was still able to see the sponge, but found herself incapable of understanding what it actually was. For me, this discovery was solid evidence that the brain equals behavior, because the woman's damaged brain led directly to her behavior: her inability to understand what object had been placed in front of her. If one little damaged spot in the brain could affect her behavior that drastically, then this demonstrates that, in general, whatever happens to the brain happens inevitably to behavior. A deficit in the brain equals a deficit in behavior as well.

Is not art, then, a behavior of the artist? The artistic brain yields artistic behavior. All artists' brains are similar, but only in the sense that they are all capable of exhibiting some type of artistic behavior. With this one exception in mind, I realize that all artists still possess brains that are very different from each other's. If this were not true, I do not believe we would see all the various kinds of abstract, kinetic, and other forms of modern art on display in museums today. Different artistic brains yield different artistic behaviors: different works of art (or even the ability to appreciate particular works of art). Thus, it only makes sense to conclude that non-artists also have brains that differ from artists', since they are incapable of producing any type of artwork; in other words, they exhibit different behavior from the actual artist. I think this is a key point that Zeki distinctly presents to the reader throughout most of the remaining chapters of the book.

What about people who are incapable of appreciating a particular kind of art? They too have different brains. For example, Zeki mentions that a person who is prosopagnosic will not appreciate portrait painting because she is incapable of recognizing faces, due to damage in the face recognition area of the brain. However, she can still enjoy looking at other kinds of art, since the other parts of the brain responsible for interpreting those kinds of art have not been affected; viewing different kinds of art activates different parts of the brain's vision center. The same is true for the artist herself: she is capable of painting only works whose elements the intact parts of her brain's vision center can process.

Although Zeki has no possible way of proving this theory, I am especially intrigued by the way in which he tries to defend his idea – and does it quite convincingly as well. Zeki devotes his last chapter entirely to a discussion of Monet's brain, actually questioning whether Monet's brain may have had some damage in its vision center. He comes to this conclusion after careful observation of Monet's paintings of the Rouen Cathedral. This is a shock to read at first, because who would ever glance at Monet's paintings and claim that there was a possible abnormality in his brain? However, Zeki hypothesizes that Monet might have been dyschromatopsic, or limited in his ability to see colors (210). As uncomfortable as this makes me, I fear that Zeki might be right, for he specifically notes how Monet failed to account for the varied lighting conditions in his paintings of the cathedral – significant because Monet painted the cathedral in all kinds of weather and lighting conditions. In each of the paintings, I observed how the brightness of the light depicted seemed to be the same throughout the whole picture. This is rather odd when given some thought.

One of my last questions about Zeki's book is how he would explain different artists managing to paint similar works of art. The answer is as simple as this: "different modes of painting make use of different cerebral systems" (215). I think the message he tries to communicate to the reader is that while different artists are capable of creating similar paintings, the paintings are never exactly alike, because each artist makes use of different visual pathways in the brain to create her own unique work of art. Here, I think, is another perfect example of how the brain equals behavior: it is again demonstrated that different artistic brains will create distinct works of art.

Overall, I think the book is deeply intriguing and engaging – it draws the reader in so intensely that she cannot break free until she reads the very last page. Zeki manages to bring to light many new ideas about the visual brain. He takes what little we do know about the brain and distinguishes myth from fact. It is interesting to note how much of the book consists of hypotheses proposed by Zeki, since there is still so much about the physiological workings of the brain that we have yet to discover. Nevertheless, I found it fun to read the book, compare the known facts to the theories, and make guesses as to what might actually be found true someday. This is a most delightful book, and I highly recommend it to anyone who has even the slightest interest in uncovering the mysterious links that exist between the brain and visual art.


Looking Out for Future Pain
Name: Luz Martin
Date: 2003-04-15 01:51:14
Link to this Comment: 5388



Biology 202
2003 Second Web Paper
On Serendip


Pain is a method used by the body to interpret the outside world. Our skin is covered with sensory neurons that are responsible for acquiring information about the body's surroundings (6). Some of the nerve endings involved in the pain-sensing process are called nociceptors (6). Most of the sensory receptors and nociceptors arise from cell bodies in an area near the spinal cord (6). Information from the sensory neurons is sent through intermediate neurons and passed on to motor neurons involved in physical movement, or sent to the brain (1). In the brain, the information is interpreted and behavioral and emotional reactions are created (6). The definition used by the International Association for the Study of Pain describes pain as a sensory or emotional interpretation produced when there is potential or actual tissue damage (2).

Adults are able to verbalize the intensity of their pain and can help monitor the effectiveness of treatment when there is damage to body tissue. How can adults interpret pain in infants who cannot verbalize their experience? What concerns should we have when treating tissue damage in babies? What about treating tissue damage in babies still inside the womb?

It has been noted that a newborn's sensory nerve cells have a greater response rate than an adult's (4). With such sensitive sensory nerve cells, the spinal response to a stimulus is also increased and lasts for a longer period of time than in an adult (4). These sensitive nerve cells also cover a larger portion of a newborn's skin than of an adult's (4). The sensory areas they cover are called receptive fields (4), which help the nervous system keep track of where a stimulus was received (4). With larger receptive fields, babies are unable to pinpoint the exact location of the stimulus (4).

Since newborns have very sensitive sensory nerves, the same response is produced to any stimulus regardless of its intensity (4). A newborn may react the same way to a pinch as to a soft touch (4), responding to non-harmful experiences as if they were potentially harmful (4).

Questions have been raised about the level of sensation the fetus itself undergoes when surgery is used to address fetal abnormalities (1). Such surgery involves opening the mother's uterus to perform corrections on the fetus (1). Once corrected, the fetus is returned to the uterus to complete the normal term of development (1). Fetal surgery is a way of increasing the survival rate of the fetus after birth (1): the abnormalities corrected, such as those producing respiratory failure, neurological damage, and heart failure, would otherwise have prevented the baby from living after birth (1).

The development of the nervous system may help determine the level of neural activity that can be interpreted as pain. The fetus is able to respond with reflex motions as early as 7.5 to 14 weeks (3). Although at 16-35 weeks the fetus displays patterns of reflexes, the spinal cord is not yet fully developed (3), and the responses observed are usually exaggerated (2). Although the fetus is able to respond to a stimulus, it needs the cortex as well as memory to experience pain (2). With memory, the fetus could interpret the sensations as pain in light of past experiences, which would lead to anxiety (2). Lloyd-Thomas and Fitzgerald suggest that the exaggerated response helps the fetus react to stimuli that it is not yet able to synthesize and respond to directly (2). The fetus displays receptor systems that have not yet matured (2). Without memory, the fetus is unable to interpret the stimulus as pain, but the fetal nervous system does respond to protect against harmful tissue damage (2).

Although the fetus and the newborn may not feel pain, interest in the issue has been raised by observations of long-term consequences for the body's pain systems. An injury experienced early in development may affect the response to pain in childhood as well as adulthood (4).

It has been observed in adults that when a sensory nerve is injured, the nociceptive (pain) system is altered (5). The pain experienced should last only for a period of time, until the tissue damage is corrected (5). When tissue damage also damages the sensory neuron, the nociceptor is no longer considered reliable, and the pain system ignores the damaged nociceptor's pain signals sent to the brain (4).

The only way pain can then be sensed in the region of the injured nociceptor is for a neighboring nerve cell that is still functional to monitor the area (4). This area stays on pain alert and remains sensitive to touch even after the tissue damage has been corrected (4).

Some suggest that precisely because the fetus may not feel pain, its sensory neurons are in danger of malfunctioning (4). The neurons may be permanently altered if surgery is performed without precautions to prevent damage to the sensory neurons (4). Disabling the nociceptors may result in the spreading of nerve terminals from healthy sensors, thereby altering the areas covered by each sensory nerve (4). The nervous system is left with an altered picture of what area pain is coming from (4), and oversensitive areas may develop as a result (4). These areas, once triggered by damage, may remain sensitive even after the damage is gone (4).

Pain is a sense that incorporates physical sensation as well as past experience. The experience of pain through emotions may affect how well a person can function in daily life. Considering ways to prevent pain receptor damage in developing nervous systems may help avoid future pain. There is no better time to catch a problem than before it begins.

References

1)Annual Reviews Medicine, Fetal Surgery, By Flake Alan W. and Michael R. Harrison; 1995

2)British Medical Journal, For Debate: Reflex responses do not necessarily signify pain. British Medical Journal; 1996 (28 September). By Lloyd-Thomas, Adrian R., and Maria Fitzgerald.

3)New England Journal Of Medicine, Pain and its Effect in the Human Neonate and Fetus. The New England Journal Of Medicine, Volume 317, Number 21: Pages 1321-1329, 19 November 1987. By K.J.S. Anand, M.B.B.S., D.Phil., And P.R. Hickey, M.D

4)Medical Research Council, The Birth of Pain. MRC News (London) Summer 1998: 20-23. By Fitzgerald M.

5)Richeimer Pain Medical Group, Understanding Nociceptive & Neuropathic Pain; December 2000.

6)The Association of the British Pharmaceutical Industry, The Anatomy of Pain.


Anorexia Nervosa: An issue of control
Name: Annabella
Date: 2003-04-15 03:05:09
Link to this Comment: 5389



Biology 202
2003 Second Web Paper
On Serendip

As medicine has progressed through the years, so have the avenues for diagnosing the various causes of many disorders. Recently there have been new discoveries about the disorder anorexia nervosa. Anorexia nervosa is a life-threatening eating disorder defined by a refusal to maintain body weight within 15 percent of an individual's minimal normal weight. (2) For instance, this means weighing less than about 102 pounds when 120 pounds is the minimal normal weight. Other essential features of this disorder include an intense fear of gaining weight, a distorted body image, and amenorrhea (absence of at least three consecutive menstrual cycles when otherwise expected to occur) in women. (1) Theories about the causes of anorexia nervosa include the psychological, biological, and environmental. This paper will discuss the question of the multiple origins of anorexia nervosa, and attempt to identify a common underlying cause.

Conservative estimates suggest that one-half to one percent of females in the U.S. develop anorexia. Because more than 90 percent of all those affected are adolescents and young women, the disorder can be characterized primarily as a women's illness. It should be noted, however, that children as young as 7 have been diagnosed, and that women of 50, 60, 70, and even 80 fit the diagnosis. (5) Like all eating disorders, it tends to occur in pre- or post-puberty, but can develop at any life change. One reason younger women are particularly vulnerable to eating disorders is their tendency to go on strict diets to achieve an "ideal" figure. This obsessive dieting behavior reflects a great deal of today's societal pressure to be thin, which is seen in advertising and the media. Others especially at risk for eating disorders include athletes, actors, and models for whom thinness has become a professional requirement. (3)

The classic anorexic patient (although more and more variations are being seen) is an adolescent girl who is bright, does well in school, and is not objectively fat. She may be a few pounds overweight and begins to diet. She comes to relish the feeling of control that dieting gives her and refuses to stop. However grotesquely thin she may appear to others, she sees herself as fat. To combat the slowing of metabolism that accompanies starvation, severely restrictive eating is combined with excessive, frenetic physical activity. (7) The term anorexia, or absence of appetite, is a misnomer because patients are often hungry and can be quite preoccupied with food. They may cook elaborate meals for others, hoard food, or establish intricate rituals around the food they do eat. These behaviors resemble those seen in some patients with obsessive-compulsive disorders, and may extend outside the arena of eating. Generally, patients with anorexia nervosa are very secretive and defensive about their eating habits and may deny any problems when confronted. They may dress in oversized clothes (in an effort to hide their wasted bodies) or refuse to have meals with others. (3) The hallmark of anorexia nervosa is denial and preoccupation with food and weight. In fact, all eating disorders share this trait, including binge eating disorder and compulsive eating. One of the most frightening aspects of the disorder is that people with anorexia continue to think they look fat, even when they are bone-thin. Their nails and hair become brittle, and their skin may become dry and yellow. Depression is common in patients suffering from this disorder. People with anorexia often complain of feeling cold (hypothermia) because their body temperature drops. They may develop long, fine hair on their bodies as a way of trying to conserve heat. Food and weight become obsessions as people with this illness constantly think about their next encounter with food. (6)

As previously stated, the causes of anorexia nervosa can be biological, psychological, or environmental. One psychological theory is that food intake and weight are areas that a young woman can control in a life otherwise dictated by an overly involved family, which may include a parent who is unduly concerned with weight or appearance. (8) From this stems the idea that one cause of anorexia may be the development of control issues. Many anorexics admit that they began the downward spiral towards anorexia when they started to perceive that they had lost control of their lives. This is especially common in college students who perceive that they have no control over the direction of their lives. Take, for example, the case of Betsy, a hard-working college student in her second year. She is anxious about her future. Like her mother, Betsy gets depressed. She goes through periods when she feels very discouraged about life in general. She also is uncertain about how she looks, though her friends think she looks great. (5) In an interview conducted with her psychologist, she stated, "I began to stop eating as a way to gain some control over my life. I was shocked at the measure of relief that it gave me to have control of this tiny thing. From then on controlling what I ate became an obsession, a very twisted obsession." (5) It seems, then, that one psychological cause of anorexia is a perceived lack of control in one's life. Through controlling one's food intake, anorexics reaffirm that they have control. Can all of the psychological causes of anorexia nervosa be traced to a control issue?

Since the symptoms of anorexia often appear around puberty, another hypothesis is that the girl is afraid of becoming a woman, and therefore diets away all signs of puberty (i.e., breasts, hips, menses). The full-time preoccupation with weight also allows her to avoid adolescent social and sexual concerns and potential conflicts with parents. (8) In the case of the pubescent girl, it is possible to trace the cause of anorexia past the initial preoccupation with weight to deeper control issues. The pubescent girl, in the throes of fluctuating hormones and massive physical change, would fit the bill of a person who feels she has lost control of her world. Her body is turning against her; what better way to reassert control than to regulate its intake of food and thus its shape? Puberty is also the time in which a girl begins to identify herself as a woman, both mentally and physically, and looks to her environment for female role models to copy. This leads to another cause of anorexia: the media. The role of the American cultural ideal of thinness is thought to contribute, at least by encouraging initial dieting, but the scope of its influence is unclear. On any given day, 25% of American men and 45% of American women are actively dieting; moreover, children are starting to diet as early as first grade. Anorexia is now being seen in nonwestern countries that receive American television, so it may be that a cultural ideal of thinness is a potent catalyst. (5)

We have seen that there are various psychological instigators for anorexia, most of which can be traced back to issues of control. However, many researchers propose that anorexia can also have a biological origin. Most advocates of the biological cause believe that anorexia is genetic (i.e., passed down from one's parents). (7) According to a recent study, mothers and sisters of people with anorexia or bulimia are at higher risk of having one of these disorders: compared to the rest of the population, they have a risk for anorexia that is 11 times higher and a risk for bulimia that is 4 times higher. (4) At this point one might ask, is it the similar genes or the similar environment that is causing anorexia? As in most human disease, the answer appears to be "both." Better understanding of the genetic roots of eating disorders will make it easier to identify and understand the environmental factors. Some twin studies have suggested that anywhere from 50 to 90% of the risk for anorexia is genetic. (4) To date, medical science has not found a specific gene or genes that contribute to anorexia and bulimia. But this doesn't mean medical science has no clues. Genes that affect appetite may be involved: such genes would regulate the feeling of "fullness" after eating, and people with anorexia may feel full early, and so suppress their appetites completely. (7)

There is also the issue of a common environment that might explain the higher rates of anorexia nervosa among mothers and sisters. Family dynamics must come strongly into play. The boundaries between generations tend to be blurred in the families of persons with eating disorders; that is, parents and children are constantly involved in each other's problems. Other researchers point to early events in family life that cause a "paralyzing sense of ineffectiveness." (5) In both cases, a perceived lack of control in one's life (due to overbearing parents, or to a sense of ineffectiveness) can be associated with the initial issues. At this point the question is posed: since there is a link between family members and anorexia, why do only certain members of the family have anorexia and not the entire family? The answer lies in the same genetic and environmental differences that explain differences in our appearance and health. Some family members will inherit genes that predispose to eating disorders, and others will not. Some family members will be exposed to environmental agents that trigger disease, and others will not. (5)

In this paper the question has been raised: what is the cause of anorexia nervosa? This question led to the idea that part of the cause could be a perceived lack of control. To examine this, we looked at the three main ideas behind the causation of anorexia. Most physicians believe that anorexia can trace its origins to psychological, genetic, or environmental issues. In both the psychological and environmental cases it was demonstrated that, upon closer look, there was evidence suggesting a deeper problem of perceived loss of control. Consider again the pubescent girl who finds herself going through many changes and becomes anorexic. At first glance the anorexia was caused by a fear of becoming a woman. But is this fear of becoming a woman not indicative of a greater fear of loss of control, not only of one's body (which is changing daily), but also of one's mind (which is at the beck and call of hormones)? In this case it is easy to link the cause of anorexia back to a control issue. Through the same method we can trace control issues back to the environmental cause. The family environment was used as an explanation of why mothers and sisters were more highly susceptible to anorexia. However, that same theory also suggested that early events in communal family life (e.g., the death of a parent) could cause a "paralyzing sense of ineffectiveness" that in turn caused anorexia. This again shows that perceived lack of control was culpable as part of the cause of anorexia. Many conditions, spanning from the genetic to the psychological, help cause anorexia, and in most cases the common thread of control issues can be found in each. Future breakthroughs might identify even more causes of anorexia, and hopefully answer the genetic question. Yet no matter how much we research, there will always be a part of the human mind that will remain a mystery.


World Wide Web Resources
1)Anorexia Nervosa, a site that gives a thorough explanation of what anorexia nervosa is, and its symptoms.
2)Medical Student Conducted Studies, an interesting site that gives a listing of studies that medical students are conducting, along with their results. In this paper it helped to clearly outline the effects of anorexia.
3)Causes of Anorexia, this site lists the different causes of anorexia, and their effects on the body.
4)Entrez-PubMed, a wonderful site that lists medical studies conducted by the government.
5)Hospital Practice, a site that lists articles written by various doctors. This is an extremely informative site that helped identify the broadly different causes of anorexia nervosa.
6)Link Between Depression and other Mental Illness, an article that discusses the links between mental illness and depression. It addresses a very wide variety of issues.
7)Genetics and Eating Disorders, this site looks at the link between genetics and eating disorders. It also looks at the different medical studies that have been conducted on the topic.
8)On the Teen Scene, this site looks at the effect of anorexia nervosa on teenagers. It looks at questions as to why the disorder hits that one age group so hard, and at its ramifications.


Postpartum Depression
Name: Nupur Chau
Date: 2003-04-15 03:23:35
Link to this Comment: 5391



Biology 202
2003 Second Web Paper
On Serendip

Most of us were appalled when, in 2001, Andrea Yates, a Texas mother, was accused of drowning her five children (aged seven, five, three, two, and six months) in her bathtub. The idea of a mother drowning all of her children puzzled the nation. Her attorney argued that it was Andrea Yates' untreated postpartum depression, which evolved into postpartum psychosis, that caused her horrific actions (1). He also argued that Andrea Yates suffered from postpartum depression after the birth of her fourth child, and that she attempted suicide twice because of this very disorder (1). What is postpartum depression, and how can it cause a mother to harm her very own children, altering her behavior towards them in such a negative way?
One in ten women experiences postpartum depression (2), a condition that occurs in women after childbirth and often goes undiagnosed. One reason for the lack of diagnosis is a milder, more common form of depression after childbirth, often known as the "baby blues". The baby blues occur in mothers three to five days after childbirth (2) and may last anywhere from a couple of hours to a couple of weeks (4). The symptoms include

* mild sadness
* tearfulness
* anxiety
* irritability, often for no clear reason
* fluctuating moods
* increased sensitivity
* fatigue (2)

The treatment for the baby blues is frequent naps, a proper diet, and plenty of support from partners, family, and friends (3). Generally, the baby blues subside without any serious treatment. However, they may evolve into postpartum depression. One study found a link between postpartum depression and the baby blues: of the women diagnosed with postpartum depression six weeks after delivery, two-thirds had experienced the baby blues (2).
Postpartum depression is more serious than the baby blues, with mothers experiencing more severe symptoms for a longer duration. Almost ten percent of recent mothers experience postpartum depression (3), which can occur anytime within the first year after childbirth (3). The majority of these women have the symptoms for over six months (2). The symptoms include

* Constant fatigue
* Lack of joy in life
* A sense of emotional numbness or feeling trapped
* Withdrawal from family and friends
* Lack of concern for yourself or your baby
* Severe insomnia
* Excessive concern for your baby
* Loss of sexual interest or responsiveness
* A strong sense of failure and inadequacy
* Severe mood swings
* High expectations and over demanding attitude
* Difficulty making sense of things (3)

Consequently, the treatment for postpartum depression is more intense than that for the baby blues. Among the many treatments, mothers may undergo intense counseling, take antidepressants, or even receive hormone therapy (3).
In rare instances, postpartum psychosis is diagnosed (one- to two-tenths of a percent of new mothers experience it (2)). When experiencing postpartum psychosis, new mothers can have auditory hallucinations, as well as delusions and visual hallucinations (4), making them lose their sense of what is real and what is false. Treatment is imperative and often requires immediate hospitalization.
What is it about childbirth that leads to depression, whether a mild or a severe form, and how does it affect the brain? The causes of postpartum depression are vague and have yet to be defined by researchers, but it is thought that hormonal changes may trigger the depression. During the nine months of pregnancy, a woman's levels of estrogen and progesterone increase significantly (4); however, within the first twenty-four hours after childbirth, the hormone levels drop dramatically, back to the levels they were at before pregnancy (4). It is this dramatic hormonal change that is believed to cause postpartum depression among new mothers. The change can be compared to the "mood swings" a woman experiences during her menstrual period, a time when her hormone levels are slightly irregular (1). Additionally, thyroid levels can also drop after childbirth, and thus may be a factor in postpartum depression (4).
Consequently, the way to prevent another Andrea Yates from going too far is to treat postpartum depression seriously. Because the baby blues are so common, postpartum depression and psychosis are often misdiagnosed as the baby blues, or even more frequently, not diagnosed at all. Postpartum depression must be taken seriously.


References


1)Study Works! Online: What is Postpartum Depression?

2)Postpartum Depression and Caring for Your Baby

3) Postpartum Coping: the Blues and Depression

4)Frequently Asked Questions about Postpartum Depression


Creativity and Bipolar Disorder
Name: Nicole Meg
Date: 2003-04-15 03:27:43
Link to this Comment: 5392



Biology 202
2003 Second Web Paper
On Serendip

History has always held a place for the "mad genius", the kind who, in a bout of euphoric fervor, rattles off revolutionary ideas incomprehensible to the general population, yet invaluable to that population's evolution into a better-adapted species over time. Is this link between creativity and mental illness one of coincidence, or are the two actually related? If related, does heightened creative behavior alter the brain's neurochemistry such that one becomes more prone to a mental illness like bipolar disorder? Does bipolar disorder cause alterations in the brain's neurochemistry that increase creative behavior through an elevated capacity for thought and expression? Or is the link the result of some third factor that causes both effects?

Centuries of literature and innumerable studies have supported strong cases relating creativity--particularly in the arts, music, and literature--to bipolar disorder. Both creativity and bipolar disorder can be attributed to genetic predisposition and environmental influences. Biographical studies, diagnostic and psychological studies, and family studies each provide a different angle for examining this relationship.

A 1949 study of 113 German artists, writers, architects, and composers was one of the first to undertake an extensive, in-depth investigation of both artists and their relatives. Although two-thirds of the 113 artists and writers were "psychically normal," there were more suicides and "insane and neurotic" individuals in the artistic group than would be expected in the general population, with the highest rates of psychiatric abnormality found in poets (50%) and musicians (38%). (1) Many similar studies have revealed this disproportionate occurrence of mental illness, specifically bipolar disorder, in artistic and creative people, including a recent study of individuals over a thirty-year period (1960 to 1990). Overall, when comparing individuals in the creative arts with those in other professions (such as businessmen, scientists, and public officials), the artistic group showed two to three times the rate of psychosis, suicide attempts, mood disorders, and substance abuse. (1)

Another recent study was the first to undertake scientific diagnostic inquiries into the relationship between creativity and psychopathology in living writers. Eighty percent of the study sample met formal diagnostic criteria for a major mood disorder, versus thirty percent of the control sample. The statistical difference between these two rates is highly significant, with p<.001, meaning that the odds of the difference occurring by chance alone are less than one in a thousand. Of particular interest, almost one-half of the creative writers met the diagnostic criteria for full-blown manic-depressive illness. (1) This is not to say that the majority of artists are bipolar, but rather that there is a considerably higher incidence of bipolar disorder among artists than among the general population.
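
To make concrete what "highly significant" means here, the short sketch below (Python, using the scipy library) runs Fisher's exact test on the two reported rates. The group sizes - 30 writers and 30 controls - are an assumption for illustration only; the source does not state them.

    # Rough check of the p<.001 claim above. The 80% and 30% rates come from
    # the text; the group sizes (30 per group) are assumptions for illustration.
    from scipy.stats import fisher_exact

    table = [[24, 6],    # writers: 24 of an assumed 30 (80%) met mood-disorder criteria
             [9, 21]]    # controls: 9 of an assumed 30 (30%) met the criteria

    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(p_value)       # comes out well below .001

Under these assumed group sizes the test reproduces the "less than one in a thousand" conclusion; larger samples with the same rates would only shrink the p-value further.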

Collectively, these studies and numerous others have clinically supported the existence of a link between bipolar disorder and creativity. Now the question arises: is bipolar disorder the result of above-average creativity, is above-average creativity the result of bipolar disorder, or are the two the result of some third factor that causes both effects? From the sources I have encountered, I believe a stronger case is made for the last, although it is impossible to answer that question scientifically or psychologically at this time.

Predisposition to bipolar disorder is genetically inherited, and current studies suggest the same for predisposition to creativity; but is there a common genetic factor that determines the expression of both traits? If there were, neither creativity nor bipolar disorder would implicitly cause the other. A recent study hypothesized that a genetic vulnerability to manic-depressive illness would be accompanied by a predisposition to creativity, which, according to the investigators, might be more prominent among close relatives of manic-depressive patients than among the patients themselves. Significantly higher combined scores on a creativity assessment test were observed among the manic-depressive patients and their normal first-degree relatives than among the control subjects, suggesting a possible genetic link between the two characteristics, as both are prevalent in families with a history of bipolar disorder and not as evident in control families. (1) A wide variety of artistic and creative talents, ranging from music to art to mathematics, was exhibited among the family members of the bipolar patients as well. The varied manifestations of creativity within the same family suggest that whatever is transmitted within families is a general factor that predisposes them to a creative mentality, rather than a specific giftedness in a single area. The coexistence of creativity and manic depression, whether expressed in bipolar patients or unexpressed in their predisposed family members, suggests that a third factor, as yet unidentified, may be orchestrating the expression of the two.

Assuming both creativity and bipolar disorder, or at least a predisposition to the illness, are expressed simultaneously, what accounts for the heightened creativity seen upon onset of bipolar disorder? A deficit in normal information-processing could be manifested in a severe behavioral disorder, but it could also favor creative associations between units of information, or a propensity toward innovation and originality. (2) The altered neurological structure and functioning in the frontal lobe, prefrontal cortex, hippocampus, hypothalamus, and cerebellum associated with bipolar disorder may also allow for more creative thought.

People with bipolar mood disorders tend to be more emotionally reactive, which gives them greater sensitivity and acuteness. This, coupled with a lack of inhibition due to compromised frontal lobe processes, permits them unrestrained and unconventional forms of expression, less limited by accepted norms and customs. They are more open to experimentation and risk-taking behavior and, as a consequence, more assertive and resourceful than the mean. (2) (3) Characteristics of bipolar disorder, such as lowered inhibition, allow for freer expression of previously contained ideas, and the constant flux between manic and depressive states also gives an unusual, kaleidoscopic perspective on the world. All of these factors can account for increased creativity once the illness erupts. (5)

The current model supports the existence of a relationship between creativity and bipolar disorder as coexisting effects of some third factor. Uncovering the origin of this relationship will require continued studies, particularly those implementing brain scans and genetic isolation techniques, aimed at identifying the mysterious third factor that would link the two traits together. (4) The new equipment and tests available, such as PET scans, magnetic resonance imaging, and gene mapping, have complicated the process by offering new ways to explain bipolar disorder as a possible collection of disorders presenting closely similar symptoms. Hence, the third factor may actually be a combination of multiple factors, such as environmental insults to fetal development, hormonal imbalances in the womb, and inordinate stress during development, in addition to genetic factors. (6) When it is determined which of these factors, acting alone or in various combinations, constitute the mysterious third factor, the origin of the relationship between creativity and bipolar disorder will be unveiled.


References

1) Jamison, Kay Redfield. Touched with Fire. New York: Simon & Schuster, 1993.

2) Journal of Memetics, an article addressing creativity, evolution and mental illness.

3)Bipolar Disorder, an educational resource about bipolar disorder.

4) Manic-Depressive & Depressive Association of Boston, an article discussing the genetics of bipolar disorder.

5) Diagnostic and Statistical Manual of Mental Disorders, an online version of the resource book.

6) From Neurons to Neighborhoods, a book that addresses early development of the brain.


Phantom Limbs: What is the Cause of Sensation?
Name: Melissa Os
Date: 2003-04-15 03:36:42
Link to this Comment: 5393



Biology 202
2003 Second Web Paper
On Serendip

Do amputee patients actually feel pain in missing limbs? If so, how and why? These are some of the questions that have been asked for many years. One particular case study displays this phenomenon. In 1983 a man named Aryee lost his right arm in an accident during a storm at sea. An experiment was done on him, with the following results related in this dialogue between patient and observer:
'-"See if you can reach out and grab this cup in your right hand. What are you feeling now?
-I feel my fingers clasping the cup.
-Okay try it again." (As the patient tries to reach for the cup, the doctor pulls it farther away)
-Ouch! Why did you do that?
-Do what?
-It felt like you ripped the cup right out of my fingers"' (2) .
This dialogue between Aryee and the experimenter describes a sensation common to many amputee patients. Although they are missing an arm, finger, or leg, they have sensations like Aryee's, including tingling, itching, and even pain where the limb used to be. These patients feel sensations "which seem to emanate from the amputated part of the limb" (4). Phantom limb is the name for this type of phenomenon.

Phantom limb is not a newly discovered occurrence among amputees; it has persisted as a medical mystery for hundreds of years. Even the words associated with the sensation, "phantom" and "phenomenon", suggest that it is somehow out of the ordinary and without a rational explanation. Nonetheless, there is a long documented history of phantom limbs. Silas Weir Mitchell, who treated Civil War amputees at a hospital in Philadelphia, wrote the first clinical documents, in 1872, on patients who experienced feeling where a missing limb used to be located (2), (5). Today amputees are still experiencing this phenomenon. The sensation of phantom limbs "occurs in 95-100 percent of amputees who lose an arm or leg" (6). At first people merely feel sensations coming from the amputated limb; in time, however, these develop into stabbing, burning, or cramping pains (4). In addition, temperature and texture can be felt, such as warmth, cold, and rough surfaces (6). While the phantom limb is nowadays no longer considered a myth but a legitimate, medically recognized sensation, the source of these sensations is still a topic of much debate. There are many theories that neurobiologists argue explain phantom limbs. Two of these theories are based on two differing concepts of the brain: one that the brain is hardwired, the other that the brain, particularly the cortex, can be reorganized.

There was a time when the belief was held in the scientific community that adult brains lost their plasticity after adolescence (3). The idea was that there was no capacity for change in the cortex, but that the brain is hardwired with particular functions that continue despite the fact that a limb might be missing. Michael Merzenich supported this theory with research on monkeys. He amputated the index finger of an adult monkey and recorded the signals that reached the cortical map (2). In the somatosensory cortex there is a representation of the body called the homunculus, which is like a map of the body (2). The experiment resulted in the discovery that neurons in the index finger region of the homunculus fired whenever the fingers next to the amputated one were touched (2). In this experiment there was no evidence for neuronal growth, but rather "unmasking [which] sends new impulses to the previously empty region" (2): existing axon branches are uncovered at some point after the amputation and continue to operate. This concept is linked to the idea that body image, or perception of the body, is genetically hardwired in the brain (1), and would thus be the cause of phantom limbs.

The other theory about the causation of phantom limbs is based on a discovery by T.P. Pons. In his experiment on the Silver Spring monkeys, he found that the face region of the cortex took over the cortical territory of the amputated arm. There is thus a reorganization of the brain, in which axonal branches sprout from the facial cortex into the amputated limb's area (2). Sensations in the phantom limb can simultaneously be felt in the face. This may mean that the physical sensations the patients experience are actually processed by the face region but attributed to the missing limb. "The brain has reorganized itself after the injury [amputation] such that neurons that were responsive to the missing inputs become responsive to remaining inputs" (3). For Pons, and for the neurobiologists who followed this line of thought, the brain could not be hardwired. Instead the experiment suggests that if the cortex can change and adapt in such a way, it cannot possibly be as set in stone as once thought.

Phantom limbs are an intriguing concept. The idea that the body can feel what is not there has been a much debated and analyzed part of the study of the human nervous system. Although many theories have been tested and debated, the most probable seems to be that the brain reorganizes itself to deal with the change in the body. Given all that the brain can do and does do, it seems impossible for something like phantom limbs to happen in a hardwired brain; it is far more likely that as the body changes, the brain changes as well. The question that remains, and that may require further research, is why the face region of the cortex, rather than some other region, takes over the amputated limb's territory.

References


1) Biology Articles, paper on differing theories.

2) Biology Homepage for Macalester, discussion of phantom limbs.

3) College Biology Page, information on reorganization of cortex.

4) Harvard Biology page, article on phantom limbs.

5) Ramachandran, Vilayanur. "Phantom Limbs and Neural Plasticity." Neurological Review, March 2000: 317-320.

6) MIT Phantom Limb Page.


Your Brain, Your Enemy
Name: Marissa Li
Date: 2003-04-15 04:10:05
Link to this Comment: 5394

As ridiculous as it seems, sometimes you read someone's online profile and they say something profound. A carelessly placed quote prompted a debate in my brain that stirred up some ample paranoia. "The greatest mistake you can make in life is to be continually fearing you will make one." Thank you to the friend from high school whose personal AIM profile gave me a topic for my neurobiology paper: PARANOIA.
If it has been confirmed that brain equals behavior, then why don't we fear our own thought processes? Persons with paranoia disorder are not aware that they are in fear of their own brains, but in some respect, fear of oneself and of what one's brain can create is exactly what persons with paranoia disorder experience. Everyone experiences small doses and bouts of paranoia on nearly a daily basis, but not everyone lives under its effects. Those with paranoia disorder deal with a constant nagging that they cannot control because it tends to control them; hence, your brain as your enemy. Though the causes of paranoia are not clearly defined in either the social or the medical fields, the obvious truth is that paranoia stems from the brain and the nervous system, causing persons to be "highly suspicious of other people" (4). According to studies, paranoia stems from several possible sources: "Potential factors may be genetics, neurological abnormalities, [and] changes in brain chemistry. Acute, or short-term paranoia may occur in some individuals overwhelmed by stress" (4).

In terms of genetics, paranoia is not strictly hereditary; however, it tends to occur in families whose members have schizophrenia or other mental disorders (6). Socially speaking, paranoia appears to be passed down from parent to child through sheer exposure and environment. If certain personality traits are innate within a person, then the possibility of a genetic inclination toward paranoia does not appear far off base. This, of course, stems from the discussion of whether personality is developed or innate. In almost everything somebody does, his or her personality comes through. The question of nature versus nurture starts at the very root of the physical structure, straight from the brain and the nervous system, the decided director of behavior.

Biologically, some studies of schizophrenia and other psychoses have shown actual irregularities in the composition and functioning of a paranoid brain. "The search [for abnormal brain chemistry] has become very complex, as more and more of the chemical substances that carry messages from one nerve cell to another—the neurotransmitters—have been discovered" (1). Many examinations of the paranoid brain have shown irregularities in the firing of neural circuits and decreased activity in the prefrontal cortex, causing what is assumed to be an "impaired ability to judge whether their [a person's] fears are rational" (6). Does simple misfiring cause paranoia disorder, or just paranoia itself, which is then perpetuated by a brain that insists upon creating fear within the individual? If everyone's neurotransmitters fall out of place now and then, what defines true paranoia? Is it created through an education of fear and insecurity, or is it completely biological, and then, of course, is biology itself affected by one's surroundings? That paranoia is created in the brain and nervous system is a given; the question, however, is what consumes the brain so much that it is forced to create, or innately harbor, a personality disorder.

Stress is another external factor, akin to the environment's effect on the persistence of paranoia disorder. Paranoia tends to be "more prevalent among immigrants, prisoners of war, and others undergoing severe stress" (6). This is usually seen as a cause of more "acute," or temporary, paranoia symptoms, but it is still a convincing argument for nurture as the cause of paranoia disorder.

Paranoia is not easily curable. Because sufferers of their own internal fears are so self-absorbed, they are "immune to reason" (5). They are also difficult to treat "because the person may be suspicious of the doctor" (4). Though medications and therapy are common attempted remedies for those with paranoia disorder, if a person cannot control their own mind, then how can they be expected to change what their own mind is doing to them? Because a paranoid personality has to deal with its own issues on a constant basis, a person suffering from paranoia automatically gets sucked into isolation.

The basic symptoms of paranoia are "concern that other people have hidden motives, the expectation of being exploited by others, an inability to collaborate, a poor self image, social isolation, detachment, and hostility" (2). Everyone has their moments of each trait, but the extent to which one harbors fears of exploitation, detachment, and the rest is what distinguishes actual paranoia. Regardless of whether paranoia is a biological, chemical, innate, external, or stress-induced condition, in any form it causes a human being to fear their own control and their own brain. Persons with paranoia believe within themselves that they have control, and yet they are the ones who force themselves to become "aware" of their surroundings and insecure around all those surrounding them.


References

1) On the Couch: Faces of Paranoia

2) Paranoia

3) Paranoid Personality Disorder

4) Paranoid Personality Disorder

5) Self Protection or Delusion? The Many Varieties of Paranoia

6) Useful Information on Paranoia


Men are from Mars, Women are from Venus -- Brain a
Name: Arunjot Si
Date: 2003-04-15 05:51:18
Link to this Comment: 5395



Biology 202
2003 Second Web Paper
On Serendip

If we were to examine a high school calculus classroom or the staff at an engineering program of a college or university, chances are that the male to female ratio would be significantly skewed. Why are women and men so different in their choices and behavior? The brunt of popular opinion focuses on the environmental cues that lead to our distinct behaviors. But is there also an innate biological basis to the choices and differing abilities between men and women? Cognitive functioning or brain processing differences in the two genders has been a point of interest and contention for many years. The purpose of this essay is to explore if neuroanatomical and genetic differences between males and females play a role in the development of "gender-specific" behaviors, perceived intellectual strengths and professional choices.

Equality regardless of gender or creed is an axiom that is crucial to our modern day society. And yet, even in the 21st century, the number of women in certain "male dominated" professions has remained fairly unchanged. Many social theorists believe that women are discouraged from such professions and that, given an unbiased, level playing field, demand for these professions would be identical for both males and females. Mary Pipher, a psychotherapist for adolescent females, writes, "With girls... their success is attributed to good luck or hard work and failure to lack of ability, with every failure, girls' confidence is eroded. All this works in subtle ways to stop girls from wanting to be astronauts and brain surgeons. Girls can't say why they ditch their dreams, they just 'mysteriously' lose interest" (10). Experiments have shown that women perform better when given tests that they believe are unbiased toward either gender. However, it may be naïve to accept Pipher's statement without studying innate male and female differences.

Contrary to popular belief, gender and anatomical sex refer to two distinct and separate constructs, as each develops at different times and in different parts of the body. John Money coined the term "gendermap" for the phenomenon that codes for masculinity or femininity (1). At a very early age, and through an interaction of both nature and nurture, this gendermap imprint is established. What makes gender identification and sex so frequently parallel to each other is that gendermap development is notably also induced by hormones that emanate from the developing fetus (1).

Behavioral Differences:

Though there are many similarities in the cognitive abilities of men and women, there are also discernible differences. For the most part, the behavioral differences between the sexes have more to do with patterns of ability than with overall intellectual capacity (3). For one, attention and perception differ early on. Baby girls have been noted to gaze longer at objects than baby boys; later, girls rely on landmarks and memory for guidance. Boys, on the other hand, have better visual-spatial abilities, such as aiming at stationary or moving targets and detecting minor movements in their visual fields. The fact that males perform better in navigation fits the theory that, evolutionarily, many of these abilities would have been important for survival in hunter-gatherer societies, where males navigated unfamiliar terrain while hunting and females foraged nearby areas gathering food (3). Another difference is verbal ability. Women have repeatedly been shown to excel in language and in tasks that involve manual dexterity and perceptual speed, such as visually identifying matching items. Men appear to have an advantage in tasks requiring quantitative and reasoning abilities and excel in math as well as science (14).

Neuroanatomical Differences:

There are epidemiological suggestions that neuroanatomic differences may contribute to the cognitive functioning of males and females, although the literature is by no means conclusive. While it would be ethically suspect to draw firm conclusions from observational anatomic research, it is useful to distill the anatomic differences cited in the literature to date.
Comparison of size shows that the male brain is on average 10% larger than the female brain, although women usually have a larger percentage of information-processing gray matter. A greater proportion of gray matter suggests a greater processing capacity, which is why the belief that greater head size indicates greater intelligence is invalid in this instance. Women, albeit with smaller brains, have more efficient ones, which helps explain why the sexes score similarly on intelligence tests (9). Magnetic Resonance Imaging has shown that male brains contain more white matter and cerebrospinal fluid than female brains, which may contribute toward better spatial and geometrical capability (11).
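
To see how these two observations can offset each other, consider a purely illustrative calculation (the volumes and percentages below are hypothetical, chosen only to make the arithmetic concrete): a 1,400 cm3 brain that is 40% gray matter contains 1,400 x 0.40 = 560 cm3 of gray matter, while a brain 10% smaller (1,260 cm3) that is 45% gray matter contains 1,260 x 0.45 = 567 cm3. On such numbers the smaller brain has just as much information-processing tissue as the larger one, consistent with the similar intelligence scores.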

Further neuroanatomical differences between the sexes appear in the hippocampus, hypothalamus, and corpus callosum. The right cerebral hemisphere is larger in males, possibly contributing to the aggressiveness typical of male behavior. One test that supports this asymmetry is that male rats given testosterone (the principal male sex hormone) develop a thicker right hemisphere (2). In females, the cerebral hemispheres are symmetrical, with both functioning equally in the processing of speech. This higher level of interchangeability between hemispheres may relate to females' greater language prowess (12). For all these dissimilarities, it is important to keep in mind that the published differences above represent only gross averages of study populations. They cannot and should not be used to prove causality directly, but rather point to a potential avenue for further exploration.

Genetic and Physiological Differences:

The genetic makeup of individuals tends to dictate physiological differences. An individual with an extra Y chromosome, an XYY instead of an XY genotype, will not only have a different phenotype but may be more aggressive due to the increased "maleness." The XYY syndrome raises another intricate issue: the intersection of criminology and behavioral genetics, since XYY subjects may be more violent. Adoption and twin studies also show a genetic linkage to certain behavior. Identical twins are genetically identical, and because of similarities in their criminal behavior, it is suspected that such behavior is genetically linked (15).

Infants have been shown to have differences in behavior at a young and tender age, preceding much environmental influence. One study reported that the least sensitive female infant may be more sensitive than even the most sensitive male. Female infants appear more sensitive to noise and are also more social. "They are more inclined to "gurgle" at people and to recognize familiar faces than baby boys" (8). The behavioral differences between infants of different sexes appear very early in life, indicating that the mechanism controlling these behavioral patterns is innate and not learned from society.

The difference between the sexes may begin even earlier than the cradle, in perinatal existence. In a study by Emese Nagy, the heart rates of 99 newborns were measured with simultaneous video recording of their behavior. Alert newborns showed differences in heart rate similar to those found in adults: males had a significantly lower baseline heart rate than females, suggesting that heart rate is gender dependent from birth onward (6).

Hormones also seem to play a major role in sexual differences. Sexual dimorphism, according to some studies, relies on the presence of androgens such as testosterone; if these are absent, a female gendermap develops. Others suggest that the presence of estrogen influences female gender. In rats, there is a region of the brain called the SDN (sexually dimorphic nucleus) that is larger in males, and giving testosterone can actually increase the size of the SDN (2).

An intriguing issue is that estrogen and progesterone levels are much higher in women and fluctuate across the lifespan: at puberty, seasonally, during the menstrual cycle, and after menopause. Researchers have examined the contributions that these hormones make to behavior and learning through various tests. Females who have been exposed to high levels of testosterone, as in congenital adrenal hyperplasia, demonstrate more aggressive behavior as well as improved spatial skills, behavioral strengths of the opposite sex (14). Hormone replacement therapy is actually used in the medical field today for post-menopausal women; increasing certain hormone levels improves their attention, similar to the situation of adolescent girls with dyslexia, whose reading abilities improve as their estrogen levels rise during puberty.

Environment:

Although there is much evidence suggesting that gender differences are based on natural and physiological distinctions in our bodies, environmental and social influences certainly play a major role as well. From day one, starting with their initial noisy entrance into this world, babies are viewed according to their gender, even being color-coded right away. Oh, it's a girl! God forbid this child ends up wearing blue; she may have a social identity crisis. With their clothes, toys, and even rooms decorated accordingly, it would be difficult to remain untouched by the social stereotyping that goes on throughout life. Society plays a role in initiating and perpetuating the role-assignments of the genders, influencing decision-making and social behavior.

There are cases that bear on the nature versus nurture debate, however, such as the infamous Joan/John case. In 1967, a twin boy suffered an accidental castration, and a medical decision was made to raise the child as a girl with the help of surgery. On the strength of this apparent success, many sex reassignments were performed over the next two decades. Twenty-five years later, the medical success story came back to haunt those same sexologists when Joan, though initially unaware of the facts, could not adjust to the change and chose to return to her former sex (7).

Conclusion:

In a society where the exploration of innate variation is a topic of ethical controversy, it is only prudent to approach any data or discussion on gender difference very gingerly. Data interpreted to show genetic biases for differences among humans in intelligence, motor learning capabilities, criminality, and a broad range of other behaviors has, unfortunately, been used to support racism and other forms of bigotry (3). Because of such societal interpretations, scientists are becoming much more cautious of the conclusions they draw in an effort to avoid discriminatory sociological or political ramifications (13).

Throughout this entire semester, we have discussed the matter of brain equaling behavior. Quantifying exactly what behavior to expect from a man or a woman is an enigma, as we are talking not just about the anatomical variations one is born with, be it an XX sex chromosome or an XY, but also about the changes that occur over one's lifespan. The brain is a structure that continues to evolve, whether because of stages in development or the stimuli and experiences of life. While the dominance of nurture over nature is no longer assumed, the relative contributions of genetics and environment to sexual identity remain uncertain.

References

1) Gender Identity Disorder by Anne Vitale

2) The Role of Estrogen in Sexual Differentiation by Elaine Bonleon de Castro

3) Gender Differences in Cognitive Functioning by Heidi Weiman

4) Sex on the Brain - Biological Differences between Genders by Deborah Blum

5) Cognitive Development

6) Gender-Related Heart Differences in Human Neonates by Emese Nagy

7) Boys will be Boys: Challenging theories on Gender Permanence by Josh Greenberg

8) Neural Masculization and Feminization by Mary Bartek

9) Thinking about Brain Size

10) Gender Issues - Excerpt from "Reviving Ophelia" by Mary Pipher

11) Women's Brains - More Effective?

12) Speech Processing in the Brain

13) The Nature Versus Nurture Debate

14) The Genetic-Gender Gap

15) Explanations of Criminal Behavior


Why do only some depressed people commit suicide a
Name: Irina Mois
Date: 2003-04-15 06:56:48
Link to this Comment: 5396

Depression can be a very debilitating and devastating illness. Nonetheless, most people, through medical treatment, counseling, and/or emotional support from family and friends, can overcome it. The question remains: what about the people who do not successfully overcome this illness? What happens to them? The most obvious answer is that they end up committing suicide (although some people end up living with the illness for the rest of their lives). I had always wondered what it is that makes one person strong enough, or selfish enough, as some people believe, to commit suicide. Is it something in their brain chemistry, their personality, their surrounding environment, their diet, etc.?

Here are a few facts about depression and suicide (combined below for a rough sense of scale):
1. "Suicide is the eighth leading cause of death in the United States and is among the three leading causes of death for those aged 15 to 34 years. For every person in the U.S. who dies by suicide, 10 people attempt suicide but survive." (1)

2. 80% of people who suffer from depression never attempt suicide. (2)

3. There are 1 million suicides per year worldwide. (5)
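
Combining the first and third figures gives a rough sense of scale, if one assumes (a strong assumption, made here only for illustration) that the U.S. ratio of roughly ten surviving attempts per completed suicide also holds worldwide: 1,000,000 deaths per year x 10 would imply on the order of 10 million suicide attempts worldwide each year.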

Depression can be caused by a number of things, such as diet, genetic makeup, or a traumatic event. However, it is believed that the final step before depression occurs takes place in the brain: serotonin, dopamine, and/or norepinephrine levels are disrupted, leading to depression. (6)

There are several theories on why only a fraction of depressed people commit suicide. There is evidence today to suggest that the pre-frontal cortex of suicide victims, where all the executive decisions are made, is malfunctioning; specifically, the serotonin braking system. (2) Another part of the brain thought to be different in suicide victims is the brain stem. Mark Underwood, a neurobiologist at the New York State Psychiatric Institute, has found 30 percent more serotonin neurons in this area, along with lower serotonin activity; these neurons seem to be smaller and malfunctioning. (2) Because suicide victims seem to have more serotonin neurons, it is believed that they inherit this problem (on the assumption that one is born with a fixed number of neurons). However, environmental factors should also be taken into consideration. For example, people who have experienced abuse during their childhood are prone to greater impulsivity. (5)

Currently, Selective Serotonin Reuptake Inhibitors (SSRIs) are used to increase serotonin levels and are known to alleviate severe depression and, with it, the probability of suicide. Cognitive therapy has also proven to reduce the probability of suicide without treating the depression itself: the person is still depressed, but they are less likely to commit suicide. If that is the case, however, it does not explain how the suicidal individual overcomes the low serotonin activity and the small, malfunctioning serotonin neurons, since cognitive therapy does not change the chemistry of the brain. Therefore, it may be that serotonin activity underlies depression but is not related to the probability of a depressed person committing suicide.

To complicate the situation, there are two types of suicidal thinking: that of a "normal" person and that of a depressed person. Depressed suicides are likely to happen suddenly, whereas "normal" suicides (there is nothing normal about suicide, but the term is used here to describe people who do not suffer from depression) are more planned. Also, depressed individuals are more likely to first try cutting themselves in order to escape the mental pain. (7) If that does not work, they are likely to attempt suicide.

There are still several statements which do not make sense to me. One such statement is: "Serotonin, which influences how nerves in the brain transmit messages, seems to work abnormally in the frontal lobes of people who commit suicide. In other words, it doesn't trigger the normal restraint against extreme actions like suicide." (5) If that is the case, why is it that these people do not kill other people, rob stores, rape, or poison the NYC water system? If they can't control extreme actions, why is the only extreme action they take suicide? Once again, serotonin levels seem only to cause depression and may not be directly responsible for the increased probability of suicide.

Another statement which disturbed me was this description of depression: "Depression is a disorder of the brain and body's ability to biologically create and balance a normal range of thoughts, emotions, & energy." (4) First, how do we know what "normal" means? Who defines normal? Second, thoughts and emotions are not just created by the brain; there has to be some kind of outside input in order for a thought or emotion to be created. Maybe someone pissed you off, or maybe you had an experience which really left a mark on you. So I believe that it is a lot more complex than just "the brain creates thoughts/emotions." The claim about the creation of energy is even more far-fetched, since energy depends on nutrition, sleep, stress level, and so on. A definition like the one above is nothing short of complete ignorance on the part of the person who wrote it, and the worst part is that it comes from a site where FAQs about depression are posted. I can only be outraged that this person or persons want to share and spread their ignorance to other people who are in desperate need of answers.

Although I have not found an answer to my original question, I do finally understand why the topic of depression and suicide is taboo. It is clear from the sites I have visited that suicide is thought of as a brain disease: an action done on impulse, implying that the person does not have control over their actions. This obsession with control (which we have partly covered in class) is extremely inappropriate in this case. Depression is far more complex, affected by biological, environmental, and genetic factors, and a person's ability to control their depressive or suicidal tendencies may not even exist. We do not know enough about this illness to make it taboo.


References

1) Kevin Malone, M.D. and J. John Mann, M.D.
2)
3)
4)
5)
6)
7)


The Effects of Schizophrenia on the Brain
Name: Adina Caza
Date: 2003-04-15 07:20:52
Link to this Comment: 5397



Biology 202
2003 Second Web Paper
On Serendip

Schizophrenia is a severe mental illness that affects one to two percent of people worldwide. The disorder can develop as early as the age of five, though it is very rare at such an early age. (3) Most men become ill between the ages of 16 and 25, whereas most women become ill between the ages of 25 and 30. Despite these differences in the age of onset, men and women are equally at risk for schizophrenia. (4) There is as yet no definitive answer as to what causes the disorder. It is believed to be a combination of factors, including genetic make-up, pre-natal viruses, and early brain damage, which cause neurotransmitter problems in the brain. (3)

These problems cause the symptoms of schizophrenia, which include hallucinations, delusions, disordered thinking, and unusual speech or behavior. No "cure" has yet been discovered, although many different methods have been tried; even in modern times, only one in five affected people fully recovers. (4) The most common treatment is the administration of antipsychotic drugs. Other treatments that were previously used, and are occasionally still given, are electro-convulsive therapy, which runs a small amount of electric current through the brain and causes seizures, and large doses of Vitamin B. (3)

Due to neurological studies of the brain, antipsychotic drugs have become the most widely used treatment. These studies show widespread abnormalities in the structural connectivity of the brains of affected people. (2) It was noticed that in brains affected by schizophrenia, far more neurotransmitter is released between neurons, which is what causes the symptoms. At first, researchers thought the problem was caused solely by excesses of dopamine in the brain, but newer studies indicate that the neurotransmitter serotonin also plays a role. This was discovered when tests indicated that many patients showed better results with medications that affect serotonin as well as dopamine transmission in the brain. (6)

New tests and machines have also enabled researchers to study the structure of schizophrenic brains using Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS). The different lobes of affected brains were examined and compared to those of normal brains, showing several structural differences. The most common finding was the enlargement of the lateral ventricles, the fluid-filled cavities within the brain. The other differences are not nearly as universal, though they are significant; there is some evidence that the volume of the brain is reduced and that the cerebral cortex is smaller. (2)

Tests showed that blood flow was lower in frontal regions of afflicted people when compared to non-afflicted people, a condition that has become known as hypofrontality. Other studies illustrate that people with schizophrenia often show reduced activation in frontal regions of the brain during tasks known to normally activate them. (1) Yet even though many tests show that frontal lobe performance is impaired, and although there is evidence of reduced volume in some frontal lobe regions, no consistent pattern of structural degradation has been found. (2)

There is, however, a great deal of evidence that the temporal lobe structures of schizophrenic patients are smaller. Some studies have found the hippocampus and amygdala to be reduced in volume. Also, components of the limbic system, which is involved in the control of mood and emotion, and regions of the superior temporal gyrus (STG), a large contributor to language function, have been notably smaller. Heschl's gyrus (which contains the primary auditory cortex) and the planum temporale are also diminished. The severity of symptoms such as auditory hallucinations has been found to depend on the sizes of these language areas. (2)

Another area of the brain that has been found to be severely affected is the prefrontal cortex. The prefrontal cortex is associated with memory, which would explain the disordered thought processes found in schizophrenics. Tests done on humans and animals in which the prefrontal cortex had been damaged showed cognitive problems similar to those seen in schizophrenic patients. The prefrontal cortex has one of the highest concentrations of nerve fibers using the neurotransmitter dopamine, and scientists have learned that newer antipsychotic drugs, which increase the amount of dopamine released in the prefrontal cortex, often improve cognitive symptoms. They also found that the prefrontal cortex contains a high concentration of dopamine receptors that interact with glutamate receptors to enable neurons to form memories, which means that dopamine receptors may be especially important for reducing cognitive symptoms. (5)

While these drugs do help control the symptoms of schizophrenia, they do not get rid of the disorder. It is becoming clearer every day just what damage schizophrenia does to the brain, but researchers are nowhere near finding all of the answers. Different researchers still argue over the conclusiveness of the existing data, while others are trying to discover the cause of schizophrenia. Is it caused by various genes, by a virus, or by trauma? This too is still a mystery. The only thing truly known is that the disorder is debilitating and that it affects nearly every portion of the brain. Obviously, much more research still needs to be done to help those who suffer from it.


References

1) E-Mental Health

2) E-Mental Health

3) National Institute for Mental Health

4) Psychiatry 24 x 7

5) Society of Neuroscience

6) Health-Center


The Roles of NREM and REM Sleep On Memory Consolidation
Name: Alexandra
Date: 2003-04-15 07:21:25
Link to this Comment: 5398



Biology 202
2003 Second Web Paper
On Serendip

All mammals exhibit Rapid-Eye-Movement, or REM, sleep, and yet on certain levels this type of sleep would seem disadvantageous. During REM sleep, when most dreams occur, the brain uses much more energy than during non-REM (NREM) sleep. (1) This "waste" of energy, coupled with the increased vulnerability of this state on account of the body's paralysis, suggests that there must be a very important reason, or reasons, for the existence of REM sleep and, by extension, of dreams. Determining the function of dreams, however, has proved very problematic, with many arguments directly opposing each other. Some of the proposed primary functions of dreaming have been tied to its role in development, its production of neuro-proteins, and to how it may allow for the "rehearsal" of neurons and neuronal pathways. The influence of dreaming on learning is one of the hottest debates: some argue that dreams aid in learning, others that dreams aid in forgetting, and yet others that dreams have no effect on learning or memory. That REM sleep seems to aid in development suggests that it may be connected to learning. Most scientists appear to believe that REM sleep aids in certain memory consolidation, although some argue that it actually leads to "reverse learning."

Before discussing the role of NREM and REM in learning, it is necessary to clarify the identity of and differences between the two. NREM sleep is marked by different stages based on the different brainwaves exhibited. REM sleep differs from NREM in that most dreams occur during REM sleep, although dreaming and REM are not synonymous. REM is also marked by increases in brain activation, breathing, and heart rate, while the body becomes paralyzed. (2)


Although its precise role may be arguable, REM sleep seems to play a part in development. Whereas newborn infants spend about half of their sixteen to eighteen hours of sleep a day in REM sleep, adults spend only about an hour and a half. (1) This difference in both amount and percentage of REM sleep between infants and adults indicates the importance of REM sleep, or of dreams, in development. Several dream researchers have hypothesized that REM sleep may play an important role in infant brain development by providing an internal source of powerful stimulation, which would prepare the baby for the almost infinite "world of stimulation it will soon have to face," and also by facilitating the "maturation of the nervous system." (1)
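
Rough numbers make the gap concrete (the eight-hour adult night used here is an assumed typical figure, for illustration only): an infant gets roughly 0.5 x 17 hours, or about 8.5 hours of REM per day, while an adult gets about 1.5 hours out of 8, or roughly 19 percent of sleep time. Infants thus experience five to six times more REM in absolute terms, and well over double the proportion.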

The relative amount of REM sleep that other mammals exhibit, in connection with their level of development at birth, also supports the idea that REM sleep aids development. (1) Typically, animals born relatively mature, such as dolphins, giraffes, and guinea pigs, demonstrate low amounts of REM sleep, while animals born relatively immature, such as ferrets, armadillos, and platypuses, exhibit higher levels of REM sleep. (3) Humans fall in the middle of this spectrum, with platypuses having the most REM sleep and some species of dolphin and whale exhibiting none. (3)

Partially because infancy is the time when most new information must be taken in as a part of development, scientists have hypothesized the existence of a connection between REM sleep and learning. It seems that the time during infant development would be one of the times when most learning, or processing of information, occurs.

The most commonly held belief among the scientific community seems to be that REM sleep consolidates memories and aids in learning. An article in Science recently declared that "neuroscientists have long known that memory consolidation goes on during sleep." (3) A more recent discovery is that NREM sleep may also play a role, albeit a different one, in learning. Robert Stickgold of Harvard Medical School has found that different phases of sleep are tied to different types of learning: learning visual skills depends on the slow-wave sleep of the first quarter of the night and on the REM sleep of the last quarter, while learning movements relies much more on the NREM sleep of the later part of the night. (4) In an important study in 2001, Matthew Wilson of MIT found that rats dream about their activities (i.e., running through a maze) during NREM sleep in addition to REM sleep. Unlike during REM replay, where the experience occurs approximately in real time, the memory segments replayed during NREM seemed to be snippets of experience. (5) Also, unlike REM sleep, slow-wave sleep seemed to replay only what had happened immediately before, not something from twenty-four hours earlier. (5) Because of the possible time delay in REM memory reactivation, REM replay might represent a more gradual reevaluation of slightly older memories. (5)

Most evidence for the memory consolidation hypothesis comes from indications that an increase in learning results in an increase of REM sleep, that memory processing occurs during REM sleep, and that sleep deprivation harms the ability to learn. (3) In general, having had enough REM sleep both before and after learning new information seems to help with remembering that information. The fact alone that REM sleep stimulates the hippocampus, a key learning region, has been argued to indicate REM's influence on learning. (6) Learning tasks that demand high levels of concentration, or the acquisition of new skills, are followed by an increase in REM sleep. (1) Many studies show that learning after having reached a plateau can only take place with the help of REM sleep. A study at MIT has shown that volunteers' skill at key-tapping and speed-spotting tasks improved by 20 percent after a night's sleep following training, and increased even more with additional nights. (4) Karni and Sagi's finding that changes in the plasticity of the particular neuronal loci underlying perceptual learning may happen during sleep also argues for the importance of sleep to memory. (7) The increased production of proteins that occurs during deep sleep may be tied to the learning process, if those proteins are in fact associated with learning. (6) Finally, backing up the idea that suppression of REM affects memory consolidation is the study by Dinges and Kribbs showing that REM deprivation impaired performance on longer tasks, while shorter tasks remained unimpaired. (5)

In apparent contradiction with the concept that REM sleep plays a part in remembering new information is the hypothesis that people actually dream in order to forget. Crick and Mitchison have proposed that "the function of dream sleep is to remove certain undesirable modes of interaction between cells in the cerebral cortex which could otherwise turn parasitic." (1) Their second hypothesis is that if these hypothetical "parasitic" modes of neuronal behavior do in fact exist, then they "are detected and suppressed by a special mechanism." (1) Crick and Mitchison call this hypothetical process "reverse learning" or "unlearning," but explain that it "is not the same as normal forgetting." (1) They say that according to their model, "attempting to remember one's dreams should not be encouraged, because such remembering may help to retain patterns of thought which are better forgotten. These are the very patterns the organism was attempting to damp down." (1) Although dream sleep may prevent "perpetual obsessions or spurious hallucinatory associations," Crick and Mitchison acknowledge that it would be difficult to test for the existence of the reverse-learning mechanism. (1) Also, it does not seem that people who routinely remember their dreams are more prone to "hallucinations, delusions, and obsessions" than people who typically forget them. (1) One way to test Crick and Mitchison's hypothesis might be to look for an increase in "parasitic" memories in individuals who take MAO inhibitors for depression, which block REM sleep, compared with people who have normal REM sleep cycles.

Despite the many studies demonstrating the function of REM sleep in memory consolidation, there is debate about the validity of the claim that REM sleep aids in learning. For example, Jerome Siegel argues in Science that the evidence for the importance of REM sleep in memory consolidation is contradictory and weak. (3) On the most basic level, the fact that people remember so few of their dreams may argue against their function in learning. (8), (3) Siegel argues that learning does not result in an increase in REM sleep. For example, one of his objections is that Smith's study, which documented a higher density of REM in college students after intensive exams, assumes that there can be a proper control group among humans, which is often very difficult to obtain. (3) Siegel also takes the lack of difference in REM sleep time between students with average IQs and those with high IQs to prove the lack of importance of REM sleep as a memory consolidator. (3) The problem with this assertion is that it assumes intelligence is easily quantifiable, and his faith in a study involving humans also contradicts his previous objection to the Smith study. He argues against the deleterious effect of REM suppression by stating that the methods used on rats (i.e., the platform technique, in which the rat wakes whenever it starts REM sleep because it falls into water) result in an increased level of stress, which could itself explain the decrease in learning ability. (3)

Even if objections to the particulars of certain experiments are valid, these objections can never prove that REM sleep has no role in learning or memory consolidation. Although the precise roles that REM and NREM play may not be fully understood, there seems to be a connection between both types of sleep and memory. REM sleep seems more conducive to the learning of extended or sequential information, since "playback" periods during REM far outlast the bursts of "playback" during NREM sleep. (5) Although my feeling that REM sleep must aid in learning and memory consolidation may be somewhat instinctual, I cannot help but think of my uncle Eli Ginzberg, who wrote one hundred ten books (mostly on economics) and always got at least nine hours of sleep a night. While it seems fairly clear that sleep does aid in memory consolidation, what remains is to keep testing REM sleep, and the less-studied NREM sleep, for evidence as to whether and how they fulfill this task.


References

1) Lucidity Institute website, "Chapter 8: Dreaming: Function and Meaning"

2) First Webpaper on Serendip, Great Neurological Resource

3) "The REM Sleep-Memory Consolidation Hypothesis," article on the Center for Sleep Research's homepage, interesting site for sleep disorders

4) Nature website, good for scientific articles

5) MIT News website, interesting articles

6) TALK ABOUT SLEEP, Inc., basic answers about sleep

7) Harvard Undergraduate Society for Neuroscience, connected to Computer Science Program

8) UCSC Psych Website


Child Abuse Changes "I" in More Ways than One
Name: C.Kelvey R
Date: 2003-04-15 08:36:50
Link to this Comment: 5399



Biology 202
2003 Second Web Paper
On Serendip

If brain equals behavior, then a relationship should exist between structural differences in the brains of abused children and changes in their behavior. Child abuse has long been understood to have psychological effects, which are manifested in specific behaviors. Abused children express an array of behaviors ranging from depression, anxiety, post-traumatic stress, and suicidal ideation to aggression, impulsivity, delinquency, hyperactivity, and substance abuse (1). Furthermore, ten to twenty percent of adult survivors of child abuse suffer from dissociative or post-traumatic stress disorders (2). While there is a normal developmental path for the brain, childhood trauma can disrupt the progress and direction along this path. In response to physical and psychological trauma, a child's brain structure will change in ways that do not occur in children not exposed to trauma. These structural changes in turn cause changes in behavior, including better or worse strategies for coping with the trauma of abuse. Changes in the brain and behavior due to child abuse therefore suggest that environmental stressors can profoundly influence the development of the brain, and that the structure of the brain in turn controls behavior. Thus, just as the self acts on the environment, the environment acts on the self. Accordingly, studying the effect of child abuse on the nervous system may help develop a clearer model of the nervous system that incorporates the "I-function", a model that in turn raises questions about the permanency of any definition of the "self".

An explanation for changes in the brain due to child abuse may involve the production or survival of neurons. A genetically predetermined, stepwise sequence develops the lower brain first, and the higher brain centers then develop gradually during childhood (3). Because of this sequential development, each stage depends on the healthy development of preceding stages. In addition, there is a genetically determined sequential growth, proliferation, and overproduction of axons, dendrites, and synapses in different regions of the brain (3). In any brain, however, not all synaptic connections survive. Between the ages of three and eight, a child's brain has been shown to have twice as many neurons and connections as an adult brain (4). There are two environmentally dependent maturation processes of the brain, each with distinct critical periods. These critical periods are times when the organizing systems of different functions are extremely sensitive to environmental inputs (5). Synaptic connections are eliminated if a particular experience does not occur during a critical period, and new synapses are not generated or promoted if an experience does not take place. For an abused child, instead of a period of comfort and security, there are often periods of stress and fear (3). Furthermore, exposure to stress hormones released as a result of abuse significantly changes the shape of the largest neurons in the hippocampus, kills neurons, or suppresses the production of new neurons (6). For example, glucocorticoids are released during stressful periods and circulate for months, killing neurons and reducing the volume of the hippocampus (2). Thus, just as ordinary experiences can affect the survival or degeneration of neurons, differential neuron survival may be the cause of structural variations within the brains of children who experience abuse.

Research comparing the brains of abused children and control subjects, primarily conducted by Martin Teicher, an associate professor of psychiatry at Harvard Medical School, has shown that abuse seems to induce a cascade of molecular and neurobiological effects that alter the development of specific areas of the brain. These areas include the limbic system, left hemisphere, corpus callosum, and cerebellar vermis (6). EEG abnormalities in the left hemisphere were observed in sixty percent of 115 youngsters with documented histories of abuse (7). Similar brain-wave abnormalities are often seen in people with a greatly increased risk of suicide and self-destructive behavior (6). The limbic system is the brain's emotional processing center and includes the amygdala and hippocampus. MRI scans also revealed an association between early maltreatment and a reduction in the size of the adult left hippocampus or amygdala (6). Bremner et al. compared MRI scans of seventeen adult survivors of abuse with those of seventeen control subjects and found that the left hippocampus of abused subjects was twelve percent smaller (7). Teicher also observed reduced development of the left side of the brain, suggesting that the right hemisphere was more active in the abused patients. Other research has found that these left-hemisphere deficits may in turn contribute to the development of depression and increase the risk of memory impairments (8).

Animal and human studies have also shown that abuse can reduce the size of the corpus callosum, the bundle of nerves that facilitates communication between the right and left sides of the brain, by up to forty percent (9). Teicher found that neglect was associated with a twenty-four to forty-two percent reduction in the size of various regions of the corpus callosum in boys, while sexual abuse had no effect; the opposite was true for girls, who showed an eighteen to thirty percent reduction due to sexual abuse, while neglect had no effect (8). The reduced integration between the right and left hemispheres may predispose patients to shift abruptly from the logical, rational, language-controlling left side to the creative and emotional right side, and to remain in one hemisphere rather than moving seamlessly between the two (9). Many survivors of childhood abuse tend to reside in their left hemisphere when they function well, but when traumatic thoughts arise, they retreat into the right hemisphere (9). In effect, abuse has rewired the nervous system to survive the traumatic experiences. Two primary adaptive response patterns to child abuse are the hyperarousal "fight or flight" response and the dissociative "freeze and surrender" response (10). Young children are most likely to use the dissociative response, which attempts to prevent memories from being integrated into consciousness (10). The ability to limit the integration of memories may result from separating the functioning of the two hemispheres. Furthermore, the relatively unilateral use and development of one side of the brain could account for dramatic shifts in mood or personality (8).

Additionally, the cerebellar vermis is more active in abused children, showing greater blood flow (8). The cerebellar vermis is involved in emotion, attention, and the regulation of the limbic system, and it is very sensitive to elevated stress hormones, particularly glucocorticoids (8). The greater blood flow in the region may be aimed at controlling the electrical activity within the limbic system (8). While the cerebellar vermis attempts to maintain an emotional balance, trauma may impair this ability, and as a result an individual may be particularly irritable (8).

Child abuse's effects on the brain often have a direct connection to survivors' behavior and interaction with their environment. Abused children who alter their sense of consciousness are effectively altering their sense of self. A sense of self arises from one's own sense of identity combined with an interpretation of others' reactions to one's behavior. For victims of abuse, those behaviors may include imaginary companions or may reach the point of multiple personality disorder. The internal inputs to the nervous system that are struggling to define the self through specific actions may conflict with external inputs that force an opposing reaction. This conflict can cause a person's consciousness to fragment, as the person must act out different messages from the subconscious. The I-function is involved in coordinating subconscious thoughts into behavior and ultimately produces a reaction, which may not be explained by a conscious thought. When the I-function receives signals, its response is to transmit them to the appropriate centers to organize appropriate behavior. The I-function tries to provide a coherent story with a seemingly logical response, while the story is split into two realities. One reality may be an internal and inherent fight-or-flight response, while the other is the external abuse, which forces a freeze-and-surrender response. While the system may expect inputs such as comfort and security, the reality of stress and rejection may cause a conflict that requires the nervous system either to become "sick" or to adapt its operating structure.

Although there may be a genetic template for the nervous system, environmental influences organize the nervous system and create the individual connections that manifest in behavior. Examining neurons provides one method for noting physical changes in the brain due to abuse, and observations of connections between child abuse and biological changes in the brain support the contention that the brain is shaped by environmental factors that result in distinct behaviors. Abusive experiences may literally provide the organizing framework of a child's brain. That organization includes the role of the I-function in formulating behavior and ultimately defining the self. Instead of a box, the I-function should be represented as a dotted line surrounding an area that cannot be permanently defined, because it is constantly changing as it adapts to varying realities. "Who am I?" is a question that can never be definitively answered. However, formulating a definition of "self" and "I" is significantly influenced by interactions throughout childhood. Therefore, it should be remembered that every individual has the power and the choice to strengthen, as opposed to fragment, the mind of a child.

References

1) Relationship between Early Abuse, Posttraumatic Stress Disorder, and Activity Levels in Prepubertal Children

2) Hidden Scars: Sexual and other abuse may alter a brain region

3) Environmental Influences on Brain Development

4) Violent Changes in the Brain

5) Development of the Cerebral Cortex: XIII. Stress and Brain Development: II

6) Psychological Trauma and the Brain

7) Psychological Abuse May Cause Changes in the Brain

8) McLean Researchers Document Brain Damage Linked to Child Abuse and Neglect

9) "Abuse stunts one part of the young mind, says a new study"

10) "Childhood Trauma, the Neurobiology of Adaptation and Use-dependent Development of the Brain: How States Become Traits"


The Disabling Effects of Selective Mutism
Name: Ellinor Wa
Date: 2003-04-15 10:45:08
Link to this Comment: 5403



Biology 202
2003 Second Web Paper
On Serendip


Among the vast range of anxiety-induced disorders that exist, selective mutism may be the most disabling to its victims. It has been estimated that approximately one in a thousand children suffer from this presumed psychiatric ailment, wherein the ability to speak is limited to the household or other areas of comfort. (2) Public places and schools elicit so much anxiety in these children that their natural capacity to speak is suppressed. Once a child under five years of age exhibits the described behavior for over a month, without other speech-impeding barriers such as autism or learning a second language, he or she will most likely be diagnosed with selective mutism. (2)


Many hypotheses have been posed as to what causes selective mutism; however, no definitive conclusions have been reached. Anxiety disorders have in most cases been shown to be hereditary, and nearly all children who become selectively mute have family members who were afflicted with the same or a more serious anxiety disorder, such as obsessive-compulsive disorder, schizophrenia, or social phobia. The fact that anxiety disorders pass through generations implies that brain chemistry is perhaps genetic, or that serotonin levels are inherited. Other causes of selective mutism have been speculated upon, though little research has been conducted on them. Abuse, neglect, extreme shyness, extremely embarrassing experiences such as vomiting or having diarrhea in a classroom setting, or living in a home with exceptionally nervous parents may also lead a child to become selectively mute. These theorized causes tend to describe the background of children who have no similar disorders running in the family. (4)


Doctors, for the most part, lean toward medication for those afflicted with selective mutism; however, other methods are practiced as well. (6) Environmental support, confidence boosting, therapy, and behavioral management classes have been known to aid in these children's struggles. A strong emphasis has been put on the necessity of treatment, for it has been shown that those enduring selective mutism will only worsen and that, eventually, the disorder carries on into adulthood.


It should be clarified that children suffering from selective mutism do not choose not to speak, but rather cannot speak under anxiety-ridden circumstances. For instance, many cases have been described wherein a therapist will bribe one of these children with a toy (a Barbie and a truck have been used as examples), offering to give the child the toy if he or she can say what the object is. (1) The selectively mute child will strain to say the word, but cannot force the sound out of his or her mouth.


Is the decision to speak or not to speak conscious or subconscious? Since these children attempt to formulate the sound, it would seem that their desire to obtain the toy is stronger than their desire to avoid the stressful situation of having to speak; the desire must therefore be repressed by a subconscious decision. Moreover, the apparently hereditary nature of anxiety disorders points to a lack of control over the ability to speak; low serotonin levels or genetics may inhibit these children's ability to express themselves verbally in high-stress situations. This suggests selective mutism should be considered more a neurological disorder than a psychiatric one.


While the I-function has been known to instigate behavior that human beings cannot normally produce of their own accord (for instance, replicating the same eye movements that occur while following a moving pencil), perhaps the I-function also inhibits behavior that human beings would normally be able to perform by themselves. A correlation could be made between the I-function and the disabled voice of the selectively mute who intensely desire to speak, as seen in the previous example wherein the patients tried speaking in order to receive the toy. Maybe the I-function is genetically differentiated and may be altered through certain medications.


An interesting phenomenon has emerged in relation to selective mutism. The children who suffer from this disorder, which essentially limits the most basic form of human expression, often compensate for that lack through artwork or music. Many of those afflicted, during the period in which they endure immense amounts of silence, improve drastically in their creative abilities as artists and musicians; often their talent significantly surpasses that of their peers. Because most doctors prefer medical treatment for those who suffer from selective mutism, namely prescribing Prozac, the troubled children usually become verbally inclined once again. Once they are able to speak confidently, parents have reported that their children's artistic or musical abilities subside. Dr. Shipon-Blum, a specialist in the field of selective mutism, reports, "Often, months after I have treated a child for selective mutism, parents will call me and want to know where their Picasso went." (1) Several explanations could be given for this finding: perhaps there is a correlation between brain chemistry and creativity, maybe Prozac or other antidepressants stunt creativity, or perhaps great expression through creativity is a psychological method of release in the place of a lost sense.


In accepting medical treatment as effective, it is implied that selective mutism is a brain disorder, wherein the chemistry of the brain has gone awry. Low levels of serotonin can be raised with the use of Prozac and other antidepressants. Also, most anxiety disorders appear to be hereditary. If selective mutism is genetically predetermined, what accounts for the actual repression of speech as well as the associated artistic inclination? Conceivably, along with the hereditary aspect of selective mutism as a disorder, there could be a correlated hereditary gene for creativity; when the disorder is cured through medicine, the creative ability is diminished along with it. On the other hand, a possibility lies in the notion that the I-function could be responsible for impeding the natural faculties of a person given these unfortunate genes, and could be adjusted through the use of modern medicine.


References

1)Selective Mutism, a general information site on selective mutism
2)The Selective Mutism Foundation, a support site to better understand the disorder
3)Philadelphia Page, a site with excerpts about selective mutism from the Philadelphia Inquirer
4)Selective Mutism UK, an interesting article about the seriousness of selective mutism
5)Anxiety-Panic Website, a site which describes several other anxiety disorders
6)Mental Health web page, a helpful site providing several articles about selective mutism
7)Anxiety Network, illustrates well the treatment available for those selectively mute


Somnambulism and the I-Function
Name: Tiffany Li
Date: 2003-04-15 11:32:50
Link to this Comment: 5404


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip


Somnambulism, or sleepwalking, belongs to a group of sleep disorders known as parasomnias. This disorder of arousal is characterized by complex motor behaviors initiated during stages 3 and 4 of non-rapid-eye-movement (NREM) sleep (slow-wave sleep) (3). Behaviors during sleepwalking episodes can vary greatly. Some episodes are limited to sitting up, fumbling and getting dressed, while others include more complex behaviors such as walking, driving a car, or preparing a meal (2). After awakening, the sleepwalker usually has no recollection of what has happened and may appear confused and disoriented. The behaviors performed while sleepwalking are said to be autonomous automatisms: nonreflex actions performed without conscious volition and accomplished independently of the I-function (3). This implies that everything done while sleepwalking is involuntary, because the exhibited behavior is not a result of the I-function's output. If the I-function is not involved, then what causes people to sleepwalk? What happens to the I-function during sleepwalking? What does this imply about brain and behavior?

Sleep is a succession of five recurring stages: four non-REM stages and the REM stage. Researchers have classified these stages by monitoring muscle tone, eye movements, and the electrical activity of the brain using an electroencephalogram (EEG) (4). EEG readings measure brain waves and classify them according to speed. Alertness consists of desynchronized beta activity, whereas relaxation and drowsiness consist of alpha activity (4). Stage 1 sleep includes alternating patterns of alpha activity, irregular fast activity, and the presence of some theta activity; this stage is a transition between sleep and wakefulness (4). The EEG of stage 2 sleep contains periods of theta activity, sleep spindles, and K complexes (sudden, sharp waveforms). This stage is believed to help people enter deeper stages of sleep (4). Stage 3 sleep consists of 20-50 percent delta activity and stage 4 sleep of more than 50 percent delta activity (4). Stages 3 and 4 are characterized as slow-wave sleep and are the deepest levels of sleep. Approximately 90 minutes after falling asleep, people enter rapid-eye-movement (REM) sleep (4). REM sleep consists of rapid eye movements, a desynchronized EEG, sensitivity to external stimulation, muscle paralysis, and dreaming (4).
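
The stage 3/4 criteria above amount to a simple threshold rule on the proportion of delta activity in an EEG epoch. As a minimal illustrative sketch only (the function name, labels, and fallback case are my own assumptions, and real sleep scoring uses many more EEG features than delta percentage), the rule could be written in Python as:

    def classify_nrem_depth(delta_fraction):
        """Classify sleep depth from the fraction of delta activity
        in an EEG epoch (0.0 to 1.0), per the criteria cited above."""
        if delta_fraction > 0.50:
            return "stage 4 (deepest slow-wave sleep)"
        if delta_fraction >= 0.20:
            return "stage 3 (slow-wave sleep)"
        return "lighter sleep (stages 1-2 or REM; other EEG features needed)"

    print(classify_nrem_depth(0.35))  # stage 3 (slow-wave sleep)
    print(classify_nrem_depth(0.65))  # stage 4 (deepest slow-wave sleep)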

Sleepwalking occurs during stages 3 and 4 of the sleep cycle, the deepest levels of sleep. This slow-wave sleep is normally characterized by synchronized EEG activity, indicating that mental activity is very low during these stages (4). However, researchers have shown that the EEG of a sleepwalker has diffuse, rhythmic, high-voltage bursts of delta activity associated with abrupt motor activity (1). This is very different from the EEG activity normally associated with slow-wave sleep. In addition to the EEG results, they found that there is a decrease in regional cerebral blood flow in the frontoparietal cortices during sleepwalking (1). This indicates that sleepwalking is a dissociated state consisting of motor arousal and persisting mind sleep, which seems to arise from the selective activation of thalamocingulate circuits and the persisting inhibition of other thalamocortical arousal systems (3).

This study provides several interesting insights into the role of the I-function and its relationship with the nervous system. The results indicate that during sleepwalking there is no input from the I-function, given the lack of strong mental activity in the EEG. Thus the I-function is inactive during slow-wave sleep and somnambulism. The absence of the I-function in slow-wave sleep signifies that sleepwalkers, despite their outward appearance of being awake, are completely unconscious of their actions while they are sleepwalking. This also explains why sleepwalkers rarely recall what occurs during a somnambulism episode and often appear disoriented and confused upon awakening.

It is interesting to observe the differences in mental activity between REM sleep and slow-wave sleep. Both involve the production of complex behaviors; however, REM sleep has high mental activity associated with dreaming, while sleepwalking involves very little mental activity (4). Evidence indicates that during REM sleep, the particular brain mechanisms that become active during a dream are those that would become active if the events in the dream were actually occurring (4). For example, McCarley and Hobson demonstrated that cortical and subcortical motor mechanisms become active in dreams that contain movements, as if the person were actually moving (5). Therefore in REM sleep the I-function is active and the person is conscious of his or her actions. In somnambulism, the I-function is inactive and the sleepwalker is unconscious of his or her behavior; the sleepwalker is passively producing complex behaviors. These observations indicate that the same outputs can be produced using different parts of the brain.

If the I-function is not responsible for the behaviors generated during sleepwalking, what is? The results from the study indicate that the nervous system itself is responsible for somnambulism: it stimulates the motor outputs seen in sleepwalkers. This indicates that the nervous system can produce behaviors as complex as the ones produced by the I-function. Therefore the I-function is not as influential as it was once thought to be, because the nervous system can generate the same outputs with or without its involvement. In addition, the study provides evidence that an output can originate within the nervous system without any input from the external environment or the I-function.

The nervous system can function independently of the I-function, but the opposite is not possible. It is able to override the I-function and cause people to perform actions of which they are unconscious. This leads me to believe that the nervous system plays a greater role in our behavior than our I-function does.

Somnambulism is a fascinating behavior. It is induced by a dissociation between mental and motor arousal (1), and it provides good insight into the relationship between the nervous system and the I-function. Sleepwalking demonstrates that the nervous system is capable of performing behaviors similar to those specified by the I-function and that it can function independently of it. Despite my greater understanding of somnambulism, I was unable to determine why the nervous system causes people to sleepwalk. It has been shown that no dreaming occurs during these stages of sleep; therefore I do not understand what sleepwalkers are acting out. This question remains open for investigation.


Works Cited

1)Bassetti, C., Vella, S., Donati, F., Wielepp, P. Weder, B. SPECT during sleepwalking. Lancet 2000 Aug 5; 356(9228):484-85

2), 3) Masand, P., Popli, A., Weilburg, J. Sleepwalking. American Family Physician 1995. v5 n3 p649.

4)Carlson, N. Physiology of Behavior. 7th ed. Allyn and Bacon. USA, 2001

5)McCarley, R.W. and Hobson, J. A. The form of dreams and the biology of sleep. In The Handbook of Dreams: Research, Theory, and Applications, edited by B. Wolman. New York: Van Nostrand Reinhold, 1979.


Human Pheromones: What are the implications?
Name: Kate Shine
Date: 2003-04-15 12:33:10
Link to this Comment: 5405


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Our class was initially shocked to learn about proprioception, as it was difficult to grasp the concept that our bodies could possess senses about themselves and the outside world of which our I-functions are unaware. Even more disturbing was the idea that these bodily perceptions could regulate our behaviors without our conscious thought or consent. The class seemed comforted, however, when this seeming sixth sense was found to regulate many functions we consider thoughtless and ethically unimportant, such as heartbeat and balance. These choices we could leave to our bodies as long as our I-functions had power over the meaningful concerns of life.

But the mounting evidence for the existence of human pheromones throws a wrench into the mechanism of this easy solution. Pheromones, or chemical signals sent from one individual to another which affect behavior, are argued to influence meaningful behaviors once thought to be completely controlled by conscious personal choice, such as sexual willingness and attraction. They also introduce the possibility that we may be constantly communicating with each other and making interpersonal judgments of which we are unaware. The possible implications of this invisible sense are significant and far-reaching.

The first convincing evidence for the existence of human pheromones was presented in 1971, when Martha McClintock published a paper documenting the synchronization of the menstrual cycles of herself and her fellow female dorm mates.(10) It seemed likely that something pheromonal was at work, as this phenomenon mirrored a similar occurrence caused by pheromones in mice, known as the Lee-Boot effect. (2) McClintock provided further evidence for this a few years ago in a controlled experiment published in the journal Nature. (1) She found that secretions from the underarms of females in the follicular phase of menstruation significantly shortened the cycles of other female test subjects when applied under their noses, and secretions from the ovulatory phase accordingly lengthened their cycles. In addition, she noticed that certain females seemed much more sensitive to the secretions than others, with responses of lengthening or shortening ranging from 1 to 14 days.

McClintock also predicts that pheromones of social interaction may be found to affect humans in many more of the same ways they have been found to affect rats, including: age of puberty onset, interbirth intervals, age at menopause, and level of chronic oestrogen exposure throughout a woman's life. (1) Not only does this evidence point toward a type of invisible chemical communication between women, but the variable sensitivities of certain women compared to others indicate that certain women may be dominant over others in determining the cycles of the entire group. Further evidence of this invisible power structure is provided by Michael Russell, who performed a case study on a female colleague who had observed that it was always her cycle to which other females synchronized. (2)

Other studies provide evidence that it is not only females who communicate with each other pheromonally; males and females can influence each other sexually as well. Various studies have found that sexual exposure to males causes irregularly cycling women to begin cycling more regularly (4,2), another well-studied occurrence in mice, known as the Whitten effect, that has been linked to pheromones (2). Dr. Alex Comfort also noted that during Victorian times the average age of the onset of menstruation was much higher than in post-Victorian times, when co-education of males and females became more acceptable. (2) So male pheromones may play a large part in the regulation of the hormones which cause menstruation. Women who have sex with men at least once a week have in fact been found to have fewer infertility problems and milder menopause than those who do not. (4) And sex may not be necessary, but rather just the exposure to men's pheromones, which are released only at close range. The implications of these findings are that male pheromones may be necessary for women to achieve optimum health, and that this may in part explain the female attachment to men.

Other studies by Russell also found that around 6 weeks of age almost all babies will react more favorably to a pad containing the sweat of their mother than to a stranger's pad, and that people can identify their own sweaty shirts, as well as those of a strange male and female, with a relatively high rate of accuracy. (2) These functions in humans seem similar to the identifying functions of pheromones found in many other animals. (8)

But perhaps the most controversial human behaviors which may be influenced by pheromones are sexual preference and mate selection. A case study by Kalogerakis found that at around the age of three years, a boy named Jackie began to prefer the smells of his mother much more than those of his father, especially after she had recently had intercourse. Until the boy reached six years old, the smells of the father caused aversion and some nausea. This behavior supports not only the theory of sexual attraction by pheromones but also Freud's theory of an innate "Oedipal complex" in young boys. (2) And in addition to women's health being beneficially affected over time by exposure to male pheromones, the moods of women have been shown to improve when they are exposed to the male steroid androstadienone. (12) A study also found that men and women are more attracted to individuals whose genetically based immunity to disease is most different from their own. (5) Companies have not only begun to market so-called "pheromone colognes" containing compounds meant to attract members of the opposite sex, but these colognes have been reported to have some success for both men and women. (4,5)

There are still many who are skeptical about the actual existence of a pheromone receptor in humans which is separate from other smell receptors in the nose. The vomeronasal organ, which serves this purpose in other species, has long been thought nonexistent in humans after a certain fetal growth stage. However, there is a distinctive pit in the human nose with nerve endings which may still serve this purpose, if the axons of these neurons end in separate, more primitive parts of the brain than the more common nasal sensory neurons. This has yet to be definitively proven, but it seems especially unlikely that menstrual synchronization could be caused by scent alone and not specific chemical factors independent of the I-function.

It may not seem important that women are unknowingly communicating and adjusting their menstrual cycles in order to achieve equilibrium with those around them, but the idea that some women may be dominant over others in this process is very interesting. What evolutionary quality do these women have, and what is its purpose? Is there an aspect of human personality and social rank which is inherent and unseen? There could in fact be many pheromonal factors working within and across the sexes all the time, influencing our preferences for social and sexual interaction. If Kalogerakis is to be believed, disturbing sexual developments such as Freud's Oedipal complex cannot simply be explained away, but are much more deeply rooted. And what might the implications for our society be, given the increasing chemical complexity of the products we use, many containing animal and/or human pheromones? Although the ultimate decision of choosing a mate or selecting a friend must pass through the I-function, how do we know where our attitudes towards others come from? Could it all be more unconscious than we think?

References

1)Regulation of Ovulation by human pheromones, Stern and McClintock's publication in Nature 1998
2)Pheromones in Humans: Myth or Reality?
3) Chicago: Campus of the Big Ideas
4) Sexes: The Hidden Power of Body Odors, article from Time, 1986
5) Pheromones: Potential participants in your sex life
6) Following Your Nose to Optimal Sexual and Reproductive Health and Happiness, student webpaper 1999
7) Study finds proof that humans react to pheromones
8) Human pheromones: Communication through body odour, article in Nature 1998
9) Love Stinks: Intraspecies Chemical Communication, student webpaper 1999
10) Nailing Down Pheromones in Humans
11) Pheromones, student webpaper 1998
12) University of Chicago research points to new category of odorless chemical signals


The Hysteria Over Conversion Disorder
Name: Neela Thir
Date: 2003-04-15 19:24:37
Link to this Comment: 5409


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Scientists in fields connected to neurobiology and psychiatry remain mystified about the cause of Conversion Disorder. The disorder is characterized by physical symptoms of a neurological disorder, yet no direct problem can be found in the nervous system or other related systems of the body. This fact alone is not unusual; many diseases and symptoms have unknown origins. Conversion Disorder, however, seems to stem from psychological events and emotions, ranging from the "trivial" to the traumatic, rather than from biological events. The extreme symptoms often disappear as quickly as they appear, without the patient consciously controlling or feigning them. Thus, Conversion Disorder serves as a significant example of how blurred the supposedly demarcated divisions of mind/body/behavior can be.

Conversion Disorder is diagnosed solely by the physical symptoms seen in patients. Symptoms can be divided into three groups: sensory, motor, and visceral. Sensory symptoms include anesthesia, analgesia, tingling, and blindness. Motor symptoms may consist of disorganized mobility, tremors, tics, or paralysis of any muscle group, including the vocal cords. Visceral symptoms include spells of coughing, vomiting, belching, and trouble swallowing (1). Most of these symptoms are strikingly similar to existing neurological disorders that have definitive organic causes. Conversion Disorder, on the other hand, defies the nerve patterns and functions from which the symptoms should follow. CT scans and MRIs of patients with Conversion Disorder exclude the possibility of a lesion in the brain or spinal cord, an electroencephalograph rules out a true seizure disorder, and spinal fluid analysis eliminates the possibility of infections or other causes of neurological symptoms (2). The abnormal behavior shown in Conversion Disorder cannot be accounted for biologically, and this fact has set off even more scientific theories about the many ways in which biology must explain the phenomena.
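
Viewed abstractly, this work-up is a diagnosis by exclusion. The Python sketch below only illustrates that logic under the assumption that the three rule-outs named above are the whole story; the field and function names are hypothetical, and real clinical evaluation is of course far richer:

    from dataclasses import dataclass

    @dataclass
    class Workup:
        imaging_shows_lesion: bool   # CT/MRI of brain or spinal cord
        eeg_shows_seizures: bool     # electroencephalograph
        csf_shows_infection: bool    # spinal fluid analysis

    def organic_cause_excluded(w):
        """True when every test is negative: the pattern that, per the
        sources above, shifts suspicion toward Conversion Disorder."""
        return not (w.imaging_shows_lesion or w.eeg_shows_seizures
                    or w.csf_shows_infection)

    print(organic_cause_excluded(Workup(False, False, False)))  # True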

Optokinetic nystagmus (sub-cortically controlled steady tracking by the eye, followed by fixation of the eye on another object) can still be observed in patients with apparent blindness when they are shown a rotating striped drum, proving that their visual pathways operate correctly (2). Numb hands characterize a type of conversion disorder called "glove anesthesia"; the numbness stops abruptly at the patient's wrists, a clear demarcation that does not correlate to any known nerve pattern or function. Also, patients who show paralyzed arms with functioning shoulder muscles that work normally to correct the posture of their trunk contradict what we know about how the nervous system is structured (3). These examples demonstrate how the symptoms the patients express belie a properly working nervous system. Knowledge of neurobiology, or lack of it, seems to influence how the symptoms play out in the patients: it has been shown that symptoms become more biologically traceable when the patient knows more about the body's physiological functioning. Learned information stored in the mind is used to determine the physical symptoms unconsciously expressed by the patient. The conscious functions of the mind can work unconsciously through the nervous system to shape behavior.

Treatment of Conversion Disorder primarily involves psychotherapy. Often patients go into spontaneous remission, or they have a complete recovery shortly after visiting a psychiatrist (2). Despite the effectiveness of psychological treatment, patients are often incredulous when told their symptoms are "imaginary" or "mental". Because this is often counterproductive to the doctor-patient relationship, patients are not told that their condition is thought to be psychological; they are first treated as if the symptoms were organic (3). Associating "mental" with "imaginary" in the minds of doctors indicates a powerful assumption that something controlled by the mind and expressed through the body is not "real" and is therefore a feigned illness. The mind, with its intimate connection to the nervous system and behavior by way of the larger structure of the brain, is more than a superficial layer of us as humans. It too is a part of the body, and a schism in the mind, whether it originates in military trauma, sexual abuse, or depression, remains a very real source of physical and medical illness.

This discussion of the distinction between mind and body is not new. Conversion Disorder was originally called "hysteria," and it has been described in documents hundreds of years old. The ancient Egyptians attributed it to a malpositioned or "wandering" uterus, and the name derives its meaning from this distinctly feminine problem. In the 1560s, the first documented study was done claiming that "hysteria" was located in the mind rather than the body. Three hundred years later, the French neurologist Charcot hypothesized that hysteria originated from an organic weakness of the nervous system. Sigmund Freud became captivated by this idea and eventually replaced "hysteria" with the term "conversion" because he theorized that the symptoms were "intrapsychic conflicts" manifested (or converted) physically in the patient (3). This progression towards the psychological has been inverted in recent theory.

Increasingly, Conversion Disorder is being thought of and researched biologically rather than psychologically. Technology has supplied methods of testing elements of the nervous system, and yet no definitive causes have been identified. It has only recently become common thought that Conversion Disorder arises from some sort of collaboration between the mind and the nervous system (3). This distinction between the "mind" and the "nervous system" (and the revelatory idea that they are indeed related) demonstrates a crucial partition that most scientists seem to make between the two. In my view, the mind is the cognitive function of a larger entity, which can be called, as we do in class, "the brain". The brain encompasses both the biological stratum of the nervous system and the cognitive stratum of the mind. As debates over Conversion Disorder show, they are interrelated. The nervous system does not only determine the mind; it too can be influenced by the mind. To attempt to divide them for study may be useful, but they should not be permanently separated. Parobek describes the state of Conversion Disorder as a "Tower of Babel that obscures accurate identification and nomenclature" (3). This is apt: like the myth, the relationship between the two elements of the brain creates a confusing cacophony that obscures precise causation, because that is its inherent structure. Nomenclature is a system of understanding and representing, and if Conversion Disorder, like the Tower of Babel, cannot be contained within one language or one scientific discipline, then perhaps the nomenclature and the building of scientific research need to be broadened to include more "multi-linguistic" elements. The total functioning of the nervous system and the mind meets, exists, and functions together in what is more appropriately called "the brain," and the brain, as a whole, determines behavior.

To understand Conversion Disorder completely will probably require a more multidisciplinary approach instead of trying to locate its cause and process on a purely biological level. When trying to pinpoint whether a patient's symptoms hail from "real" or malingering sources, the observed difficulty lies in the seeming dichotomy between mind and body. This dichotomy, however, remains a created one, built for the benefit of our own understanding. Yet in the case of Conversion Disorder, such delineated scientific thinking seems to have prevented our understanding rather than facilitated it; by inspecting the trees, we are missing the forest.


References

1)PsychNet-UK

2)Emedicine: Instant access to the minds of medicine. Dufel, Susan, M.D. "Conversion Disorder".

3)Parobek, Virginia M. "Distinguishing conversion disorder from neurologic impairment". Journal of Neuroscience Nursing. 04/97. Volume 29. Number 2. p. 128. Available via Infotrack: Expanded Academic (scroll down to E-journals, select Science Direct, and search for the title).


Why Can't I Speak Spanish?: The Critical Period Hypothesis
Name: Stephanie
Date: 2003-04-16 12:42:35
Link to this Comment: 5416


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"Ahhhhh!" I yell in frustration. "I've been studying Spanish for seven years, and I still can't speak it fluently."

"Well, honey, it's not your fault. You didn't start young enough," my mom says, trying to comfort me.

Although she doesn't know it, she is basing her statement on the Critical Period Hypothesis. The Critical Period Hypothesis proposes that the human brain is only malleable, in terms of language, for a limited time. This can be compared to the critical period in the imprinting seen in some species, such as geese. During a short period of time after a gosling hatches, it begins to follow the first moving object that it sees. This is its critical period for imprinting. (1) The theory of a critical period of language acquisition is influenced by this phenomenon.

This hypothetical period is thought to last from birth to puberty. During this time, the brain is receptive to language, learning rules of grammar quickly through a relatively small number of examples. After puberty, language learning becomes more difficult. The Critical Period Hypothesis attributes this difficulty to a drastic change in the way that the brain processes language after puberty. This makes reaching fluency during adulthood much more difficult than it is in childhood.

The field of language acquisition is very experimental because scientists still do not completely understand how the brain deals with language. Broca's area and Wernicke's area are two parts of the brain that have long been identified as important for language. Broca's area lies in the left frontal cortex, while Wernicke's area lies in the left posterior temporal lobe. These areas are connected by a bundle of nerves called the arcuate fasciculus. Both Paul Broca and Karl Wernicke had patients with lesions on their brains. The problems caused by these lesions led to the identification of Broca's area as the site for the production of speech and Wernicke's area as tied to language comprehension. (2) The location of these areas, as well as the effects of anesthetizing one half of the brain, have led scientists to believe that language is primarily dealt with by the left hemisphere of the brain.

Recent studies have shown that activity in the planum temporale and the left inferior frontal cortex during acts of language is not unique to hearing individuals and therefore cannot be attributed to auditory stimuli. The same brain activity was shown in deaf individuals who were doing the equivalent language task in sign language. This adds more support to the idea of specific areas of the brain devoted to language. (3)

Noam Chomsky suggests that the human brain also contains a language acquisition device (LAD) that is preprogrammed to process language. He was influential in extending the science of language learning to the languages themselves. (4) (5) Chomsky noticed that children learn the rules of grammar without being explicitly told what they are. They learn these rules through the examples that they hear, and amazingly the brain pieces these samples together to form the rules of the grammar of the language being learned. This all happens very quickly, much more quickly than seems logical. Chomsky's LAD contains a preexisting set of rules, perfected by evolution and passed down through genes. This system, which contains the boundaries of natural human language and gives a language learner a way to approach language before being formally taught, is known as universal grammar.

The common grammatical units of languages around the world support the existence of universal grammar: nouns, verbs, and adjectives all exist in languages that have never interacted. Chomsky would attribute this to the universal grammar. The numerous languages and infinite number of word combinations are all governed by a finite number of rules. (6) Charles Henry suggests that the material nature of the brain lends itself to universal grammar. Language, as a function of a limited structure, should also be limited. (7) Universal grammar is the brain's method for limiting and processing language.

A possible explanation for the critical period is that as the brain matures, access to the universal grammar is restricted, and the brain must use different mechanisms to process language. Some suggest that the LAD needs daily use to prevent the degenerative effects of aging. Others say that the brain filters input differently during childhood, giving the LAD a different type of input than it receives in adulthood. (8) Current research has challenged the critical period altogether: in a recent study, adults learning a second language were able to process it (as shown through event-related potentials) in the same way that another group of adults processed their first language. (9)

So where does this leave me? Is my mom right, or has she been misinformed? The observation that children learn languages (especially their first) at a remarkable rate cannot be denied. But the lack of uniformity in the success rate of second language learning leads me to believe that the Critical Period Hypothesis is too rigid. The difficulty in learning a new language as an adult is likely a combination of a less accessible LAD, a brain out of practice at accessing it, a complex set of input, and the self-consciousness that comes with adulthood. This final reason is very important. We interact with language differently as children, because we are not as afraid of making mistakes and others have different expectations of us, resulting in a different type of linguistic interaction. Perhaps the LAD processes those types of interactions better. There is not yet enough research to make a conclusive statement, but even the strictest form of the Critical Period Hypothesis does not say that language learning is impossible in adulthood. I guess that doesn't get me off the hook. I'd better keep studying.


References

1) Learning Who is Your Mother, The Behavior of Imprinting by Silvia Helena Cardoso, PhD and Renato M.E. Sabbatini, PhD.

2) The Brain and Language, language page of the Neurobio for Kids website.

3) Brain Wiring for Human Language Scientific American article.

4) Universal Grammar [Part 1], forum area of Gene Expression website.

5) The Biological Foundations of Language, Does Empirical Evidence Support Innateness of Language? by Bora Lee.

6) Evolution of Universal Grammar by Martin A. Nowak, Natalia L. Komarova, and Partha Niyogi.

7) Universal Grammar by Charles Henry.

8) A concept of 'critical period' for language acquisition, Its implication for adult language learning by Katsumi Nagai.

9) Brain signatures of artificial language processing: Evidence challenging the critical language hypothesis by Angela Friederici, Karsten Steinhauer, and Erdmut Pfeifer.


Speaking of Happiness: The Interplay between Emotion and Cognition
Name: Kat McCorm
Date: 2003-04-17 00:01:20
Link to this Comment: 5428


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

"And this is of course the difficult job, is it not: to move the spirit from it's nowhere pedestal to a somewhere place, while preserving its dignity and importance."

I cry. There is pressure behind my eyes, my skin turns blotchy, my lips tremble, and mucus clogs my airways, making it difficult to breathe. I hate crying in front of others: not because I want to hide how upset I am, but because the second most people perceive my emotional state as fragile, they assume my reasoning and mental functions are also not sound. The outward expression of an inward instability is something we save for those we know and trust best; they do not view our emotionality as a weakness, for they already know us to be strong. Crying is represented in our culture as a lack of control. When upset, the "ideal" is to keep a cool head (and a poker face), not allowing emotions to enter into the decision-making process. However, I submit that without our emotional base, rationality would have no reason or foundation upon which to operate.

A multitude of opinions can be found on the subject: are emotions more a function of the heart or of the head? According to Antonio Damasio (1), emotions and feelings are an integral part of all thought, yet we as humans spend much of our time attempting to disregard and hide them. In another view (2), experience is the result of the integration of cognition and feelings. In either view, it remains indisputable that emotions are not what we typically make them out to be: the unwanted step-sister of our cultural sweetheart, reason. Reason in our culture denotes intelligence, cognition, and control. Emotions seem such a "scary" concept to our collective mind because they can be so overwhelming, and can cause us to lose the control we are so reluctant to relinquish. Consequently, the perceived division between emotion and reason has produced more polar divisions that we experience on a daily basis: the great schism between the humanities and the sciences, for example. However, as is pointed out by a recent NIMH study on emotion and cognition (3), this historical division between emotion and cognition is losing its utility as research progresses. The integration of the concepts is reflected in the interdisciplinary interest: from neurobiology to psychology, the implications are far-reaching.

An early interpretation of the relationship between emotion, cognition, and physiology was that of William James, who thought of emotions as results of physiological processes of the autonomic nervous system (6). According to his school of thought, first comes cognition, then a physiological response, and then an emotion. In response to an event such as the death of a friend, first cognition steps in about what this means, then the body begins to cry, and because we are crying, we begin to feel sad. A later theory was proposed by Walter Cannon and Philip Bard. This theory (4) proposes that in response to a stimulus, a signal is sent to the thalamus, where the signal splits, half going toward the experience of an emotion and the other half toward producing a physiological response. Most modern neurobiologists do not agree with this theory, however. The question remains: what physiological structure can we pinpoint as the source of our emotions?

A recent NIMH study attempted to trace emotional and cognitive memory to a physical structure in the brain, and found that the amygdala has a large role in the storage and integration of both. There is much evidence that emotions and cognitive processes are in some way interdependent. One example of this is the research of Monica Luciana (3). According to her findings, spatial working memory is influenced by goals and emotional states, as she tested through the use of dopamine and serotonin agonists and antagonists. When dopamine agonists were used (emulating an emotion of happiness), subjects' performance on spatial ability tests improved. The use of serotonin agonists (emulating negative emotions) slightly impaired performance, as did dopamine antagonists. In the same study, depressed patients were found to have increased blood flow to the amygdala, and therefore increased amygdala activity. In reflecting on this, I observed that many of my own depressed periods occurred at times when friends and family described me as "thinking too much." Could this be related to the overactivity of the amygdala, a structure which serves as an integration center and memory "enabler"? Shakespeare describes a similar experience in "As You Like It" (5), saying: "it is a melancholy of mine own, compounded of many simples, extracted from many objects, and indeed the sundry contemplation of my travels, which, by often rumination, wraps me in a most humorous sadness."

I love. I, the self, love something in the environment around me. But this is not just some ethereal feeling which cannot be placed or defined or seen, which can only be felt and described by musicians and poets. Love, traditionally, is placed on an invisible pedestal, floating mysteriously above us, our rationality, our flesh. But I love. I feel weak in the knees, butterflies in the stomach, head over heels in love. Somehow, we still connect this otherworldly experience with physical sensations. And yet, we are reluctant to bring love from its abstract home into our own bodies, our own minds, afraid that if we recognize our emotion as something totally contained within our brains, something totally human, we will wake to find some of the wonder, tenderness, and luster gone. Is this also reflective of some human insecurity? Not until we can bridge the illusory gap between our emotions and our cognition can we fully understand the relationship between our brain and our behavior.

References

1) A.R. Damasio, Descartes' Error, 1994

2) Thinking, Emotions, and the Brain

3) From Neurobiology to Psychopathology: Integrating Cognition and Emotion, on the NIMH website

4) Laughing out Loud to Good Health.

5) William Shakespeare (1564–1616). The Oxford Shakespeare. 1914. , on the bartleby website.
6) Theories of Emotion--Understanding our own Emotional Experience.


Review: The Forbidden Experiment by Roger Shattuck
Name: Geoff Poll
Date: 2003-04-17 02:05:13
Link to this Comment: 5431


<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

It is one of the oldest unanswered questions in all of science, taking a comfortable backseat in importance only to "Where did we come from?" and "Why are we here?" Though less exciting than its company in the pool of great unanswerable questions, the paradoxical 'Nature or Nurture?' question—paradoxical because the two cannot be pieced apart as the question's form implies—has tormented truth-bound scientists for years, presenting them with a set of hypotheses that cannot be empirically tested. Recent advances in genetics have brought forward multitudes of new possibilities for those who would study the pure effects of environmental variables once nature has been controlled for. While this makes for exciting news in animal testing, we do not allow ourselves to manipulate other human beings for the sake of collecting data. Our strong moral stance does not diminish our curiosity, and so the question must be asked: what would we do if a case in which the human had already been manipulated, by no will of our own, fell into the hands of science? How far would we go?

Every couple hundred years, one of these cases, whether by chance or by an act of true cruelty, falls into the hands of scientists eager to make the most of such a "misfortune". Roger Shattuck's The Forbidden Experiment follows one of the more prominent cases of recent history, that of Victor, the Wild Boy of Aveyron.

The book takes little time to pique the reader's curiosity with the tale of a "savage" twelve-year-old wandering out of the woods of southern France on a cold January evening in 1800. Without a known history or the ability to communicate with his captors, Victor, as he was later named, was assumed to have lived in the wild for at least six years and probably more. In the midst of an intellectually lively France, Victor found immediate fame and was brought to Paris so that the best scientists could take advantage of studying a human raised almost completely in isolation. The story is a short one, lasting only the 28 years that Victor lived in Paris society before his death in 1828, and his time of immediate interest to the Paris scientific community ended long before that. (1)

To an intellectually enlightened community the contradiction should have been apparent: Victor was detained and forced to learn how to live as a proper human being in civilized society, but he was treated every step of the way as if he were an animal. Nowhere were the inconsistencies more apparent than in the oftentimes tortured mind of the main scientist overseeing Victor's progress, the young and gifted Dr. Itard. It is Itard who subjects Victor to inhumane treatment, from keeping him on a leash during walks around the grounds to using violence to arrive at a desired level of obedience. Victor, by most observers' accounts, was not human but some kind of savage creature, a view which justified that type of treatment. Itard was one of the only scientists who disagreed, but his own belief that Victor was more than animal had already condemned his own methods. (1)

The focus of Shattuck's narrative falls within the five years Itard spent attempting to give back to Victor the human qualities we all grow up with, but which Itard believed had lain dormant during the years Victor spent in the wild. The author's treatment of Itard makes for most of the interesting reading of the book. Shattuck clearly admires in Itard his determination, his passionate curiosity, and his faith in the innate goodness of the almost altogether unresponsive being in front of him. It is through much of Itard's own pen that we get not only the notable events of Victor's progress, but a look inside the mind of a man who needed to believe that we are made of something more than the right combination of a biological basis and a timely social training. It is that last element, the human one, for which Itard searched. (1)

But his search was misguided, and he never frees himself from his original diagnosis or from a strict idea of what success with Victor would be. He sees his biggest challenge as that of language, and in the end it is his inability to give Victor the gift of speech that leaves him with the feeling of failure. Shattuck aptly jumps on the irony of that situation: Itard kept Victor in an institution for deaf-mutes, yet he never once attempted to teach Victor sign language. (1)

At each step he hoped for Victor to act like his idea of a human while treating him still more and more like a lab rat. The most striking series of events arrived with the onset of puberty in Victor. Itard saw the heightened sexual drive that comes with puberty as a drive towards love—the most human of all emotions—as well as towards the forming of new ties and relationships between young people. Itard had kept Victor isolated from any peers his age, but when Victor began to feel urges, the scientist set up controlled situations with females in which he expected Victor's natural urges to lead him towards some kind of more substantial communication. Without any kind of reference, Victor just felt frustrated and gave up, not knowing what to do. Itard gave up as well and condemned Victor as an idiot. (1)

Years later, shortly after Itard's death, a fellow doctor honored his work in a statement that throws light on Itard's greatest obstacle, his own peer community. "To train an idiot," praised Dr. Bosquet, "to turn a disgusting, antisocial creature into a bearable obedient boy—this is a victory over nature, it is almost a new creation" (165). It was this way of thinking about people that caused the short-sightedness allowing the experimentation to proceed and a very creative young scientist to find himself walled in. (1)

Sadly, the case of Victor and Itard does not stand alone. Among many other claims of "wild" children, the most famous recent case was that of Genie. Thirteen years old when found in 1970, Genie had been deprived by her parents of all social contact for the preceding ten years of her life. She was physically debilitated and was not able to speak, a skill she never fully gained. As soon as she was discovered by a social worker, Genie was seized upon by scientists with the same vigor as Victor had been 150 years earlier. (4)

"It's a terribly important case," claims Harlan Lane, a psycholinguist and author of a book about Victor called, The Wild Boy of Aveyron. Regarding Genie, Lane posits, "Since our morality doesn't allow us to conduct deprivation experiments with human beings, these unfortunate people are all we have to go on." (3)

Victor was abandoned by Itard after he failed to produce speech. He lived the rest of his life in isolation, not at all a deviation from his first 18 or so years. (1) Genie is still alive but struggles with all aspects of her life. She has had no more contact with the researchers assigned to her case since they were brought to court for taking advantage of her for selfish motives. (3)

Shattuck approaches the case of Victor with great curiosity, and in many ways justifies the ignorance and cruelty of Itard and the other scientists of the time. In a more recent book, Forbidden Knowledge, he retracts some of that innocence and makes some heavy criticisms of our overly inquisitive society. He references mostly mythology, and his examples range from Adam and Eve to Mary Shelley's Dr. Frankenstein. In each case temptation overrides our intellect. (5)(6) Victor, on the other hand, was unconcerned with knowledge and ego, and not tempted by power and money. His mind was freer than ours may ever be. Shattuck speaks of Victor escaping from "humanity into animality" (181). It's an attractive concept. Of course we would like to teach Victor to talk. We want to know what he knows. Itard preferred to refer to Victor's state as one of forgetfulness. (1) Is that what we want, to forget? What are our motives in the end? To learn from the dumb how not to speak, or to teach language so that Victor may live with us and suffer the way that we do? What does that make us?

References

1) Shattuck, Roger. The Forbidden Experiment. New York: Farrar Straus Giroux, 1980

2)NOVA transcript, transcript for 'Genie' episode

3)The Civilizing of Genie , the story of Genie

4)Feral Children Website, a great resource about 'wild' children

5)Online News Hour, Shattuck interview

6)Ethical Culture Book Review, review of Forbidden Knowledge


The Runaway Brain: A Review
Name: Kate Tucke
Date: 2003-04-17 11:12:14
Link to this Comment: 5433

<mytitle>

Biology 202
2003 Second Web Paper
On Serendip

Christopher Wills has written a fascinating chronicle of human evolution in a style that will keep the reader glued to the book to find out what happened next. The Runaway Brain is organized into four sections. First Wills addresses The Dilemmas, the many problems that students of evolution encounter, arising mainly from public perception of the subject and from the many prejudices of those involved with the work. The question of where our species first appeared is a particularly contentious one; although it is now widely accepted that the species originated out of Africa, there are, regardless, those who still disagree, and especially at first many dismissed an African origin out of hand. Wills' second main issue is that of the transition to actual "humanity" and whether it occurred once or twice. As he discusses in the chapter entitled "An Obsession with Race", those who deride people of African descent often use the multiple-origin theory to justify racism. Wills decries this abuse of the science and firmly argues against those who would use evolution to further racist propaganda. He also takes issue with those who insist on believing that all of humanity came from one Eve and one Adam, instead putting forth the theory of the "mitochondrial Eve": that we all descend from a single source of mitochondrial DNA, but that we do not in fact descend from two individuals.

Wills' own slant on the issue is that humans are involved in a feedback loop which he calls the "runaway brain". Wills claims that humans are unique in that they have a culture which has developed over time. This culture injects an otherwise absent variable into the evolutionary process. Humans, Wills says, had advanced brains which allowed them to create a complex culture. The culture challenged their brains and led to more complex brains as the species evolved. This process continued to repeat and is still repeating today. This, Wills claims, is what is driving us towards our ultimate best.

The second section of the book is titled The Bones and tells the story of the archeological remains of the ancestors of humanity. Wills creates a fascinating tale as he describes the lives, feelings, and desires of the people involved in finding these bones. Not only does he describe each find and its significance to the understanding of evolution, he also tells the story of the finder, making the section more of a human drama than a dry telling of facts. In this section it becomes readily apparent that Wills is trying to create a version of evolution that everyone will be interested in reading. In the preface to the book he states that he wants to bridge the gap between scientists and the public, and here he does so, mentioning that you might be reading this book while sitting by a fire with your pets. Wills is quite adept at turning the facts of evolution into a story-telling process.

After tracing the various archeological finds showing our possible descent from apes, Wills goes on to discuss the genetics of evolution in The Genes. He first describes the basics of genetics, instructing the reader as to the nature of genes, alleles, DNA, and mutations. He places a very strong emphasis in this section on the fact that mutations are usually neutral, having neither a positive nor a negative influence on the organism. Wills provides the basic genetic knowledge needed to understand how traits can be passed down from generation to generation in a population and, in turn, how this can alter the makeup of the population.

The final section of the book, The Brain, attempts to discover why the human brain differs from those of all other animals. As with The Genes, Wills first begins with a description of the basic functioning of the brain and then moves into a more complex analysis of why our brain structure has led us to the more complex abilities we see in human culture today. Wills shows that human brains were not that different from those of the organisms we evolved from originally, but that human brains have become more complex over time. Here again he puts forth his theory of the "runaway brain". He postulates that human culture has created a feedback loop with the complexity of the brain, one leading to an increase in the other and back and forth, with our current brain as a result. Hence, the brain has been directed towards an ultimate goal: the complex structure that we utilize today. This is not, however, the ultimate endpoint for humanity, Wills argues. He claims that evolution is still ongoing, although we cannot see it at work, and will continue into the future.

The Runaway Brain is a highly enjoyable read for anyone interested in evolution. Every fact and discovery is punctuated with a human-interest story telling the history of the discoverer, and every important event is marked with an anecdote explaining its significance in a way that everyone should understand. Wills states in the beginning that his "goal is to try and close the gap between the scientist's perception of how evolution works and the public's; in essence, to demystify the process of evolution" (xx). In this respect, he has succeeded admirably. The problem with the book, however, is that the sections are quite disjointed. Although the reader will learn a lot about The Bones, The Genes, and The Brain, they are not all tied together in a way which the reader can easily comprehend. The message of the "runaway brain" is also not clear enough: although Wills refers to it periodically, he does not use the mountain of evidence he presents in this book to back up his theory. As an exploration of how humans evolved to their current complexity, Wills does an admirable job of chronicling, but he is not so successful at explaining the process.

References

Wills, Christopher. The Runaway Brain: The Evolution of Human Uniqueness. New York: Basic Books, 1993.


Murderous Mommies
Name: Zunera Mir
Date: 2003-04-18 16:39:08
Link to this Comment: 5442



Biology 202
2003 Second Web Paper
On Serendip


"I was looking for a way to get attention to myself, and maybe if I could just do something drastic enough, that someone would see that I needed help"-mother who tried to suffocate her child (1)

Why would any mother try to suffocate her child? Is it not true that:

"A mother's love for her child is like nothing else in the world. It knows no law, no pity, it dares all things and crushes down remorselessly all that stands in its path?" --Agatha Christie.

Mothers, in most cases, are seen as the essential "caregivers" in many societies and cultures. At one point in time, a novel or textbook, screenplay or script, Hallmark card or holiday has celebrated "motherhood" and what it entails. The bond of mother and child is shown to be "unbreakable," and we hear stories of mothers lifting cars to save pinned children, essentially sacrificing their lives for their children's survival. Growing up, we might hear that being a mother is an "under-appreciated job" and that "all the work mothers do - whether paid or unpaid - has social and economic value" (1). Mothers can essentially be the shapers of the future society: able to raise children, and possibly even hold down a job, while still being able to cook and clean. Author Ellen Bravo stated, "Only Clark Kent had to be Superman, but every mother has to be Superwoman" (2).

Only recently (about 26 years ago) did physicians begin to disagree with Miss Christie's belief that mothers are extraordinary defenders of children. Dr. Roy Meadow coined the term Munchausen Syndrome by Proxy (MSBP) in 1977 to describe an unusual and bizarre behavior exhibited by parents of extremely "sick" children, after observing a mother who had tampered with her child's urine samples (3). In MSBP, the parents or caregivers of a child falsify illnesses or fabricate symptoms for the child. They will "purposefully inflict pain/harm on another person, usually his/her own child" (4). Not only will the parent inflict pain, they will try to elicit "health problems" in their children through various means, usually undetectable by forensic exams (5). They will suffocate the children, poison them, inject them with feces/urine/insulin, block airways with fingers/hard objects, scrub the child's skin with cleaning solvents and bleaches, or feed children large amounts of sugar or salt (6), (3), (5). The mother (usually the sufferer of Munchausen's and the perpetrator of MSBP in more than 95% of cases), or, very rarely, the father, might also feed the child more medication than required, tamper with medical equipment, or swap medicine (7), (8), (6).

Perpetrators can make their deceptions look like plausible and valid medical problems, which doctors will then try to treat:

"The perpetrator can make the a child's emesis, urine, or feces appear bloody by adding their own blood, or paint, or dye, or cocoa. Feeding a child large amounts of salts or sugars can create electrolyte imbalance. Scraping a child's skin with a sharp object or applying irritating solutions, such as oven cleaner, or dyes can cause rashes that will last for days, weeks, or even months. Sedatives, tranquilizers, or the injection of drugs or foreign material can induce neurological symptoms. Only the imaginations of these parent's limit the variety of believable signs and symptoms. The children not only suffer from the parents' actions, but they are also subjected to an extensive array of invasive radiological, medical, and surgical procedures that are unnecessary and painful (9).

Take the case of young Jennifer Bush. Her mother, Kathleen Bush, admitted Jennifer to a hospital on "130 separate occasions" (8). Jennifer spent "640 days in hospitals" and "underwent approximately 40 surgeries for chronic illnesses such as immune system deficiency, gastrointestinal problems and seizure disorders and needed a feeding tube to eat" (10), (8). The surgeries Jennifer Bush endured removed not only her gallbladder, but also her appendix and part of her intestines (9).

What did the mother get out of it? Instant fame? Former First Lady Hillary Rodham Clinton chose Jennifer as the poster child for her campaign on healthcare reform (9), (10). Kathleen Bush was lauded and given the highest praise for her "exemplary devotion" to her little daughter (11). However, the cause could go deeper than attention seeking, especially since MSBP is an unusual form of abuse.

Why is MSBP bizarre? There are basically three unusual facts surrounding MSBP: one, young children are suffering, even dying, from unknown causes; two, mothers are the ones behind the child's suffering or eventual death; three, the mothers are knowingly harming their children and trying to cover up what they are doing from others, even from their spouses.

Young children, more vulnerable than adults and older children, can be susceptible to many diseases. However, a noted history of similar sibling illnesses, unexplained deaths or illnesses of siblings, more than one or two cases of SIDS in the family, or a doctor baffled by the patient's symptoms could suggest that the child's illness is due to an "unlikely" alternative cause - the mother. This was concluded to be the cause of a child's "sickness" once nurses, spouses, doctors, and other family members noticed that when the mothers left the child, the signs and symptoms of the "illness" would alleviate, or no longer surface. An undercover video recorded by an English doctor, David Southall, confirmed suspicions that some children were actually harmed by their parents. "In all Southall recorded 39 different children over the course of eight years. In the taping, 33 of the 39 children are seen being attacked" (12). If this is true, then one in five "cot deaths," or SIDS, is "actually a murder resulting from a mother with MSBP" (5).

This may be hard for many people to accept since, as stated before, mothers and parents in general are seen as the "caregivers" and "protectors" of children. For instance: "Few people ask questions, for how many people would dare to think that this wonderful, kind, caring, compassionate person who has devoted all her life to helping others is, in reality, a murderer?" (5). This denial can be linked to societal views. Mothers and parents are placed in high esteem when it comes to the care of children. Therefore, why would a doctor even question the reports made by the parents of a "sick" child? "Physicians are trained to believe in the medical history given by a caregiver and may not question the facts" (13). On the other hand, if the perpetrator of MSBP is a nurse or doctor, society is taught to trust the people "in the coats and uniforms." We give people in uniform a certain degree of power. Police have the right to enforce the law, and doctors have the right to heal and save the sick. Whether or not they abuse this power cannot be controlled by what society deems the "proper" function of a job. People, given power, may use it or abuse it. As long as one does not get caught, society and others will never know how one is using one's power. All the same, most people trust each other. Doctors trust parents to tell the truth about symptoms and problems, and parents trust doctors and nurses to treat and not harm their children.

MSBP violates this societal norm. Mothers, or even fathers, afflicted with MSBP no longer follow the "traditional" role of caregiver. Instead, they inflict pain and irrational attacks on their own flesh and blood. They only seem to get away with it because many parents with MSBP are "highly attentive" and "unable to leave the side of the child," while also being "supportive to staff," involved in the "hospital environment," and extremely "close to hospital staff" (14), (15). Thus, on the surface, this parent seems to be a caring and overprotective mother. What the medical staff, spouse, and family do not realize is the amount of care this "devoted" individual has put into hiding the abuse inflicted on the child and the fabrications created around it.

Why would anyone do this? The proposed causes of MSBP are many and varied. One of the most popular explanations for the disorder traces it to the mother's own childhood. Most of the mothers with MSBP may have had an "emotionally deprived childhood with a high probability of physical abuse" (9). These mothers could have suffered at the hands of their own parents, causing them to do the same to their children. Another popular reason, related to the presence of abuse as a child, is the search for recognition, sympathy, or just "attention," such as Kathleen Bush sought when abusing her daughter Jennifer. These women could be in an unhappy marriage, where the husband is inattentive and distant, or could have depression due to childhood abuse and trauma (7), (16), (4), (6), (15), (9). If the husband is distant, the mother may harm the children to "grab the husband's attention or even [use it] as a method of taking revenge against the husband" (4).

Munchausen Syndrome itself (a separate syndrome in which the individual inflicts pain on himself or herself, or falsifies his or her own illnesses, to gain entry into the hospital) could also carry over into the sufferer abusing his or her child.

A more controversial cause deals with mothers "outwitting doctors" - authority figures in society. "Some offenders might receive gratification as they fool the doctors. They derive enjoyment from knowing what is wrong with the child as medical experts remain baffled" (15). The relationship developed between the mother and doctor is strictly "sadistic" - the mother harming the child to get back at the doctor (7). In some cases, the doctor could stand in as the target for the mother's hatred or ambivalence towards her own father (7). If the mother felt that her father "rejected her," leading her to feelings of "psychological abandonment" and "emotional hunger" and causing her to fail in "developing a sense of self," she might redirect her feelings about her father's treatment of her onto a male doctor (7).

All of the "sources/causes" of MSBP, I feel, can be related to power and control.

"The mother may feel hugely empowered, because she knows society is on her side- mothers don't deliberately harm their children in such a way, thus giving her the edge in a social encounter with the doctor and to a certain extent she has gained control over the thoughts and actions of a well educated doctor by 'feeding him lies' and determining the periods of acute illness in the child" (7).

As in any abusive disorder, control and power are major issues. For anorexics, control means starving themselves. Bulimics control the amount of food after bingeing by either vomiting or doing intense exercise. When children are abused, the first thing they lose is control and power. Their attacker wields all the power and all the control. As we discussed in class, control means a lot to us, whether it is physical or mental. The fact that we cannot control certain aspects and functions of our lives frightens some people or makes them feel inadequate (9). Thus, the reality behind this disorder can be seen when examining the actions the mother might take.

In order to cause symptoms in the child, mothers have to physically induce the illnesses. As stated earlier in the paper, they can achieve this in numerous ways (injecting feces, applying cleaning solutions to skin, smothering, etc.). Inducing "sickness" in a child gives the mother a great deal of power and control (7), (16). She can do whatever she likes with the child. The mother has the ultimate upper hand in how much the child will suffer before she gets caught, or before the child dies. This absolute control over another being yields secondary benefits: attention for the mother, a feeling of worth from attentive hospital staff and doctors, recognition for "heroism," revenge, etc. Overall, if the mothers practicing MSBP really were affected by childhood abuse or trauma enough to feel that they are "ignored," then control and power would be the most logical explanation for the development of MSBP.

To date, we do not have enough information to pin one cause onto MSBP. However, I do not think the syndrome can actually be explained by one cause, despite the lack of information. Any disorder, especially a mental one, can be attributed to several factors and causes. Mothers who perpetrate MSBP yet never had any childhood abuse or trauma might be led to harm their child for other reasons: to get back at their husbands for disregarding them, to deal with insecurity by searching for attention and support through hurting their child, depression - the list could go on.

An MSBP survivor noted,

"Just as we have been called as a nation to stand together, may we survive to pull each other up to the mountaintop and shout-NO MORE. Our children deserve to be safe, to know and receive unconditional love. This will be our mission and our legacy to the next generation" MB (13).

What is important now is to realize that MSBP DOES exist, whatever the reason. The potential victims are defenseless children. By reaching out to the mothers and helping them deal with their problems, or by recognizing the "signs" of MSBP early on, we could potentially prevent more deaths of children (9).


Bibliography

1) Mothers and More Homepage, a non-profit organization site dedicated "to improving the lives of mothers through support, education and advocacy. We address mothers' needs as individuals and members of society, and promote the value of all the work mothers do."

2) CNN.com Homepage, article by Annelena Lobb on "the market value" of a mother's work

3) Hendrick Health System Homepage, a brief definition and description of Munchausen Syndrome

4) General Health Articles, brief paper on Munchausen Syndrome

5) Bully Online, site on attention-seeking disorders; has a section on MSBP

6) Body Magazine Online, site on women's health and lifestyle

7) School of Medicine Papers, paper by a medical student on MSBP, entitled "Munchausen Syndrome by Proxy"

8) Court TV Homepage, site on trials and verdicts

9) Nursing Career Site, self-study module on MSBP for RNs

10) CNN online Homepage, article on the Kathleen Bush case by Susan Candiotti

11) Self Help Magazine, paper on MSBP by Marc D. Feldman, M.D., entitled "Parenthood Betrayed: The Dilemma of Munchausen Syndrome by Proxy"

12) ABC News Online, article on MSBP

13) MSBP Survivor Network Site

14) Pediatric Bulletin, article on MSBP on the Pediatric Bulletin site

15) Allen Cowling Investigations Homepage, site on false allegations; paper on MSBP

16) Stranger Box site, Post-Trauma page; section on MSBP


Current Research Investigations of Corollary Discharge
Name: Michelle C
Date: 2003-04-22 05:40:52
Link to this Comment: 5483



Biology 202
2003 Second Web Paper
On Serendip

Corollary discharge assists both human and non-human animals in distinguishing between self-generated (internal) and externally generated stimuli. By sending signals which report important information about movement commands and intentions, animals are able to accurately produce motor sequences with ease and coordination. When a motor command initiates an electric organ discharge, a signal transmits important information to the brain, serving as a feedback mechanism that assists with self-monitoring; this is formally defined as corollary discharge (6).
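The comparator logic behind this definition can be sketched in a few lines of code. This is a minimal illustration of the general efference-copy idea, not a model of any specific study cited here; the numbers and the gain parameter are invented assumptions.

# Minimal sketch of the corollary discharge comparator idea.
# A copy of the motor command predicts the sensory consequences of
# one's own movement; subtracting that prediction from the raw input
# leaves only stimulation attributed to the outside world.

def external_component(sensory_input, motor_command, gain=1.0):
    """Return the part of the input not explained by one's own movement."""
    efference_copy = motor_command          # internal report of the command
    prediction = gain * efference_copy      # predicted self-generated sensation
    return sensory_input - prediction

# Self-generated movement alone: residual is zero, nothing "external".
print(external_component(sensory_input=5.0, motor_command=5.0))  # 0.0

# An outside stimulus arriving during the same movement: residual is
# nonzero, so the animal can tell the external event apart.
print(external_component(sensory_input=8.0, motor_command=5.0))  # 3.0

On this picture, a weakened or absent prediction (a faulty corollary discharge) would leave self-generated input looking external, which is the intuition behind the schizophrenia studies discussed below.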

Although the corollary discharge system is one of the most important systems animals possess for the control and detection of motor movements, its specific neurological mapping is largely unknown. Many studies which investigate the specific nature of corollary discharge focus on either auditory or visual sensory perception. Current investigations of corollary discharge commonly involve non-human primates and humans who suffer from Schizophrenia. By using non-human primates for the investigation of the neuronal network of the corollary discharge system, both invasive and non-invasive approaches may be explored. In addition, investigating Schizophrenics who suffer from auditory hallucinations, the inability to differentiate between spoken and thought speech (1), may also significantly contribute to the advancement and increased understanding of the corollary discharge system.

Sommer et al. proposed a neuronal pathway for corollary discharge in non-human primates. He suggested that this pathway extends from the brainstem to the frontal cortex. Within this pathway, the brain initiates movement as well as supplies internal information which is then used by sensory systems to adjust for the resultant changes (5). These adjustments were said to occur within the peripheral receptors and motor planning systems, which would then prepare the body for future movements (5). Corollary discharge signals in Sommer's study were identified as movement-related activity projecting upstream (up the spinal cord) away from motor neurons, transmitting information but not causing any actual movement (4). By measuring neuronal firing along the pathway from the superior colliculus to the frontal cortex during normal and stimulated saccade movements of the eye, Sommer measured the corollary discharge signals in monkeys. The results of Sommer's study suggested that non-human primates did in fact transmit corollary discharge signals during eye saccades, which was suggestive of a brainstem-to-frontal-cortex pathway for transmission.

While Sommer's study provided novel and interesting ideas in regard to the specific pathway of corollary discharge, it focused largely on saccadic eye movements in non-human primates. Sommer's non-human primate research findings are consistent with human research; however, they may not be completely generalizable to all human populations. Currently there are various methods for the investigation of corollary discharge in human populations. Electroencephalography (EEG), functional magnetic resonance imaging (fMRI), positron emission tomography (PET) and single photon emission computed tomography (SPECT) have all provided rich and accurate data in regard to both the specific neuronal signaling and the function of the corollary discharge system in humans (4). Some of the most promising research using these methods has come from the investigation of human subjects who suffer from Schizophrenia, a disorder which has been suggested to result from a deficit in corollary discharge function.

Schizophrenic patients with positive symptoms commonly experience auditory hallucinations (2). Although not all schizophrenics experience these hallucinations, a modest proportion report having them (1). The inability to distinguish between covert speech (thoughts) and overt speech (talking) is a common characteristic said to be a distinguishing marker of auditory hallucinations (1). These deficits of perception have been thought to occur as a result of the dysfunction of the corollary discharge system. Without the proper differentiation of overt and covert speech by this system, these speech mechanisms are easily confused and may result in severe neuroses. Normal persons who do not suffer from Schizophrenia are said to be more sensitive to the factors (signals) which help to distinguish overt and covert speech. With these sensitivities, they are better able to make accurate decisions about which speech mechanism they perceive.

The role of corollary discharge in schizophrenia was investigated by Ford et al. in a 2001 study using Schizophrenics who had reported having auditory hallucinations. Ford recorded subjects' responses to covert and overt speech using EEG and sound recordings (1). The study's findings suggested a frontal-temporal pathway for corollary discharge signaling, as well as a reduced sensitivity to signaling in schizophrenics, providing a basis for the inability to efficiently distinguish between covert and overt speech. While these findings suggest only that Schizophrenics devote fewer attentional resources (reduced EEG response to an acoustic probe) to overt as opposed to covert speech, they provide an explanation of why Schizophrenics might experience auditory hallucinations (1).

An additional study by Ford et al. provided a more extensive explanation of the corollary discharge deficits associated with the hallucinations of Schizophrenics. Research findings suggested that Schizophrenics lacked an ability to recognize self-generated speech as their own (2). This inability to distinguish between internal, self-generated speech and externally generated speech was suggested to result from deficiencies in the corollary discharge systems of Schizophrenics who experienced auditory hallucinations (2).

A recent study by Ray Li et al. explored the altered performance of schizophrenics in an auditory detection and discrimination task. While the study's findings concurred with the existing literature suggesting a corollary discharge deficit, no evidence was found to support the idea that the deficit is specific to auditory hallucinations (4). Schizophrenics with both positive and negative symptoms displayed decreases in their sensitivity to perceptual channels during auditory detection tasks as compared to normal subjects. Schizophrenics who experienced hallucinations, however, did not significantly differ from non-hallucinating Schizophrenics in their level of perceptual sensitivity (4). These findings pose an interesting contradiction which challenges the original correlation between dysfunctional corollary discharge systems and auditory hallucinations in Schizophrenic patients. It suggests that these hallucinations may in fact not occur as a result of deficits in the corollary discharge system, but possibly from another underlying mechanism (4).

Both the neurological mapping of the corollary discharge system and its implications in Schizophrenia remain unclear. Sommer et al. provide interesting findings suggesting a brainstem-to-frontal-cortex pathway for corollary discharge in non-human primates. These findings correlate in many ways with the frontal-temporal pathway which Ford proposed in his investigation of Schizophrenic patients. While it is logical that the frontal regions of the brain contribute to the underlying mechanisms of corollary discharge, considering they contain areas such as Broca's and Wernicke's (areas commonly implicated in the production and comprehension of speech), the specific neural network of the corollary discharge system and its other contributing regions require much more investigation to be determined. Current research regarding the role of corollary discharge in Schizophrenia provides useful avenues for investigating the specific nature of corollary discharge, if Schizophrenia truly is a disorder in which corollary discharge is dysfunctional. While the investigations of both Ford and Ray Li support this idea, various studies propose opposing findings. Further investigation of both the specific perceptual deficits of Schizophrenics and their relationship to the corollary discharge system should be performed in an effort to better understand and interpret the mapping and functional properties of the corollary discharge system in Schizophrenics and in normal persons.

References

1) Science Direct

2) Science Direct

3) Letters to Nature

4) Schizophrenia Research, PubMed

5) Science Magazine

6) University of Waterloo Cognitive-Neuropsych


Narcolepsy: Am I really that tired?
Name: Christine
Date: 2003-04-23 17:12:27
Link to this Comment: 5505



Biology 202
2003 Second Web Paper
On Serendip

Sleepiness, whether due to sleep apnea, heavy snoring, idiopathic hypersomnolence, narcolepsy or insomnia from any number of sleep-related disorders, threatens millions of Americans' health and economic security (1). Perhaps the most concerning of these disorders are those that allow sleep to occur without any control over when it happens - idiopathic hypersomnolence and narcolepsy. The two are closely related in that both cause individuals to fall asleep without such control, yet idiopathic hypersomnolence, unlike narcolepsy, occurs without any dreaming during naps (2). For years, narcoleptic people have been falling asleep in corners, concerned, despite numerous attempts to stay focused and awake. But besides the excessive fatigue that people experience, there surely must be more underlying such uncontrolled sleepiness. In particular, the I-function of the brain may not be involved, as people are not aware of when they will fall into their deep sleep.

Narcolepsy has been clinically defined as a chronic neurological disorder that involves the body's central nervous system (CNS). The CNS is basically a "highway" of nerves that carries messages from the brain to other parts of the body. For people with narcolepsy, the messages about when to sleep and when to be awake sometimes hit roadblocks or detours and arrive in the wrong place at the wrong time. This is why someone who has narcolepsy not managed by medications may fall asleep while eating dinner or engaged in social activities - or even at times when they are focused on staying awake yet cannot, due to their narcoleptic nature.

In many cases, however, diagnosis is not made until many years after the onset of symptoms. Studies on the epidemiology of narcolepsy show an incidence of 0.2 to 1.6 per thousand in European countries, Japan and the United States, a frequency at least as large as that of Multiple Sclerosis (3). This is probably because patients only go to see a physician after many years of excessive sleepiness, rather than approaching it right away, assuming that sleepiness is not indicative of any kind of disease. Most people probably assume that their need to sleep is due to something that can be easily controlled by changing their daily routines or lifestyle, reasoning which would easily delay any visits to a physician.

Assumptions aside, recent discoveries indicate that people with narcolepsy lack a chemical in the brain called hypocretin, which normally stimulates arousal and helps regulate sleep. Researchers also discovered that there is a reduction in the number of Hcrt cells, the neurons that secrete hypocretin (4). What this is due to is still rather uncertain. It is evident that some degenerative process occurs which lowers the number of neurons that can secrete this stimulating substance. However, it might also be an immune response to something else occurring in the body. For example, the amount of serotonin in the body may also have something to do with sleep response. During REM sleep, the brain produces the vital chemical serotonin (5). Serotonin is a neurotransmitter, involved in the transmission of nerve impulses. The body uses serotonin every day, and lack of REM sleep means less is produced, or none at all. The precursor of this neurotransmitter is the amino acid tryptophan, which in the brain increases the amount of serotonin. Serotonin is a chemical that helps maintain a "happy feeling," and seems to help keep our moods under control by helping with sleep, calming anxiety, and relieving depression (6). Thus, if not enough serotonin is produced, mood can be affected, as can the ability to sleep. Among other possible factors, these may account for some of the symptoms we can actually see, or for the lapses in neurological pathways that we cannot.

Among the symptoms usually observed, excessive daytime sleepiness is usually the first noted. This is probably the most overwhelming and troubling symptom, encountered by people who want to stay awake during the day. The "I-function," a link between body and mind which seems to be in control of conscious actions, does not seem to play a part in narcolepsy. Instead, it seems that it is forced to sit back because of the lapse in neurological connections. We can see this in that, in a narcoleptic person, the "I-function" has been affected and thus cannot help regulate sleep within the conscious individual's usual scope of control over the body. This may lead to what is perceived as too little sleep, as a person may be frustrated and tiring themselves out trying to stay awake. While someone who is narcoleptic may get more sleep than most, they still feel exhausted and sleepy. Also, at times a narcoleptic may experience sudden excitement or be taken off guard, in which case their muscles may become weak, leading them to collapse and fall asleep - another symptom of narcolepsy known as cataplexy. So in essence, narcoleptics on a regular basis listen to their bodies as to when they go to sleep without even consulting the "I-function," which results in frustration, as people have become accustomed to being in control of such an integral part of their daily lives.

Narcolepsy is also identified through other symptoms like sleep paralysis, hypnagogic hallucinations and automatic behavior. Sleep paralysis is being unable to talk or move for a brief period when falling asleep or waking up. Many people with narcolepsy will suffer short-lasting partial or complete sleep paralysis. Hypnagogic hallucinations are vivid, scary dreams and sounds reported when falling asleep. Automatic behavior is the habit of carrying out routine tasks without full awareness, or memory of them later (4). These symptoms often aid in a physician's diagnosis of narcolepsy. However, as they are often associated with other disorders, it becomes increasingly difficult to differentiate what constitutes a symptom of narcolepsy from the effects of some other disorder. There are, on the other hand, polysomnogram tests available that measure brain waves and body movements as well as nerve and muscle function (4). This is a big leap in helping diagnose narcolepsy, yet it is an option many people might not be able to take on, whether for economic reasons or because results may nonetheless prove inconclusive.

There are many treatments currently available to victims of this sleep disorder. There is currently no permanent cure for narcolepsy; however, the options available help deal with it in a fitting manner. Stimulants are the mainstay of drug therapy for excessive daytime sleepiness and sleep attacks in narcolepsy patients. These include methylphenidate (Ritalin), modafinil, dextroamphetamine, and pemoline. Dosages of these medications are determined on a case-by-case basis, and they are generally taken in the morning and at noon. Other drugs, such as certain antidepressants and drugs that are still being tested in the United States, are also used to treat the predominant symptoms of narcolepsy (7). Drug treatment, however, is only one element of dealing with the symptoms of narcolepsy.

Changes in daily behavior can also help to encourage nighttime sleeping. Avoiding caffeine, nicotine and alcohol in the late afternoon or evening, as well as exercising regularly (at least three hours before bedtime), are easy things to incorporate into one's lifestyle to help get more sleep. Eating foods that are high in tryptophan, such as turkey, milk, bananas and yogurt, also promotes sleep (8). Getting enough nighttime sleep, at least eight hours, can help, as can taking regular naps that last between 20 and 40 minutes at a time. These power naps can really help boost one's energy and alleviate the stress of not getting enough rest in one's day.

So the question remains: what really causes narcolepsy? Yes, we already know that it is a chronic neurological disorder that involves the CNS. But what accounts for this lapse in the pathway? While much research has been done, it still remains a difficult task to diagnose narcolepsy, as many of the symptoms are often associated with other, more credible disorders, at least in the eyes of those avoiding physicians. Many options are available to try to control this disorder. What we have seen to regularly play a huge role in controlling the body - the "I-function" - is being forced to stand aside while the body dictates its irregular sleeping pattern. The intricacies of narcolepsy remain something to be further investigated, to find a means to diagnose it properly, preferably earlier on, and to treat it so that this disorder can one day no longer bring about further economic or personal grief.

References


1) Sleep Apnea, Snoring, Narcolepsy, Insomnia and Other Causes of Daytime Fatigue

2) Better Sleep Now!

3) Center for Narcolepsy: Symptoms and Diagnosis

4) Living With Narcolepsy

5) Sleepnet.com Apnea Forum

6) Serotonin: The Chemistry of Well-Being

7) Sleep Channel: Narcolepsy

8) Sleep: Alternative and Integral Therapies


Change Blindness
Name: Erin Fulch
Date: 2003-04-26 21:52:59
Link to this Comment: 5521

Biology 202
2003 Second Web Paper
On Serendip

After investigating spatial cognition and the construction of cognitive maps in my previous paper, "Where Am I Going? Where Have I Been: Spatial Cognition and Navigation", and growing in my comprehension of the more complex elements of the nervous system, I can now develop an informed discussion of human perception. The formation of cognitive maps, which serve as internal representations of the world, is dependent upon the human capacities for vision and visual perception (1). The objects introduced into the field of vision are translated into electrical messages, which activate the neurons of the retina. The resultant retinal message is organized into several forms of sensation and is transmitted to the brain so that neural representations of given surroundings may be recorded as memory (2). I suggested in my previous paper that these neural representations must be maintained and progressively updated with each successive change in environment and movement of the eye. Furthermore, I claimed that this information processing produces a constant, stable experience of a dynamic, external world (1). However, myriad studies, and the testimony of any motorist who has had the unfortunate experience of hitting an unseen object, contradict the universality of that claim and illuminate a startling reality: human beings do not always see the objects presented in their visual field, nor alterations in an observed scene (3,4,5,6,7,8,9). The failure to consciously witness change when distracted for mere milliseconds by saccade or artificial blink events is referred to as "change blindness." In order to comprehend this phenomenon, the physical act of looking and the process of seeing must be differentiated. Through an examination of change blindness, we may confirm and attempt to explain this distinction.

The concept of change blindness has been addressed over the course of nearly half a century, with increasing focus on the subject throughout the past five years (3). Although biologists, psychologists, and philosophers have yet to resolve definitively the paradox of looking without seeing, the investigation of each theory on the matter yields deeper insight into visual perception and sight, as well as a decreasingly incorrect understanding of those components of the nervous system which are crucial for visual cognition. Under normal viewing conditions, changes produce transient signals that can draw attention. Change blindness studies are designed to eliminate or block these transient signals by inserting a visual disruption when the change occurs (3). Flicker Paradigm studies examine the occurrence of change blindness and attempt to explain the failure to see that which is directly in front of our eyes. The Flicker Paradigm demonstrates the essentiality of attention in the process of seeing (4). The alternation of an object and a modified version of that same object is interrupted by millisecond flashes of blank space. Subjects are then asked to report changes in the images.

In order to understand the events leading to the failure to recognize change, comprehension of the mechanism by which change is successfully recognized is requisite. According to the traditional understanding of this process, an individual must form an internal representation of the initial display. Then, a comparison must be made between the first and second displays. Because the initial display is no longer available, the internal representation must be retained for comparison to the second display. Finally, conscious access to the results of this comparison must be available: to explicitly report the change, the observer must be acutely aware of those results. Change blindness, then, is a product of failure in at least one of these component processes (5). Although the majority of theories on change blindness focus on failed creation of the representation or failed comparison, failed detection has been shown to occur even in the presence of an accurately formed representation if a proper comparison is not made between that representation and the second display (6). Furthermore, a comparison does not guarantee the avoidance of change blindness if any fault exists with the representation, including insufficient detail or entire absence of the changed feature (7).

This same mechanism involved in the perception of change has been investigated through the Flicker Paradigm and described in terms of memory. According to this paradigm, change perception is a highly integrated process involving several steps. First, information must be loaded into visual short-term memory (VSTM) (3). A subject must retain that information over the duration of the blank interval, compare the stored information to the visible information in the new display, then unload the VSTM and shift attention to a new location. Without focused attention, representations of objects within the brain are ephemeral, and no conscious detection of change can occur. Focused attention, therefore, acts as the mediator of change perception by giving objects coherence across time and space (8). Interestingly, much evidence exists that something even beyond anticipation of a change must be present for the observation of change. Rensink has shown that even when instructed to look for change, a task that leads subjects to actively attend to an image, change can go undetected (4).
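These steps can be made concrete with a toy simulation. The sketch below is an illustration of the limited-capacity account, not a model from any cited study; the scenes, the change, and the VSTM capacity of two items are all invented assumptions.

# Toy flicker-paradigm trial with a capacity-limited VSTM.
# One feature ("door") changes across the blank interval; detection
# succeeds only if the changed item happened to be loaded into VSTM.

import random

scene_a = {"car": "red", "tree": "green", "sign": "blue", "door": "white"}
scene_b = dict(scene_a, door="black")   # the modified display

VSTM_CAPACITY = 2                        # items that survive the blank

def one_trial():
    attended = random.sample(list(scene_a), VSTM_CAPACITY)   # load VSTM
    stored = {item: scene_a[item] for item in attended}      # retain over blank
    # compare stored items against the new display
    return any(stored[item] != scene_b[item] for item in attended)

trials = 1000
detected = sum(one_trial() for _ in range(trials))
print(f"change detected on {detected / trials:.0%} of trials")

With two of four items retained, the change is noticed only when "door" happens to be among the attended items, roughly half the trials; the rest are change blindness, even though the observer "looked at" both displays.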

The theoretical explanations for the malfunction of change perception are dependent upon the two common understandings of this capacity. The first of these theoretical approaches begins with the assumption that the creation of internal representations of the outside world, and subsequent activation of those representations, allows us to engage in the experience of seeing. The other prominent approach suggests instead that the world functions as its own external representation, and sight is enabled once an organism has mastered the governing laws of sensorimotor contingency (9). Those who subscribe to the notion of the creation of an internal representation of surroundings attribute change blindness to the sparseness of this representation (4). Thus, only those environmental objects encoded as interesting are attended and seen. When a distracting factor draws attention away from an object observed without attention, a change in that object goes unnoticed because it was not properly represented in the internal image in the first place (3). The proponents of the second philosophy, however, contend that the structure of the rules that govern visual perception, sensorimotor contingencies, and the knowledge of those rules mediate the explorative activity which visual perception is understood to be (9). A distinguishing law or contingency is the fact that when the eyes close during blinks, the stimulation changes drastically, becoming uniform: the retinal image goes blank. Scientists holding these beliefs suggest that vision requires the satisfaction of two basic conditions. First, the environment must be explored in a manner governed by sensorimotor contingencies, both those fixed by the visual apparatus and those fixed by the character of the observed objects. Second, the brain of the observer must be actively exercising mastery of the laws of sensorimotor contingencies. These theories question the formation of an internal representation under the flicker paradigm circumstances, suggesting instead that the external environment serves as its own representation for immediate probing, during which change blindness can occur (9). Results of experiments conducted on these bases show that the very presence of a visual stimulus may obligatorily cause observers to make use of the world in the "outside memory" mode, even though it is less efficient than normal memory processing.

In both theoretical approaches, findings suggest limitations on the amount of information that can be consciously retained and compared between two views, even over short delays (6). Thus, successful change detection requires attention to be focused on an object. And yet, all details and aspects of even those objects under close attendance are often only partially retained and compared across views. Visual perception is an extraordinarily complex function, immense in its levels of processing as well as in its differential success across variable tasks. This variation in capability results from the components of perception and their respective operations. For example, short-term visual memory, which is essential for detection of change, is found to exhibit exquisite performance in memory tasks but has a decreased threshold under certain circumstances (6). Current research is valuable, not for its conclusive findings regarding the occurrence of change blindness, but for the implications it holds for perception and for the future of its study. Work is being done continuously to elucidate the relative roles of spatial location, memory, sensory and attentional mechanisms, and general psychophysical capacities in response to specific cognitive demands (3, 7, 9, 10). By exemplifying the potential for failure of a system that we assume to work flawlessly in our daily interactions, change blindness has incited uniquely directed investigations of visual consciousness and facilitated the overall comprehension of the human neurological experience.

References

1)"Where Am I Going? Where I Have Been", First Web Paper

2)Human Position Sense and Spatial Maps, a resource on spatial cognition

3)Rensink Collection, a grouping of articles by one of the creators of the Flicker Paradigm

4) Annual Review of Psychology , an article by Ronald Rensink on change detection and visual perception

5)Cognet, a site on Cognition

6)Memory For centrally attended changing objects in an incidental real world change, An article by Levin, Simons, Angelone, and Chabris

7) Scott-Brown, K.C. & Orbach, H.S. (1998) "Contrast Discrimination, Non-Uniform Patterns and Change Blindness". Proceedings of the Royal Society of London. 256 (1410): 2159-2164.

8)Max Planck Institute

9)A sensorimotor account of vision and visual consciousness , Behavioral and Brain Sciences article from 2001

10)Glasgow Caledonian University, current research in vision sciences


A Review: The Forbidden Experiment by Roger Shattuck
Name: Geoff Poll
Date: 2003-04-27 02:34:01
Link to this Comment: 5522



Biology 202
2003 Second Web Paper
On Serendip

It is one of the oldest unanswered questions in all of science. Though slightly more grounded in empirical science than the likes of "Where did we come from?" or "Why are we here?" the impossible Nature/Nurture dichotomy has tormented truth-bound scientists for years. Recent advances in genetics have brought forward new possibilities for those who would study the pure effects of environmental variables on animals, but we are far from allowing ourselves to manipulate other human beings in such ways for the sake of collecting data. This strong moral stance does not diminish our curiosity and so the question must be asked: What would we do if a case in which the human had already been manipulated, by no will of our own, fell into the hands of science? How far would we go?

Every couple hundred years, one of these humans, by chance or by a case of true cruelty, falls into the hands of scientists, eager to make the most of such a 'misfortune'. Roger Shattuck's The Forbidden Experiment follows one of the more prominent cases of our recent history, that of the 'Wild Boy of Aveyron.'

The book takes little time to pique the reader's curiosity with the tale of a "savage" twelve-year-old wandering out of the woods of southern France on a cold January evening in 1800. Without a known history or the ability to communicate with his captors, Victor, as he was later named, was assumed to have lived in the wild for at least six years and probably more. In the midst of an intellectually lively France, Victor wandered into immediate fame and was brought to Paris so that the most capable scientists could take advantage of studying a human raised almost completely in isolation. The story is a short one, lasting only the 28 years that Victor lived in Paris society before his death in 1828. The duration of his immediate interest to the scientific community in Paris was far shorter. (1)

To an intellectually enlightened community the contradiction should have been apparent: they detained Victor and forced him to learn how to live as a proper human being in civilized society, yet he was treated every step of the way as if he were an animal. The inconsistencies were nowhere more apparent than in the oftentimes tortured mind of the main scientist overseeing Victor's progress, the young and gifted Dr. Itard. It was Itard who subjected Victor to inhumane treatment, from keeping him on a leash during walks around the grounds to using violence to arrive at a desired level of obedience. While popular opinion had Victor as some kind of savage creature, which justified the type of treatment he received, Itard disagreed. His own radical belief that Victor was more than an animal had already condemned his own methods. (1)

The span of Shattuck's narrative falls within the five years Itard spent attempting to give back to Victor the human qualities we all grow up with, but which Itard believed had lain dormant for the years that Victor spent in the wild. The author's treatment of Itard makes for most of the interesting reading of the book. Shattuck admires in Itard his determination, his passionate curiosity and his faith in the innate goodness of the almost altogether unresponsive being in front of him. It is through much of Itard's own pen that we get not only the notable events of Victor's progress, but a look inside the mind of a man who needed to believe that we are made of something more than the right combination of a biological foundation and a timely social training. It was the training which Victor lacked, but in his own training Itard was searching for a third element: that which makes us human. (1)

His search was misguided, and after his original diagnosis he never freed himself from a strict idea of what success with Victor would have been. He saw his biggest challenge as that of language, and in the end his inability to provide Victor with the gift of speech gave him the feeling of failure. Shattuck aptly jumps on the irony of that situation: Itard kept Victor in an institution for deaf-mutes, and he never once attempted to teach Victor sign language. (1)

At each step of the training he hoped for Victor to act like his idea of a human while treating him still more and more like a caged animal. The most striking series of events arrived with the onset of puberty in Victor. Itard saw the driving sexual force of puberty as a push towards love, the most human of all emotions, and towards the forming of new ties and relationships between young people. He had kept Victor isolated from peers but when the boy began to feel sexual urges, the scientist set up controlled situations with females where he expected Victor's natural urges to lead him towards some kind of more substantial communication. Without any kind of reference to go by Victor felt frustrated and gave up. Itard gave in as well and condemned Victor as an idiot. (1)

Years later, shortly after his death, a younger doctor honored Itard's work in a statement that throws light on Itard's greatest obstacle: his own colleagues. "To train an idiot," praised Dr. Bosquet, "to turn a disgusting, antisocial creature into a bearable obedient boy - this is a victory over nature, it is almost a new creation" (165). It was this way of thinking about people that allowed the experimentation to proceed and a very creative young scientist to find himself walled in. (1)

Sadly, Victor and Itard's case does not stand alone. Among many other claims of "wild" children, the most famous recent case was that of Genie. Thirteen years old when found in 1970, Genie had been deprived by her parents of all social contact for the previous ten years of her life. She was physically debilitated and was not able to speak, a skill she never fully acquired. As soon as she was discovered by a social worker, scientists apprehended Genie with the same vigor as they had Victor 170 years earlier. (2)

Harlan Lane, a psycholinguist and author of a book about Victor called The Wild Boy of Aveyron, justifies the work done on Genie, pleading, "It's a terribly important case." Lane reasons, "Since our morality doesn't allow us to conduct deprivation experiments with human beings, these unfortunate people are all we have to go on." (3)

Victor was abandoned by Itard after the attempt to produce speech failed. He lived the remainder of his life in social isolation, not at all a deviation from his first 18 years. (1) Genie continues to live but struggles with all aspects of her life. Contact with the researchers assigned to her case was broken off after they were accused of treating her inhumanely for the gain of fame and prestige, without any hint of scientific direction. The suit was settled out of court. (3)

Shattuck approaches the case of Victor with great curiosity and in many ways justifies the ignorance and cruelty of Itard and the other scientists of the time. In a more recent book, Forbidden Knowledge, that innocence is less evident, as he makes some heavy criticisms of our natural affinity for inquisitiveness. He alludes mostly to mythology; his examples range from Adam and Eve to Mary Shelley's Dr. Frankenstein. Each classic case demonstrates an instance of temptation overriding intellect. (4), (5) Victor, on the other hand, was unconcerned with knowledge and ego. His mind was freer than ours may ever be. Shattuck speaks of Victor escaping from "humanity into animality" (181), (1). It is an attractive concept.

It makes sense that we would like Victor to speak to us. We want to know what he knows. Itard never thought of Victor's mental state in terms of knowledge, but preferred forgetfulness. (1) Is that what we want; to forget? What are our motives in these 'unfortunate' instances? Would we learn from the dumb how not to speak; how to forget? Or would we teach language and culture so that Victor may live with us and suffer as we do? What does that make us?

References

1) Shattuck, Roger. The Forbidden Experiment. New York: Farrar Straus Giroux, 1980.

2) NOVA transcript, transcript for the 'Genie' episode

3) The Civilizing of Genie, the story of Genie

4) Online News Hour, Shattuck interview

5) Ethical Culture Book Review, review of Forbidden Knowledge

6) Feral Children Website, a great resource about 'wild' children


Autism: A lack of the I-function?
Name: Kate Shine
Date: 2003-04-27 19:36:12
Link to this Comment: 5527



Biology 202
2003 Second Web Paper
On Serendip

In the words of Uta Frith, a recognized expert on autism, autistic persons lack the underpinning "special feature of the human mind: the ability to reflect on itself" (3). And according to our recent discussions in class, the ability to reflect on one's internal state is the job of a specific entity in the brain known as the I-function. Could it be that autism is a disease of this part of the mind, a damage to or destruction of the specialized groups of neurons which make up the process we perceive as conscious thought? And if this is so, what are the implications? Are autistic persons inhuman? Which part of their brain is so different from that of "normal" people, and how did the difference arise?

The specific array of symptoms used to diagnose an individual as autistic does not appear as straightforward as Frith's simple statement. It seems hard to fathom that they could all arise from one similar defect in a certain part of the brains of all autistics. Examples of these symptoms include a preference for sameness and routine, stereotypic and/or repetitive motor movements, echolalia, an inability to pretend or understand humor (3), "bizarre" behavior (4) and use of objects (2), lack of spontaneity, excellent rote memory (2), folded, square-shaped ears (3), lack of facial expression, oversensitivity, lack of sensitivity, mental retardation, and savant abilities.

Obviously not all autistics exhibit all of these characteristics. Psychologists, however, often believe certain symptoms to be more indicative of the disease than others. The word autism stems from a Greek word meaning, roughly, "selfism." Autistics are described as very self-absorbed, and some academics refer to a short list of three characteristics to diagnose them: impairment of social interaction, impairment of communication without gestures, and restrictive and repetitive interests (3). Tests have also been designed in an attempt to diagnose the disease decisively. One of these is the Sally-Anne test, in which autistic children usually show an inability to understand the perception of another child in a scenario, or to understand that she could believe something false (4). Another test involves two flashing lights. Autistic children will look at a light if it flashes, but unlike other children, they will show no tendency to move their eyes toward a new light (3).

Observation of the behavior of autistics makes it clear that they do interact with their world and understand certain aspects of it to a degree. However they often appear intensely focused on one perception or sensory experience and are unable to integrate multiple factors of emotion, intention, or personality in the way most people do. As a result of this inability to perceive order in all of the circumstances of their environment, they often find themselves in a world that seems very chaotic and random.

Jennifer Kuhn hypothesizes that many of the symptoms of autism are defense mechanisms stemming from a feeling of helplessness to control a situation (3). Rats, when they are forced to jump at one of two doors that are randomly chosen to open onto food or stay closed and hurt the rat, will always pick the same door (Lashley and Maier, 1934). Thus the repetitive behavior of autistics, their insistence on routine, and their echolalia can all be explained as attempts to calm themselves by achieving an inner order. They are effectively forsaking experimentation in an attempt to avoid failure. This might be evidence for damage to an I-function, as we have discussed that observation and experimentation are innate behaviors of humans, and that they require personal reflection.

However, many autistics do not exhibit behaviors such as echolalia all the time, but rather more frequently when they are in an unfamiliar environment (1). And I have observed in myself a tendency to develop a nervous repetitive motion in my foot in certain situations, or even a rhythmic movement of my head when I am concentrating very intensely, but I am still confident of my ability to reflect on my own thoughts.

One of the most obvious ways to find out if autistics share a collective brain abnormality that causes their unique array of characteristic behaviors is to look into the structure of the brain itself. In studies of autistic brains using scanning images as well as autopsies, scientists have generally not been able to agree on a single consistent abnormality. Some observations have been that the neurons in the limbic systems of autistic subjects are often smaller and more densely packed than those of other individuals. Monkeys whose limbic systems were removed in experiments showed weaknesses in social interaction and memory, as well as exhibiting locomotor stereotypies similar to those of autistics (2).

Another area that often exhibits abnormality in autistic brains is the cerebellum, along with the hippocampal CA1 and CA4 fields, where there are fewer dendritic arbors and fewer Purkinje and granule cells. (2) Purkinje cells play a role in programmed cell death, and where they are few in number, cells may be allowed to crowd each other and fail to develop networks properly. This could account for autistics' tendency to be overwhelmed by stimulation. The cerebellum is also responsible for muscle movement, and other studies have found significantly fewer facial neurons (400 as compared to 9,000 in a control brain), which could also account for the absence of facial expression. (3) Other areas of the brain which some studies have argued are abnormal in autistic brains are the left hemisphere, which controls language (5), (6), the temporal lobes, which control memory (2), and the brain stem (3).

However, many of these findings are still considered controversial or inconclusive, and may be attributable not only to autism but to many other developmental disorders. And even if these observations are accurate, there is still no explanation for why these differences in brain structure occur simultaneously.

One very convincing hypothesis to explain these occurrences has been proposed by Patricia M. Rodier. Inspired by the fact that relatives of autistics are much more likely than the general population to be diagnosed with the disease, Rodier became convinced that there must be some definite genetic factor contributing to it. However, there remained the fact that the identical twin of an autistic sibling will develop autism only about 60% of the time, which makes an argument for environmental influence as well.

She then came upon a study of children whose mothers had been exposed to the drug thalidomide during pregnancy, and found that a full 5% of those children were diagnosed as autistic. By combining information about the drug with other birth defects in the ears and faces of the children, she was able to infer that the injury leading to autism occurred 20 to 24 days after conception, when the ears and the first neurons in the brain stem are starting to form. When she began to examine the brains of previously autopsied autistics, she noticed they showed evidence of particularly short brain stems, and often a small or even absent facial nucleus and superior olive as well.

This information suggests that a genetic factor causing these abnormalities in the primitive brain could then go on to cause the secondary abnormalities in the various more sophisticated parts of the brain already mentioned. Rodier has identified a variant of a gene known as Hoxa1 which is present in 40% of autistics but only 20% of non-autistics, and which is found more often in relatives who are autistic than in those who are not. Although this is by no means a conclusive argument that autism is genetic, there could be other variants of that allele which contribute to autism, or as-yet-undiscovered genes which decrease the risk of it, that could account more strongly for genetic determination. (3)

So could there be specific genes which actually code for the I-function? This is possible, but there is still evidence for environmental factors causing some of the symptoms of autism. In-utero exposure not only to thalidomide but also to rubella, ethanol, and valproic acid has been linked to the disease. And there is a significant population of people who believe that the elimination of glutens and caseins found in milk, apple juice, wheat, and other products relieves the buildup of certain chemicals in the brain and makes autistics more able to fit in with society. (8) A recent study of abused children has also claimed that the left hemisphere of the brain, specifically areas dealing with memory and expressing language in the limbic system, can be overexcited and damaged as a result of intense sexual or physical abuse. (5) Perhaps all of these environmental factors do additional damage to the different sectors of the I-function.

What does all of this say about autistic people? Do the characteristic differences in their brains necessarily indicate that they are lacking this "I-function," the awareness part of the mind that makes someone human? I do not think so. The fact that rats and monkeys can show similar symptoms when these areas of their brains are damaged undermines this argument: the self-awareness or I-function does not appear to be limited to humans. It may simply be that the I-function operates differently, or is damaged, in the minds of autistics. There are huge degrees of severity in what are called the "autism spectrum diseases." These can be defined to include dyslexia, ADHD, Asperger's syndrome, pervasive developmental disorder, and more, with autism being the most severe. And any person can have symptoms of autism without being fully "autistic." But even severe changes in the I-function system should not necessarily be considered defects.

It is entirely possible that autistic people have thoughts and self-reflections which they cannot communicate through language or in society's "normal" modes of expression. While they do seem to have a diminished capacity to reflect on as much of the world at once as other people do, it may be that what they do reflect on is just as meaningful, or more so. Autistic savants, for example, show an amazing ability in one area such as music, art, or calculation which often surpasses what it seems a "normal" person could ever achieve. And these creations are not just byproducts of the rote memory and habit that some scholars insist is all that exists in autistics. It is my personal opinion, after viewing the art of the autistic savant Richard Wawro (9), that the work and the artist are infused with at least as much emotion and unique perspective as any "normal" person's.

Frith also argues that autistics are "...not living in a rich inner world but instead are victims of a biological defect that makes their minds very different from those of normal individuals." (4) But who has the authority to decide what a "normal" individual is? The defense mechanisms that autistics display are evidence that they do experience distress and anxiety, but Freud argued that non-autistic, or what Frith would call "normal," individuals also display a number of common defense mechanisms as a result of the anxieties in their lives, and this view is still widely accepted. Just because these may appear more normal or subtle to us does not mean they are not important.

In a society of autistics, we might feel a great deal of incoherence at how they organize their world, and we might find ourselves the ones unable to communicate. In the movie "Molly," which concerns a fictional autistic woman who becomes normal and reflects on her experiences, there is a speech that illustrates this point beautifully: "In your world, almost everything is controlled....I think that is what I find most strange about this world, that nobody ever says how they feel. They hurt, but they don't cry out. They're happy, but they don't dance or jump around. And they're angry, but they hardly ever scream because they'd feel ashamed and nothing is worse than that. So we all walk around with our heads looking down, but never look up and see how beautiful the sky is." (10) If autistic people can possibly use their minds to see a beauty we cannot imagine, who are we to call them the abnormal victims? If they can only see the world in a sincere way and not understand irony or humor, is that necessarily a bad thing? We cannot truly know what autistics are aware of internally unless we have experienced their world.


References


1) Stereotypic Behaviors as a Defense Mechanism in Autism, from the Harvard Brain website, Jennifer Kuhn, 1999.
2) Neurobiological Insights into Infantile Autism, Amy Herman, 1996.
3) Rodier, Patricia M. "The Early Origins of Autism." Scientific American, February 2000: 56-63.
4) Frith, Uta. "Autism." Scientific American, June 1993, reprinted 1997: 92-98.
5) Teicher, Martin H. "The Neurology of Child Abuse." Scientific American, March 2002: 68-75.
6) Autism and Savant Syndrome, web paper by Sural Shah.
7) Considerations of Individuality in the Diagnosis and Treatment of Autism, web paper by Lacey Tucker.
8) Diagnosis: Autism, Mothering Magazine, Patricia S. Lemer, 2003.
9) Paintings by Richard Wawro, an online gallery of an autistic savant's work.
10) Duigan, J. (Director). (1998) Molly. [Videotape]. Santa Monica, CA: Metro-Goldwyn-Mayer Pictures Inc.


Something Happens Sometimes: A sample and critique
Name: Sarah Feid
Date: 2003-04-28 17:12:51
Link to this Comment: 5537

Biology 202
2003 Second Web Paper
On Serendip

"Do you remember how electrical currents and 'unseen waves' were laughed at? The knowledge about man is still in its infancy." - Albert Einstein

Introduction

Perception of future events (precognition), communication through thoughts (telepathy), material manipulation without physical contact (telekinesis), sight of an object or place millions of miles away with enough accuracy to draw it (remote viewing) – these are a few cases of what is referred to as "psi phenomena," also known as parapsychological or psychic phenomena. "Psi" refers to "anomalous processes of energy or information transfer... that are currently unexplained in terms of known physical or biological mechanisms."(1) Long dismissed by scientists and other skeptics all over the world, these occurrences are often attributed to trickery, hallucination, lying, chance, and even spiritual influence. Claims of psychic ability come from many varied sources. From the friend who has premonitory dreams and the dog who knows when the master has decided to come home, to the glamorous astrologer with a 900-number and the clairvoyant with a TV show, stories of paranormal abilities range from personal and thought-provoking to distant and Hollywood-esque. Are these things really possible? What does the scientific community actually know about these phenomena? Ultimately, one must ask the question, what can the scientific community know about these phenomena?

This paper is intended to provide a small sample and critique of the available scientific research on these unexplained and often dismissed phenomena. The examples which form this review are: research on unexplained phenomena not associated with "psychic" individuals, large-scale research centering on many individuals with "psychic talent," and an investigation of the claimed abilities of a single internationally celebrated "psychic."

Despite the historical and prevalent stigma and sensationalism associated with this field, many respected educational establishments have laboratories involved in the research of psi. Four of these are the Princeton Engineering Anomalies Research program, instituted in 1979 to investigate mind-matter interactions (2); the Parapsychological Association, a 1957 offshoot of the Duke Laboratory (3); the Koestler Parapsychology Unit at the University of Edinburgh (4); and Stanford University's 1946 endeavor, the Stanford Research Institute. It should be noted that the Stanford Research Institute separated from the university in 1970 and became SRI International. (5)

Examples

Impersonal phenomena
If a person is asked to identify the color of a rectangle, and is subsequently asked to read a randomly generated color name, it is well-known that a matching color name will be called out faster than a mismatching color name. This pattern of outcome, the Stroop effect, has been used since the 1930s to measure cognitive interference. By recording the time it takes to announce the rectangle's color (T1) and comparing that to the time it takes to read the subsequent color name (T2), researchers can approximate how long it takes individuals to process these inputs. (6)

Holger Klintman, a clinical psychologist at Lund University in Sweden (7), was attempting to improve the sensitivity of these measurements when he discovered another interesting pattern: there was a correlation between T1 and T2. Specifically, if the color name and the rectangle color matched, then T1 for that trial was faster than if they mismatched. That is, if the subject was shown a red rectangle, and the randomly generated color name in the future was going to be the word 'red', the subject announced the color of the rectangle faster. This is a startling correlation: since the color name was randomly generated only after the rectangle image was taken away and T1 recorded, there was no logical way for the future word to have affected T1. And yet, it did. In fact, in five experiments designed to rule out the possibility of mechanical or design influence, he reported a surprising cumulative p-value of 10^-6. Dean Radin of the Boundary Institute revisited this phenomenon in a paper published in July of 2000, in which he reported a similar correlation with a different experimental design, and a highly significant p-value of 0.001. (6)
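
To make the shape of this claim concrete, here is a minimal simulation sketch in Python (my own illustration, with entirely made-up numbers, not Klintman's or Radin's data). It generates trials in which T1 is artificially made slightly faster whenever the future color word will match, then checks whether an ordinary two-sample t statistic can pick up the difference:

# Hypothetical simulation of the "time-reversed interference" pattern
# described above. Illustrative only; not the published experiments.
import random
from statistics import mean, stdev

random.seed(1)

def t1_sample(will_match):
    # Baseline naming time ~700 ms; assume (purely for illustration)
    # a 30 ms speed-up on trials whose *future* word will match.
    base = random.gauss(700, 80)
    return base - 30 if will_match else base

trials = [random.random() < 0.5 for _ in range(400)]
match_t1 = [t1_sample(m) for m in trials if m]
mismatch_t1 = [t1_sample(m) for m in trials if not m]

def welch_t(a, b):
    # Welch's two-sample t statistic for unequal variances.
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

print("mean T1, match trials:    %.1f ms" % mean(match_t1))
print("mean T1, mismatch trials: %.1f ms" % mean(mismatch_t1))
print("Welch t = %.2f" % welch_t(match_t1, mismatch_t1))

A strongly negative t here mirrors the reported pattern (matching trials faster); the point of the sketch is only that the claimed effect, if real, is the kind of thing standard statistics can detect.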

Government Research
In 1969, US intelligence sources concluded that the Soviet government was investing large amounts of time and money in the development of 'psychotronic' weaponry, which included psychic spies. In response, the US government began research in 1972 into the viability of remote viewing as an intelligence tool. The program, which originated as the SCANATE program under the Central Intelligence Agency (CIA), was maintained for 23 years under various names. In that time, several hundred projects involving thousands of remote viewing sessions were completed. Psychics from this program were made available to several intelligence branches, including the CIA, the National Security Agency (NSA), the US Army Intelligence and Security Command (INSCOM), the Army Chief-of-Staff for Intelligence (ACSI), and the Defense Intelligence Agency (DIA). (8)

The SCANATE program, eventually termed the STAR GATE program, combined active operational training in psychic intelligence gathering with intensive laboratory research. The laboratory component began at SRI in 1972. The researchers recruited individuals whom they believed showed natural psychic ability, with a minimum accuracy rate of 65%; 23 remote viewers in total were involved in the STAR GATE program. One of the fruits of this program was a set of instructions developed by professed psychic Ingo Swann, which supposedly allow any human being to develop the ability to remote view; this instruction set was used in the training aspect of the program. The National Academy of Sciences' National Research Council reviewed the program unfavorably in 1984, and the American Institutes for Research released a report in 1995 which acknowledged that a statistically significant effect had been demonstrated in the program, with a 15% accuracy rate, but which was overall a negative review. The CIA concluded that no useful intelligence data had ever been provided by the program, and terminated it in 1995. (8)

Professional Psychic
Uri Geller, an Israeli psychic, is an unabashed showman. He has been internationally famous since the 1970s for his purported ability to bend spoons with the lightest touch, and often no touch at all. He has been demonstrating this ability to friends since he was five years old, and has been giving paid public performances since 1969. An author, artist, inventor and controversial figure, he has been alternately praised and condemned by the media as a true psychic and a dramatic magician, respectively. He claims many abilities: telepathy, telekinesis, remote viewing, dowsing (detecting specific metals and minerals in the ground), precognition, the ability to erase digital media, and the ability to make seeds sprout in his hand within seconds are among them. He credits himself with helping certain sports teams win, with influencing the US Army, and with possibly averting World War III by mentally bombarding a Russian-American diplomatic meeting with thoughts of peace. Like other psychics, his abilities do not work under all circumstances, and while his successes support his claims, his failures often lead to wholesale dismissal of them. (9) (10)

Uri Geller was brought to the Stanford Research Institute in 1973 by respected physicists Harold Puthoff and Russell Targ (11) of the SRI Electronics and Bioengineering Laboratory. They carried out a series of blind and double-blind experiments in a controlled environment, during which Geller was asked to demonstrate extra-sensory perception in two protocols: remote viewing of selected images, and remote viewing of a die shaken in a steel box. In the first series, Geller produced drawings of images which were sketched by people not in contact with him, and which were sealed in multiple opaque envelopes which he could not touch and often was not allowed to see. In the second series, he determined the upward face of a die shaken in a box. While Geller on occasion did decline to provide a response when he was "unsure," all of the responses he did provide were without error: the die faces were correct in each of the eight trials, and the target pictures were correctly matched with Geller's sketches by two SRI scientists unassociated with the project. The probabilities of these outcomes occurring by chance were 3x10^-7 and 1x10^-5, respectively. While Puthoff and Targ reported also observing Geller's metal-bending abilities, they concluded that the control conditions were not sufficiently stringent to provide data supporting the paranormal claim. (12)
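
As a back-of-envelope check (mine, not SRI's), the naive chance probability of calling a fair die correctly on eight independent trials is easy to compute, and it lands in the same order of magnitude as the figure Puthoff and Targ report; the small remaining gap presumably reflects details of their analysis, such as the trials Geller declined:

# Naive chance probability of eight correct calls of a fair die face.
p_die = (1 / 6) ** 8
print("p = %.2e" % p_die)   # ~5.95e-07, versus the reported 3x10^-7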

Discussion

Claims of this nature are difficult to assess. Support for them is often largely anecdotal, and full of uncertainty. A psychic can rarely perform before a group of expert magicians; it could be that they create a stressful and inhibitive environment that is by its nature detrimental to the psychical mechanism, or it could be that the psychic dare not reveal himself as a charlatan while surrounded by those who would surely catch him. While the psychics ply their trade, the skeptics seek to debunk them at every possible turn. (13) And yet, while the skeptics ply their trade, others seek to debunk their debunkings! (11) In many instances the believers document their cases just as well as the skeptics do.

Even controlled research presents a fundamental problem: how does one even begin to design an experimental protocol for phenomena which defy biology and physics as we know them? The fact is, experiments designed to test the existence of psi cannot in reality be trusted to do just that. All such experiments can do is provide possible evidence about the nature of psi. Without knowing what exactly one is testing, one cannot narrow down the possible factors affecting it, and therefore one cannot effectively interpret any results.

In many ways, such research is simply running in circles. For example, Dean Radin's Stroop-based experiment yielded significant results, but the same experiment does not always yield significant results. What if the mechanism he was observing was affected by the wavelength of the lights in the lab, the gravitational pull of the moon, or the collected moods of those involved with the study? Without knowing the nature of the mechanism, we cannot know what is capable of influencing it, and therefore can reach no significant conclusions from either its success or its failure to materialize. All we can say, from Radin's or SRI's research, is that something happens sometimes. Which, ultimately, is what Uri Geller or any psychic would say.

These examples were presented to offer a variety of sources both of data and of interpretations. While there are fundamental limits to the ability of this data to actually describe any mechanism clearly, it does afford us the opportunity to acknowledge that there may indeed be aspects of biophysical interactions that we have seldom observed and never explained. Neither parapsychologists nor skeptics can prove that psi is real or not real. All they can do is interpret the data, whether it be data from a primary paper or data from one's own experience.

References

1) http://comp9.psych.cornell.edu/dbem/does_psi_exist.html
Cornell University, Psychological Bulletin, 1994, Vol. 115, No. 1, 4-18. Does Psi Exist? Replicable Evidence for an Anomalous Process of Information Transfer, by Daryl J. Bem and Charles Honorton. A very clear, useful online paper concerning parapsychology.

2)http://www.princeton.edu/~pear/
The Princeton Engineering Anomalies Research: Scientific Study of Consciousness-Related Physical Phenomena website.

3)http://www.parapsych.org/
The Parapsychological Association website.

4)http://moebius.psy.ed.ac.uk/
The Koestler Parapsychology Unit at the University of Edinburgh website.

5)http://www.sri.com/
The SRI International website.

6) http://www.boundary.org/articles/tri2.pdf
Evidence for a retrocausal effect in the human nervous system. Radin, D. and E. May. One of several primary papers available on the Boundary Institute website.

7) http://www.lu.se/lu/engindex.html
The Lund University website.

8) http://www.fas.org/irp/program/collect/stargate.htm
The Federation of American Scientists' intelligence projects database's STAR GATE entry.

9)http://www.uri-geller.com/
Uri Geller's website.

10) http://www.andrewtobias.com/newcolumns/990324.html
A columnist's personal experience with Uri Geller.

11) http://www.michaelprescott.freeservers.com/FlimFlam.htm
A critical review of James Randi's skeptical book, Flim Flam Flummery!

12) http://www.uri-geller.com/content/research/sria.htm
The text of the SRI report documenting experiments with Geller and others. Found on Geller's website.

13)http://www.randi.org/
The James Randi Education Foundation. Instructional skeptical resource from professional magician and self-proclaimed psychic debunker "The Great Randi".

Fun resources:

1)http://www.psipog.net/
Psychic Students in Search of Guidance online community. Offers peer instructions, media files, and community activities.

2)http://www.fork-you.com/
Personal site of a professed fork-bender. Offers photographs, descriptions, and instructions on how to mentally channel energy into metal to render it soft enough to twist into tight spirals and interesting shapes before it rehardens.


The Genetic Basis of Addiction
Name: Neesha Pat
Date: 2003-04-29 02:31:13
Link to this Comment: 5556



Biology 202
2003 Second Web Paper
On Serendip

Addiction is defined as a compulsive physiological need for and use of a habit-forming substance, characterized by tolerance and by well-defined physiological symptoms upon withdrawal. More broadly, it is defined as persistent, compulsive use of a substance known by the user to be physically, psychologically, or socially harmful (1). Addiction is a complex phenomenon with important psychological and social causes and consequences. However, at its core it involves the repeated exposure of a biological substrate (the brain) to a biological agent (a drug) over time (2). Most abnormal behaviors are a consequence of aberrant brain function, which means that identifying the biological underpinnings of addiction is a tangible goal. At a NIDA meeting a decade ago, a scientist announced to a spellbound audience that he had identified some of the genes associated with drug abuse. He described the mutations in those genes that lead people to abuse marijuana, heroin, cocaine, and other drugs. His landmark discovery brought scientists a giant step closer to dramatically curbing drug abuse (5). The genetic basis of addiction encompasses two broad areas of inquiry. One is the identification of genetic variation in humans that partly determines susceptibility to addiction. The other is the use of animal models to investigate the role of specific genes in mediating the development of addiction. Whereas recent advances in this latter effort have been rewarding, a major challenge remains: to understand how the many genes implicated in rodent models interact to yield as complex a phenotype as addiction (7). What I questioned in this paper was how and why researchers had turned to genetics to solve the problems of addiction. I strive to give an account (using numerous experiments as examples) of why genes play the most important role in addiction and how manipulations of genes can lead to possible cures for addictive personalities. Further, the proposition that Brain = Behavior, and its inherent ramifications, proves no more fascinating than when addressed in the context of the biological basis of addiction.

Investigators believe that there is great variability among individuals when it comes to their vulnerability to becoming addicted. "Pretty much everyone enjoys having their dopamine levels shoot up dramatically, which can happen without the use of drugs, as with the 'runner's high'. However, not everybody craves the experience so much that it consumes them. Nor does everyone experience the same changes in brain function!" says Alan Leshner, director of NIDA (5). For marijuana addicts, the non-family environment has the biggest influence, accounting for 38% of the variability, with genes accounting for 33%. Thus we see that the ability of drugs of abuse to alter the brain does indeed depend, in part, on genetic factors. Acute drug responses as well as adaptations to repeated drug exposure can vary markedly, depending on the genetic composition of the individual. Genetic factors can also influence the brain's responses to stress and are thus also likely to contribute to stress-induced relapse (2).
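
One rough way to read those marijuana percentages (my own bookkeeping, not the study's analysis): if the variability in addiction is split among genes, non-family environment, and family environment, as in the usual twin-study decomposition, the family-environment share would simply be the remainder:

# Illustrative variance bookkeeping for the figures quoted above.
genes = 0.33           # share attributed to genetic factors
nonfamily_env = 0.38   # share attributed to non-family environment
family_env = 1.0 - genes - nonfamily_env
print("implied family-environment share: %.0f%%" % (100 * family_env))  # 29%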

A gene might contribute to addiction vulnerability in several ways. A mutant protein (or altered levels of a normal protein) could change the structure or functioning of specific brain circuits during development or in adulthood. Animal models have shown that the adaptations that drug exposure elicits in individual neurons alter the functioning of those neurons, which in turn alters the functioning of the neural circuits in which those neurons operate. This eventually leads to the complex behaviors (for example, dependence, tolerance, sensitization and craving) that characterize an addicted state (1). A critical challenge in understanding the biological basis of addiction is to account for the array of temporal processes involved. The initial event leading to addiction involves the acute action of a drug on its target protein and on the neurons that express that protein (7).

A study was conducted at the Yale University Department of Psychiatry and Pharmacology on the molecular and cellular adaptations that occur gradually in specific neuronal types in response to chronic drug exposure, particularly those adaptations that have been related to the behavioral changes associated with addiction. The study focused on opiates and cocaine, not only because they are among the most prominent illicit drugs of abuse, but also because considerable insight has been gained into the adaptations that underlie their chronic actions. The results showed that the best-established molecular adaptation to chronic drug exposure is up-regulation of the adenosine 3',5'-monophosphate (cAMP) pathway. This phenomenon was first discovered in cultured neuroblastoma and glioma cells and later demonstrated in neurons in response to repeated opiate administration. Acute opiate exposure inhibits the cAMP pathway in many types of neurons in the brain, whereas chronic opiate exposure leads to a compensatory up-regulation of the cAMP pathway in a subset of these neurons. Up-regulation of the cAMP pathway would oppose acute opiate inhibition of the pathway and thereby represent a form of physiological tolerance. Upon removal of the opiate, the up-regulated cAMP pathway would become fully functional and contribute to features of dependence and withdrawal (1).

Many researchers have attempted to identify the genetic basis of these behavioral differences through Quantitative Trait Locus (QTL) analysis. There is now direct evidence to support this model in neurons of the locus coeruleus, the major noradrenergic nucleus in the brain. These neurons generally regulate attentional states and the activity of the autonomic nervous system, and have been implicated in somatic opiate withdrawal. Up-regulation of the cAMP pathway in the locus coeruleus appears to increase the intrinsic firing rate of the neurons through the activation of a nonselective cation channel. This increased firing has been related to specific opiate withdrawal behaviors. cAMP Response Element Binding (CREB) protein, one of the major cAMP-regulated transcription factors in the brain, was knocked out to create mutant mice; these mutant mice, deficient in CREB, showed attenuated opiate withdrawal (1). Overexpression of CREB in the nucleus accumbens counters the rewarding properties of opiates and cocaine; overexpression of a dominant-negative CREB mutant has the opposite effect. These findings suggest that CREB promotes certain aspects of addiction (for example, physical dependence) while opposing others (for example, reward), and highlight that the same biochemical adaptation can have very different behavioral effects depending on the type of neuron involved (1).

A recent NIDA-funded study illustrates how genetic differences can contribute to, or help protect individuals from, drug addiction. The study shows that people with a gene variant in a particular enzyme metabolize, or break down, nicotine in the body more slowly and are significantly less likely to become addicted to nicotine than people without the variant (5). Dr. Edward Sellers of the University of Toronto examined the role that the gene for an enzyme called CYP2A6 plays in nicotine dependence and smoking behavior. CYP2A6 metabolizes nicotine, the addictive substance in tobacco products. Three different gene types, or alleles, for CYP2A6 have been identified by previous research: one fully functional allele and two inactive or defective alleles. Each person has a maternal and a paternal copy of the gene. Therefore, a person can have two active forms of the gene and normal nicotine metabolism; one active and one inactive copy and impaired nicotine metabolism; or two inactive copies, which would further impair nicotine metabolism (6).
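
Since a person's metabolic status follows mechanically from which two copies they carry, the combinations in that paragraph can be enumerated in a few lines. The sketch below is my own illustration; the allele labels "A" (functional) and "a" (inactive) are placeholders, not official CYP2A6 nomenclature:

# Enumerate the maternal x paternal CYP2A6 combinations described above.
# "A" = fully functional allele, "a" = inactive/defective allele
# (labels are illustrative placeholders, not official nomenclature).
from itertools import product

def metabolism(genotype):
    active_copies = genotype.count("A")
    return {2: "normal", 1: "impaired", 0: "further impaired"}[active_copies]

for maternal, paternal in product("Aa", repeat=2):
    genotype = maternal + paternal
    print(genotype, "->", metabolism(genotype))
# Note that "Aa" and "aA" give the same phenotype: one working copy
# of the gene yields impaired nicotine metabolism either way.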

The study found that people in a group who had tried smoking but had never become addicted to tobacco were much more likely than the tobacco-dependent individuals in the study to carry one or two defective copies of the gene and to have impaired nicotine metabolism. The researchers theorize that the unpleasant effects experienced by people learning to smoke, such as nausea and dizziness, last longer in people whose bodies break down nicotine more slowly. These longer-lasting aversive effects would make it more difficult for new smokers to persist in smoking, thus protecting them from becoming addicted to nicotine. Generally, smokers with slower nicotine metabolism do not need to smoke as many cigarettes to maintain constant blood and brain concentrations of nicotine (5). Researchers are currently looking into the possibility of blocking the enzyme, which could prevent people from becoming smokers in the first place.

Several large chromosomal regions have been implicated in addiction vulnerability, although specific genetic polymorphisms have yet to be identified by this approach. It has been established, however, that some East Asian populations carry variations in the enzymes that metabolize alcohols (for example, the alcohol and aldehyde dehydrogenases). Such variants increase sensitivity to alcohol, dramatically ramping up the side effects of acute alcohol intake. Consequently, alcoholism is exceedingly rare in individuals who are homozygous for the ALDH2 allele that encodes a less active variant of aldehyde dehydrogenase (7).

Despite the progress made in the genetic field, candidate gene approaches are limited by our rudimentary knowledge of the gene products and the complex mechanisms underlying addiction (2). As a result, more open-ended strategies are needed, such as those based on analysis of differential gene expression in certain brain regions under control and drug-treated conditions. Differential display, for example, enabled the identification of NAC-1, a transcription factor-like protein, which is induced in nucleus accumbens by chronic cocaine and is now known to modulate the locomotor effects induced by cocaine (3).

In addition to secondary targets, there are numerous neurotransmitters, receptors and post-receptor signaling pathways that modify responses to acute and chronic drug exposure. For example, several behavioral traits of mice lacking the serotonin 5HT1B receptor indicate enhanced responsiveness to cocaine and alcohol; notably, they self-administer both drugs at higher levels than wild-type controls. The mice also express higher levels of FosB (a Fos-like transcription factor implicated in addiction) under basal conditions. These observations point to the involvement of serotonergic mechanisms in addiction. There are many other such examples. Mice deficient in the dopamine D2 receptor or the cannabinoid CB1 receptor have a diminished rewarding response to morphine, implicating dopaminergic systems and endogenous cannabinoid-like systems in opiate action. The stability of the behavioral abnormalities that characterize addiction indicates a high likelihood of drug-induced changes in patterns of gene expression. One demonstration of this approach showed that FosB accumulates in the nucleus accumbens (a target of the mesolimbic dopamine system) after chronic, but not acute, exposure to several drugs of abuse, including opiates, cocaine, amphetamine, alcohol, nicotine and phencyclidine (also known as PCP or 'angel dust'). This is in contrast with other Fos-like proteins, which are much less stable than FosB and are induced only transiently after acute drug administration. Consequently, FosB persists in the nucleus accumbens long after drug-taking ceases (6).

Pathological gambling (PG) is an impulse control disorder and a model behavioral addiction. Familial factors have been observed in clinical studies of pathological gamblers, and twin studies have demonstrated a genetic influence contributing to the development of PG. Molecular genetic research has identified specific allele variants of candidate genes corresponding to the neurotransmitter systems involved. Associations have been reported between pathological gamblers and allele variants of polymorphisms at dopamine receptor genes, the serotonin transporter gene, and the monoamine oxidase A gene (8).

Dr. David Russell at Yale University examined the role of Glial-Derived Neurotrophic Factor (GDNF) in adaptations to drugs of abuse. Infusion of GDNF into the ventral tegmental area (VTA), a dopaminergic brain region important for addiction, blocks certain biochemical adaptations to chronic cocaine or morphine, as well as the rewarding effects of cocaine. Conversely, responses to cocaine are enhanced in rats by intra-VTA infusion of an anti-GDNF antibody, and in mice heterozygous for a null mutation in the GDNF gene. Chronic morphine or cocaine exposure decreases levels of phosphoRet, the protein kinase that mediates GDNF signaling in the VTA. Together, these results suggest a feedback loop, whereby drugs of abuse decrease signaling through endogenous GDNF pathways in the VTA, which then increases behavioral sensitivity to subsequent drug exposure (10).

In 1999, researchers funded by the NIH unearthed an unexpected connection between circadian rhythms in insects and cocaine sensitization, a behavior that occurs in both fruit flies and vertebrates and that has been linked to drug addiction in humans. The circadian clock consists of a feedback loop in which clock genes are rhythmically expressed, giving rise to cycling levels of RNA and proteins. Four of the five circadian genes identified to date influence responsiveness to freebase cocaine in the fruit fly, Drosophila melanogaster. Sensitization to repeated cocaine exposures, a phenomenon that is seen in humans and animal models and is associated with enhanced drug craving, is eliminated in flies mutant for certain genes (period, clock, cycle, and doubletime), but not in flies lacking the gene timeless. Flies that fail to sensitize owing to the lack of these genes do not show the induction of tyrosine decarboxylase normally seen after cocaine exposure. These findings indicate unexpected roles for these genes in regulating cocaine sensitization and suggest that they function as regulators of tyrosine decarboxylase (9).

Animal models have proved pivotal to our understanding of the neurobiological mechanisms involved in the addiction process. One drawback (and one that is not limited to the field of addiction) is that sometimes a genetic mutation is found to result in a phenotype without any plausible scheme as to how the mutation actually caused the corresponding phenotype (4). Various types of microarray analysis have led to the identification of large numbers of drug-regulated genes; it is typical for 1-5% of the genes on an array to show consistent changes in response to drug regulation (12). However, without a better means of evaluating this vast amount of information (other than exploring the function of single genes using traditional approaches), it is impossible to identify those genes that truly contribute to addiction (7). As we can see, the approach to drug addiction is not straightforward: which drugs should be used as models? What should researchers be looking for: drug abuse per se? Sensation seeking? Specific biological markers?

Fortunately, the growing sophistication of genetic tools, together with the improving predictive value of animal models of addiction, makes it increasingly feasible to fill in the missing pieces: to understand the cellular mechanisms and neural circuitry that ultimately connect molecular events with complex behavior (11). Genetic work remains the highest priority, as it will greatly inform our understanding and treatment of addictive disorders in the years to come. In addition, the genetic basis of individual differences in drug and stress responses represents a powerful model of the ways in which genetic and environmental factors combine to control brain function in general (1).


References

1) Nestler, E., Aghajanian, G. (1997) Molecular and Cellular Basis of Addiction, Science, New Series, Volume 278, Issue 5335.

2) Vulnerability to Addiction

3) Brain Buildup Causes Addiction

4) Everyone is Vulnerable to Addiction

5) Genes can Protect Against Addiction, NIDA Notes

6) Biological Basis of Addiction Studied

7) Genes and Addiction, Nature Genetics

8) Genetics of Pathological Gambling, Dept. of Psychiatry, Alcala University, Spain.

9) Response to Cocaine Linked to Biological Clock Genes, NIDA Notes

10) Promising Advances Towards Addiction, NIDA Notes

11) Genetic Animal Model of Alcohol and Drug Abuse, NIDA Notes

12) Dependence of the Formation of the Addictive Personality on Predisposing Factors

13) Evidence Builds that Genes Influence Cigarette Smoking


The Perfect Prescription or Just Prescribed Nonsense
Name: Lara Kalli
Date: 2003-05-01 04:52:11
Link to this Comment: 5595



Biology 202
2003 Third Web Paper
On Serendip

Amphetamine-dextroamphetamine, or Adderall, is one of the most commonly prescribed psychiatric medications today. It is generally used to treat attention-deficit/hyperactivity disorder (ADHD), though less frequently it is prescribed for narcolepsy, epilepsy and parkinsonism. (1) The almost cavalier manner in which the psychiatric community distributes Adderall prescriptions has led the public to view it as something of a "wonder drug," but the reality of the matter is that this drug is as harmful as it is helpful, if not more so.

It is important first to understand what exactly Adderall is and its effects on the brain. As an amphetamine, Adderall is a CNS stimulant. Its primary mental effects are increased alertness and an increased ability to concentrate on a single task - hence its efficacy in the treatment of ADHD - along with a sense of well-being, severe reduction in appetite, and possible paranoia. The physical effects of this drug include restlessness, dry mouth, increased heart rate, breathing rate and blood pressure, and pupil dilation. (2) Overdose symptoms include anxiety, panic attacks, delirium, hallucinations, and highly aggressive behavior; in extreme cases, patients may experience what is known as amphetamine psychosis, a state that is from a clinical perspective virtually indistinguishable from paranoid schizophrenia. Doctors traditionally start patients on a 5 to 10 mg daily regimen, increasing the dosage to as high as 40 mg as needed, and keeping in mind the fact that both physical and psychological dependence are quite likely with extended use. (3)

There can be no question that, in many cases, Adderall can be quite useful. The number of clinical studies that have shown the efficacy of amphetamine administration in the treatment of ADHD verges on the ridiculous in its size. Furthermore, many people to whom the drug has not been prescribed have found that a one-time use of the drug is extremely helpful, most notably in the case of students doing homework (4). Small wonder that this drug is now so popular amongst physicians and the general public alike.

However, Parents Magazine reported that the rate of prescription of Adderall and similar drugs to minors has reached a record high, to the point where experts are beginning to be concerned that psychiatrists are encouraging a "quick fix" attitude rather than making a genuine attempt to treat the problem (5). This is a genuine concern. Like all psychiatric medication, Adderall treats only the symptoms of a disorder and should not be used in lieu of therapy. In addition, extended amphetamine use can create serious problems aside from the distinct possibility of physical and psychological dependence. Malnutrition, vitamin deficiencies, severe weight loss, skin disorders, depression, and speech and thought disturbances are all possible results of long-term use of amphetamines (2). The current popularity of Adderall does not change the fact that it is far from an innocuous drug, and it ought not to be treated lightly by doctors.

Furthermore, the significant increase in the drug's availability allows for an increased potential for abuse, by both those to whom it has been prescribed and those to whom it has not. A study performed by Dr. Christine Poulin showed that, within a year, 14.7% of seventh, ninth, tenth and twelfth grade students to whom amphetamines had been prescribed had voluntarily given away some of their medication, 7.3% had sold some of it, 4.3% had had some of it stolen, and 3% had been forced to give some of it up (6). The more Adderall is prescribed, the larger the numbers those percentages represent will be.

The psychiatric community's extremely lax attitude toward the dispensing of Adderall and other similar ADHD medications speaks volumes about the state of our culture, and those volumes are not filled with positive sentiment. There is far too much pressure put on performance and efficiency, to the point where it is deemed necessary and legitimate in a rapidly increasing number of cases to jeopardize health and well-being for them. To put it simply: In relation to its many potential adverse effects, Adderall is very much over-prescribed, and that in and of itself is an understatement. While its usefulness cannot be denied, it is a drug whose use carries with it serious repercussions, and the fact that the nature of our society has made its widespread use and abuse practically normative is frightening.

References


1. RxList FAQ on Adderall
2. Erowid's amphetamine vault; a highly informative, not anti-drug resource
3. Internet Mental Health's Adderall drug monograph; contains a great deal of information for both patients and psychiatrists
4. Erowid's amphetamine experience report vault; individual accounts of experiences with amphetamines
5. Parents Magazine article abstract
6. Write-up of Dr. Christine Poulin's study


From Biblical Times to Today: What Has Changed and
Name: Patricia P
Date: 2003-05-01 10:57:25
Link to this Comment: 5598



Biology 202
2003 Third Web Paper
On Serendip

"Epilepsy is a brain disorder involving recurrent seizures. You can relax. It's not the end of the world." This was my neurologist's introduction to my diagnosis as an epileptic with partial petit mal seizures including a curious, not to mention exciting, history of 2 grand mal seizures. As a 12-year-old girl, I remember feeling confused and greatly changed by these words that I had yet to understand the meaning of. As I grew to learn more about my condition, I realized that there are people around the globe, ranging in age, race, social and economic background that have experienced this same confusion. Collectively, we have gathered an incomplete, but valuable and working concept of epilepsy. Although it is one of the earliest recorded diseases, it attracts the attention of doctors, scientists, and researches everywhere, still in search of a clear understanding of the causes of particular seizures. Different nations contribute to our ever-expanding understanding of its history, epidemiology, prognosis and mortality, along with clinical manifestations and differential diagnosis. Tracing modern diagnosis and therapies back to biblical times allows us to compare another very important aspect of epilepsy: very similar modern and ancient perspectives on this disorder.
Our language gives clues as to the longevity of epilepsy: the term derives from the Greek word "epilambanein," which means "to take hold of" or "to seize." (1) Epilepsy is a disease with one of the longest recorded histories and an impact spanning the globe, allowing healers and physicians from a wide range of countries and time periods to study it. Worldwide studies have estimated the mean prevalence of active epilepsy (i.e. continuing seizures or the need for treatment) at approximately 8.2 per 1,000 of the general population. (2) Further research is seeking to explain why developing countries such as Colombia, Ecuador, India, Liberia, Nigeria, Panama, the United Republic of Tanzania, and Venezuela show prevalence rates of over 10 per every 1,000 people. Thus, it seems that at any one time approximately 50 million people in the world suffer from epilepsy. (3) Today, most would agree that epilepsy involves a deviation from normal brain activity through instability of neurons. The neurobiology of epilepsy is hard to describe, as different types of seizures are related to different parts of, and separate problems within, the brain. This instability causes neurons to fire in a rapid and/or excessive, synchronous and/or inconsistent manner, with the excess electrical discharges within the brain resulting in a seizure. Because this involves very complicated brain activity, and "instabilities" of varying degrees or locations in the brain, there is a great difference between the appearance and treatment of acute presentations and of more severe or chronic presentations. The severity of these seizures depends on several factors: whether the episode is fever-induced (febrile), can be localized to one area of the brain (partial seizures and temporal lobe seizures), can be traced to places throughout the entire brain (generalized seizures), or can be categorized by the intensity and level of electrical activity in the brain (petit mal seizures and grand mal seizures). Unfortunately, although epilepsy's history dates back before biblical times, there are still very surprising gaps left to be filled.
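
Those two figures are, at least, mutually consistent, as a quick check shows (the world-population value here is my own assumption for roughly that era, not a number from the sources):

# Sanity check: prevalence x world population should be near the quoted total.
prevalence = 8.2 / 1000   # active epilepsy, per 1,000 of the general population
world_pop = 6.1e9         # assumed world population, circa the early 2000s
print("%.0f million" % (prevalence * world_pop / 1e6))   # ~50 million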
Diagnosis is complicated and often specific to the type of seizures a person suffers from. For a variety of medical reasons, a great number of people may experience only one seizure in their lifetime; this, however, does not meet the criteria for an epilepsy diagnosis. Many people are misdiagnosed, as there are several kinds of episodes a person may experience that mimic a seizure but are not one. Furthermore, many seizures go undiagnosed (most often petit mal seizures) due to their unfocused, almost "blank-out" appearance and extremely short duration. The diagnosis of epilepsy requires a minimum of "2 or more unprovoked" seizures in a person's lifetime. (4) Although seizures are a symptom of the disease, and not always epilepsy itself, it is important to realize that what is most often "diagnosed" is the type of seizure the patient has experienced. Partial or focal seizures, for example, are most often traced to one localized part of the brain, and may not impair consciousness at all. The spread of these seizures has the potential to create generalized seizures (also known as generalized tonic-clonic or grand mal seizures). These involve electrical discharges that affect the entire brain, causing a loss of consciousness along with the popularly depicted muscle spasms or stiffness. (The image of a helpless person frothing at the mouth and shaking uncontrollably is not always an accurate depiction of this type of seizure.) Status epilepticus is the rarest and most severe form of epilepsy: a person will suffer frequent seizures without recovering consciousness between episodes, or one single, extremely prolonged episode. During a prolonged period without breathing, the lack of oxygen damages tissue, ultimately leading to brain damage or sudden death.
Differential diagnoses are quite common. A diagnosis of epilepsy requires several seizures, of whatever type or category, occurring in a predictable pattern. Seizures may also be induced by an injury or trauma to the head, high fever or heat stroke, diabetes (seizures can occur when blood sugar levels are too low), or the presence of a brain tumor (30 to 40 percent of patients with brain tumors also have seizures). (5) When diagnosing a patient with epilepsy, if the root of the seizure can be attributed to any one of these alternate circumstances, the seizure is then regarded as a symptom of that condition rather than of epilepsy.
The unique history of our methods of diagnosing epilepsy sheds light on our ever-evolving understanding and improved treatment. During the Roman era, epilepsy was diagnosed by handing the patient a piece of jet and waiting to see if they would collapse, most likely encouraging the nickname "The Falling Sickness." Ancient Greek doctors practiced burning the horn of a goat (an animal considered prone to epileptic seizures) underneath the patient's nose. (6) Today, we have moved away from such "smell test" methods and toward studying the body and the brain. A thorough physical examination is performed, often followed by a battery of neurological exams. Blood and urine are also studied in an attempt to identify any kidney or liver problems that may lead to a differential diagnosis, or give clues as to which anti-epileptic drugs may be harmful and incompatible with the patient. The fluctuation of electrical impulses is then measured by an electroencephalograph (EEG), which picks up the signals of nerve cells with the use of electrodes. The EEG equipment is designed to receive and amplify these fluctuations of voltage, and then transfer the information to a computer. This has become an invaluable tool for understanding the particular root or type of a person's seizure disorder. However, a person may still have a normal EEG reading and have epilepsy, although this would be the exception. Magnetic resonance imaging (MRI) is another technique used to diagnose epilepsy, as is computerized axial tomography (the CAT scan), more commonly used to locate or rule out brain lesions as the cause of seizures.
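
To give a concrete sense of what the computer on the receiving end of an EEG does with those amplified voltage fluctuations, here is a minimal processing sketch on a synthetic trace. Everything in it (the sampling rate, the fake 10 Hz rhythm, the band boundaries) is an illustrative assumption, not a clinical protocol:

# Minimal sketch of EEG-style processing: estimate how signal power is
# distributed across the classic frequency bands of a voltage trace.
import numpy as np
from scipy.signal import welch

fs = 256                         # assumed sampling rate, in Hz
t = np.arange(0, 10, 1 / fs)     # ten seconds of "recording"
# Synthetic trace: a 10 Hz (alpha-band) rhythm buried in noise.
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, power = welch(trace, fs=fs, nperseg=2 * fs)  # power spectral density
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
for name, (lo, hi) in bands.items():
    in_band = (freqs >= lo) & (freqs < hi)
    print("%-5s %.3f" % (name, power[in_band].mean()))
# The alpha band dominates, as built in; a real EEG feeds many such
# channels to a neurologist, who looks for abnormal spikes and rhythms.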
Treatment is available and there are several options. Which treatment an epileptic receives is almost always contingent on the severity of their presentation, what part of the world they are being treated in, and what each individual suffering from this disease personally feels most comfortable with. Before the emergence of anti-epileptic drugs, ketogenic dieting techniques were a popular method of treatment. Although the diet closely resembles starvation in its attempt to change the body's metabolic state, it is capable of improving control of select types of seizures. (7) Today, however, up to 70% of adults and children diagnosed with epilepsy, in both developed and developing countries, find that their condition can be successfully treated with anti-epileptic drugs. (8) Some of these drugs include Phenobarbital, Acetazolamide, Carbamazepine, Clonazepam, Levetiracetam, Phenytoin, Topiramate and many others. The UK, Scotland, Germany, and China, along with many other countries, prefer different drugs, as is common in the treatment of most diseases. In China, for example, an epileptic would likely first consider herbal remedies, which include dried silkworm, gastrodia tuber, antelope horn, centipede, and dozens of other ingredients. In the United States, Diazepam (otherwise manufactured as Valium, Stesolid and Diazemuls) and Phenobarbital are the drugs more commonly used to treat the more serious forms of epilepsy. Although economic and geographical boundaries play no role in who experiences epilepsy, they unfortunately play a huge role in whether or not a person will be able to receive proper treatment. Chinese herbal supplements, drug therapy, strict and specific diet restrictions, and surgery are among the common options. It is important to keep in mind that the treatment should match the presentation of the disorder, whether acute or chronic.
For those who can afford and obtain access to all medical options, the prognosis is very encouraging. Children have some of the most promising prognoses, as many grow out of epilepsy before serious treatment becomes necessary. Although many presentations of epilepsy are chronic, lifelong conditions, extremely effective treatment options are available. Sadly, permanent brain damage or death can still occur as the result of a serious seizure.
There are no concrete preventative measures for avoiding epilepsy. However, that is not to say that prevention is not a key word for an epileptic. The goal is to avoid possible seizures and, often, the situations that promote them. Factors that may increase a person's risk of seizures include brain injury, a family history of seizures, and other medical problems affecting electrolytes. Remaining aware of your physical condition, as with many illnesses, can help decrease your risk of a completely unexpected incident. Exposure to certain medications and illicit drugs (such as ephedrine, the commonly debated over-the-counter drug discussed in the media today) has been known to intensify and cause seizures. Common to my experience, as well as that of many other epileptics, a doctor will also recommend that the patient avoid anything and everything that may have been responsible for triggering a previous seizure. Provocative factors may include stress (emotional or physical), flashing lights (such as strobe lights, television, video games, etc.), over-hydration, fatigue, and any combination of these.
Unfortunately, epilepsy is also associated with an increased rate of mortality. This may be for a myriad of reasons: medical causes, such as an underlying brain disease (tumor or infection) which may itself be the cause of the seizures, or status epilepticus; other sudden and unexplained causes that result in respiratory or cardio-respiratory arrest during a seizure; drowning, burns, or head injuries resulting from the person's location at the time of the seizure (especially car accidents); and suicide. (9)
Some of the earliest writings on this disease reveal that it was once known as the "Holy Sickness," studied by the Greek physician Hippocrates, who approached epilepsy with the belief that it could be cured through an understanding of what is known today as humoral pathology (the balancing of body fluids, or humors). Fascinatingly, recent translations of a Babylonian tablet dating from about 500 BC have revealed even earlier descriptions of epilepsy. However, it was during biblical times that the most famous historical account of a seizure was given, in St. Matthew's Gospel, Ch. 17, Verses 15-17: "Lord have mercy on my son, for he is lunatick and sore vexed, for oftimes he falleth into the fire and oft into the water. And I brought him to thy disciples and they could not cure him. Then Jesus answered and said, O faithless and perverse generation how long shall I be with you? bring him hither to me. And Jesus rebuked the Devil and he departed out of him: and the child was cured from that very hour." Mark, Chapter 9, Verses 17-18, confirms the initial suspicion that this account is one of epilepsy: "he has an evil spirit in him and can not talk. Whenever the spirit attacks him, it throws him to the ground, and he foams at the mouth, grits his teeth and becomes stiff all over." Today the boy's condition would most likely be diagnosed as a grand mal seizure, but at the time, traditional healers would most likely have surmised that an act had been committed against God, and that the presence of demons caused this horrific episode.
If we could imagine that the young boy from this biblical story were able to tell of his experience with epilepsy, and of his interaction with the healers of the time, he might recount an experience similar to this:
"I do not remember exactly what happened. I felt lightheaded upon awakening. My father has told me that evil spirits have possessed me, and we must go to Jesus. I am ashamed, and unable to imagine what sin God is punishing me for. My father has told me that demons can enter the body at birth. However, this has not stopped neighbors from fearing me. I am alone and in need of healing. I know that God would be displeased if I did not seek him first for a cure, and so I will go to Jesus, although there are other doctors who believe that my condition is due to an imbalance of humors. My father has told me that it is the casting out of demons that will cure me. Jesus prayed for me and cast out the devil, and I was told I was cured. I prayed the demons would not return to haunt me, in fear that I would be abandoned by all of my community."

Today, the reactions of epileptics, the actions of healers, and our understanding of epilepsy are completely different. Our methods of diagnosis are based on a completely separate belief system, and our treatment of the patient, as well as of the disease, has undergone centuries of change.
The most accurate modern-day account I can give is my own. Although some of the details are hazy, the experience of my first seizure was like none other. I was a very academically interested and concerned student at a very young age. However, my teachers began to notice (as is common with many children suffering from petit mal seizures) that I would have spells showing "a real lack of attention and frequent disorder." My mother returned from a parent-teacher conference just short of concerned for my health: "You are going to bed earlier! That's it! And you better start looking at your teachers in class so that you remember to focus on what they are saying!" It was not until my mom found me on the bathroom floor on February 19th of 1997 that we took my lack-of-focus spells seriously. I was home from school with the flu and had just taken my temperature, which was nearing 101 degrees Fahrenheit. I was standing near the shower and began to feel lightheaded, so I attempted to catch my balance on the bar of the shower. I still remain unsure of whether I fell (which may have caused the seizure) or had the seizure and then fell. However, I recall this distinction being very important to every doctor I was brought to see. The fall was quite bad; I hit my head on the tile of the shower, yet the doctors' questions about the fall were left unanswered. I do not recall regaining consciousness until I hit the cold air outside while being carried out to the ambulance on a stretcher. It was my first grand mal seizure, and it clearly warranted an ambulance.
I live on a small island, and the retelling of this story would not hold the same impact without expressing how thoroughly embarrassed I was. All the people in my tiny community were standing outside their houses. My little sister was crying and my mother was gasping, clutching her chest, and asking questions uncontrollably. Personally, the most horrific aspect of this disease is the feeling of 'I have no idea what just happened,' combined with a constant fear of 'Next time this is going to happen in school and all the kids are going to think I need an exorcism. I will become the ultimate freak show!' I was instructed not to fall asleep in the ambulance, which I now deduce was a precaution related to the fall, meant to keep me from slipping into unconsciousness.
By the time I arrived at the hospital, I was feeling fully recovered. After the seizure, I had had very brief but shooting stomach pains, but those subsided quickly, and my goal was to recover from the shock. I'll never forget one extremely kind, vivacious, and unprofessional nurse remarking, "Next time, your parents can give you milk and chocolate syrup so you can mix them some chocolate milk." I appreciated her humor, and the lighthearted, stress-free atmosphere that all of the doctors and staff had worked so hard to create. The doctor in the emergency room was less playful, as he wrote down my complete medical history, including specific pieces of history from my parents and ancestors. He was specifically interested in the details of my birth, any complications or illnesses involving my nervous system, any possibly related or suspicious childhood events, any medications I was currently taking, and some brief questions regarding drug and alcohol use. Especially because this was my first seizure, it was made clear to me that a detailed description of the events leading up to it was crucial in distinguishing my seizure type. Appointments were immediately made for an EEG, an MRI, and a CAT scan.
As I grew older, there were obstacles that my family and I needed to take into serious consideration. We made some very controversial decisions for my circumstances, decisions that I feel many epileptics struggle to make. Families often need to consider whether it is appropriate for their child to disclose this information on certain forms. On one hand, you place yourself and others in danger if you conceal that you are epileptic and are then placed in a situation that brings on a seizure, with no one prepared to handle it if it arises. On the other hand, it is a very real fear of epileptics that they will be discriminated against: that when applying for summer programs, jobs, and other positions and opportunities that would contribute to their growth as a person and their resume for college, they would be discarded as a serious liability. For instance, we chose doctors in New York City, not only because they were some of the best, but also because a New York doctor is not required by law to disclose information to New Jersey (or, more specifically, the New Jersey DMV). I was adamant about not becoming "disabled" until we were clear about my condition. These are some of the more controversial aspects of epilepsy, and among the most difficult to address and talk about. Different doctors will advise families to take completely separate courses of action based on their specific case. But I found my interactions with doctors to be most fascinating, as they acknowledged that I might suffer from something that many people opt to hide from the world. More traumatizing than the seizure itself, the stigma of being seen as incapable is a huge concern.
I have been "seizure free" for about 3 years now. The experience has sharpened my awareness of the importance of people who dedicate their lives to professions as nurses, doctors, and specialists. It has sparked my interest in the origin of this disease, the first treatments, conceptions, and fears surrounding it, and the speed at which our understanding has evolved. I recognize that my condition is minor compared to that of other epileptics, and that many cases of epilepsy are life-altering, creating a very painful dependency on medications, friends, and family, and in severe cases even requiring surgery. Epilepsy is unique in that it affects people from all over the world, presents itself in tremendously varying degrees, and can be treated very differently depending on a patient's geographical location. But what I find most fascinating are the similarities in the experience of the person having the seizure across thousands of years. The value of comparing a person's experience with epilepsy during biblical times to a person's experience today is to recognize two main things. First, shame and secrecy are not exclusive to either time period's approach to handling this disease, even though epilepsy has been considered one of the most serious brain disorders in every country of the world. Second, we must remain committed to the search for even greater neurological understanding, and to new ways for scientists, neurologists, doctors, psychiatrists, and psychologists to introduce methods of caring for this disease and its victims, while implementing what we find over a broader region of the world. I will always remember the first explanation I was given for what I was experiencing: "Epilepsy is a brain disorder involving recurrent seizures. You can relax. It's not the end of the world." It was comforting, and also extraordinarily true.

References

WWW Sources
1)Epilepsy: Across Time and Place, an interesting perspective on epilepsy's growth and change over the years.

2)Epilepsy - EPIDEMIOLOGY, ETIOLOGY AND PROGNOSIS, a fact sheet about a variety of complicated aspects of epilepsy.

3)Epilepsy - EPIDEMIOLOGY, ETIOLOGY AND PROGNOSIS, the same fact sheet, focusing on mortality, prognosis, and interesting facts.

4)Seizures, a site explaining the diagnosis of seizures and the distinctions between them and common occurrences mistaken for them.

5)Seizures: Common Causes, a site examining seizures outside of epilepsy, focusing on differential diagnosis and tumors.

6)German Epilepsy Museum, a fascinating site focusing on common folklore concerning epilepsy, with several descriptive stories illustrating epilepsy's role in the past and across the country.

7)Ketogenic Diet, a description and exploration of one very common worldwide treatment for epilepsy, used less often today with the introduction of drugs.

8)Surgical Treatments for Epilepsy, a personal questionnaire helping a person with epilepsy decide whether surgery is right for them; packed with valuable and interesting information.

9)Seizure-Related Injuries in Epileptics, an editorial by Somsak Tiamkao looking at injuries related to epilepsy.


Flight or Fight
Name: Elizabeth
Date: 2003-05-01 14:38:35
Link to this Comment: 5601


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

In most cases, a cornered animal will either run from or attack the danger threatening it. Most animals, including humans, respond in a similar fashion, both behaviorally and physiologically, when confronted with a stressful situation. This response, commonly called "flight or fight," can, for humans, also serve as a reaction to less serious events, such as crossing a busy street or rushing to catch a plane. First described by the scientist Walter Cannon in the 1930s (4), the flight or fight response differs not only from one species to another, but also between the genders. While males tend to have an aggressive response to stress, females deal with stress in a calmer and more subdued manner. Females may exhibit the bodily symptoms associated with the flight or fight response, but unlike males, they tend not to experience the extremes of this phenomenon.

The flight or fight response prepares our bodies to either confront danger head on, or to run from it as quickly as possible. When facing a potentially hazardous situation, one's heart rate, breathing, and metabolism quicken, while muscles contract and the digestive system slows down or stops altogether. Additionally, the brain signals the adrenal glands, which sit atop the kidneys, to release adrenalin into the bloodstream, giving the body an extra rush of energy. This adrenalin is transported to the heart, lungs, and muscles in order to ready these organs for a dangerous situation. Breathing becomes more rapid as the lungs attempt to provide more oxygen, in order to ensure proper muscle functioning. To transport this extra oxygen, the blood vessels near the heart and lungs dilate. In turn, this dilation forces the heart to beat faster as more blood requires faster transport through the body. At the same time, blood vessels contract in areas not needed for defense, such as the digestive system. This contraction of blood vessels causes the familiar feeling of having "butterflies" in one's stomach. In addition to the symptoms which ready the body for an imminent flight or attack, the response may also trigger headaches, blurred vision, dry mouth, a sore back and neck, heart palpitations, and increased sweating (2). While flight and fight evolved as a reaction to situations of acute stress caused by extreme, immediate peril, the response has adapted itself to the modern atmosphere of chronic stress. Chronic stress triggers repeated instances of the flight or fight response in situations where one may not actually need the physical readiness provided by an adrenalin release. In such cases, the body does not have proper time to return to normal conditions between stressful situations. Chronic stress is common among those who work stressful jobs, live in cities, or often find themselves facing dangerous situations (1).

Over the years, the symptoms associated with the flight or fight response have developed to help humans deal with stressors, even when those stressors do not carry life-or-death consequences. In recent years, questions have arisen regarding whether men and women experience the flight or fight response in the same manner. Prior to 1995, only 17% of the participants in studies concerning human responses to stress were female (3). Of course, this discrepancy led to a strong gender bias in the results, leaving an uncertain picture of how women respond to stressful situations. While it has been found that women can also experience the flight or fight response, studies now show that females do not react to stressful situations in the same manner as men. A recent UCLA study of female responses to stress found that women tend to care for others and seek help from outside sources during times of stress rather than aggressively confront their problems. This calmer mode of coping has been termed the "tend and befriend" response. The study, conducted by Shelley E. Taylor and based on the results of hundreds of biological and behavioral studies of thousands of male and female subjects, both human and animal, reports that women indeed experience a different response to stress than men (6). Some components of the flight or fight stress response seem to be linked only to males, including increased pain inhibition and a high cortisol response (3). These aspects of the flight or fight response are triggered by male sex hormones, which, while also present in much lower amounts in females, obviously do not affect women to the same extent or in the same manner as they affect men. During a stressful situation, both males and females receive a rush of stress hormones. After the initial influx of epinephrine, norepinephrine, and cortisol, females experience a rush of oxytocin and endorphins, aided by estrogen, the female sex hormone. Oxytocin is a stress hormone responsible for soothing and calming the body during stress. This hormone, which is also released during childbirth and breastfeeding, acts as a powerful mood regulator by reducing anxiety and encouraging the female to seek friendship. Men also produce oxytocin, but in lower quantities (6). When released in reaction to stress, oxytocin inhibits the flight or fight response as it triggers an opposite reaction, classified as attachment behavior, or the "tend and befriend" response. Males, however, react to these initial stress hormones by producing more testosterone, which in turn triggers the flight or fight response (5). Hormonal reactions such as these play an integral role in determining which type of stress response is triggered.

The flight or fight response may be stronger in males because of genetic
inheritance. This aggressive response evolved over time as a method of protection,
which was passed down genetically to offspring. In the past, males were more likely to
encounter the life or death situations which warranted a flight or fight response, which
may be why the response is more fully developed in and utilized by males. Thus, over
the years males have developed a response to stress which values the survival of the
individual over the wellbeing of the pack, emphasizing either direct combat with an
enemy or a swift retreat. However, this self-centered response contradicts the females'
nurturing instincts, as the flight or fight response may leave more vulnerable members of
the community open to attack. In contrast to the male stress response, females, who
traditionally have played a larger role in caring for children, are more likely to experience
a more sedate response to danger which protects not only themselves, but their offspring
as well (6).

While these biological considerations must be taken into account in dramatic life-or-death situations, it remains unclear whether the same differences exist between male and female responses to more moderate stressors. It has been established, in various studies, that women tend to reach out to those around them during times of stress, particularly communicating with other women (6). Men, in contrast, keep their emotions to themselves and prefer not to share feelings which may betray any weakness. These distinct differences in reaction, in turn, create biological differences in the way males and females respond bodily to stress. Men are much more likely to experience high blood pressure, which comes both from the bodily arousal of the flight or fight response and from the lack of an external outlet for the emotions produced by stress. Females, on the other hand, experience an increased heart rate. This may originally have functioned as a soothing mechanism for children being held to their mother's chest during times of stress and crisis (5).

However, in spite of the mounting evidence which indicates that men and women experience fundamentally different responses to stressful situations, this may not always be the case. The research in this area is still very preliminary, and some contradictions remain. Indeed, studies conducted on certain species of monkeys suggest that the females exhibit aggression instead of the tend and befriend response (5). Also, not all men will become aggressive when confronted, just as not all women will remain calm and seek outside support. As more studies have been performed on female subjects, it has become increasingly apparent that the flight or fight response may not be the primary reaction to stress for females. Instead, this aggressive response may function as a secondary reaction or a last resort. Females are much more likely to rely on the tend and befriend response, in which they turn to an established support network of friends and family for help. A female's primary goal in any stressful situation is to protect her offspring. While tend and befriend may not be as aggressive, or even as effective, as the male's fight or flight response, females need to protect the young in order to allow the species to survive. The flight or fight response leaves the vulnerable open to attack and considers only the survival of the individual rather than of the group. Considering her genetic role as nurturer of the species, it stands to reason that a female will choose a method of stress response which protects those who cannot protect themselves.


References

1)Flight or Fight

2)Fight or Flight

3)Do Women Differ from Men?

4)The Science Show

5)Women, Men, and Approaches to Stress

6)A Woman's Response to Stress


Fluid on the Brain: Hydrocephaly
Name: Rachel Sin
Date: 2003-05-01 19:12:01
Link to this Comment: 5602


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Last spring, I traveled to the Mutter Museum with my Developmental Biology class. This museum, located in Philadelphia, contains many examples of medical anomalies from the past. As my classmates and I walked through the top floor of the museum, we encountered multiple glass cases containing preserved facial and bodily skin lesions. Perhaps the two things which fascinated me the most on the top floor were the picture of Chang and Eng (the conjoined twins) and the Soap Lady. However, as I reached the lower level of the museum, I realized that I was about to be even more awe-stricken by what I saw, as I was instantly drawn to one glass case. This case contained multiple glass jars, filled with tiny preserved fetuses. Although all of the fetus-containing jars were disturbing, one in particular caused my jaw to drop; this fetus's head was larger than the others, and looked excessively swollen. My eyes peered down at the label under the jar: "Hydrocephaly," it read. Soon, I was joined by other shocked classmates in front of this case, and we all began to ask the same types of questions: "Why was this fetus's head so large? Could it have lived through adulthood if it had been born alive today?" The image of that fetus was so disturbing that it stuck in my head, and left me wanting to answer the questions my classmates and I had posed.

Perhaps our most burning question about the hydrocephalic fetus was the one regarding the size of its head: WHY was it so large? The enlargement of a hydrocephalic head is actually due to a buildup of cerebrospinal fluid, or CSF, in the cranium. The buildup of CSF seen in hydrocephaly (also known as hydrocephalus and "water on the brain") is due to an imbalance between the absorption and production of the CSF. As a result, the ventricles get larger, and pressure is placed on the brain tissue within the cranium. (2). Among infants who have hydrocephaly, enlargement of the head is due to the buildup of fluid in the central nervous system, which causes an expansion of the head and a bulging out of the soft spot, or fontanelle. This is possible because, prior to the age of 5, fusion of the bony plates which compose the skull surface has not yet occurred.
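
To make the production-absorption imbalance concrete, here is a minimal numerical sketch in Python. The production rate is a rough, commonly cited adult figure; the impaired absorption rate is an assumption chosen purely for illustration, and neither number comes from the sources cited in this paper.

    # Minimal sketch: net CSF accumulation when absorption lags production.
    # Rates are illustrative assumptions, not clinical values.
    PRODUCTION_ML_PER_MIN = 0.35   # rough textbook CSF production rate
    ABSORPTION_ML_PER_MIN = 0.25   # hypothetical impaired absorption

    def excess_csf_ml(hours):
        """Net CSF accumulated over `hours` at the assumed constant rates."""
        net_rate = PRODUCTION_ML_PER_MIN - ABSORPTION_ML_PER_MIN  # mL/min
        return net_rate * hours * 60

    for h in (24, 24 * 7):
        print(f"after {h} hours: ~{excess_csf_ml(h):.0f} mL of excess CSF")

Even a small constant mismatch compounds quickly, which is why the ventricles enlarge and, in an infant whose skull plates have not fused, the head itself expands.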

Typically, the CSF which is produced in the ventricles passes through the ventricles before encircling the brain and the spinal cord. Finally, it undergoes reabsorption over the brain's surface into large veins, which transport the CSF toward the heart. In hydrocephaly, this usual sequence of CSF transport is interrupted by some factor, leading to the buildup of CSF. In many child patients with hydrocephaly, the exact cause is unknown. However, potential factors leading to its development include vascular problems, infection, trauma, bleeding, structural problems, and tumors. While a small number of these problems are genetic, some are present after birth, and others arise during pregnancy (5), as was the case with the fetus at the Mutter Museum.

Several possible causes have been proposed for congenital hydrocephaly. One possible cause is German measles, also known as Rubella. During pregnancy, this condition can cause hydrocephaly and other malformations in the developing fetus. A second cause is Toxoplasma gondii, the organism responsible for toxoplasmosis, which can be passed on through contact with an infected animal, the consumption of poorly-cooked meat, or contact with contaminated soil. A rare congenital cause of hydrocephaly which is genetic is X-linked hydrocephaly, which is transmitted via the X-chromosome from the mother to the son. This condition, the large majority of whose cases occur in males, can only be inherited from the maternal X-chromosome. A final cause of congenital hydrocephaly is viral: a member of the herpes class of viruses known as CMV, or Cytomegalovirus. The symptoms seen in this form are similar to those present in patients with a cold virus. (1).

In addition, premature babies face a risk of hydrocephaly. The prematurely-born baby is more susceptible to the condition, as its development is still progressing at the time of birth. A crucial area of the brain is that found below the lining of the ventricles; during development, activity below the ventricles is great, and the blood supply there is abundant as a result. Unfortunately, this makes the blood vessels of the area fragile, and they may rupture if the baby is extremely ill or has a large shift in blood pressure. (7). Other causes of hydrocephaly include diseases which have detrimental effects on the brain, such as spina bifida and meningitis.

Hydrocephaly can exist in two different forms. The first is known as non-obstructive or communicating hydrocephaly. In this form, communication still exists between the subarachnoid space and the ventricular system, despite swelling of the ventricles. This form of hydrocephaly is usually caused by the aftereffects of hemorrhaging or infection. In the second form, there exists no communication between the subarachnoid space and the ventricular system. This second form is known as obstructive or non-communicating hydrocephaly. (6).

Although enlargement of the head is one of the symptoms characteristic of hydrocephaly, it is not the only one. In addition, varied symptoms appear at different stages in younger age groups. In infancy, early symptoms of hydrocephaly can include a bulging of the cranial soft spots (called fontanelles). This bulging can occur with or without head enlargement. In addition, the infant can exhibit vomiting and separated sutures. As hydrocephaly continues in the infant, he or she can display muscle spasms and poor control of temper. Late in infancy, multiple symptoms can appear, including a delay in development, lethargy, a decrease in cognitive functioning, trouble eating, loss of bladder control, slowness in growth, a decrease in movement, high-pitched cries, and sleepiness. Other symptoms appear in older infants and young children, and will differ according to the degree of pressure damage caused by the condition. These symptoms include a loss of coordinated motion, changes in vision, psychosis, confusion or other mental aberrations, vomiting, a poor pattern of walking, crossed eyes, and headache. (3). Although hydrocephaly appears most frequently in children, it can occur during adulthood as well.

Multiple methods exist for detecting hydrocephaly in both adults and children. In older children and adults, if symptoms appear which are characteristic of hydrocephaly, an MRI or CT scan can be performed. These tests will aid the doctor in finding any causes responsible for the hydrocephaly. (6). In children younger than 1 year old, the doctor will usually verify that the size of the head and the rhythm of its growth are normal by measuring the head at a medical checkup. This method can be employed for infants because two of the main symptoms of hydrocephaly in infants are a bulging soft spot and an increased speed of cranial growth. (4).
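
Purely as a schematic of that screening logic, the sketch below flags an interval of unusually fast head growth. The measurements and the threshold are invented placeholders rather than clinical norms; real screening compares measurements against standardized growth charts.

    # Schematic of head-growth screening; all numbers are invented placeholders.
    def grows_too_fast(circumferences_cm, weeks_between, limit_cm_per_week=0.6):
        """Return True if any interval's growth rate exceeds the assumed limit."""
        rates = [(b - a) / weeks_between
                 for a, b in zip(circumferences_cm, circumferences_cm[1:])]
        return any(rate > limit_cm_per_week for rate in rates)

    # Hypothetical checkups two weeks apart:
    print(grows_too_fast([36.0, 37.1, 39.0], weeks_between=2))  # True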

Looking back at the fetus in the Mutter Museum, I still recall our other burning question. We looked at the tiny fetus in the jar and asked, "If the fetus had been born today, could it have survived until adulthood?" If the hydrocephaly were detected and treated, yes, it most definitely could have survived until adulthood. And yes, it could even have lived a full lifespan. Not only do many extra-sensitive ultrasound devices exist that can detect diseases while a child is still in the womb, but various treatment options are available for individuals with hydrocephaly. Although the condition is most often treated via surgical methods, non-surgical forms of treatment are available as well. In drug treatment, a drug which can effectively reduce cerebrospinal fluid production would be used. One option is a carbonic anhydrase inhibitor known as Acetazolamide. This drug has been documented to cause a decline in the formation of cerebrospinal fluid through the choroid plexus, and has been used successfully to treat hydrocephaly in immature infants. Another non-surgical option for treating this condition is head-wrapping. In this method (which was used in the past and put into use again by Epstein), bandages made of muslin are applied firmly to the head. In addition, head compression was achieved through the use of rubber or adhesive plaster bandages. The main goal of this treatment option is to augment trans-ependymal cerebrospinal fluid absorption (that is, to open the blocked CSF pathways by increasing the pressure inside the head). However, this method has since been discredited and is no longer commonly used. (6).

Even though non-surgical methods are available to help treat hydrocephaly, the condition is most frequently treated through surgical means. One method of surgery which can be used is known as endoscopic third ventriculostomy, or ETV. This method involves creating an opening at the base of the third ventricle, which ensures that the spinal fluid can flow out to be absorbed in the basal cisterns. (1). However, although this method is starting to be used more often, the most reliable and efficacious surgical procedure to treat hydrocephaly involves the insertion of a shunt. The shunt is composed of a valve (to prevent backflow of cerebrospinal fluid and keep the rate of drainage under control) and several tubes. The device serves as a means of redirecting the cerebrospinal fluid from the blocked ventricular path to the bloodstream. Proper positioning of the shunt is crucial; the upper end should lead to the brain's ventricle, while the lower end goes either to the abdomen or to the heart for drainage. In addition, the lining of the lungs can be used as a drainage site. (7). Usually, if a patient with hydrocephaly (not caused by tumors) is treated with a shunt insertion, there is an excellent prognosis for success in managing the condition. (1).

Unfortunately, although the shunt is the most effective means of treating hydrocephaly today, it does not come without complications. Potential complications include physical disabilities, intellectual impairment, meningitis, complications of surgery, malfunction of the shunt (such as tubal separation, kinking, blockage, or related problems), infection of the region where the drainage of fluid occurs, and neurological damage such as a decrease in function, movement, and sensation. (3). As the shunt is a foreign body inside the patient, complications are not wholly unexpected. In fact, 6 out of 10 children with hydrocephaly who have shunts inserted will need some form of correction at some point in their lifetime. If a shunt is not working properly, it is important that the child seek medical attention immediately. At times, the cerebrospinal fluid blockage causing a child's hydrocephaly may vanish completely following the shunt's insertion. Yet when this happens, it is unnecessary to perform surgery (which may cause more harm than good) to remove the shunt. The shunt can remain inside the body for the rest of the child's life without causing any problems.

Thinking back to the field trip to the Mutter Museum last year, I remember my classmates and me looking at the hydrocephalic fetus, almost in horror, and asking questions. We wondered why its cranial size was so much larger than normal. We also couldn't help but ask ourselves what the fetus's prognosis would have been today, had it been born. It is true that we often fear what we do not understand. That being said, I wish I had been educated about the actual cause of hydrocephaly when I was viewing the fetus, and that I could have taken encouragement from knowing that the condition is highly treatable. Perhaps if my classmates and I had known those facts, our reaction to the Mutter display would have been a little bit different.

References

1)About Hydrocephaly from Arc of King County

2)Hydrocephaly, from the TX Dept. of Health

3)Hydrocephalus, from Medline Plus

4)Hydrocephalus, From Diagnostico.com

5)Pediatric Neurosurgery - Hydrocephalus, From Dept. of Pediatric Neurosurgery, Columbia University

6) Hydrocephalus, From Dept. of Pediatric Neurology, Adelaide University

7)What is Hydrocephalus?, From the Association for Spina Bifida and Hydrocephalus


The Origins of Depression:Environmental, Genetic,
Name: Clarissa G
Date: 2003-05-02 16:01:45
Link to this Comment: 5606


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Depression has become an increasingly prevalent behavioral illness among individuals in our society. But how does one "get" depression? According to some recently published articles, depression is not solely a case of familial, chemical, or environmental deficiencies, but a combination of these three factors. In other words, the origins of depression do not lie in one specific deficiency. Depression can emerge as the result of a combination of many factors, which need to be looked at as a whole in order to obtain proper treatment.

Genetic predisposition is currently being researched as a key component in the makeup of major depression. A correlation has also emerged between family members. One study focused on mothers of children with depression and found that the mothers of depressed children were often suffering from depression themselves. The depression experienced by children of mothers with depressive symptoms could be a learned behavior, but this is unlikely. What is more likely is that these children are members of families with genetic traits that make them more susceptible to depression.

Recently, further support for a genetic link to depression has been uncovered. Researchers have found that children who are being treated for depression often have mothers who are themselves suffering from depression or other mental illness. In a study conducted by the New York State Psychiatric Institute at Columbia University in New York City, mothers who brought their children in for a psychiatric evaluation were also given a questionnaire on the state of their own mental health. Researchers found that a significant portion of the "mothers rated their overall emotional health as being 'fair to poor'". (1) The mothers reported psychiatric disorders ranging from major depression to drug abuse. Some even reported thinking about suicide or self-harm within recent weeks. Dr. Ferro told Reuters Health that the findings "basically confirm what we know from family studies of depression—that there is familial transmission of depression and that the mothers of children who are depressed and coming for treatment are frequently depressed as well". (1)

Another recent study of depression identified the possibility of a specific area on a chromosome which can contribute to depression in women. (5) Scientists at the University of Pittsburgh have identified a link between depression in women and the gene CREB1. CREB1, which is located on chromosome 2q33-35, has been found to show significant alterations in patients who have died with major depression. The CREB gene encodes a regulatory protein which appears to be responsible for neural plasticity as well as cognition and long-term memory. CREB interacts with estrogen receptors and may be related to irreversible dementias. The crux of the University of Pittsburgh article is that CREB does affect the occurrence of depression, specifically in women: because CREB interacts with estrogen receptors, women are more at risk for depression.

The level of serotonin absorbed in the brain is also a factor in the onset of depression. When the brain does not absorb or produce the necessary amount of serotonin, an individual will become depressed. In a study done at the University of North Carolina at Chapel Hill, researchers determined that even when a patient is not suffering from a depressive episode, the brain's ability to absorb the amount of serotonin appropriate for regulating emotions is hindered. (4) Damaged serotonin reuptake has significant ramifications for the treatment of depression. Some of the focus when treating depressed or formerly depressed patients should be on the functioning of the brain. Even when a patient is not currently experiencing depression, the brain's serotonin absorption needs to be closely monitored. The study suggests that even when depression is not being experienced, the symptoms are still present and need to be treated, perhaps by elongating the time period during which a patient takes medication. This study offers further support for the chemical and biological origins of depression.

The previous studies strongly suggest genetic or biological deficiencies as the root cause of depression. These two factors are not to be overlooked when treating depression; however, they are just a predisposition, not a prediction of whether or not someone will develop depression. That being said, one should also look at environmental factors as a possible cause of depression.

Environmental triggers can make individuals susceptible to depression. Losing a job, a death in the family, and the mental or physical illness of a loved one are just a few environmental factors that can lead to depression. The environment, or life experiences, can lead to a downward turn in our mood. Sometimes these experiences are so strong that they alone can make someone experience a major depression.

The question asked at the beginning of this paper was: where does depression originate? This question "is rather beside the point. There is only one of you, not a separate physical or psychological you. Depression may be triggered by either physical or psychological events...both seemed to be involved" (2). Research into the development of depression helps psychologists and psychiatrists better understand treatment. Biological, genetic, and environmental causes should not be viewed as separate in determining a diagnosis, but in tandem. The origins of depression are found when examining and treating the patient's environment, genetics, and biology as a whole. (3)


References

1)Mothers of Depressed Offspring, a short, concise article

2)HealthyPlace.com, Depression Community

3)Mood Disorders Unit, a plethora of information

4)health research, information on serotonin

5)University of Pittsburgh, depression gene in women



The Genetic Basis of Addiction
Name: Neesha Pat
Date: 2003-05-04 23:55:24
Link to this Comment: 5609


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Addiction is defined as a compulsive physiological need for and use of a habit-forming substance, characterized by tolerance and by well-defined physiological symptoms upon withdrawal. More broadly, it is defined as a persistent, compulsive use of a substance known by the user to be physically, psychologically, or socially harmful (1). Addiction is a complex phenomenon with important psychological and social causes and consequences. However, at its core it involves repeated exposure to a biological agent (a drug) acting on a biological substrate (the brain) over time (2). Most abnormal behaviors are a consequence of aberrant brain function, which means that identifying the biological underpinnings of addiction is a tangible goal. The genetic basis of addiction encompasses two broad areas of inquiry. One is the identification of genetic variation in humans that partly determines susceptibility to addiction. The other is the use of animal models to investigate the role of specific genes in mediating the development of addiction. Whereas recent advances in this latter effort have been rewarding, a major challenge remains: to understand how the many genes implicated in rodent models interact to yield a phenotype as complex as addiction (7).

Dramatic advances have been made over the past two decades, both in neuroscience and in the human genome project: complete genome sequences are providing a framework for the investigation of biological processes, and have revolutionized our understanding of genes and their role in addiction. Researchers have long questioned what exactly leads a person to become addicted and what causes the 'recurrent' cravings that characterize addiction. My hypothesis was that genes play the most important role in addiction and that manipulations of such genes could lead to possible cures for addictive personalities. I questioned the mechanism by which genes contribute to addiction vulnerability and sought answers by giving a detailed description of the avenues explored by researchers and the consequent emergence of 'addiction-genetics'. Further, the proposition that brain equals behavior, and its inherent ramifications, proves nowhere more fascinating than when addressed in the context of the biological basis of addiction.

Investigators believe that there is great variability among individuals when it comes to their vulnerability to becoming addicted. "Pretty much everyone enjoys having their dopamine levels shoot up dramatically, which can happen without the use of drugs, as with the 'runner's high'. However, not everybody craves the experience so much that it consumes them. Nor does everyone experience the same changes in brain function!" says Alan Leshner, director of NIDA (5). For marijuana addicts, the non-family environment has the biggest influence, accounting for 38% of the variability, with genes accounting for 33%. Thus we see that the ability of drugs of abuse to alter the brain does indeed depend, in part, on genetic factors. Acute drug responses, as well as adaptations to repeated drug exposure, can vary markedly depending on the genetic composition of the individual. Genetic factors can also influence the brain's responses to stress and are thus also likely to contribute to stress-induced relapse (2).
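
The source does not say how such variance percentages are estimated; one classical approach, offered here only as a hedged sketch, is Falconer's decomposition of identical- and fraternal-twin correlations. The correlations below are hypothetical, chosen so that the output reproduces the 33%/38% split quoted above.

    # Illustrative sketch of Falconer's twin-study variance decomposition.
    # The twin correlations are hypothetical, not taken from the cited work.
    def falconer(r_mz, r_dz):
        """Return (genetic, shared-environment, non-shared-environment) shares."""
        a2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
        c2 = 2 * r_dz - r_mz     # shared (family) environment
        e2 = 1 - r_mz            # non-shared environment plus measurement error
        return a2, c2, e2

    a2, c2, e2 = falconer(r_mz=0.62, r_dz=0.455)
    print(f"genes ~{a2:.0%}, family environment ~{c2:.0%}, "
          f"non-family environment ~{e2:.0%}")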

Results of a genome-wide search, or "genome scan", by a team of researchers led by Dr. George Uhl of the National Institute on Drug Abuse (NIDA) in Baltimore, Maryland, provided the first evidence that specific regions of the human genome differ between abusers of illegal drugs and non-abusers. The findings of Dr. Uhl and his colleagues were an important step toward identifying genes that affect a person's vulnerability or resistance to substance abuse, and they offer hope for identifying individuals at high risk for addiction and for matching abusers with the most effective treatments. The researchers looked for differences in the frequency of 1,494 genetic variants known as SNPs (single nucleotide polymorphisms, or "snips") between DNA samples from 667 unrelated individuals with a history of heavy drug use and 338 individuals with no significant lifetime use of any addictive substance (controls). Using the SNP markers, whose locations in the genome are known, the research team identified more than 40 regions across the genome that differ between drug abusers and controls in DNA samples from both European Americans and African Americans. Eight of these regions had previously been linked to alcohol or nicotine dependence, suggesting that genes in these regions contribute to individual vulnerability to abuse of multiple substances (11).
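
The core computation in such a case-control scan can be sketched as a contingency-table test per SNP. The allele counts and the single-SNP framing below are hypothetical, and a real scan would repeat this across all 1,494 markers with correction for multiple comparisons; scipy is used for the test itself.

    # Sketch of one SNP's case-control association test (counts are made up).
    from scipy.stats import chi2_contingency

    # Rows: drug abusers, controls; columns: counts of the two alleles.
    # 667 cases and 338 controls contribute 2 alleles each.
    table = [[410, 924],    # cases:    1334 alleles (hypothetical split)
             [160, 516]]    # controls:  676 alleles (hypothetical split)

    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, p = {p:.3g}")
    if p < 0.05 / 1494:     # crude Bonferroni correction for 1,494 SNPs
        print("allele frequencies differ between abusers and controls")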

A gene might contribute to addiction vulnerability in several ways. A mutant protein (or altered levels of a normal protein) could change the structure or functioning of specific brain circuits during development or in adulthood. Animal models have shown that the adaptations drug exposure elicits in individual neurons alter the functioning of those neurons, which in turn alters the functioning of the neural circuits in which those neurons operate. This leads, eventually, to the complex behaviors (for example, dependence, tolerance, sensitization, and craving) that characterize an addicted state (1). A critical challenge in understanding the biological basis of addiction is to account for the array of temporal processes involved. The initial event leading to addiction involves the acute action of a drug on its target protein and on the neurons that express that protein (7).

A study conducted at the Yale University Department of Psychiatry and Pharmacology examined the molecular and cellular adaptations that occur gradually in specific neuronal types in response to chronic drug exposure, particularly those adaptations that have been related to the behavioral changes associated with addiction. The study focused on opiates and cocaine, not only because they are among the most prominent illicit drugs of abuse, but also because considerable insight has been gained into the adaptations that underlie their chronic actions. The results showed that the best-established molecular adaptation to chronic drug exposure is up-regulation of the cyclic adenosine 3',5'-monophosphate (cAMP) pathway. This phenomenon was first discovered in cultured neuroblastoma and glioma cells and later demonstrated in neurons in response to repeated opiate administration. Acute opiate exposure inhibits the cAMP pathway in many types of neurons in the brain, whereas chronic opiate exposure leads to a compensatory up-regulation of the cAMP pathway in a subset of these neurons. Up-regulation of the cAMP pathway would oppose acute opiate inhibition of the pathway and would thereby represent a form of physiological tolerance. Upon removal of the opiate, the up-regulated cAMP pathway would become fully functional and contribute to features of dependence and withdrawal (1). Many researchers have attempted to identify the genetic basis of these behavioral differences through Quantitative Trait Locus (QTL) analysis. There is now direct evidence to support this model in neurons of the locus coeruleus, the major noradrenergic nucleus in the brain. These neurons generally regulate attentional states and the activity of the autonomic nervous system, and they have been implicated in somatic opiate withdrawal. Up-regulation of the cAMP pathway in the locus coeruleus appears to increase the intrinsic firing rate of the neurons through the activation of a nonselective cation channel. This increased firing has been related to specific opiate withdrawal behaviors. cAMP Response Element Binding (CREB) protein, one of the major cAMP-regulated transcription factors in the brain, was knocked out to create mutant mice, and these mice, deficient in CREB, showed attenuated opiate withdrawal (1). Overexpression of CREB in the nucleus accumbens counters the rewarding properties of opiates and cocaine; overexpression of a dominant-negative CREB mutant has the opposite effect. These findings suggest that CREB promotes certain aspects of addiction (for example, physical dependence) while opposing others (for example, reward), and highlight that the same biochemical adaptation can have very different behavioral effects depending on the type of neuron involved (1).

A recent NIDA-funded study illustrates how genetic differences can contribute to or help protect individuals from drug addiction. The study shows that people with a gene variant in a particular enzyme metabolize, or break down, nicotine in the body more slowly and are significantly less likely to become addicted to nicotine than people without the variant (5). Dr. Edward Sellers of the University of Toronto examined the role that the gene for an enzyme called CYP2A6 plays in nicotine dependence and smoking behavior. CYP2A6 metabolizes nicotine, the addictive substance in tobacco products. Three different gene types, or alleles, for CYP2A6 have been identified by previous research: one fully functional allele and two inactive or defective alleles. Each person has a maternal and a paternal copy of the gene. Therefore, a person can have two active forms of the gene and normal nicotine metabolism; one active and one inactive copy and impaired nicotine metabolism; or two inactive copies, which would further impair nicotine metabolism (6). The study found that people in a group who had tried smoking but had never become addicted to tobacco were much more likely than tobacco-dependent individuals in the study to carry one or two defective copies of the gene and to have impaired nicotine metabolism. The researchers theorize that the unpleasant effects experienced by people learning to smoke, such as nausea and dizziness, last longer in people whose bodies break down nicotine more slowly. These longer-lasting aversive effects would make it more difficult for new smokers to persist in smoking, thus protecting them from becoming addicted to nicotine. Generally, smokers with slower nicotine metabolism do not need to smoke as many cigarettes to maintain constant blood and brain concentrations of nicotine (5). Currently, researchers are looking into the possibility of blocking the enzyme, which could prevent people from becoming smokers.
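
The genotype-to-metabolism logic described above is simple enough to state directly in code. This is only a restatement of the text's mapping; the function name, allele encoding, and labels are mine.

    # Sketch of the CYP2A6 genotype -> nicotine-metabolism mapping from the text.
    def cyp2a6_metabolism(maternal_active, paternal_active):
        """Classify nicotine metabolism from the two inherited CYP2A6 alleles."""
        active_copies = int(maternal_active) + int(paternal_active)
        return {2: "normal", 1: "impaired", 0: "further impaired"}[active_copies]

    for genotype in [(True, True), (True, False), (False, False)]:
        print(genotype, "->", cyp2a6_metabolism(*genotype))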

Several large chromosomal regions have been implicated in addiction vulnerability, although specific genetic polymorphisms have yet to be identified by this approach. It has been established, however, that some East Asian populations carry variations in the enzymes that metabolize alcohols (for example, the alcohol and aldehyde dehydrogenases). Such variants increase sensitivity to alcohol, dramatically ramping up the side effects of acute alcohol intake. Consequently, alcoholism is exceedingly rare in individuals who are homozygous for the ALDH2 allele that encodes a less active variant of aldehyde dehydrogenase (7).

Despite the progress made in the genetic field, candidate gene approaches are limited by our rudimentary knowledge of the gene products and the complex mechanisms underlying addiction (2). As a result, more open-ended strategies are needed, such as those based on analysis of differential gene expression in certain brain regions under control and drug-treated conditions. Differential display, for example, enabled the identification of NAC-1, a transcription factor-like protein which is induced in the nucleus accumbens by chronic cocaine and is now known to modulate the locomotor effects induced by cocaine (3).

In addition to secondary targets, there are numerous neurotransmitters, receptors, and post-receptor signaling pathways that modify responses to acute and chronic drug exposure. For example, several behavioral aspects of mice lacking the serotonin 5HT1B receptor indicate enhanced responsiveness to cocaine and alcohol; notably, they self-administer both drugs at higher levels than wild-type controls. The mice also express higher levels of FosB (a Fos-like transcription factor implicated in addiction) under basal conditions. These observations point to the involvement of serotonergic mechanisms in addiction. There are many other such examples. Mice deficient in the dopamine D2 receptor or the cannabinoid CB1 receptor have a diminished rewarding response to morphine, implicating dopaminergic systems and endogenous cannabinoid-like systems in opiate action. The stability of the behavioral abnormalities that characterize addiction indicates a high likelihood of drug-induced changes in patterns of gene expression. One demonstration of this approach showed that FosB accumulates in the nucleus accumbens (a target of the mesolimbic dopamine system) after chronic, but not acute, exposure to several drugs of abuse, including opiates, cocaine, amphetamine, alcohol, nicotine, and phencyclidine (also known as PCP or 'angel dust'). This is in contrast with other Fos-like proteins, which are much less stable than FosB and are induced only transiently after acute drug administration. Consequently, FosB persists in the nucleus accumbens long after drug-taking ceases (6).

Dr. David Russell at Yale University examined the role for Glial-Derived Neurotrophic Factor (GDNF) in adaptations to drugs of abuse. Infusion of GDNF into the ventral tegmental area (VTA), a dopaminergic brain region important for addiction, blocks certain biochemical adaptations to chronic cocaine or morphine as well as the rewarding effects of cocaine. Conversely, responses to cocaine are enhanced in rats by intra-VTA infusion of an anti-GDNF antibody and in mice heterozygous for a null mutation in the GDNF gene. Chronic morphine or cocaine exposure decreases levels of phosphoRet, the protein kinase that mediates GDNF signaling in the VTA. Together, these results suggest a feedback loop, whereby drugs of abuse decrease signaling through endogenous GDNF pathways in the VTA, which then increases the behavioral sensitivity to subsequent drug exposure(10) .

In 1999, researchers funded by the NIH unearthed an unexpected connection between circadian rhythms in insects and cocaine sensitization, a behavior that occurs in both fruit flies and vertebrates and that has been linked to drug addiction in humans. The circadian clock consists of a feedback loop in which clock genes are rhythmically expressed, giving rise to cycling levels of RNA and proteins. Four of the five circadian genes identified to date influence responsiveness to freebase cocaine in the fruit fly, Drosophila melanogaster. Sensitization to repeated cocaine exposures, a phenomenon that is seen in humans and animal models and is associated with enhanced drug craving, is eliminated in flies mutant for certain genes (period, clock, cycle, and doubletime), but not in flies lacking the gene timeless. Flies that do not sensitize owing to lack of these genes do not show the induction of tyrosine decarboxylase normally seen after cocaine exposure. These findings indicate unexpected roles for these genes in regulating cocaine sensitization and suggest that they function as regulators of tyrosine decarboxylase (9).

Pathological gambling (PG) is an impulse control disorder and a model behavioral addiction. Familial factors have been observed in clinical studies of pathological gamblers, and twin studies have demonstrated a genetic influence contributing to the development of PG. Molecular genetic research has identified specific allele variants of candidate genes corresponding to the neurotransmitter systems associated with the disorder. Associations have been reported between pathological gamblers and allele variants of polymorphisms at dopamine receptor genes, the serotonin transporter gene, and the monoamine oxidase A gene (8).

Thus we see that animal models have been pivotal to our understanding of the neurobiological mechanisms involved in the addiction process. The experiments described provide substantial evidence that the genetic makeup of the individual plays a very large role in drug addiction, whether the cause is a single dysfunctional or altered allele or a cascade of biological events leading to changes in gene expression patterns. One drawback (and one that is not limited to the field of addiction) is that a genetic mutation is sometimes found to result in a phenotype without any plausible scheme as to how the mutation actually caused the corresponding phenotype (4). Various types of microarray analysis have led to the identification of large numbers of drug-regulated genes; it is typical for 1-5% of the genes on an array to show consistent changes in response to drug regulation (12). However, without a better means of evaluating this vast amount of information (other than exploring the function of single genes using traditional approaches), it is impossible to identify those genes that truly contribute to addiction (7). As we can see, the approach to drug addiction is not straightforward: which drugs should be used as models? What should researchers be looking for: drug abuse per se, sensation seeking, or specific biological markers?
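
As one concrete, hedged illustration of how "consistent changes" are typically extracted from microarray data, the sketch below runs a per-gene t-test between control and drug-treated arrays and applies a Benjamini-Hochberg false-discovery correction. The expression matrix is random placeholder data with a simulated effect, and real analyses involve normalization steps omitted here.

    # Sketch: flag differentially expressed genes in a fake expression matrix.
    import numpy as np
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_genes = 5000
    control = rng.normal(0.0, 1.0, size=(n_genes, 6))  # 6 control arrays
    treated = rng.normal(0.0, 1.0, size=(n_genes, 6))  # 6 drug-treated arrays
    treated[:100] += 1.5                 # simulate 100 drug-regulated genes

    _, pvals = ttest_ind(control, treated, axis=1)
    flagged, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    print(f"{flagged.sum()} of {n_genes} genes flagged "
          f"({100 * flagged.mean():.1f}%)")  # compare the 1-5% figure above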

Fortunately, the increasing sophistication of genetic tools, together with the improving predictive value of animal models of addiction, makes it ever more feasible to fill in the missing pieces: to understand the cellular mechanisms and neural circuitry that ultimately connect molecular events with complex behavior (11). Genetic work remains the highest priority, as it will greatly inform our understanding and treatment of addictive disorders in the years to come. In addition, the genetic basis of individual differences in drug and stress responses represents a powerful model of the ways in which genetic and environmental factors combine to control brain function in general (1).

References


1)Nestler, E., Aghajanian, G. (1997). Molecular and Cellular Basis of Addiction. Science, New Series, Volume 278, Issue 5335.

2)Vulnerability to Addiction

3)Brain Buildup Causes Addiction

4)Everyone is Vulnerable to Addiction

5)Genes can Protect Against Addiction, NIDA Notes

6)Biological Basis of Addiction Studied

7)Genes and Addiction, Nature Genetics

8)Genetics of Pathological Gambling, Dept. of Psychiatry, Alcala University, Spain.

9)Response to Cocaine Linked to Biological Clock Genes, NIDA Notes

10)Promising Advances Towards Addiction, NIDA Notes

11)Genetic Animal Model of Alcohol and Drug Abuse, NIDA Notes

12)Dependence of the Formation of the Addictive Personality on Predisposing Factors

13)Evidence Builds that Genes Influence Cigarette Smoking


Surreal Perceptions of the Body
Name: Ellinor Wa
Date: 2003-05-06 00:57:22
Link to this Comment: 5621


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"Do I look fat in this outfit?" Millions of men and women all over the world, in hundreds of different dialects have uttered this question. Depending on the century or decade even, they have hoped for a range of different responses. Some of them are aware of the answer but merely fishing for compliments, others understand the objectivity of the question, and the smallest percentage of these people is completely oblivious. This percentage perceives a completely different image than what shown, standing in front of the mirror. The representation does not necessarily apply to weight; it may apply to noses, lips, legs, buttocks, abdomen, spinal chord, ears, etc. Rather than regarding themselves realistically, they ingest a surreal image which plagues their thoughts and leaves them inclined to take drastic measures to fix the object of their obsession. When this perception becomes deeply skewed, altering a person's actions to an unhealthy level, the disorder associated with this loss of reality is called Body Dysmorphic Disorder.


The diagnostic criteria for Body Dysmorphic Disorder require all three of the following symptoms: 1. Preoccupation with an imagined defect in appearance; if a slight physical anomaly is present, the person's concern is markedly excessive. 2. The preoccupation causes clinically significant distress or impairment in social, occupational, or other important areas of functioning. 3. The preoccupation is not better accounted for by another mental disorder (such as dissatisfaction with body shape and size in Anorexia Nervosa). (1) Body Dysmorphic Disorder (BDD) does not usually apply to weight (as obsessive weight dissatisfaction is classified under more specific eating disorders); more often it applies to a specific imaginary defect. Once a defect has been noticed, the person suffering from BDD will often spend hours scrutinizing the flaw in front of a mirror or other reflective surface, and hours a day thinking about it. The defect will eventually take over their lives. It is common for sufferers of BDD to lose jobs, wreck marriages, and attempt suicide because of their feelings of inadequacy. (2)


One case study described a woman who had earned her PhD and then decided to have children with her husband. Once her two boys reached three and five years old, it was time for her to resume her career. A couple of weeks later, as she was cooking in the kitchen, a small piece of ceiling fell on her nose. She bled a bit and then bandaged her nose. A scar remained for a couple of weeks, after which she was completely healed. No one could see an imperfection on her nose except for her. She became obsessed with this invisible mark, staring at her nose for hours in the mirror. She felt her face had come to look grotesque. Eventually, she became housebound. At this point her husband and friends demanded she seek psychiatric help. (5)


In a number of cases, BDD was the result of a traumatic experience. In the story above, the woman who perceived her nose as deformed did not experience this misperception until it was time for her to leave her children and go back to work. In other cases, adolescent girls developed this same mental illness after having been raped. (6) It seems that BDD may serve as an escape for some people who are faced with a horrible incident in life. Instead of focusing all their emotional frustrations on an undesirable event, their mind creates a flaw in appearance with the intent to distract.


This notion is reminiscent of the blind spot anomaly. When given a piece of notebook paper with a large dot and a large cross, we moved the paper forward, focusing our eyes on the cross, until the dot disappeared. Instead of leaving a hole where the dot had been, the mind created information to fill in the gap; where the dot was, we saw notebook paper instead. Perhaps this notion should be applied to BDD; the mind inserts an image to preoccupy itself.
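
The geometry of that demonstration can even be estimated. The optic disc sits roughly 15 degrees to the temporal side of the fixation point (the exact angle varies from person to person), so simple trigonometry predicts the viewing distance at which the dot vanishes. In the Python sketch below, the 7 cm dot-to-cross separation is an assumed figure for a typical worksheet, not a measured one.

    import math

    dot_to_cross_cm = 7.0    # assumed separation drawn on the notebook paper
    blind_spot_deg = 15.0    # approximate eccentricity of the optic disc

    # The dot falls on the blind spot when separation / distance = tan(angle)
    distance_cm = dot_to_cross_cm / math.tan(math.radians(blind_spot_deg))
    print(f"the dot should disappear about {distance_cm:.0f} cm from the eye")
    # roughly 26 cm here, which is why the dot vanishes as the page is
    # slowly moved in from arm's length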


In other cases, however, BDD does not develop as a repercussion of some disturbing event. The human percept becomes skewed of its own accord. How can the mind distort the eye's ability to see something in its realistic form? It seems that the image which is reflected on the back of the retina has been deformed by the brain; the brain signals false information to the eye. The brain has tremendous power over the body. Sufferers of BDD will often not only see this surreal perception but also feel it. When the brain signals misinformation to the eye, it also sends it to our sensory percepts, causing the image to be even more believable.


This convincing image often draws people towards enlisting the help of plastic surgeons. The image seems so authentic that it appears operating on the defect will make it disappear. However, in most cases, people with BDD undergo plastic surgery repeatedly in attempts to fix their flaw. This fact further supports the argument that the brain has sent false information, because the flaw usually will still be perceived by a disordered person even after the defect has been operated on. The same type of obsessive behavior applies to people who see themselves as larger than they actually are. They will resort to drastic measures to rid themselves of the extra fat they see in the mirror, becoming anorexic (not eating enough to nourish their bodies) or bulimic (binging on a variety of foods and then purging their intake by vomiting or exerting themselves through exercise). Even after these obsessive food rituals are accomplished, they will still see themselves the same way they did before.


The imaginary beliefs attributed to Body Dysmorphic Disorder do not always indicate such a serious mental disorder. People view their bodies differently than they actually are all the time. For instance, a new genre of disorder, dubbed 'tanorexia', has swept the nation over the past couple of years. (9) In this case, people visit tanning booths constantly, convinced they are not as tan as they actually are. The craze has been attributed to pop celebrities who constantly tan themselves. (8) Such frequent visits can cause cancer or death from overexposure to radiation. (7)


Several ideas come to mind in terms of surrealistic versus realistic perceptions of the body relating to neurobiology. Does the same notion apply to Body Dysmorphic Disorder that applies to perceptions of sound? The noise that a tree makes falling in the forest sounds different to different people. Does the way one perceives oneself look different to different people? Perhaps the brain has a predetermined notion, based on a familiar image, of what the body looks like. Or does the brain signal different information to the retina concerning body image depending on its mood state? Most therapists prescribe serotonin-lifting medication to those suffering from Body Dysmorphic Disorder. It would seem that both the root of and the cure for such a mental illness lie in the homeostasis of chemicals in the brain.


Another question that comes to mind in relation to Body Dysmorphic Disorder is what role the I-function plays in the imaginary perceptions seen. If we know that we can use the I-function to scare ourselves, how do we know that we can't use the I-function to make ourselves believe we look a different way than we actually do? Should we attribute skewed perceptions to the brain's misinforming the retina, or should we attribute them to the I-function's ability to take an image to the extreme?


The implications of Body Dysmorphic Disorder are neurobiologically significant in many ways. If the brain can send false information to the retina, rendering a perceived image skewed, then how can we tell which images are realistic and which are surrealistic? Body Dysmorphic Disorder specifies that its victims see only themselves as deformed, not others. But if it is the brain which signals false information, how do we know that this only applies to those it afflicts? Maybe we see morphed images of everyone. If the brain can signal wrong information to the eye and extend it to the sensory percepts, then how do we know which images are real and which are false? Perhaps we all see ourselves in a way which others do not, and only those who suffer from BDD see an extremely dramatic version of distorted perception. That could be why they stand out from the rest of us; their perceptions are so distorted that it calls them to the attention of others. When those words, "Do I look fat in this outfit," are uttered, is it possible that no one knows the answer?

References

1)American Psychiatric Association. DSM-IV-TR, 2000.
2)Health Information, an interesting site about BDD
3)ABC scientific article page, an interview concerning BDD
4)BBC webpage, an interesting article about the severity of BDD
5)Mental Help Page, this site provides a tutorial for understanding and coping with BDD
6)College Health Page, this site explains BDD to college students
7)Tanning Laws, demonstrates the dangers of tanning
8)New York Times Archives, tanorexia: the new self-destructive appearance disorder
9)UWO Gazette Page, a college newspaper article about the international obsession with tanning


Brain=Behavior: Psychosomatic Disorders
Name: Stephanie
Date: 2003-05-06 14:36:28
Link to this Comment: 5625


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Everyone feels stress, anxiety, fear, or worry at some point in life, if not on a fairly regular basis. There are varying degrees of stress; some people let stress consume them, and it ends up controlling their lives. If stress is something that originates in the brain, which then has the capacity to influence us, doesn't it make sense that brain does equal behavior? This seems particularly apparent in people suffering from psychosomatic disorders - if someone can make themselves sick as a result of their thoughts and feelings, it seems clear that brain and behavior are directly related.

Psychosomatic is a term that expresses the relationship between mind and body and "is applied to physical disorders thought to be caused by psychological factors" (1). There are, however, some differences among patients who suffer from such disorders. Just as some people fixate on the stress they feel and others ignore it, some people with psychosomatic disorders obsess over their symptoms and seek medical attention frequently, whereas others cannot distinguish the sources of their bodily pain (2).

General stress seems to be largely psychosomatic, though of a milder form. Psychosomatic responses occur through the autonomic nervous system and include activation of the sweat glands and the mucous membrane lining of the lungs, and the release of hydrochloric acid in the stomach (3). This explains why, when we are nervous, we often sweat more, experience an upset stomach, and notice an increase in pulse rate. However, the rationale for becoming nervous in the first place originates in the brain.

Although people experience anxiety in different situations, most of us would admit to feeling nervous before a job interview or before giving a presentation. This is due to the way we perceive the event. If we attribute our nerves to our fear of how the presentation will be received, or to a previous time we were in a similar situation and made a mistake, our physical response to this neurological trigger may be heightened.

Furthermore, the same mechanism can be helpful in alleviating stress - this is why many people preach positive thinking as a way to overcome anxiety or fear. Since "anxiety [often] results from perceiving oneself as helpless and feeling unable to cope with an anticipated danger or problem," it can be beneficial to change our perception of the situation (2). Therefore, since stress can be ascribed to thought processes, it seems that the brain and the I-function have a direct bearing on how we behave.

Psychosomatic disorders are usually termed somatoform disorders. Included in this category are somatization disorder, somatoform pain disorder, and conversion disorder. Somatization disorder is generally chronic and occurs when the patient exhibits or expresses considerable physical pain that can be explained in terms of psychological problems. The pain these patients feel tends to be located within the digestive system, the nervous system, and the reproductive system. In addition, patients tend not to accept a psychological explanation and usually insist that the pain is entirely physical or bodily (4).

Those who suffer from somatization disorder may fall under one of two categories: alexithymic or hypochondriacal. Patients diagnosed as alexithymic tend to have difficulty identifying the nature of their emotions and moods. Consequently, they usually reject the suggestion that their physical pain is psychological (4). One case involving a young woman who grew up in an abusive household exhibits this well.

Hannah was raised by her father since her mother was frequently hospitalized soon after Hannah's birth. Her father was very strict and forbade Hannah to speak of her mother's illness outside the house. As a result, Hannah was very doting and sought to please her father. Academically, Hannah was gifted and went on to receive a Ph.D. She married twice, but both husbands abused her emotionally and physically, largely because she succumbed to the dominant male role in the relationship; as a child, she had not had a female role model to explain that being assertive is acceptable. Because she was well-educated, Hannah was articulate and could talk about her problems, but did not show any emotion when identifying them. Once she began to complain of a backache, her psychoanalyst pinpointed this as a somatic response: "the conflict inside her between wanting to get away from [an ex-boyfriend] and not being able to do so was splitting her back in two" (5).

It seems that beginning in her early childhood, Hannah's obedience to her father negated all other emotions; throughout her life she was concerned only with the well-being of others. As a result, her own emotions were under-developed. Related to Hannah's alexithymic somatization is hypochondriacal somatization.

Patients defined as hypochondriacal are essentially hypochondriacs - that is, they are intensely convinced that they are ill most of the time. The difference between regular hypochondriacs and those with a somatization disorder is that the latter are entirely sure they have a disease and often become agitated when doctors try to disprove them (4). In addition, people with somatoform pain disorder are preoccupied with pain even though there is no physical evidence or explanation for it. Thus, it seems that these patients think they feel pain, but in reality, do not (4).

Finally, those who experience conversion disorder find that they lose a physical ability, like vision or movement. Symptoms of this disorder are not created by the patient intentionally, or at least do not appear to make manifest the intention of the patient; thus, these symptoms occur due to psychological stimulation (4). The best example of this condition appears in the story of Alice Greer. When admitted to a neurology clinic, Alice suffered from paralysis in both of her legs. In a consultation with one of the doctors, Alice confessed that she had had an affair with her husband's brother while he was visiting. She felt extremely guilty, but he wanted to continue the affair and threatened to tell everyone their secret when she refused. He forbade her to speak to anyone of their relationship; since she could not tell anyone how she felt, her legs became paralyzed. Interestingly, as Alice spoke with the doctor, "Strength immediately returned to her legs" (6).

The nature of psychosomatic disorders seems to make a particularly strong case for the brain-equals-behavior argument. The fact that our thoughts can control or dictate our behavior is an interesting and significant aspect of being human. As previously discussed, people with psychosomatic or somatoform disorders experience physical pain or disability as a result of stress. Because stress arises from the way we perceive certain situations ("I can't do this" versus "I can handle it"), and since perception is a neurological function, it is clear that in this sense, brain is directly linked to behavior.

Even if we think of behavior as the result of a motor symphony, a series of synapses in response to an input, the input may still be psychological and not merely external or environmental. Although behavior can certainly be influenced by one's environment, psychological factors still have a major role in prescribing behavior. If we find ourselves in a situation we find uncomfortable, we may think to ourselves: This makes me uncomfortable; I don't like this; therefore, I'm just going to keep to myself and mind my own business. This thought process is then what makes us act slightly aloof or solemn. Similarly, if we have an important job interview or presentation, we may say: This is really important - I hope I don't mess up - I'm really worried about this. As a result, we become nervous and find that our voice cracks or quavers when we speak or that we tremble while gesturing. Although these examples are somewhat trivial, they still exhibit part of the extent to which the brain controls behavior.

Furthermore, it seems that patients suffering from psychosomatic disorders have a distorted I-function. Their I-function appears to become somewhat muddled as a consequence of their intense psychological activity. Although these people still have a sense of themselves, they do appear to be confused about what that means. They seem to vacillate between the following thoughts: Do I feel pain because I think of something that then induces this feeling? Or: do I feel pain because I did something that actually hurts (i.e., I stubbed my toe)? In this instance, it seems that part of the behavioral mechanism in their brain is malfunctioning - particularly since they cannot recognize that their pain is due to something they inflict on themselves - something as abstract as a thought.

Because thoughts are so difficult to control, it is a wonder that we do not lose control of ourselves and our behavior more often. Although it seems clear to me that brain does equal behavior, it is also clear that our I-function, itself part of the brain, checks our behavior against culturally accepted norms.


References

1)The Merck Manual of Medical Information
2)Psychological Self-Help
3)Psychosomatic Disorders
4) Melmed, Raphael N. Mind, Body, and Medicine. New York: Oxford University Press, 2001.
5) Sidoli, Mara. When the Body Speaks. Philadelphia: Taylor & Francis Inc., 2000.
6) Griffith, James L. & Melissa Elliott. The Body Speaks. New York: Basic Books, 1994.


Endorphins, Depression, and the Brain of a Distance Runner
Name: Katherine
Date: 2003-05-06 16:37:24
Link to this Comment: 5626


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Ask a long-distance runner about the place and role of running in his or her life, and do not expect a simple answer. Running, and specifically distance running, goes well beyond the practical physicalities of mile splits, muscles, and orthotics. It is decidedly a sport of the mind as well as the body, and has carried this unique duality for hundreds of years. We have heard stories of ancient Greek messengers who ran for hours and miles over rugged terrain in order to deliver their word. (1) Is such ability simply a case of "mind over matter"? What really is happening in the brain of a distance runner?

There are many factors behind the psychological aspects of running as a sport, both in relation to the sport itself, and to the athletes who choose to run. We must look into all of these factors when exploring the neurobiology of running. Among other effects, long-distance running has been associated with the "runner's high" and an ability to decrease depression. In assessing the validity of these claims we must look into the events that occur in the brain and the body of a distance runner.

In a Duke University study, 150 participants with depression, age 50 or more, were divided into three groups. One group was put on an exercise schedule, another group was administered Zoloft, and a third was given a combination of the two. At the end of four months, all three groups showed significantly lowered rates of depression, and those in the exercise group experienced significantly less relapse than those in either of the other two groups.(2) Positive mood changes have also been found to occur specifically as a result of long-distance running. (3)

We have all heard of endorphins, and the famously mysterious "runner's high." But is it really so mysterious? During all types of activity, chemicals are released in the brain to be used as signals and messengers. Running, in the spectrum of possible activities, is one of relatively high intensity. Therefore, it makes sense that such an activity should produce noticeable chemical changes in the body. When the body is put under stress, the brain reacts to this stress. Prolonged running is sensed by the body and brain as an aberration from normal, resting activity. It could also be associated with a need for heightened physical capabilities for survival, as the autonomic nervous system may not distinguish between running for recreation and running for another purpose. The activity of running is thus sensed as a stress, and the brain reacts accordingly. The method of reaction is the release of chemicals in response to stress, which are designed to counter this stress and enable the body to function efficiently as it continues to run.

A group of opiate proteins has been identified as naturally occurring in the brain, and these chemical neurotransmitters, known commonly as endorphins, are responsible for pain-relieving properties. Endorphins have been associated with the runner's high. (4) Endorphins are thus named because of their nature as endogenous (produced inside the body) neuropeptides (proteins in the brain) which act in a similar way to morphine and are chemically similar to that drug, the latter being derived from the pain-reducer, opium. Endorphins therefore convey some of the same properties as morphine, but are found naturally in the body. Endogenous opioids (endorphins being a specific example) were identified in 1975 based on knowledge of the human body's response to morphine. The body is able to respond to a chemical such as morphine because of receptors in the brain that recognize the drug and trigger a response to it. Scientists realized that, since the brain has receptors that recognize morphine, the human body must also produce its own morphine-like substances; it is not likely that receptors exist for the sole purpose of recognizing an exogenous chemical such as morphine. The newly recognized neurotransmitters were given the names endorphins and enkephalins, emphasizing their natural origin inside the body (prefix from the Greek en-, meaning in). Endorphins are polypeptides containing roughly 30 amino acid units, and opioids are hormones similar to corticotrophin, cortisol, and the catecholamines (adrenaline and noradrenaline). They are made by the body to reduce stress and relieve pain, and are usually produced during times of extreme stress to block the pain signals sent by the nervous system. There are at least 20 known endorphins in the human body, with beta-endorphin having the strongest effect on the brain and body during exercise. (5) Beta-endorphin is structurally very similar to morphine but has different chemical properties; tyrosine is the main amino acid contributing to this peptide hormone.

Endorphin levels have been found to increase with conditions of stress in prolonged exercise such as running, cycling, cross-country skiing, and long-distance swimming. (6) The physical and psychological manifestations of this increased endorphin level can be varied. A runner may frequently feel a "second wind," meaning that he or she is more energized and balanced than when the run or the race began. Such a feeling often extends beyond the sensation of increased physical energy, however, and seems to affect the runner's mental outlook; hence the mood change associated with endorphin release. The effects are individual, and thus cannot be completely quantified. However, among the positive feelings associated with distance running, an athlete may begin to feel extremely balanced, rhythmic, connected, and in sync with her surroundings. There may be a simultaneous feeling of being both very aware and comfortable in the body, while also feeling that the body does not exist; the true reality being, instead, an unattached rhythm. The mind may seem to be both inside and outside of the body, so that, ironically, the relationship of the two is both close and detached. Also simultaneous is a feeling of being in control while also being free from any need or act of controlling. The world itself may appear more ordered and make more sense, so that things fall effortlessly into place, and the runner may feel a greater sense of her own placement in that world, with a simple yet distinct feeling of purpose. It may be a feeling of suspension, height, and calmness, where the runner could seemingly keep going forever, in a type of calm, automatic, and meditative rhythm, accompanied by a heightened awareness and crispness of perception. It is, in fact, interesting to note the meditative aspects of distance running, since both running and meditation have been shown to produce similar changes in hormone levels. (7) A seeming suspension of time may also occur, and these feelings of suspension come together in the spaces between each step, when the body rests for a short moment above the ground.

Pain threshold tends to increase directly following exercise, and moods are often elevated. But it is the amalgamation of all of these factors that may enable us to explain on a neurobiological level the power running may have to combat depression. The act of running produces a stress, which causes chemicals to be released in the brain. (One might here note the inherent requirement for contrast; the fact that an initial stress is necessary (running) before the cure can be imparted (endorphins and a heightened sense of consciousness and improved mood). As with many activities, difficulty necessarily precedes satisfaction, and there must be a low place before the high). These neuropeptides that are released have notable effects in combating pain and in creating noticeable sensations in mood and mental outlook.

How, though, do we determine whether these changes are due solely to the distinct release of endogenous chemicals? Indeed, human emotion is both psychological and physiological in nature, and there are numerous factors in the running experience that may contribute to a variation in mood. When running outside in nature, for example, the fresh air, sunlight, and scenery may contribute to positive mood changes. (8) The feeling of a strong body may make the athlete feel more confident, healthy, and self-sufficient. A run may be a respite from the daily bustle of life, offering a time to organize thoughts or to spend some time free from thought. The schedule and planning required for workouts, as well as the establishment and achievement of specific goals, may impart a sense of satisfying balance, accomplishment, and purpose for the runner.

There are, of course, two sides to every coin. Although running has been shown to alleviate depression in many cases, it can also cause negative feelings and depressive states. Any activity performed at a high level will necessarily be both rewarding and difficult. It is a challenge, however, to pinpoint the true source of these sensations, and depressive feelings seem most closely tied to the psychological complexities of training, rather than solely to the release of neuropeptides per se. In this case, it is important to look at the type of people who are drawn to distance running, and to comment upon how their brains tend to react to life and to various situations. Perhaps we can find similarities between the behaviors of distance runners which would point to similarities in their brains. Indeed, there are common characteristics shared by many distance runners. Such recurring traits among these athletes include an introspective and solitary nature, tendencies toward perfectionism, a desire for independence, and stubbornness in maintaining a strict training schedule. A quickness to self-critique, and the establishment of rigid and elevated goals, may contribute to depressive tendencies. Like any athletic or artistic endeavor at a high level, there is a potential for obsession. This comes from the striving to reach the extremes, and to find the highest potential; it is, by definition, at the brink of normalcy where this highest level necessarily lies. This brink is by its nature unstable. If the athlete, or the artist, is to reach it, she must also waver on the brink of instability. Hence comes the cathartic potential of an important race for the trained and tapered athlete; and yet, after this race, there is the potential for other swings of sentiment, and for the need to reestablish goals and new purpose. We may, perhaps, see parallels to this phenomenon in the experience of a new mother suffering from post-partum depression, or of a writer who has finally submitted a book for publication. There is, on the one hand, a sense of accomplishment and creation, and on the other, a sense of imbalance in being separated from the thing that was so long a part of each day's vision and purpose. The key then lies in re-calibration and continued motion. And yet while continuation can create a sense of purpose and structure, it also creates a type of relentlessness, an uneasy infinity. Hence, the immateriality of running provides both freedom and tension, just as a piece of music escapes our grasp the moment it is played. Running, of course, provides huge benefits to the body and to fitness; however, the hormones produced in over-training by some elite athletes have been known to cause depression. (9) Endorphins have also been linked with decreased estrogen levels in female distance runners, which can lead to lowered bone density and, eventually, osteoporosis. (10) All in all, endorphins are a type of natural drug and must be treated with respect. Like any drug, there is the potential for addiction and withdrawal, to which any runner who has missed a needed run, or has been injured, can attest.

All valuable pursuits provide both brightness and shadow, and it is perhaps by this contrast that we can recognize what is truly worthwhile. The challenges of running do not outweigh the rewards. On the contrary, distance running stands as a valuable way for many people to live fully, both mentally and physically, and to hint at bridging, in fleeting suspended moments, the eternal internal gap.

References

1) Coolrunning website: a source for runners and those interested in the sport.

2) Article on DepressioNet: a website resource for depression information and support.

3) PubMed Article: Markoff, R.A., Ryan P, Young, T. "Endorphins and mood changes in long-distance running." Medical Science Sports Exercise 14(1):11-15 (1982).

4) Website for the International Association of Mind-Body Professionals, Article by Dr. J.S. Ellis: Runner's High.

5) PubMed Article: Wildmann, J., Kruger, A., Schmole, M., Niemann J., Matthaei, H. "Increase of circulating beta-endorphin-like immunoreactivity correlates with the change in feeling of pleasantness after running." Life Science Mar 17;38(11):997-1003 (1986).

6) PubMed Article: Henry, J.L. "Circulating opioids: possible physiological roles in central nervous function." Neuroscience and Biobehavioral Reviews (6-3:229-45) 1982.

7) PubMed Article: Solomon, E.G., Bumpus, A.K. "The running meditation response" American Journal of Psychotherapy 32(4):583-92 (Oct 1978).

8) PubMed Article: Lambert G.W., Reid, C., Kaye, D.M., Jennings G.L., Esler, M.D. "Effect of sunlight and season on serotonin turnover in the brain." Lancet 7;360(9348):1840-2 (Dec 2002).

9) PubMed Article: Lane, A.M., Lane, H., Firth, S. "Performance satisfaction and postcompetition mood among runners: moderating effects of depression" Percept Mot Skills (June 2002).

10) PubMed Article: Harrison, R.L. "Ovarian impairments of female distance runners." Annals of Human Biology, Jul-Aug 1998.


Retinal Tissue Transplantation: To See or Not to See
Name: Michelle C
Date: 2003-05-08 12:01:42
Link to this Comment: 5633


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Many Americans suffer from ocular diseases that are neurodegenerative. Aside from visual loss due to aging, diseases such as glaucoma, uveitis, retinitis pigmentosa, and cataracts can potentially lead to blindness. With no known cure for these serious diseases, partial to total blindness is a reality for many. What if there were a new and innovative way to prevent or repair the damage caused by these disorders? What if we could stimulate the growth of new cells in our eyes and essentially improve our vision? New and interesting research regarding fetal retinal cell and adult intrinsic cell transplantation provides an interesting approach to the conservation and regeneration of the essential cells of the eyes.

The movie "Minority Report" depicted a futuristic era where the scanning of our eye would be used for identification. In a drastic attempt to conceal one's own identity and privacy an ocular transplant was preformed. In the movie it was successful, however, would probably be unheard of in real life. This operation, although fictional, provided new hope and inspiration for persons suffering from ocular disorders and or blindness. With current advancements regarding stem cell growth and transplantation, the reality of this fictional surgery and other options for persons with ocular neurodegenerative disorders are coming the forefront of scientific capability.

Retinal transplant grafts have become the most promising solution for the prevention of ocular neurodegeneration. The placement of adult and embryonic retinal sheets into the subretinal area has shown promising results, with exceptional cell survival rates (1). The use of animal models such as rats has provided important information regarding the locations where retinal graft cells are most successful. Adeghate and Donah used pancreatic tissue fragments for transplantation into the anterior eye chamber in rats (1). These fragments were either implanted alone or coupled with brain tissue fragments. This study was largely performed to examine the survival and viability of intrinsic nerves which have been detached from their original environment. Importantly, the pancreatic tissue contained intact 5-HT and AChE-positive neurons when it was transplanted. Results displayed stable survival among the transplanted tissues. In addition, both the pancreatic and brain intrinsic tissues retained their ability to produce and store neurotransmitters and enzymes in the anterior area of the eye (1). These findings provide an innovative method for the reconstruction and restoration of neuronal connections and cells which are damaged and/or lost due to neurodegenerative ocular diseases. The ability to successfully transplant and maintain the function of intrinsic tissue in the eye points to numerous possibilities regarding future improvements in vision.

While the majority of ocular cell transplantation research has been conducted in animals, a growing number of studies have used humans and fetal human cells; this is becoming the norm in this area of research. In a preliminary report by Radtke et al., the visual function of human patients was observed. Patients (ages 23 and 72) who suffered from retinitis pigmentosa received human fetal retinal sheet transplants subretinally near the fovea of the eye. Results of these implantations revealed enhanced visual sensation in the visual field corresponding to the transplantation area (4). Both patients continued to experience these improvements in vision at 8 and 12 months after transplantation surgery (4). The successes of this transplantation are encouraging due to the consistent visual improvements across a wide range of ages. Although this investigation was for the most part a case study, it provides encouraging results and a technique which could potentially benefit persons who suffer from a wide range of diseases and disorders which reduce visual ability.

With the success of fetal retinal graft survival and the promising results of intrinsic tissue survival, it is imperative to know the functional ability of the transplanted cells' growth and whether the necessary guidance cues are in place for them. The growth and navigation of optic axons in the eye was investigated using grafts of retinal, optic disc, optic tectum, and floor plate tissues. Each was implanted into organ-cultured embryonic chick eyes. The growth of these axons into and out of the graft was monitored and cross-sectioned for study. Findings demonstrated that embryonic axons are able to grow into neonatal and adult retinal grafts, suggesting that older retina remains permissive for axonal growth (3). It was also noted that the axons from retinal grafts may be rotated during growth in their peripheral-central orientation due to the retina's inherent polarity, which permits axon growth toward and away from the optic disc but not in a perpendicular direction (3). In addition, it was revealed that centroperipheral cued growth operates locally rather than over long distances, and that the optic disc provides an exit for the axons from the retina (3). Transplants of the optic tectum, however, revealed minimal growth expansion (3).

Understanding the specific growth patterns of grafted retinal cells provides important information regarding the most useful locations for implanting transplanted tissues. In addition, in vitro studies such as the one above provide a stable baseline against which normal ocular development can be compared. Tracking grafted retinal tissue migration could be an incredibly useful tool not only for retinal and optic nerve restoration but also for the study of the progression of specific neurodegenerative ocular diseases. By following the pattern of normal growth in cultured organs, then simulating ocular disorders within the cultured eyes, we may over time be able to speculate efficiently about the specific effects and causes of these disorders. Research in this area could be important for correct treatment and for the creation of effective pharmaceuticals for ocular diseases and disorders.

Enhanced vision, as well as the observation of retinal graft growth in culture, has provided optometrists with invaluable information. However, it is important to see both survival and normal growth patterns in living subjects. In 1997, Sharma and Ehinger demonstrated successful retinal graft proliferation in non-human subjects (5). Rabbits were implanted with fragments of embryonic retinal tissue and monitored for tissue survival and efficient proliferation. The donor retinal tissue organized into rosettes after 1 day of transplantation. In addition, these donor tissues were observed to continue proliferating in the host eye until they formed patterns that resembled the normally developing retina (5). It was concluded from these observations that the factors essential for normal proliferation in the embryonic retinal cells were preserved during transplantation (5). It was additionally suggested that the embryonic cells, when transplanted into a host eye, followed a course of development corresponding to normal postnatal development (5).

A later study built on these findings and suggested that the thickness of the neuro-retinal tissue is an important growth factor. Using both partial- and full-thickness grafts of adult and embryonic retinal tissue, survival in the subretinal region was observed in transplanted rabbits. It was found that 11 out of 13 full-thickness transplants revealed straight, laminated transplants with correct polarity and all normal retinal layers present (2). Overall, embryonic tissues had the highest survival rates when grafts were thick; the adult tissue grafts showed poor survival at full thickness (2). This research demonstrates the importance of graft thickness as one factor in successful transplantation survival (2). In addition, it suggests a possible advantage of embryonic tissue over adult retinal tissue in transplantation (2).

Although the successes of retinal grafting have been a cornerstone of optometric research, the continued success of this research is uncertain. This is due to the specialized embryonic cells necessary for transplantation. With infant mortality at an all-time low and currently debated laws working to make late-term abortion illegal, I question whether this research will have an opportunity to be developed further in the future. One alternative, however, is to use adult intrinsic tissue fragments for implantation. Adeghate et al.'s successes with both the survival and functioning of these tissues suggest another means for the restoration of dead or damaged ocular tissues and nerves (1). The use of the anterior chamber of the eye in their experimentation is important to note, as is the use of non-embryonic tissue. It is important to keep in mind that adult tissues did survive (2), though at a low rate compared to embryonic tissues, and that the subretinal area has been claimed to be a tissue-friendly environment suitable for the transplantation of almost any tissue fragment (2, 3 & 5). With these encouraging factors, the future of transplantation research seems promising.

While surgeons have not gone as far as the total ocular transplantation depicted in "Minority Report," they are making major strides toward success. The successful incorporation of retinal grafts in the eye has given surgical optometrists an opportunity to reverse some of the symptoms of ocular diseases once thought irreversible. By using this tool, many patients may be given the opportunity to regain one of their most valued senses: sight.

References


1)Brain Research Protocols

2)Science Direct: Experimental Eye Research


3)Science">Direct: Developmental Biology


4) Radtke, N. D., Aramant, R. B., Seiler, M., & Petry, H. M. (1999) Preliminary report: indications of improved visual function after retinal sheet transplantation in retinitis pigmentosa patients. American Journal of Ophthalmology 128:3: 384-387


5) Sharma, R.K. & Ehinger, B. (1997) Cell proliferation in retinal transplants. Cell Transplantation 6; 2: 141-148


What About Elizabeth Smart?
Name: Marissa Li
Date: 2003-05-08 12:27:09
Link to this Comment: 5634

What happens to abducted children when they are returned to their families? The mental state of a child is, as it is, extremely variable. Adding the stress of abduction and eventual return to one's family is something very significant in a young life. Is it actually possible to remedy an abduction experience, and if so, what is the healthiest way to do so? What if a member of the family carried out the abduction? Does that change the form of mental rehabilitation that must be provided to children who have experienced trauma due to forced removal from their homes? Another question at hand is the idea of "Stockholm Syndrome," which basically holds that the abducted actually come to worship or defend their abductor(s). What are the specific effects on the brain when a child is abducted? Is there a physical change that comes from possible brainwashing, and what about post-traumatic stress?

In the case of Elizabeth Smart, her father insisted that her abductors had brainwashed her. He claimed that "deprogramming" was necessary, and that though the family was more than grateful to get their child back, there was a major concern for her mental state and her reaction to her experiences in the nine months in which she was missing (4). Because every case of child abduction involves different experiences and motivations, the process of recognizing a child's issues and attempting to sort them out is somewhat imprecise. However, there are a variety of techniques that medical and legal experts use to attempt to sort out what is going on in the mind of a child who has been abducted. In the case of child abduction by a stranger, or at least a person outside of the close family or friends, there is a common inclination to suppose that the child has been brainwashed, perhaps sexually abused, and has been living in unknown conditions. This of course requires family counseling, individual and family therapy, and a variety of other techniques devised to allow the child to vocalize and come to terms with the experience. Like post-traumatic stress disorder related to war trauma or any extended period of intense, unfamiliar conditions, abduction can cause a child to be silent about the experience and withdraw from normal everyday interactions. The child could plausibly "re-experience the ordeal in the form of flashback episodes, memories, nightmares, or frightening thoughts, especially when they are exposed to events or objects reminiscent of the trauma" (2). People with the disorder tend to feel emotional numbness, bouts of depression, the inability to sleep, anxiety, feelings of guilt, and general fears of repeating the experience that they have just been through (2).

Physical remnants of post-traumatic stress disorder include actual alterations in brain activity in the hippocampus, which is critical to memory and emotion. These lapses in the system may cause flashbacks, which many abducted children deal with even weeks after they return home. Naturally, those mentally affected by the abduction may also experience changes in hormonal levels and other chemical imbalances, causing them not only to relive the events but also to carry the effects into their everyday state of mind and interactions with others (2). To diagnose and treat a victim of abduction, therapy and familial support may not be the only answer. The time it takes some cases to really surface could be anything from hours to days to years, without even knowing whether or not the child has come to terms with the experience. In some cases, parents would like to think that the best remedy for their child's recovery is for the child to forget the incident altogether; however, if the stress of the experience actually causes a chemical alteration, then the "forcefully forgotten" experience may just surface later in life. Even without the impact of something like abduction, the young child's brain is in a constant state of absorption and chemical adjustment. The trauma that abduction causes can seriously scar a young person for life, almost like an imprint on the nervous system, a constant reminder, whether that "scar" is dormant or not.

If a member of the family is the abductor, another emotional trauma issue comes into play. While the idea of a close family member or friend may initially feel more comfortable to the child victim or the rest of the family left behind, in a sense, for some children, it can be even more emotionally distressing. Because family abduction is not technically a "crime against the child," the system does not have as many rights to interfere on behalf of the child and prosecute the parent involved. This of course is dependent upon circumstance, like whether or not the abductor was a consistent member of the child's life or just a floater, which then classifies them more closely with abduction by strangers (3). Abduction by family members is an interesting dichotomy. The child should feel safe with a parent or family member, but instead is put in a position that compromises their trust of a familiar face, because the notion of removal from one's normal lifestyle is discomforting to any child. If a child cannot trust their family, then whom can they rely on? That in itself would seem even more frightening, and more of a hindrance to forgetting or getting over one's abduction experience, than if it had taken place among strangers.

Abduction can take more than one form. A child can fear their abductors, be sexually assaulted by them, or may even grow to care for them, especially over large spans of time. Many experts argue that "Stockholm Syndrome" can be defined as victims feeling cared for by their abductors and therefore defending their captors (1). "The term, Stockholm Syndrome, was coined in the early 70's to describe the puzzling reactions of four bank employees to their captor. On August 23, 1973, three women and one man were taken hostage in one of the largest banks in Stockholm. They were held for six days by two ex-convicts who threatened their lives but also showed them kindness. To the world's surprise, all of the hostages strongly resisted the government's efforts to rescue them and were quite eager to defend their captors. Indeed, several months after the police saved the hostages, they still had warm feelings for the men who threatened their lives. Two of the women eventually got engaged to the captors" (1). This sort of incident is actually common among abduction cases, especially those involving sick obsessions with the need or want of children that turn into the forceful removal of a child from their home. "Stockholm Syndrome" is actually diagnosed as a survival mechanism (1). Many theorists believe it to be more common in women due to an inclination towards safety and codependency, but this of course is simply speculation. On a general level, the whole notion of "Stockholm Syndrome" appears to be conducive to the idea that it is brainwashing, or at least an effective means of control over the captives.

In the case of Elizabeth Smart, the media attention surrounding her return has been surprisingly vague. Her parents fear that more attention will be a constant reminder of the nine months in her past, and they desire time alone with her and the rest of the family so that she can be resituated. Whether or not this is the healthiest option for recovery or for coping with the trauma and/or brainwashing that Elizabeth has faced is up in the air. If her family stays tight and does not allow Elizabeth professional help, is she going to repress and temporarily forget those nine months? Is it too soon to delve into her life on the road with two strangers for nine months, in odd garb and with unusual habits? If Elizabeth is not drained of information, there is the possibility that she will retain thoughts, ideas, and a lifestyle instilled in her during those nine months that may not surface until years later. Children are sponges; everything they absorb during their childhood will impact them later in life. The issue of "Stockholm Syndrome" is also pertinent to her case. According to reports, there is no confirmation that Elizabeth was sexually abused, and there is no evidence that she was desperately seeking an escape from her captors. Upon sighting, she appeared calm and relaxed, which makes it seem as though her captors were not persons she feared. Of course, since "Stockholm Syndrome" can also be a defense mechanism, perhaps Elizabeth had fallen into that pattern. As her family copes with the trauma, she must as well. There is no telling what will pop up in her future in terms of memory flashbacks, fanatical religious or non-religious beliefs or practices forced upon her by her captors, and possible future findings of sexual assault or general mental antagonism. What happens now is entirely up to Elizabeth and her family. If they decide to shelter their daughter from her experience, that may help her current emotional state; but looking toward the future, the fact stands that nine months away from home and family, with strangers who put her in unfamiliar locations, garb, and environments, is going to have a large impact on her life. Coming straight from this experience, whether she was brainwashed, exploited, assaulted, or none of these, it is going to have major effects on her regardless. Elizabeth is too young to ignore the trauma and too old to misinterpret what happened to her. If she is to "recover," it will be an experimental process of determining her needs based upon her experiences. Hopefully any chemical imbalance in her brain and nervous system will subside, and she can be a fully functioning, regular child again, with only love for and from her family and the surrounding persons involved in her life.

1) Exploring Gray's Anatomy, Societal Stockholm Syndrome

2) National Institute of Mental Health

3) Reunite International

4) Time Magazine, Post Trauma: Reclaiming a Child, http://www.time.com/time/archive/preview/from_redirect/0,10987,1101030324-433208,00.html


A Question Of Speech
Name: Amelia Tur
Date: 2003-05-08 23:17:24
Link to this Comment: 5638


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Stuttering, also known as stammering, is a speech disorder that belongs to a larger class of disorders known as disfluent speech. Two other disorders grouped in the category of disfluent speech are cluttering, which is distinguished by rapid, irregular speech, and spasmodic dysphonia, a voice disorder whose symptoms include "momentary disruption of voice caused by involuntary movements of one or more muscles of the larynx or voice box." (1)

Stuttering is characterized by the disruption of the normal flow of speech by frequent repetitions or prolongations of speech sounds, syllables, or words, or by a person's inability to begin to say a word. Rapid eye movement, tremors of the lips and/or the jaw, or other struggle actions are all behaviors that a person who is stuttering may use to help himself begin to speak. The degree of stuttering can range from mild to severe, and stuttering can take many different patterns in different people. Many times, a stuttering person will substitute another word in conversation for one that they have difficulty saying. Oftentimes, this new word is less appropriate than the original word, or not appropriate at all. This sometimes leads people speaking with the person who is stuttering to think that he is less intelligent than he actually is. Speaking before a group of people or talking to another person on the telephone are two situations that can make stuttering worse. However, behaviors such as speaking when one is alone, speaking in unison, whispering, talking to pets, speaking to small children, and singing are thought to help reduce stuttering. (1) (2) (3) (5)

It is generally estimated that over three million Americans of all ages stutter. While individuals of all ages can stutter, stuttering occurs most frequently in children between the ages of two and six who are just beginning to acquire language; boys are three to four times more likely than girls to develop a stutter. Therapists acknowledge that adults with chronic stutters are rarely cured, but stuttering can be controlled through management techniques and long-term practice. While many children may acquire a stutter, most outgrow it, and it is estimated that less than one percent of American adults have a stutter. Eighty percent of this number are men. However, the National Stuttering Association claims that at least one quarter of children who stutter do not "develop out of it." The NSA goes on to say that many pediatricians, teachers, counselors, and even some speech pathologists advise parents to delay speech therapy until a later stage, at which point it might prove difficult to prevent life-long stuttering in children. Because of this, the NSA recommends that parents consult with a speech pathologist if their children show early signs of stuttering. (1) (2) (3) (5)
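
A quick back-of-the-envelope check, sketched below in Python, shows how these figures fit together. The population total is an assumption (a rounded figure for U.S. adults circa 2003), not a value from the cited sources.

    us_adults = 210e6      # assumed number of U.S. adults, circa 2003
    adult_rate = 0.01      # "less than one percent" of adults stutter
    male_share = 0.80      # "eighty percent of this number are men"

    adults_who_stutter = us_adults * adult_rate
    men_who_stutter = adults_who_stutter * male_share
    print(f"about {adults_who_stutter / 1e6:.1f} million adults stutter, "
          f"roughly {men_who_stutter / 1e6:.1f} million of them men")
    # ~2.1 million adults at most; children who stutter would account for
    # much of the remainder of the "over three million Americans" estimate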

When I researched this topic, most of the information sites that I encountered stated that scientists are unsure of the exact causes of stuttering and listed several probable causes for the disorder. It is almost certain that stuttering has different causes in different people. The Stuttering Foundation of America site stated that it is highly likely that the initial cause of stuttering may be very different from what causes the stuttering to continue or possibly get worse. The site also stated that the cause of stuttering may arise from a combination of different factors, not just one. (1) (3)

The National Institute on Deafness and Other Communication Disorders information page stated that there most likely is a genetic cause for many cases of stuttering, but scientists do not fully understand it. A person who has a parent who stutters is twice as likely to stutter as a person who does not. The site goes on to state that while scientists have long known that stuttering can run in families and that some forms of stuttering are very possibly hereditary, no gene or genes for stuttering have yet been found. (1) (3) (5)

A number of sites mentioned that a common cause of stuttering is developmental difficulty. As children are in the process of gaining speech and language, their speaking skills may not mature quickly enough to meet their verbal demands. Therefore, stuttering occurs as the child searches for the appropriate word. This type of stuttering is generally outgrown as the child's language skills become more developed. However, early school experiences may heighten the incidence of stuttering because of the need for rapid, fluent speech. (1) (2)

Neurogenic stuttering arises from communication problems between the brain and nerves or muscles. In this type of stuttering, the brain is unable to properly synchronize the various components needed to speak. Stuttering of this type can occur after a stroke or other brain injury as well. (1)

Stuttering can also be classified as originating in the mind or in mental activities associated with thought and reasoning. These causes are now known to account for only a small number of stuttering cases, although they were once thought to be a main cause of stuttering. While people who stutter may develop emotional difficulties such as fears of speaking on the telephone or meeting new people, these fears most often arise from experiences with the stutter itself and are not the cause of the stutter. People who have mental illnesses or have experienced severe mental or emotional traumas may develop psychogenic stutters. (1) (3)

As with the causes of stuttering, there appears to be no one tried-and-true treatment for stuttering that is accepted by all the groups whose websites I visited. The most generally accepted method of treatment for stuttering is speech therapy, and most traditional methods of treatment are aimed at helping people control their speech so that they do not stutter at all. Many of the therapy programs that are currently popular focus on methods of unlearning faulty speech or relearning how to speak. The National Stuttering Association's website states that while the association's members have found that not all treatments work for everyone, and that a diversity of treatments can work, treatment through speech therapy was the best option for the majority of people. The NSA goes on to recommend that people seeking treatment go to an ASHA-certified speech-language pathologist with clinical experience in treating stuttering. The National Institute on Deafness and Other Communication Disorders suggested on their information site that parents seek therapy for their children with developmental stuttering if the child stutters for longer than six months or if the stutter is accompanied by struggle behaviors. One of the most important aspects of seeking speech therapy is finding the right therapist for the person with the stutter; the patient must be comfortable with the therapist for the therapy to have its full effect. Many people who have successfully undergone treatment still feel fear and shame that they might still stutter; however, many programs address these feelings directly. (1) (2) (4)

The NIDCD also recommends that parents restructure the home environment in several ways to help their children with developmental stuttering overcome the disorder. The first of these is providing a relaxed home environment for the children to speak in, and possibly setting aside time free of distractions for the parent and child to speak together. Also, parents are advised not to criticize the child's speech or react negatively to the child's disfluencies. Parents should also not require the child to verbally perform for other people. The child should be listened to attentively when he or she speaks, and parents should wait for the child to say the intended word rather than completing the child's thoughts. Parents ought to speak in a slow and relaxed manner; often the child will imitate this way of speaking. Finally, parents should talk openly with the child when he or she brings up the topic of stuttering. (1)

Interventions with medications and electronic devices have also been used in therapy for stuttering. However, the side effects of medications that affect brain function often make them difficult to use over the long term. Devices that help a person who stutters control his or her speech are often bothersome in many speaking situations. Therefore, the person who stutters will often not use the device in those situations. (1)

Unlike the sources on the causes of and treatments for stuttering listed above, the National Center for Stuttering believes that "individuals destined to become people who stutter are born with the tendency to lose varying amounts of inhibitory control over their vocal cords and that this occurs only when they intend to speak under conditions of sufficient vocal cord tension." As I understand the information presented on the National Center for Stuttering's website, they believe that all people who stutter have a tension center located in the muscles surrounding their vocal cords, as other people do in their shoulders or lower backs. Because of this tension center, when these people become stressed they have difficulty speaking, which leads them to stutter. (6)

The treatment plan of the National Center for Stuttering focuses on reducing this tension in the vocal cords through a variety of methods not usually used in the treatment of stuttering. Intent Therapy, which dilates and relaxes the vocal cords, and Low Energy Speech, which "reduces the tension on the vocal cords by lowering the air pressure beneath them," are used to reduce the pre-speech tension on the vocal cords. The treatment plan uses three stress-reduction techniques: Education and Demonstration shifts the stutterer's locus of control while speaking from external to internal, Recalibration is used to reduce the speed demands from listeners, and the Bath Tub Technique is "an auto suggestion technique for bringing the subconscious mind along as a willing ally in the process of changing emotionally charged self images." Finally, three Nutritional Approaches are used: vitamins and minerals, which have been shown to reduce muscle tension in some individuals; a self-administered test to eliminate "Provocative Foods," which lead to muscle tension; and a homeopathic preparation to reduce chronic and acute stress in some individuals. (6)

After an initial two-day workshop, the patient in this program enters the "basic training" phase, in which he goes through a set of twenty-four exercises over a twelve-week period, recording part of the exercises onto tape with a "special" microphone to send to his therapist. The patient speaks with the therapist over the telephone, and either joins a local support group or practices with other patients over the telephone as well. During the second month of this phase, the patient uses a device called "The MotivAider" that sends a private reminder for the patient to use all of the techniques learned in all situations. The final phase of the treatment involves "eliminating the scanner," which the National Center for Stuttering defines as looking ahead in situations to see if there could be elements that could cause stress. They view the scanner as a major cause of stress and tension in the vocal cords. (6)

I feel very skeptical about the National Center for Stuttering's explanation of stuttering and the program it promotes for overcoming it. While I recognize, from the other sites I looked at in my research, the validity of some of the information included on their website, I feel that their explanation of stuttering and their treatment plan are very flawed. I have more faith in the sites that admit to being unsure of the cause of stuttering, that state that there are several different explanations for stuttering, and that state that many cases of stuttering might have multiple causes. It seems that the National Center for Stuttering is attempting to "cash in" on a group of people who very much want to be rid of their condition. Because the National Center for Stuttering provides one easy explanation for all cases of stuttering, their explanation would seem more appealing to a person who stutters than that of a group which states that there are multiple causes of stuttering, admits uncertainty about which one the sufferer has, and says that there is no easy or permanent cure. All in all, the National Center for Stuttering appears to be a group that is exploiting a very at-risk population.


References

1) National Institute on Deafness and Other Communication Disorders, National Institute on Deafness and Other Communication Disorders information page on stuttering.

2) National Stuttering Association, National Stuttering Association's information page on stuttering.

3) Stuttering Foundation of America FAQ, A Frequently Asked Questions page on the website of the Stuttering Foundation of America.

4) Stuttering Foundation of America Treatment Page, A page outlining treatment and choosing a speech therapist on the page of the Stuttering Foundation of America.

5) Stuttering.Net, An online guide to stuttering.

6) National Center for Stuttering, Homepage of the National Center for Stuttering.


Aphasia: Robber of Language
Name: Grace Shin
Date: 2003-05-09 01:43:31
Link to this Comment: 5639


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"My most valuable tool is words, the words I can now use only with difficulty. My voice is debilitated-mute, a prisoner of a communication system damaged by a stroke that has robbed me of language... I desperately wanted to make sense of the confusion, but every time I tried to express myself nothing came out. I was forced to remain silent and could not follow either verbal or written commands. Words sounded to me like a jargon, as though the people around me spoke a foreign tongue..." (1)

Everyone experiences those moments when words cannot be found to express the feelings felt at that moment. For example, a senior student from Ghana wanted to go home for winter break but could not afford it due to the incredible cost of traveling to Africa. So when her classmates, friends, and professors all got together and gave her the $3500 plane ticket home as well as a $500 check for her expenses during the trip for her birthday, she stood without words, only tears of joy expressing her heart. These are rare moments... but what if that feeling of having no words stayed? What if all you wanted was a pillow from the closet but could not communicate the words to ask for it? "The words are on the tip of my tongue..." is a phrase that would be used by many people who suffer from the language disorder known as aphasia.

The case of aphasia is unlike the paralysis situation of Christopher Reeve, where the issue was that the "I-function" was still able to communicate despite the nervous system's behavior. Someone with aphasia can want to say something or want to describe something, knowing full well what it is that he or she wants to say or describe, but will not be able even to communicate the idea to another. Language is key in understanding our brain and behavior, and many aspects studied in our class assumed that one communicates something via language or gesture. However, the case of aphasia is interesting in that the one thing taken for granted is no longer there to be assumed. One of the privileges that we as Americans have is our right to freedom of speech. We often hear people note, "You can take away everything else, but you can't take away my words!" Even in class, we discussed how, despite the noted difference between the actions of the I-function and the nervous system as a whole, we are still able at least to say what we want and describe what we want in words. But what if this is not possible? Patients suffering from aphasia have been robbed of the essence of our identity, the one thing that makes humans unique: our ability to use speech as a form of communication. Despite the need for a cure and an effective treatment, there has been much hesitation in the past to advance the methods used for aphasia. However, in the past few years the treatment of aphasia has been developing in a direction away from the traditional method of speech therapy, and the therapies currently being explored will be discussed below.

Aphasia currently affects one million people in the United States. Most people, myself included, have not even heard about this disorder until they meet someone with it. Aphasia is a disorder resulting in an impairment of language; 90-95% of the time it results from damage to the left hemisphere. (Brain areas involved in language include Broca's area, primary motor cortex, supramarginal gyrus, angular gyrus, Wernicke's area, and the primary auditory area. (2)) In varying severity, the disorder affects the production or comprehension of speech and the ability to read or write. Some people with aphasia may be almost impossible to communicate with, while others just have difficulty with names. Though the disorder mostly affects older people who have suffered strokes, it has also been seen in young children and young adults who have suffered various head traumas, infections, or brain tumors. (3)

Some of the varieties of aphasia are global aphasia, Broca's aphasia, mixed non-fluent aphasia, Wernicke's aphasia, and anomic aphasia, along with others. These types were determined from the different patterns of symptoms corresponding with certain locations of brain injury. Global aphasia is the most severe form and results in patients who can produce few recognizable words and understand little or no spoken and written language. This may be temporary for patients who receive treatment immediately following a stroke, but with greater brain damage, severe and lasting disability may result. Other forms of aphasia may be less severe, such as Broca's aphasia, a form in which speech output is severely reduced and limited to short utterances. Understanding speech may or may not be an issue, and the patient may be able to read. Patients with anomic aphasia are always saying, "It's on the tip of my tongue." They find themselves unable to supply the words for the things they want to talk about, an inability demonstrated in both their speech and their writing. However, they can read adequately.

The causes of aphasia in the brain are still in question, and there is no known cure, which can be frustrating for both the affected and their family and friends. However, some steps taken to retrieve what was lost include speech therapy, which is the most traditional method, as well as drugs and psychological therapy. Often people with aphasia have multiple aspects of their communication impaired, while some channels are still left accessible for exchanging information. Treatments therefore need to target the remaining functional channels and enhance their use. (4) Currently, it is exciting to see the new therapies being explored to treat aphasia. (5) Previously the main treatment has been via education, meaning the patient is re-taught to make words and say them, and re-taught to write and make sentences, through speech therapy. However, the problem itself, the damage in the brain, has not been attacked for repair or replacement. Merely trying to re-educate people to speak and write can be a lifetime process, especially once the brain is damaged. But the modern methods of treatment have opened new doors and are awaiting exciting results. The conclusions are as yet elusive and not quite definite; however, in the past two years, extensive studies have been undertaken as steps toward understanding this mysterious disorder called aphasia. Many researchers are studying methods of aphasia treatment that supplement the traditional method of speech therapy.

Pharmacological approaches, such as neurotransmitter therapy, which involves drugs intended to improve the functioning of the damaged brain, have received much enthusiasm as potential methods. Bragoni and colleagues at the University of Roma la Sapienza, in Italy, studied the combined effects of bromocriptine (BR) and speech therapy and found that a high dosage of bromocriptine improved performance on a number of language skills in the late recovery of non-fluent aphasic stroke patients. They performed a double-blind study with a high dosage of BR, prescribed according to a dose-escalating protocol, with evaluation comprising clinical data, relatives' impressions, and language assessments. The study was divided into phases including inclusion, a language re-test to evaluate the stability of the aphasia, placebo and speech therapy, BR and speech therapy, BR alone, and washout. Compared to the initial assessment of the patient, a significant improvement was observed in tests of dictation, reading comprehension, repetition, and verbal latency. (6) Another study, by Gold et al., used bromocriptine and found that it improved the verbal performance of their subject. (7)

In addition, a number of studies suggest that drugs which increase the release of norepinephrine promote recovery when administered late (days to weeks) after brain injury in animals. A small number of clinical studies have investigated the effects of the noradrenergic agonist dextroamphetamine in patients recovering from motor deficits following stroke. Walker-Batson et al. found that amphetamines paired with regular speech/language therapy facilitated language recovery in aphasic patients shortly following a stroke. In a prospective, double-blind study, 21 aphasic patients with an acute nonhemorrhagic infarction were randomly assigned to receive either 10 mg dextroamphetamine or a placebo. Patients were entered between days 16 and 45 after onset and were treated on a 3-day/4-day schedule for 10 sessions. Thirty minutes after drug/placebo administration, subjects received a 1-hour session of speech/language therapy. The Porch Index of Communicative Ability was used at baseline, at 1 week off the drug, and at 6 months after onset as the dependent language measure.

They found that although there were no differences between the drug and placebo groups before treatment, by 1 week after the 10 drug treatments ended, the dextroamphetamine group showed significant improvement, even after correcting for initial aphasia severity and age. At the 6-month follow-up, the dextroamphetamine group again showed increased scores compared to the placebo group, though not as significantly as after the first week. They concluded that neuromodulation with dextroamphetamine, and perhaps other drugs that increase central nervous system noradrenaline levels, may facilitate recovery when paired with focused behavioral treatment. (8)
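
For readers unfamiliar with how such a randomized, double-blind comparison is evaluated, the sketch below shows the basic logic in Python. The scores are hypothetical numbers I made up for illustration; they are not the actual Walker-Batson data, and the study's real analysis also corrected for aphasia severity and age.

```python
# Illustrative sketch only: hypothetical change scores, NOT the data
# from Walker-Batson et al. Each value stands for one patient's gain
# on a language measure (1 week post-treatment minus baseline).
from scipy import stats

drug_group    = [14.0, 9.5, 12.0, 16.5, 11.0, 13.5, 10.0, 15.0, 12.5, 9.0, 14.5]
placebo_group = [6.0, 8.5, 5.0, 7.5, 9.0, 4.5, 6.5, 8.0, 5.5, 7.0]

# Two-sample t-test: is the mean improvement in the drug group larger
# than in the placebo group by more than chance would allow?
t_stat, p_value = stats.ttest_ind(drug_group, placebo_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because assignment was random and neither patients nor testers knew who received the drug, a low p-value in a comparison like this one is evidence that the drug, rather than expectation or chance, produced the extra improvement.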

Other approaches to aphasia, such as stem cell infusion and neuronal transplantation, are currently being explored, all aimed at improving the function of the brain and making it more responsive. (9) Studies using neural regeneration seem to open new trends in aphasia treatment. By implanting cells into a living brain, researchers hope to facilitate recovery from brain damage. Current results show that this promotes the restoration of brain tissue after brain injury, but because the field of brain cell transplantation is still in its infancy, much controversy surrounds its use in clinical settings. In addition, many aspects of the recovery process have been explored, including neurological, behavioral, cognitive, linguistic, and psychosocial aspects. Any given recovery process has components of all of these, so analyzing any particular recovery is difficult, since multiple variables must be considered and connected together.

Many of the patient studies described above give much promise and hope to aphasic patients. However, it is important to note that, once again, these are only the beginning stages of many new possibilities, and thus the completion of additional studies with larger subject groups is necessary before coming to any broad conclusions regarding the efficacy of these experimental treatments. Aphasia therapy still remains a difficult challenge despite the marvelous availability of information from technology such as functional magnetic resonance imaging (fMRI), x-ray computed tomography (CT), and positron emission tomography (PET). The results described above and in the literature show that there are indeed methods, beyond traditional speech therapy, that may be available for treating aphasic patients, but the real answers can only come with time and the advancement of modern understanding.


References

1) NAA Article "The Words I Lost", Testimony of an aphasic patient

2) PowerPoint Presentation on Language and the Brain

3) American Speech-Language-Hearing Association, Good source of information for learning about Aphasia

4) Aphasia Hope Foundation, Frequently Asked Questions about Aphasia

5) The Future of Aphasia Treatment, Brain and Language, 2000, 71, 227-232

6) Bragoni, M., Altieri, M., Di Piero, V., Padovani, A., Mostardini, C., Lenzi, G.L. (2000). Bromocriptine and speech therapy in non-fluent chronic aphasia after stroke. Neuroscience, 21, 19-22.

7) Gold, M., VanDam, D., and Silliman, E. R. (2000). An open-label trial of Bromocriptine in nonfluent aphasia: A qualitative analysis of word storage and retrieval. Brain and Language, 74, 141-156.

8) Stroke Online, Walker-Batson, D., Curtis, S., Natarajan, R., et al. (2001). A double blind, placebo controlled study of the use of amphetamine in the treatment of aphasia. Stroke, 32, 2093-2098.

9) The Future of Aphasia Treatment, Brain and Language, 2000, 71, 227-232


Ecstasy: A dream come true?
Name: Christine
Date: 2003-05-09 15:58:54
Link to this Comment: 5641


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

You're stressed and you really need to take some down time, to do anything but work right now. Some of your friends mention that they feel like going out dancing, to a "rave" perhaps. They've talked about how much fun they have had there in the past, so you agree to join them. However, it is not until you are offered some rumored feel-good drug called "ecstasy" that you understand why your friends feel so good after coming back from dancing. Ecstasy, also known as MDMA or 3,4-methylenedioxymethamphetamine, is often referred to as a "love drug" because of its ability to give people the feeling that they are in love with everyone and everything. By artificially providing euphoric contentment, ecstasy allows people to build self-esteem and overcome the awkwardness of their normal lives.

In overcoming excessive self-awareness at raves, people feel uninhibited and dance freely for extended periods of time. The combination of music and dancing together produces an exhilarating trancelike state (1). Although many probably find the drug liberating and enjoy letting go, others feel uncomfortable being without their normal defenses. Users may come to bitterly regret having revealed their insecurities or longings while under the influence of ecstasy, and certain realistic insights can be extremely unpleasant.

What is it about ecstasy that has this liberating ability, often promoting a good feeling among users? The chemistry behind ecstasy's influence is two-fold in that it affects both serotonin and dopamine levels, ultimately lowering them overall. Serotonin and dopamine are in part responsible for feelings of euphoria and happiness. While ecstasy decreases the amount of dopamine that can be released, it forces the brain chemical serotonin, which is related to memory, pleasure, mood, and sleep functions, to be released in abnormally large amounts. In this process, the neurons that store serotonin are deformed or destroyed. Scientists have found that these injured neurons can regrow, but they may grow back abnormally or in the wrong locations (2). When ecstasy wears off, the overflow of serotonin reverses into a serotonin deficiency. Without enough serotonin it is difficult to sleep, learn, remember, or feel happy.

Recently, researchers at London Metropolitan University have found that people who take ecstasy are more likely to suffer depression compared to non-users and even people who use other drugs (3). The study was based on almost 600 working professionals who were split into three groups: those who had taken only ecstasy, those who took other drugs, and those who had never taken drugs. The study essentially revealed that depression levels among ecstasy users were linked to the amount of drugs taken. People who abuse ecstasy are more likely to become depressed, but the correlation could also mean that ecstasy use is an effort to compensate for an already depressed state of mind. While the latter might suggest a dependency on the chemistry behind the drug, the reality is that users become addicted to the euphoric feelings they experience rather than to an addictive chemical within the drug itself.

There is also a good deal of data showing that ecstasy damages the neurons that release serotonin, which directly affects depression levels (3). Neurons normally release serotonin in response to electrical impulses, which results in the normal activation of serotonin receptors, keeping psychological and physiological function in check. The addition of ecstasy causes a sustained increase in the amount of serotonin in the synaptic space, leading to sustained activation of more serotonin receptors (3). This can produce an elevated mood similar to the action of antidepressant drugs. Eventually, the serotonin neurons cannot make serotonin fast enough to replace what was lost. So once ecstasy is gone from the body, less serotonin is released with each electrical impulse and fewer serotonin receptors are activated, producing depression-like feelings and anxiety. This implies that although depression may not be based solely on the chemical interactions in the brain, it has a good amount to do with them. It also suggests that brain and behavior are interrelated, since a physical reaction occurring in the brain produces a visible change in the individual's behavior.
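
The release-then-depletion dynamic described above can be made concrete with a small toy simulation. This is purely my own illustration with made-up numbers, not a model from the cited sources: a store of serotonin is emptied at a high rate while the drug is active and at a low baseline rate afterwards, while resynthesis refills the store only slowly.

```python
# Toy model of the serotonin dynamics described above (illustrative
# parameters only, not from the cited sources). MDMA forces stored
# serotonin into the synapse faster than the neuron can resynthesize
# it, so after the drug wears off, normal release falls below baseline.

store = 100.0          # serotonin stored in the neuron (arbitrary units)
synthesis_rate = 1.0   # units resynthesized per hour (assumed constant)

for hour in range(24):
    on_drug = hour < 6                    # assume MDMA is active for 6 hours
    fraction = 0.30 if on_drug else 0.05  # fraction of the store released/hour
    release = store * fraction
    store = min(100.0, store - release + synthesis_rate)
    print(f"hour {hour:2d}: released {release:5.1f}, store {store:5.1f}")
```

Running this, release starts around 30 units per hour, but once the drug wears off, the depleted store yields well under the pre-drug baseline of 5 units per hour, and it takes many hours of resynthesis to climb back: a crude picture of the post-ecstasy "low."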

Medium-term medical dangers include contracting hepatitis or jaundice after taking MDMA several times (1). These are not supported by documented research, but physicians have considered them to be possible effects of ecstasy. Ecstasy is available in many forms, one of which can be injected, as it has a low enough melting point. This may account for the contraction of hepatitis, as infection often occurs through blood-to-blood contact. Contaminants in the drug might also cause adverse effects in the user. Any combination of drugs leaves room for uncertain and unforeseen results, keeping open the possibility that negative effects might occur.

Other effects that users look out for are more short-term. As I have mentioned previously, this type of drug is often seen at raves. People take ecstasy while dancing for hours on end in very hot, humid venues without sufficient drinking water. These conditions can easily cause overheating or heatstroke even without the exacerbation of a drug. Those who develop high temperatures especially easily are more susceptible to these dangers.

Heatstroke is a well-known cause of death, but in other situations it only affects people who are pushing themselves to the limit or are unable to escape from the heat. What I find most interesting about ecstasy-related deaths is that they occur because people appear to make no real effort to cool down. This has been explained by ravers being in a trancelike state, but experiments with rats and mice show that overheating may be a more direct effect of the drug (5). Researchers have examined the way that rats react to ecstasy in very hot conditions. Without MDMA, the rats did their best to cool down by becoming less active and losing heat through their tails. But on the drug, the rats became more active and did not attempt to lose heat. It is as if the rats had lost the sense of being too hot, until they died of heatstroke. Likewise, rats in a cold environment made no attempt to keep warm when on MDMA. It has also been seen that ecstasy is more toxic among rats in crowded conditions than among those in isolation (5). Though the effects seen in rats cannot necessarily be related directly to those seen in humans, this phenomenon may help to explain why ravers die of heat exhaustion. They simply cannot feel that they are becoming overheated, and so cannot realize that their bodies need to cool down. Thus, the drug has the ability to override biological functions as well as one's mental capacity.

How might overheating kill someone, though? Body temperature is something that needs to be controlled very carefully for organisms to be able to function. When taking ecstasy, or any other stimulant drug, body temperature will rise. Likewise, when the drug is taken in a hot environment like a rave, that temperature will rise even more. Furthermore, dancing in such a hot atmosphere increases temperature even further. With body heat raised to these very high levels, there is an obvious risk of heatstroke. When someone's body overheats, fluid is lost. Dancing for hours on ecstasy at a rave can cost someone pints and pints of fluid that the body needs. At a crowded indoor rave, you could lose up to 6 pints in 6 hours (6). This fluid needs to be replenished. In fact, if users don't drink enough water, their bodies can undergo severe dehydration, leading to coma, convulsions, and eventually death (7). At the other extreme, in rare instances ecstasy causes people to drink water compulsively. Too much water can lead to hyponatremia, a potentially fatal condition in which sodium levels become dangerously low.
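
To put the "6 pints in 6 hours" figure in perspective, here is a quick back-of-the-envelope calculation. It is my own arithmetic, assuming US pints and a 70 kg person; neither assumption comes from the cited sources.

```python
# Rough scale of the fluid loss described above (my own arithmetic;
# US pints and a 70 kg body are assumptions, not from the sources).

PINT_L = 0.473                        # liters per US pint
body_mass_kg = 70.0                   # assumed body mass
body_water_l = 0.60 * body_mass_kg    # rough rule: ~60% of body mass is water

loss_l = 6 * PINT_L                   # "6 pints in 6 hours"
pct = 100 * loss_l / body_water_l

print(f"fluid lost: {loss_l:.1f} L ({pct:.1f}% of total body water)")
```

This works out to roughly 2.8 liters, close to 7% of total body water, which is well past the few-percent deficit at which dehydration symptoms typically begin, so the danger described here is not an exaggeration.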

As previously mentioned with respect to the club scene, a rave is so hot an environment that when one's body temperature is rising, heat cannot escape from the body quickly enough to keep up with the elevated level of activity. The drug thus makes the user feel good while disregarding the need to cool down or stop for some water. As the temperature reaches a critical level, mental state deteriorates and the user becomes less responsive to the people around him or her. The proteins in the blood that form clots to stop bleeding start to break down, and clots start forming throughout the circulatory system, clinging to arterial walls (8). Losing the clotting factors crucial for stopping bleeding allows internal bleeding to occur without any resistance from the body.

The liver, also placed in danger by the stress of the drug, begins to fail under the high temperatures. First isolated liver cells, then large sections of the liver, begin to die. As the liver and clotting system break down, large amounts of protein and other material are dumped into the blood. The kidneys' functioning slows under the load, then stops entirely as kidney tissue begins to die, poisoned by the extraordinary load of breakdown products in the blood. Death is likely to follow as a result of these attacks on the body.

While ecstasy has been seen to provide a carefree, and possibly therapeutic, venue for expressing oneself honestly and without inhibitions, it has also proven to be a dangerous source of release. When ecstasy is taken in too large a quantity and without regard to the resulting overheating and dehydration, the body's systems can go into shock. Blood clotting leading to stroke, liver and kidney failure, and death might all occur without proper attention to the body's needs. Other adverse effects, such as jaundice or hepatitis, may be seen as a result of a possible contaminant in the drug, or from mixing it with alcohol or other drugs. Though it has been illegal to sell ecstasy since 1985, many still turn to it for its appealing release from the confines of our troubling world. However, what provides short-lived euphoria ends in depression: the neurons that store serotonin are damaged by the excessive release of serotonin that ecstasy provokes, resulting later in less serotonin being released on a regular basis.

So ecstasy makes you feel good for a while, but later on, one must wonder, is it worth it? The depressed feeling inside will not go away, because of the chemical changes in brain activity. This can be somewhat remedied, mostly through antidepressants such as Prozac, which can tax the liver, or natural remedies like St. John's Wort, but not by relying on the body to cure itself, as that option becomes less and less available with continued ecstasy use. As can be seen through ecstasy's effect on the brain and thus on behavior, the two are interrelated. This ailment of the brain causes visible changes in mood and likewise in behavior. Perhaps realizing these unadvertised, in fact somewhat ironic, effects of the drug ecstasy will prevent some people from taking this "miracle drug" and thus protect them from seeing some ultimate lows in their future. It's important to take care of your brain; your natural state of health and happiness depends on it.


References

1) Erowid MDMA Ecstasy Vault

2) The Chemistry of MDMA/Ecstasy

3) Ecstasy 'link to depression'

4) The Neurobiology of Ecstasy (MDMA)

5) Gordon, Christopher et al. Effects of MDMA on Autonomic Thermoregulatory Responses of the Rat. 1990.

6) Watch Out for Heat Stroke

7) The Ecstasy and Agony of Ecstasy

8) "What is Ecstasy?"


Phantom Limbs- Is the pain an illusion or a realit
Name: Priya Ragh
Date: 2003-05-09 16:04:57
Link to this Comment: 5642


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"All amputees, and all who work with them, know that a phantom limb is essential if an artificial limb is to be used. Dr. Michael Kremer writes: " Its value to the amputee is enormous. I am quite certain that no amputee with an artificial lower limb can walk on it satisfactorily until the body image, in other words, the phantom, is incorporated into it. Thus the disappearance of a phantom limb may be disastrous, and its recovery, its reanimation, a matter of urgency. This may be effected in all sorts of ways: Weir Mitchell describes how, with faradisation of the brachial plexus, a phantom hand, missing for twenty-five years, was suddenly resurrected. One such patient, under my care, describes he must 'wake up' his phantom in the mornings: first he flexes the thigh-stump towards him, and then he slaps it sharply- 'like a baby's bottom'- several times. On the fifth or sixth slap the phantom suddenly shoots forth, rekindled, fulgurated, by the peripheral stimulus. Only then can he put on his prosthesis and walk. What other odd methods (one wonders) are used by amputees?"
-Oliver Sacks, The Man Who Mistook His Wife for a Hat, p. 64

I have been deeply intrigued by the "phantom limb" phenomenon. The primary question that boggles my mind when this phenomenon occurs in amputees and persons who have undergone surgery is this: is the sensation of pain caused by phantom limbs an illusion or a reality? If this sensation does truly exist, what solutions and remedies are suggested to reduce the pain and suffering of the patients?

Through closely studying various articles, I now understand that losing a part of the human body, including the arms, legs, breasts, and other bodily parts, does not necessarily stop the brain from continuing to "map" the missing parts. The reason this occurrence is referred to as the "phantom" limb is that, although a few amputees do not experience limb sensation after the removal, the majority at least initially continue to feel that the body part is still present in some form or other. (1) Patients who experience phantom limb pain usually describe the pain as occurring in the spatial location where the limb would otherwise have been. In efforts to reduce this feeling, doctors in the past have attempted to remove additional inches from the end of the affected limb, along with the relevant nerve roots. However, according to the researcher Ramachandran, from the Center for Brain and Cognition in San Diego, these methods have very rarely proved effective at reducing post-surgery pains and generally end up in a surgical "game without end".

I was extremely interested in studying the pain caused by phantom limbs and some methods to reduce this sensation in patients. Ramachandran performed an experiment that led to exciting conclusions: according to his experiment, visual feedback and psychological expectation can play a vital role in reducing the degree of phantom limb pain.

Ramachandran's Experiment:
The apparatus involved in this experiment included a cardboard box with a vertical mirror inside and two holes in the front. The patient inserts the healthy arm in front of the reflective side of the mirror and the phantom arm through the other hole; as the healthy arm rotates, its reflection appears exactly where the phantom would be, so the patient sees the phantom move. Ramachandran concluded that this exercise enabled the patient's brain to receive visual feedback to the motor area corresponding to the phantom limb:
"Philip rotated his body, shifting his shoulder, to "insert" his lifeless phantom into the box. Then he put his right hand on the other side of the mirror and attempted to make synchronous movements. As he gazed into the mirror, he gasped and then cried out, "Oh, my God! Oh, my God, doctor! This is unbelievable. It's mind boggling!" He was jumping up and down like a kid. "My left arm is plugged in again. It's as if I'm in the past. All these memories from so many years ago are flooding back into my mind. I can move my arm again. I can feel my elbow moving, my wrist moving. It's all moving again."(2)

The Sensation of Pain:

Amputees often perceive their phantom limbs as normal, physically functioning body parts that have the ability to hold a glass, feel the texture of a cloth, and grasp an object, when in reality the limbs do not exist. However, these same amputees can also experience tremendous pain in the absence of healthy limbs. When a person's limb is amputated, severed nerves in the stump grow into nodules called neuromas. Early scientists first guessed that the neuromas fired randomly to the spinal cord, which in turn carried signals to the thalamus and then to the somatosensory cortex, finally causing the phantom pains in the body. However, the fact that experiments in which the nerves leading to the neuromas were cut had no impact on the phantom sensations made researchers rethink their initial postulations. Instead, an alternative experiment was conducted to see if the thalamus and the somatosensory cortex were the root cause and source of the pains. This too resulted in no significant breakthrough. Ramachandran and Melzack have come up with two very interesting approaches to explaining phantom pains.

Melzack- According to Ronald Melzack's theory, the neuromatrix, a bundle of tightly interconnected neurons, is already specified in the human genes. This makes the neuromatrix pre-wired with the genetic codes for the pain sensations. Melzack believes that regardless of whether a person is born with or without limbs, he or she still experiences the sensations that are already genetically determined by the neuromatrix. I find this explanation interesting because perhaps it could explain why a baby born without an arm or leg can still feel sensations from nerves even when there is no limb. (3) My initial response to Melzack was this: if our body functions are somehow already coded in our genes even before we are born, then we do not actually need the physical mass of the body to feel its presence. My understanding of Melzack is that the human brain obtains information about what will and will not happen to the body as it grows. This extreme preparedness of the brain enables it to generate the perception of a limb even in an amputee.

Ramachandran- In my opinion, the theory of cortical re-mapping that Ramachandran presents is complementary to Melzack's theory regarding the pre-existing genetic codes of the brain's nerve bundles. Ramachandran discovered that regions of an amputated hand are mapped onto different areas of the body. He conducted a simple experiment in which he rubbed a Q-tip on an amputee's chin to note his reaction. The amputee immediately pointed to an area of the phantom limb, indicating where he felt the sensation. Repeated experiments of this kind led Ramachandran to conclude that there indeed exists a hidden neural circuitry in the human brain that remains dormant in the presence of the limb. However, once the limb is removed from the body, the activity in the homunculus is shut down. The homunculus is the section of the brain in which neurons identify the specific area of the body that is being stimulated, through the information they receive from the somatosensory receptors in our skin. (4) Ramachandran concludes in his theory that the hidden neural circuits in the brain take over for the inactive homunculus when an injury or surgery takes place. Thus, new neurons that respond to the peripheral damage do not grow in place of the homunculus. Instead, already existing but latent neurons are activated and produce their effects, commonly causing the phantom limb. (5)

Phantom Limbs: Remedies for the Reduction of Pain

One of the wonders of the phantom limb phenomenon is that treatment for the pain it causes is very case-specific. Not all treatments have the same effect on all patients; the outcome depends on the individual body. Some of the common treatment methods include prescription drugs, surgical methods, physical support and therapy, and anesthetic applications. I feel it necessary to discuss some of these remedies in more detail in the following sections.

Physical Therapy:
My initial response to using physical therapy as a means of reducing pain was very positive. Through applications such as vibration, massage, and electrical nerve stimulation, I assumed that the pain sensation could be rectified and numbed. However, it is interesting to learn that although a few people have claimed significant relief, this technique is not considered an extremely effective one.

Surgery:
Neurosurgical methods would seem to be an ideal alternative to therapy and drugs. However, even these methods have produced relatively low success rates in eliminating phantom-induced pains. In these procedures, medical practitioners have attempted "to cut nerves above the neuromas (nodules that form when connections are cut and cannot be reestablished), cut the dorsal roots that bring the information in, or to make lesions in somatosensory pathways in the spinal cord, thalamus, or cerebral cortex."(6) Although these measures would seem logically effective, in truth, patients who undergo such surgeries have often reported experiencing pain again after a few months of relief.

While several studies of the phantom limb point to cortical reorganization and re-mapping of neurons as the source of the limb pain, a single most effective solution has not yet been formulated. I believe that since the technique of pain reduction is case-specific with regard to amputees, scientists need to take additional considerations into account rather than focusing solely on cortical reorganization. For example, physical inactivity in the amputee can cause poor blood circulation around the stump area, producing pain sensations. Incorrect surgical procedures, mental and psychological stress, and prior experience with pain even before the amputation could all be causes of prolonged phantom pains.

References


1) The Ramachandran Method

2) Page 4

3) Melzack, Ronald. "Phantom limbs and the concept of a neuromatrix." TINS, Vol. 13, No. 3, 1990: 88-92.

4) Ramachandran, V.S.; Rogers-Ramachandran, D. "Touching the Phantom Limb." Nature, Vol. 377 (Oct. 12, 1995): 489-90.

5) Macalester Website

6) Macalester University


The Parable of Conversion Disorder
Name: Neela Thir
Date: 2003-05-09 16:21:06
Link to this Comment: 5644


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

The cause of Conversion Disorder mystifies scientists in fields connected to neurobiology and psychiatry. It is characterized by the physical symptoms of a neurological disorder, yet no correlating problem can be found in the nervous system or other related systems of the body. This fact alone is not unusual; many diseases and symptoms have unknown origins. Conversion Disorder, however, seems to stem from psychological events and emotions, ranging from "trivial" to traumatic, rather than from biological events. The extreme symptoms often disappear as quickly as they appear, without the patient consciously controlling or feigning them. Thus, Conversion Disorder serves as a significant example of how fallacious the scientifically conceived divisions between mind, body, and behavior actually are in the nervous system.

Conversion Disorder can only be diagnosed by the physical symptoms seen in patients. Symptoms can be divided into three groups: sensory, motor, and visceral. Sensory symptoms include anesthesia, analgesia, tingling, and blindness. Motor symptoms may consist of disorganized mobility, tremors, tics, or paralysis of any muscle group, including the vocal cords. Visceral symptoms include spells of coughing, vomiting, belching, and trouble swallowing (1). Most of these symptoms are strikingly similar to existing neurological disorders that have definitive organic causes. Conversion Disorder, on the other hand, demonstrates no organic causes plausible under accepted functions from which the symptoms should follow. CT scans and MRIs of patients with Conversion Disorder exclude the possibility of a lesion in the brain or spinal cord, an electroencephalograph rules out a true seizure disorder, and spinal fluid tests eliminate the possibility of infections or other causes of neurological symptoms. Optokinetic nystagmus (sub-cortically controlled steady tracking of the eye followed by the fixation of the eye on another object) can still be observed in patients with apparent blindness when they are shown a rotating striped drum. This response proves that the patient's eyes should function correctly according to scientific beliefs about the nervous system's operation (2). The symptoms shown in Conversion Disorder cannot be accounted for under current paradigms of biology, and this fact has only provoked even more scientific theories about the ways in which traditional biology must explain the phenomena.

Aspects of Conversion Disorder puzzle scientists and doctors not only because tests do not detect a problem but also because the symptoms defy the traditional understanding of how our nervous system works, thus demonstrating a more complex system not yet explored. Numb hands characterize a type of conversion disorder called "glove anesthesia," in which sensitivity stops at the patient's wrists. This clear demarcation does not correlate to any documented nerve pattern or function. Also, patients who have paralyzed arms but whose shoulder muscles still work normally to correct the posture of the trunk contradict how the nervous system is believed to be structured (3). These examples demonstrate how the symptoms of the patients contradict the idea that the biology of the nervous system is physically definitive; the nervous system is organized along lines other than the ones we have drawn. The physical functions we have constructed can be transgressed by an integral element that science has separated out: the mind.

Knowledge of neurobiology, or the lack of it, seems to influence how the symptoms play out in patients. It has been shown that symptoms become more coherent with a traditional understanding of biology when the patient knows more about the body's physiological functioning. If a patient knows how a symptom such as paralysis "normally" perpetuates itself, then he or she will mimic or unconsciously replicate those symptoms. Similarly, if a patient believes the nerves in the hand are isolated from those in the arm, the patient may show glove anesthesia because of a belief that the wrist is the end of a bodily segment and therefore a limit for sensation. Learned information stored in the mind is used to determine the physical symptoms unconsciously expressed by the Conversion Disorder patient. The conscious functions of the mind can work unconsciously through the nervous system to affect behavior. This indicates that the nervous system's effect on the body and behavior is intimately related to memory and knowledge, and is associated with the conscious I-function.

Conversion Disorder patients' ability to unconsciously change bodily symptoms with their mind suggests that perhaps the tendency to classify science as a factual discipline could have an effect on how we perpetuate disorders. If learned information can change bodily behavior through the power of belief, then science as a structure of belief may be controlling and changing the very thing it seeks to define. Our beliefs about how the nervous system, mind, and body operate and how they are segmented could be creating a cycle of causation which ultimately impacts how patients play out illnesses, how doctors diagnose them, and how scientists research treatments. Science seeks to make distinctions, define, and organize; these ideological categories may assist in formulating symptoms just as they observe and generalize them. Conversion Disorder may therefore be an example of the holes in science's simplified self-validating discipline. The scientific community's effort to classify the disorder within an already determined system of belief has been proven difficult, and this problem can be seen through the continuous movement of the disorder from one contrived category to the next: from the body to the mind and from the real to the imaginary.

This discussion of the distinctions between the mind and body is not new. Conversion Disorder was originally called "hysteria," and it has been described in documents hundreds of years old. The ancient Egyptians attributed it to a malpositioned or "wandering" uterus, and the name derives its meaning from this distinctly feminine problem. In the 1560's, the first documented study was done claiming that "hysteria" was located in the mind rather than the body. Three hundred years later, the French neurologist Charcot hypothesized that hysteria originated from an organic weakness of the nervous system. Sigmund Freud became captivated by this idea and eventually replaced "hysteria" with the term "conversion" because he theorized that the symptoms were "intrapsychic conflicts" manifested (or converted) physically in the patient (3). Just as scientists in the past tried to locate the disorder in one specialized location due to one definite cause, so do we. The historical progression towards the psychological has been inverted in recent theory, back towards the biological. Hundreds of years from now, scientists may find our pin-hole approach to diagnosis as needlessly narrow and hypothetical as the Egyptians' wandering uterus.


Current treatment of Conversion Disorder primarily involves psychotherapy. Often patients go into spontaneous remission, or they have a complete recovery shortly after visiting a psychiatrist (2). Despite the effectiveness of psychological treatment, patients are often incredulous when told their symptoms are "imaginary" or "mental." Because this is often counterproductive to the doctor-patient relationship, patients are not told that their condition is thought to be psychological; they are first treated as if the symptoms were organic (3). Doctors' association of "mental" with "imaginary" represents a flawed assumption that something controlled by the mind and expressed through the body is not "real" and is therefore a feigned illness. The mind, with its intimate connection to the nervous system and behavior by way of the larger structure of the brain, is not scientifically superfluous to us as humans. It too is a part of the body, and a schism in the mind, whether it results from military trauma, sexual abuse, or depression, remains a very real source of physical and medical illness. The scientific community's tendency to equate the mental with illusory illness and the physical with "true" illness rests on the assumption that internally-caused symptoms are self-propagating and therefore preventable. The mind has become separated from the body and the nervous system in which it is physically placed, and this false division creates problems in understanding examples such as Conversion Disorder.

Conversion Disorder is increasingly being thought of and researched biologically rather than psychologically. Technology has supplied methods of testing elements of the nervous system, and yet no definitive causes have been identified. It has only recently become mainstream thought that Conversion Disorder arises from some sort of collaboration between the mind and the nervous system (3). This distinction between the mind and the nervous system (and the revelatory idea that they are indeed related) demonstrates a crucial partition that most scientists seem to make between the two. In my view, the mind is the cognitive function of a larger concept, which can be called, as we do in class, "the brain". The brain encompasses both the biological stratum of the nervous system as well as the cognitive stratum of the mind. As debates over Conversion Disorder show, they are interrelated. The nervous system does not only determine the mind; it too can be influenced by the mind. To attempt to divide them for scientific study may be useful, but their ultimate relationship should not be permanently severed. Parobek describes the state of Conversion Disorder as a "Tower of Babel that obscures accurate identification and nomenclature" (3). This is true in that, as in the myth, the relationship between the two elements of the brain (the body's nervous system and the "spirit's" mind) creates a confusing cacophony of precise causation only because that is its inherent structure. Nomenclature is a system of understanding and representing, and if the construction of the Tower of Babel and of Conversion Disorder cannot be contained within one language or one scientific discipline, then perhaps the nomenclature and the building of scientific research need to be broadened to include more "multi-linguistic" elements. The total functioning of the nervous system and the mind meets, exists, and functions together in what is more appropriately called "the brain," and the brain, as a whole, determines behavior.

Understanding Conversion Disorder will require a multi-discipline approach rather than trying to locate its cause and process on a purely biological or psychological level. When trying to pinpoint whether a Conversion Disorder patient's symptoms hail from "real" or malingering sources, the observed difficulty lies in the seeming dichotomy between mind and body. By separating the two so definitively, we are preventing ourselves from comprehending Conversion Disorder as well as our more complex nervous system. The dichotomy, however, remains a created one for the benefit of our own understanding because divisions and simplifications facilitate scientific knowledge. Yet, in the case of Conversion Disorder, delineated scientific thinking seems to have prevented our understanding rather than facilitating it; by inspecting the trees, we miss the forest.


References

1) PsychNet-UK

2) Dufel, Susan, M.D. "Conversion Disorder." eMedicine: Instant access to the minds of medicine.

3) Parobek, Virginia M. "Distinguishing conversion disorder from neurologic impairment." Journal of Neuroscience Nursing 29.2 (April 1997): 128. Available via Science Direct (BMC library E-journals) and Infotrac: Expanded Academic.


Alzheimer's Disease and the Eye Test: Break-through
Name: Kathleen F
Date: 2003-05-09 18:07:11
Link to this Comment: 5645



Biology 202
2003 Third Web Paper
On Serendip

Alois Alzheimer was a neuropathologist working in Germany at the start of the 20th century. Though he became a renowned professor of psychiatry at Breslau, it was in Munich that he began to examine the pathologic composition of the brains of the mentally ill. Some of his most famous studies were conducted with Franz Nissl, a German psychiatrist and neuropathologist who established an association between mental illness and the nervous system by linking it to modifications in glial cells, blood vessels, and brain tissue. In 1907, after publishing numerous dissertations on cerebral arteriosclerosis, Alzheimer published his consequential work regarding what would become known as Alzheimer's Disease (1).

Alzheimer's Disease is a progressive, degenerative brain disease whose incidence increases with age, and it is the leading cause of dementia. Dementia impairs memory, personality, and the ability to think rationally; in dementia, the brain's functions break down slowly. However, it's important to note that not all dementia can be regarded as Alzheimer's Disease, and it's likewise important to rule out conditions that can be confused with Alzheimer's, like other causes of dementia, some forms of depression, thyroid disorders, and vitamin deficiencies. In Alzheimer's Disease, it gradually becomes more difficult for the person to do jobs, or even to perform simple tasks at home. How can you make breakfast when you don't remember how to scramble eggs? The effects of Alzheimer's Disease, however, are not confined solely to the afflicted: family members, spouses, children, and neighbors are all affected by this disease. In the advanced stages of Alzheimer's, the person may be unaware of their surroundings and incapable of recognizing the people involved in their life (2). A husband may not recognize his wife; a mother may regard her child as a stranger. Because this disease chiefly affects the elderly, there have been many studies that try to pinpoint exactly who develops Alzheimer's.

On February 21, 1996, the Journal of the American Medical Association reported that Alzheimer's disease could be detected as early as 20 years of age. The study was based on 93 young women who were about to join a convent. All of these women were white, came from similar backgrounds, and were born in the same time-frame. While these factors make the nuns seem like a control group, the real advantage of studying them was that they lived together in the same environment for sixty years and ate the same diet (3). Regardless, the study did not tackle the questions of whether race, environment, or reproductive history can affect the risk of Alzheimer's disease. It also used the term "detect." Unfortunately, the neuropsychological tests employed to diagnose Alzheimer's are fairly vague, and they often identify patients quite late in the progression of the disease. Because of this, modern Alzheimer's research generally centers on developing a diagnostic test that can precisely identify Alzheimer's patients before they show the indicators of cognitive deterioration (4).

Currently there is no test or biological marker that can positively identify the presence of Alzheimer's disease in a living patient. A definite diagnosis of Alzheimer's is made post-mortem, based on histopathological evidence found at autopsy (5). Patients are diagnosed through verbal tests, a neurological examination, and a process of excluding other known causes of dementia. What a diagnosis of Alzheimer's relies on are biomarkers, most notably b-amyloid plaques. Amyloid plaques are brillo-like clusters found in the brains of Alzheimer's patients, and their formation is a characteristic sign of the disease. One would assume that the most precise way to identify Alzheimer's would be to detect these plaques, which build up long before the patient has full-blown Alzheimer's Disease, in the eyes of the living. Currently, much research is being conducted on the eye test, a test that looks for b-amyloid plaques in the eyes of living patients.

Ashley I. Bush, M.D., Ph.D., is an instructor in Neurology at Harvard Medical School and one of the leading figures in Alzheimer's research. He and his colleagues released a paper in mid-April 2003 describing a new study that used postmortem specimens of the eyes and brains of nine individuals with the disease. The study reported that b-amyloid is present in the cytosol of lens fibre cells of people with Alzheimer's disease (6). However, this research is only correlational. Even if a doctor were to recognize these plaques in the eyes of patients who may develop Alzheimer's later in life, it could never be absolutely proven that the patient had Alzheimer's disease until an autopsy, and such a method carries a very high rate of false positives. If a doctor were to use this eye test to inform someone that they were going to develop Alzheimer's, and the result turned out to be a false positive, there would be profound psychological effects on the patient and their family. So is this eye test a break-through in Alzheimer's study, or merely progress toward the point where there will be a definitive test to unquestionably identify Alzheimer's before cognitive deterioration?
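
To make the false-positive concern concrete, here is a back-of-the-envelope Bayesian sketch in Python. Neither Bush's nor Potter's paper reports these exact figures, so the 95% sensitivity and specificity and the 2% prevalence below are purely hypothetical assumptions chosen for illustration:

def positive_predictive_value(prevalence, sensitivity, specificity):
    # P(disease | positive test), computed with Bayes' theorem
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Hypothetical screening population in which 2% actually have Alzheimer's
print(positive_predictive_value(0.02, 0.95, 0.95))  # prints ~0.279

Under these assumed numbers, only about 28% of positive results would be true positives: roughly seven of every ten people flagged by such a screen would be false alarms, which is why even a seemingly accurate test can mislead when the condition it screens for is rare.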

In his paper, Bush notes the limitations of the study. He acknowledges that he used small sample sizes and that some of his findings are "postulations". However, he states that his findings "provide evidence", and that further exploration of his hypothesis regarding the association between Alzheimer's and b-amyloid plaques awaits more controlled studies of people at varying progressive stages of the Alzheimer's disease process.

Bush was also one of the first researchers to find that these amyloid proteins interact with zinc. The amyloid, generally regarded as toxic to brain cells, is composed of fragments of a small protein called A-beta. In Alzheimer's, an insoluble form of A-beta is left in these amyloid plaques. Bush tested several metals implicated in brain metabolism and found that zinc strongly bonded to A-beta, inducing the development of amyloid. Bush remarked, "The results were remarkable in several ways. First, the binding metal was zinc, which had already been shown to bind to the amyloid precursor protein. Second, A-beta came out of solution and formed amyloid almost immediately. The result was caused only by zinc, not by the other metals tested, and only a slight increase in zinc was needed to produce the amyloid" (7).

Bush compared the effect of zinc on solutions of both the human and the rodent forms of A-beta, because rodents do not naturally form amyloid plaques in the brain; they produce an A-beta that is different from that found in humans. These studies show that rat A-beta interacts only weakly with zinc, pointing toward the idea that zinc and amyloid plaques are interrelated. Bush also found that the concentration of zinc required to form amyloid from human A-beta is only slightly higher than normal brain zinc levels. Zinc is vital to neurotransmission in the brain, and is found in high concentration in those areas most affected by Alzheimer's (7). However, these tests are done post-mortem, which is of little use to the two to four million Americans affected by Alzheimer's disease.

Other research regarding the eyes of Alzheimer's patients is also being conducted. In a paper published in the Nov. 11, 1994 issue of Science, Huntington Potter, a professor of neurobiology at Harvard Medical School, reported that in a trial involving 58 individuals, his new eye test identified Alzheimer's disease with 95% accuracy. Potter's diagnostic test involves measuring the sensitivity of the pupil to tropicamide, a chemical used to dilate the pupil during eye examinations. The test places a weak solution of tropicamide into the eye and then measures the dilation; Potter found that the amount of dilation is much greater in those with Alzheimer's. "It's a sort of window into the activity in the brain, and we seem to have hit on something that occurs very early in the disease process," says Potter (8).

Potter's test seems more reliable and concrete than Bush's; however, both are still only correlational. There are many factors other than the presence of Alzheimer's disease involved in the dilation of the pupil, and it seems this test may produce a high number of false positives as well. Also, Potter's test is done on patients who have already developed full-blown Alzheimer's. Because there is no known cure for Alzheimer's, these tests could almost be dismissed as ineffective: they are, in reality, diagnosing Alzheimer's when it is too late. What good will these tests do the patient if the patient is already dead, and Alzheimer's is found on the autopsy table? Regardless of their lateness, however, the tests should be considered highly valuable in the search for a cure. They are pieces of a puzzle that we, as scientists, patients, and families of those with Alzheimer's, will hopefully hold all the pieces to in the near future.


References

1) Founders of Neurology, Alois Alzheimer and Franz Nissl

2) Alzheimer's Disease, Symptoms of Alzheimer's Disease

3) Alzheimer's Disease, What Age Does It Show Up?

4) Harvard.edu, Alzheimer's Disease: The Eyes Have It.

5) Biological Markers for Alzheimer's Disease

6) Bush, Ashley. "Cytosolic B-amyloid deposition and supranuclear cataracts in lenses from people with Alzheimer's Disease"

7) Harvard.edu, Zinc may play a role in Alzheimer's plaque formation.

8) The Hunt for an Early Diagnosis, Can You Detect Alzheimer's Disease Early?


Chemical and Structural Alterations in the brain a
Name: Clarissa G
Date: 2003-05-12 00:09:03
Link to this Comment: 5650



Biology 202
2003 Third Web Paper
On Serendip

Psychiatrists and psychologists use a nine-symptom list to determine whether or not a patient is suffering from depression. The criteria that make up this list are as follows.
1. Depressed mood most of the day, nearly every day
2. Markedly diminished interest or pleasure in all, or almost all, activities
3. Significant weight loss when not dieting or weight gain, or decrease or increase in appetite nearly every day
4. Insomnia or hypersomnia nearly every day
5. Psychomotor agitation or retardation nearly every day
6. Fatigue or loss of energy nearly everyday
7. Feelings of worthlessness or excessive or inappropriate guilt
8. Diminished ability to think or concentrate, or indecisiveness
9. Recurrent thoughts of death, recurrent suicidal ideation, or a suicide attempt or a specific plan for committing suicide (1)
Any five of these symptoms, and an individual is clinically depressed. These are observable alterations in behavior which a medical care professional can use to diagnose depression. But what does the brain look like when someone is purported to have five of these symptoms? Is the chemical makeup of the brain different? The structural makeup? If so, how?
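
As a purely illustrative aside, the five-of-nine rule above is simple enough to express in a few lines of code. This minimal Python sketch (the shortened symptom labels and the sample patient are hypothetical) merely restates the counting criterion and is no substitute for clinical judgment:

SYMPTOMS = [
    "depressed mood", "diminished interest", "weight or appetite change",
    "insomnia or hypersomnia", "psychomotor change", "fatigue",
    "worthlessness or guilt", "poor concentration", "thoughts of death",
]

def meets_depression_criteria(reported):
    # Clinically depressed, per the list above, if five or more symptoms are present
    return sum(1 for symptom in SYMPTOMS if symptom in reported) >= 5

patient = {"depressed mood", "fatigue", "insomnia or hypersomnia",
           "poor concentration", "worthlessness or guilt"}
print(meets_depression_criteria(patient))  # True: five of the nine are reported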

My previous papers examined various aspects of the complex effect depression has on the brain. The first examined how suicide affects the structure of the brain; the second attempted to explain where depression "comes from". This third and final paper will examine the changes that occur in the brain when an individual is suffering from depression. These changes can be structural, relating, for example, to the size of the frontal lobe, or chemical, such as the brain's ability to process serotonin. The abnormalities expanded on in this paper are obviously not all of the deficiencies known to scientists, but they are widely accepted as important in understanding the behavior of patients who are suffering from depression. Understanding these chemical and structural changes will lead to better diagnosis and treatment of individuals with depression.

The prefrontal cortex, located in the front part of the brain, is responsible for higher intellectual functions and the regulation of emotional and motivational behavior. The prefrontal cortex, or "gray matter", is known to be smaller in depressed patients. This area is made up of somas, dendrites, and pre-synaptic terminals. Patients with decreased prefrontal activity along with increased thalamus activity can experience moodiness, negativity, low energy, and sleep and appetite problems.

Decreased prefrontal cortex activity along with increased or decreased temporal lobe activity can produce sadness, irritability, mild paranoia, atypical pain (headaches or abdominal pain), and insomnia. (2) When prefrontal activity is muted, two types of cells located in that region of the brain are affected, and these two cell types can account for the decrease in the prefrontal cortex's size: neurons and glial cells.

Neurons are the basic cells of the brain. Supported by the glial cells, they transmit, receive, and process signals. In class we designated individual neurons as self-contained input-output boxes in the brain. A decrease in the density and size of neurons in the prefrontal cortex has recently been found in patients suffering from depression. This abnormality could be due to the decreased density of glial cells. The repercussions of decreased density and size could be that the brain does not receive the amount of information it needs in order to function correctly; glial cells and neurons work in tandem so that signal processing is not interrupted.

Glial cells are known to show changes in parts of the cerebral cortex. In the prefrontal cortex they function as a "support system" for the neurons. (3) Glial cells monitor the nutrients neurons receive from the blood as well as assist in immune response. The discovery of fewer glial cells in the prefrontal cortex has a serious implication: if there is a decrease in the number of glial cells in the prefrontal cortex, the neurons lack the appropriate support, and without appropriate support the neurons are unable to perform their function correctly. Because the prefrontal cortex is in charge of higher intellectual function, this would explain why some patients experience decreased concentration.

In my previous paper I examined the causes of depression. Three causes were summarized: environmental, genetic, and chemical. One of these, the environmental cause, can have a particularly interesting impact on the structural and chemical makeup of the brain. The environment, particularly a stressful one, leaves certain areas of the brain susceptible to depression, particularly the hippocampal formation.

The hippocampus is an area of the brain located deep within the temporal lobe. The hippocampus is part of the neuroendocrine system, which is organized around the hypothalamic-pituitary-adrenal axis; this area is known as the "limbic"-hypothalamic-pituitary-adrenal axis, or LHPA. (4) The LHPA is responsible for arousal, sleep, appetite, and "happiness". The hippocampus specifically is the location in the brain where memory encoding occurs.

The hippocampal formation can experience a decrease in neurogenesis in high-stress situations. (5) This decrease in neurogenesis leads to a shrinking of the hippocampal formation; even when a patient is not currently experiencing a depressive episode, the hippocampal formation can be significantly decreased. The decreased size of the hippocampus could be due to glucocorticoid-induced neurotoxicity, meaning that there is an abnormal level of adrenal cortical hormones. The adrenal cortical hormones protect against stress and affect protein and carbohydrate metabolism.

Recently the hippocampus has been closely linked to emotion in the brain. (6) If we consider that the hippocampus encodes memories for storage in other parts of the brain, this linkage seems logical: an alteration in the way our brain stores "bad" memories or "good" memories would affect the way in which we recall those memories, possibly having an effect on the way we respond to similar situations in the future.

Two monoamines are found to be abnormal in the brains of depressed patients: norepinephrine and serotonin. Norepinephrine and serotonin regulate mood and suicidal inclinations. In the depressed brain, serotonin production can have several defects; the following reduce the effectiveness of serotonin as a mood regulator.
- Not enough serotonin is produced,
- There are not enough receptor sites to receive serotonin,
- Serotonin is being taken back up too quickly before it can reach receptor sites,
- Chemical precursors to serotonin (molecules that serotonin is manufactured from) may be in short supply, or
- Molecules that facilitate the production of serotonin may be in too short supply. (7)

Norepinephrine has chemical implications similar to serotonin's; as a matter of fact, norepinephrine can be regulated through serotonin medications. Decreased levels of norepinephrine are implicated in depression, while increased levels of norepinephrine can lead to mania. (8) Recent studies have found that norepinephrine plays a role in depression but does not function alone: it works in tandem with serotonin and dopamine, and abnormal levels of any of these three can lead to depression.

The research on the subject of depression and the structural and chemical changes it imposes on the brain is vast and complex. Often it is hard to read through all of the data to pinpoint the exact structural abnormality that depression causes. This is indicative of a larger theme in depression: no one factor causes it, and no one area of the brain is affected. Each area is closely linked to and affected by the others in this disease. Then again, each area of the brain is closely linked to and affected by the others in a normal brain, so this shouldn't come as a huge surprise.


References

1) The Harvard Mahoney Neuroscience Institute Letter, an interesting dialogue on depression

2) Brain SPECT Information and Resources

3) Science Daily

4) Health and Age: A Novartis Foundation

5) American Scientist: The Magazine of Sigma XI, The Scientific Research Society

6) Sigma XI: The Scientific Research Society

7) What you need to know about Depression



On the Scent of a Good Ploy: The Debate Over Phero
Name: Danielle M
Date: 2003-05-12 21:40:43
Link to this Comment: 5654



Biology 202
2003 Third Web Paper
On Serendip

Pheromones have become a topic of recent interest, and the media has particularly clung to the titillating and very marketable hypothesis that pheromones can alter human sexual behavior. Human ability to sense and process pheromones is still debated, and the body of research has yet to provide a clear picture of the biological processes that may be implicated in pheromone perception. Yet despite the ambiguity, research has begun to claim that pheromones are not only perceived by the human senses, but that they can actually trigger particular behavior--from making a person more sexually attractive to eliciting an irrepressible desire to mate. Pheromone research has thus unsurprisingly captured the interest of the public, and some researchers have responded by aligning themselves with companies trying to market pheromone-based products to the attentive public. But what does this alliance mean for the integrity of these researchers' work? The debate, over both pheromones and the reliability of dubiously motivated research, continues.

Pheromones are sensory stimuli perceived by the sense of smell, a particularly important sense in many animals. To better understand the nature of pheromone perception, then, it's necessary to understand the workings of the sense of smell. The olfactory system, responsible for detecting and communicating smells to the brain, has remarkable range and sensitivity and enables organisms to analyze thousands of odors (1). The ability to discriminate among the many organic compounds (i.e., odors) that permeate the environment is invaluable to survival and functions on two levels (1). On the first level is the main olfactory system, used to identify odors as a means to find food, detect predators and prey, discern potential sources of danger, mark territory, and so on (1). The main olfactory system is directly linked with the brain, and the information it relays is consciously processed--that is, when the olfactory system picks up a scent--burning wood, for example--the scent is perceived on a conscious level. On the second level of smell-sensation, however, is the vomeronasal system, used to perceive odorless chemosignals passed between members of the same species. Within the vomeronasal system is located the vomeronasal organ (VNO), which is responsible for detecting these odorless signals. Such signals are thought to contain information about both the location of another organism and its "reproductive state and availability" (1).

The chemosignals--termed pheromones--detected by the VNO are thought to interact with the brain's endocrine system to trigger specific behaviors, such as mating or courtship, in response to the perceived pheromone (1). Defined as a chemical product of the body that "serves the reproductive life of the species," pheromones have been well documented among many mammals, but not humans (2). The most recent interest in the idea that the human body may also use pheromones began when, in 1971, Martha McClintock published a study showing that "the menstrual periods of women who lived together tended to converge on the same time every month" (3). Pheromones were implicated as the force behind McClintock's findings, and were thought to act as "ectohormones," that is, as hormones "that work between individuals in much the same way that 'endohormones,' like testosterone and estrogen, work within them" (3).

Since McClintock's study, interest in human pheromones has risen, and the debate over the reality of their existence has grown alongside. Scientific opinions on the existence of pheromones now range from "gung-ho boosterism" to "outright skepticism," while "accusations of data fudging fly" (3). In recent pheromone-related research, various forms of evidence have been offered for their existence, but the details of how pheromones affect humans remain unclear (3). In 1998, for example, McClintock and her research partner Kathleen Stern found that when a fluid extract taken from a menstruating woman's underarm secretion was regularly placed on the upper lip of another female subject, it could hasten or delay the subject's menstrual cycle by a day or more (2). The specific chemical elements responsible for this effect, however, couldn't be identified (2). In later studies, Noam Sobel of Stanford University presented evidence that the brain does in fact respond to the presence of pheromones by tracking brain activity with magnetic resonance imaging, a finding which McClintock and colleague Suma Jacob later re-affirmed (3). At the Karolinska Institute in Sweden, Ivanka Savic found that pheromones affect the pituitary gland, a structure in the brain that plays a role in reproductive behavior (3). Still, none of the studies were able to discern which particular agents were responsible for their observations or which biological processes may be involved in human pheromone perception. While there's evidence that pheromones are in fact perceived by the human senses and affect the human body, the lack of specific details has led skeptics to question the level of influence pheromones are able to exert on human behavior.

In particular, the majority of pheromone skepticism has centered around the VNO, which in other mammals is known to detect pheromones, but whose function in humans is still unclear (4). Studies have claimed that the human VNO is merely vestigial and disappears before birth, thus making it seem that humans, unlike their VNO-equipped mammalian counterparts, have no perceptual processes devoted to detecting or interpreting pheromones (1). However, other studies have implicated pits found in the nasal septum, called Jacobson's organ, which are thought to be the receptors of pheromone signals (5). Others have countered that, although there are indeed such pits in the nasal septum of some adults, there is no conclusive evidence of "functional vomeronasal receptor neurons" connecting the pits to the brain (6). Without such connections, the pits would have no means to relay sensory input to the brain. It remains possible that humans, like some animals, may be able to sense pheromones through the main olfactory system, so the absence of a human VNO would not rule out the possibility of human pheromones (3). Undisputed proof of its presence and functionality, however, would be "a major support" for proponents of the theory that pheromones significantly affect human behavior (3).

The debate over the existence and influence of pheromones on human behavior has also brought forth an interesting and unexpected problem--a commercialization, and consequent oversimplification, of science. As research into human pheromones was just beginning, it was thought that they would act similarly in humans as in animals--that they would trigger fixed patterns of response behavior such as mating or courtship (3). The possibility of such a link between pheromone perception and sexual behavior caught the attention of the popular media, whose coverage increased the visibility of pheromone research and piqued public interest in the findings. As a result, both the media and the public have begun to watch pheromone research more closely, eagerly awaiting each new finding for evidence of the power of pheromones in manipulating behavior (3). Some researchers, taking note of the attention, have taken advantage of the opportunity by establishing or investing in companies devoted to developing and marketing pheromone-based products (3). When these researchers conduct and analyze their experiments with human pheromones, though, one begins to question just who it is that they're packaging their results for--the scientific community or the consumer public. The high visibility and potential marketability of pheromone research thus raises questions of scientific integrity, both of how experiments are conducted and of how the results are presented. When researchers have a stake in the results produced by their experiments, it's quite possible that they may approach their experiments not as scientists, but as marketers. Designing, conducting, and analyzing an experiment with an eye toward promoting the results and selling a product--deliberately or not--will likely supersede the interests of rigorous accuracy.

For example, a research team led by Bernard Gosser and David Berliner has presented experimental results said to show that "human pheromones are a powerful mediator of sexual attraction, anxiety, and hormone-related disorders" (7). Berliner, however, is also one of the founders of Pherin Pharmaceuticals, a company now producing PH80, a spray purported to "free women from irritability, depression and other symptoms of premenstrual syndrome" (8). Gosser and Berliner also claimed to have demonstrated both the existence of the human VNO and of an active pathway between it and the brain, as well as a direct effect exerted by pheromones on "hormone-related disorders" (8). These claims are all central to the legitimacy of the spray, which they say "binds to receptors in the VNO" and speeds its message along the neural pathway to the brain. There, they say, the message is processed in the hypothalamus, which then restores the body's chemical balance, thus relieving the discomfort of PMS (8).

Pheromone experiments like Gosser and Berliner's seem to reflect a tendency to approach pheromone research with an eye toward attracting media and public interest. Seeking a broader audience isn't necessarily a bad thing in scientific research, but when it's motivated by a desire to market a product, or the idea of a product, to that wider audience, it becomes detrimental to the genuine pursuit of scientific knowledge. Presenting research as a form of advertising may lead researchers to present their data in a biased or incomplete light, opting for an overly simplistic representation of the complex nature of human biology and behavior. For example, a group of researchers at the University of Utah with ties to a pharmaceutical company that markets pheromone-laced perfumes made significant claims for the power of pheromones in eliciting patterns of behavior in humans. Like Gosser and Berliner, the researchers claimed that the results of their study proved the existence of a human VNO and of a neural pathway between it and the hypothalamus. In their analysis, they claimed that this pathway meant that when the VNO perceives pheromones, it relays the sensation to the brain, where the message triggers stereotyped sexual behavior (3). Using magnetic resonance imaging to show that the steroid pheromones androstadienone and estratetraenol elicited electrical responses in the brain, the researchers concluded that these pheromones were sexual attractants and made those who emitted them more successful with the opposite sex (3). Yet, promising though the results may sound, most other research in the field (Gosser and Berliner notwithstanding) has concluded that "the evidence for a working human VNO is mixed at best" (3).

In 2001, Savic et al. collected data similar to that of the University of Utah group, but were less sure of its conclusiveness as evidence that pheromones elicit sexual behavior. In the discussion of their analysis, the group says that though they too found brain activity in response to androstadienone and estratetraenol, that alone was not enough to conclude that pheromone perception "altered the subjects' reproductive behavior" (9). In addition, other researchers, such as Jacob--who has done extensive work in the field--maintain that, "although sexual effects are generally implied by marketing claims with perfumes containing these steroids, any claim that a particular product will increase the user's opportunities for sexual intercourse" or activate "stereotyped releaser effects" related to brain chemicals and behavior is both unlikely and misleading (10). Taken together, it appears that, perhaps in the interest of promoting their product, the research team at Utah failed to scrutinize their data rigorously enough, may not have given adequate weight to the complex nature of human behavior, and chose instead to present a simpler, more direct, and more marketable interpretation of their experiment.

By using scientific studies as a means to support and market a product, researchers have commercialized their work and compromised its scientific integrity to appeal to popular (and consumer) audiences. Experimental results about human behavior have been presented in one-dimensional, overly simplified ways to make them appear stronger and more sound than rigorous analysis might show. In consequence, not only is genuine progress toward knowledge impaired, but the public view of the nature of human biology and behavior is significantly distorted in the process. Where there is consistent evidence supporting the influence of pheromones on human behavior, such evidence is complex and tends to reveal that they modulate or influence behavior in certain subtle, context-specific ways. That is, pheromones are not the quick-acting chemical messengers and stereotyped-behavior triggers that pheromone-marketing proponents have claimed them to be (3). It's more likely that, though human pheromones do appear to be recognized in the brain, and even to have "effects on psychological state and brain function," their effect and influence are far from simple or defined (3). Unlike animal behavior, in which pheromones can elicit certain fixed-response patterns, human behavior "is rarely determined by any single signal" or consistently responsive to stimuli in fixed patterns (4). Part of the complexity in understanding human behavior is that it relies significantly on context--emotional, psychological, environmental--and processes a variety of sensory input from "a rich social and physical environment" (10). Thus, human behavior "is complex and highly dependent on learning and the context of the social and physical environment," and pheromones are more likely to "guide human behavior" (4)--given the appropriate context--than to trigger a fixed pattern of behavior (10). By not recognizing this complexity and paring down results into one-dimensional, marketable claims, researchers are presenting an overly simplified portrait of human biology that misleads the public and undercuts the work of others in the field.

Thus, the existence and influence of pheromones, previously seen in animal behavior, has stimulated much debate and curiosity in both the scientific community and the larger public. Though recent studies have begun to offer proof that humans can in fact perceive and emit pheromones, some remain unconvinced and many debate the significance of their influence. The debate has taken on a surprising tone as the public interest has become increasingly focused on the possibility that behavior, particularly sexual behavior, can be manipulated by pheromones. Using this popular interest, some researchers have invested in pharmaceutical companies working to market pheromone-based products, taking on an element of commercial motivation in their research. What are the implications of this? It seems that to reach their audience, some researchers have opted for a more simplistic and superficial analysis of their experimental results in order to attract and hold a wider interest with direct and unambiguous claims. Doing so, however, means trying to simplify one of the most difficult and multifaceted mysteries of science: human behavior and the biological processes that comprise it. This not only distorts the public's understanding of behavior, it undermines other research in the field that acknowledges the complexity of the inquiry and refuses to draw abbreviated or simplistic conclusions. It would be quite unfortunate to abuse the opportunity afforded by the public's interest in pheromones and behavior by seeking financial gain rather than using the opportunity to further our understanding of ourselves--our bodies, our senses, our behavior--and the profound complexity therein.

References

1) Firestein, Stuart. "How the Olfactory System Makes Sense of Scents." Nature 413 (2001): 211-218.

2) McCoy, Norma L. and Lisa Pitino. "Pheromonal Influences on Sociosexual Behavior in Young Women." Physiology and Behavior 75 (2002): 367-375.

3) "Pheromones, in Context," by Etienne Benson.

4) Jacob, Suma, Davinder J.S. Hayreh, and Martha K. McClintock. "Context-Dependent Effects of Steroid Chemosignals on Human Physiology and Mood." Physiology and Behavior 74 (2001): 15-27.

5) Sansom, Clare. "Sixth Sense Could Avoid the Blood-Brain Barrier." DDT 6 (2001): 386-387.

6) Trotier, D., C. Eloit, M. Wassef, G. Talmain, J. Bensimon, K.B. Doving, and J. Ferrand. "The Vomeronasal Cavity in Adult Humans." Chemical Senses 25 (2000): 369-380.

7) Gosser, Bernard I., Louis Monti-Bloch, Clive Jennings-White, and David L. Berliner. "Behavioral and Electrophysiological Effects of Androstadienone, a Human Pheromone." Psychoneuroendocrinology 25 (2000): 289-299.

8) "Pheromones can Banish Premenstrual Syndrome," by Catherine Zandonella.

9) Savic, Ivanka, Hans Berglund, Balazs Gulyas, and Per Roland. "Smelling of Odorous Sex Hormone-like Compounds Causes Sex Differentiated Hypothalamic Activations in Humans." Neuron 31 (2001): 661-668.

10) Jacob, Suma and Martha K. McClintock. "Psychological State and Mood Effects of Steroidal Chemosignals in Women and Men." Hormones and Behavior 37 (2000): 57-58.


Does the language we speak affect the way we think
Name: Nupur Chau
Date: 2003-05-14 16:02:22
Link to this Comment: 5672



Biology 202
2003 Third Web Paper
On Serendip

"In my observation and experience, the language which we use has great impact in our ways of thinking and expressions. The most important thing is the mother tongue. The ability in expressing our ideas and feelings without any difficulty...has great importance. One's native language plays the most important role in this case. Different languages have different ways of expressions and ultimately shapes one's mind to express in that way. Thus, one's mother tongue plays [a] crucial role in...one's thinking and expressions"
Rabindrah Raj Giri
Nepal (1).

Does the language we speak affect the way we think? Surfing the British Broadcasting Corporation's online forum, I stumbled across this heated debate, with numerous postings put up by people who are bilingual and multilingual. Bonny from the United States believes that "language drives the way we think. The more rich a language (say in terms of vocabulary), the more likely a person will have varied thinking. It just opens new avenues of the thought process" (1). Bilal Patel, a resident of London who speaks English, Gujarati and Urdu (the latter two languages originating from South Asia), says "I think differently in English and not just because of the different words. English seems to me to be so formal, serious and self-centered-somehow 'cold'-whereas Asian languages are somehow warmer and have far more feeling...And when I want to vent and let off steam, I always do it in Gujarati!" If the language we speak affects the way we think, then does that mean that the way in which one learns and remembers a language is inadvertently linked to the way in which we behave? The way we react? The way we are? It is widely known that the processes linked with language acquisition are located in the left side of the brain and consist of a variety of tasks that the brain must perform. To achieve fluency (with the exception of Sign Language), one must be able to see and speak words. This translates to numerous brain tasks: listening, speaking, seeing (in the form of reading), writing, and above all, comprehension (3). Consequently, it has recently been shown that there are different parts of the brain that are focused on performing these separate tasks (3), beyond Wernicke's area of the brain, which is responsible for the comprehension of language (7), and Broca's area, which handles language production (7). It is for this reason that learning a language is so difficult for some people: it involves an amount of coordination between various parts of the brain that some people may not have; one must be able to seamlessly synchronize the different parts of the brain (3).
It is known that language is a form of communication. Whether it be spoken, in the case of the four thousand languages that the population of the world currently speaks (6), or signed, in the case of the various forms of Sign Language, language is the way in which one human shares thoughts, ideas, feelings and facts with another human. However, how is it that one acquires language? The "mother tongue", in the words of Rabindrah Raj Giri, is the native language of the person: it is commonly thought of as the language that the person is "wired" to understand--the language learned in a person's formative years is the language that the person has been developed to speak.
Historically, the acquisition of one's "mother tongue" has been the subject of heated debate. Noam Chomsky, in the late 1950s, argued that all infants have universal grammar and universal phonetics already established within their brains (5). He believed that infants have an inherent knowledge of language, and that language development among infants is in reality growth within an already established language module (5). In 1957, however, B.F. Skinner had proposed the opposite, learning-based view in his book Verbal Behavior, arguing that "language, like all animal behavior, was an 'operant' that developed in children as a function of external reinforcement and shaping...infants learn language as a rat learns to press a bar-through the monitoring and management of reward contingencies" (5). Skinner believed that "no innate information was necessary...and language input did not cause language to emerge" (5).
Skinner's theory of operant conditioning translates to a type of associative learning where there is a contingency between the subject's response and the presentation of the reinforcer (8). Applying this theory to learning a language, operant conditioning would allow humans to learn languages at any age, at any time, with a guarantee of reaching fluency. However, there is increasing research that goes against this theory, starting with the Critical Period hypothesis (8), which holds that one can become fluent in a language only if s/he starts learning the language within a window of approximately two to seven years of age (8). Most recently, conflicting research has come in the form of the Native Language Magnet.
The Native Language Magnet supports the theory that the brain is "mapped" or "wired" in a certain way to learn a certain language, and goes against Skinner's idea that mere immersion and conditioning can lead to fluency. "A model reflecting this developmental sequence from universal perception to language-specific perception, called the Native Language Magnet model, proposes that infants' mapping of ambient language warps the acoustic dimensions underlying speech, producing a complex network, or filter, through which language is perceived...once formed, language-specific filters make learning a second language much more difficult because the mapping appropriate for one's primary language is completely different from that required by other languages" (5). Thus, the Native Language Magnet model constrains the neural structure of the brain, "mapping" and "wiring" the mind so heavily that it is difficult to learn another language (5). Present research suggests that language acquisition is less like Skinner's or Chomsky's theory and more like theories such as the Native Language Magnet, where the brain is "mapped" or "wired" (5). Biologically speaking, however, that "mapping" or "wiring" process is really the formation of synapses in the brain (3).
There are two ways in which "synaptic connections" (3) are added to the brain. The first is that an overabundance of synapses forms in the brain, after which a number of synapses are "selectively" lost (3). This formation of synapses runs for different durations in different parts of the brain: in the visual cortex it takes two to three years, whereas in certain parts of the frontal cortex it takes eight to ten years (3). The other way synaptic connections are formed is through the gradual addition of synapses (3). This type of connection forms over the entire life span of a human, for the reason that it is fueled, and essentially driven, by experience (3). Synapses are formed both before and after birth (3); however, it is those synapses formed after birth that are affected by human experience (3).
Consequently, experience is a key factor in the "mapping" or "wiring" of the brain. A 1987 study by Black showed that animals raised in "complex" environments with various stimuli had a greater supply of blood to the brain (3), increasing the amount of oxygen and nutrients supplied to the brain and resulting in an increased capacity of the brain. Thus, those exposed to more experiences showed greater brain development. Additionally, there have been studies showing the benefits animals gain from a constantly changing and stimulating environment (Ferchmin et al., 1978; Rosenzweig and Bennett, 1978; Rosenzweig and Bennett, 1972) (3). As a result, there is not only an "increased capacity" (3) of the brain when it is exposed to experience, but also an overall change in the physical state of the brain when it is presented with various opportunities for learning (3). It has been shown that "Infant perception is altered-literally warped-by experience to enhance language perception. No speaker of any language perceives acoustic reality; in each case, perception is altered in the service of language" (5). Thus, one's experience affects one's language acquisition--one's "mapping" and "wiring" of the brain. Additionally, one's "language experience" (5) results in a type of "mapping" that alters one's perception (5). As a result, experience affects how you learn a language.
In conclusion, it is clear that the answers to language acquisition have yet to be determined; if anything, researching this topic has only raised more questions. If Skinner's theory of operant conditioning can be proven to some extent, then it would mean that those among the lower classes of society, where studying may not be as supported as in the middle classes, may find language acquisition difficult, seeing as the community may act as a form of negative reinforcement. If experience really does play a role in learning a language, then what can be said about the emotions behind those experiences? Bilal Patel, the London resident who speaks English and two South Asian languages, feels that English seems much "colder" and not as "warm" as the South Asian languages that she knows. Could this relate to the fact that her experiences with English may have been in a classroom, within the constraints of school, while her experiences with her South Asian languages may have been with her family? This may or may not relate to Rabindrah's idea of language affecting the way one thinks: if the Native Language Magnet is indeed correct, then that may relate to various thought processes and various behavioral changes. Perhaps Anand Rajoo from Guyana, another participant in the online forum, is right when saying, "I personally feel that language does affect someone's personality. My grandparents came from India to live in Latin America. Although I live in the only English speaking country there and can speak English, Spanish and Hindi fluently, I speak English every day because I don't have a choice. However, when I speak Hindi, my ancestral tongue to my family, an empty vacuum is filled for me because I'll always be Indian. There is definitely a change in personality and I feel that mankind is more beautiful because of diversity. So I hope the world will never speak one language" (1).


References


1)Does the Language We Speak Affect the Way We Think?,(BBC News Forum)

2)Why, How, and When Should My Child Learn a Second Language?,A Parents Guide

3)Learners and Learning: Chapter 5-Mind and Brain

4)How Does The Brain Learn Language?

5)Kuhl, Patricia. A New View of Language Acquisition. Proceedings of the National Academy of Sciences of the United States of America. Vol 97, Iss 22: Oct. 24, 2000.

6)Gleitman, Lila. A Human Universal: The Capacity to Learn a Language Modern Philology Vol 90 May 1993

7) Dronkers, Nina. The Pursuit of Brain-Language Relationships. Brain and Language, 2000, 71.


Why Do People Use Psychoactive Drugs?
Name: Lara Kalli
Date: 2003-05-15 10:50:33
Link to this Comment: 5676



Biology 202
2003 Third Web Paper
On Serendip

The purely non-pragmatic – non-medical, non-religious or ritualistic; in other words, the purely recreational – use of psychoactive drugs has been in existence ever since man first discovered the possibility. When questioned, the average user might cite any number of reasons for his drug use: they're fun, they're relaxing, they remove the need for sleep, they afford a path for spiritual awakening or resolution of latent life problems – the list goes on. These reasons, however, all have one thing in common: they stem from a conscious need or desire. Upon examination of the effects of the various types of drugs and of statistics on drug use, however, we will see that psychoactive drug use can be attributed to two subconscious needs or desires that are universal to all drug users and perhaps to humans in general.

The myriad types of psychoactive drugs can be broken down into three primary categories: CNS stimulants, CNS depressants, and psychedelics. Stimulants – including amphetamines, cocaine and caffeine – generally induce in the user feelings of euphoria and increased alertness, often accompanied by decreased appetite, restlessness, and occasionally paranoia. Depressants – including alcohol, tranquilizers and opiates – induce a sensation of relaxation and well-being, along with physical sedation and sometimes sleepiness. These two categories also have a high potential for tolerance and both psychological and physical addiction. The third category, psychedelics – including LSD, marijuana, and Ecstasy, to name a few – is a much more diverse category in terms of the effects of the substances. In general, psychedelic drugs can be said to be those drugs that induce in the user distortions in perception and awareness that are similar to psychosis; hallucinations, false beliefs and so on. Of these substances, the only one documented to have any real potential for tolerance and psychological (but not physical) addiction is Ecstasy (though it is not, in a sense, a true psychedelic, as it is also chemically related to amphetamines). (1)

The question "Why do people use psychoactive drugs?" is a very socially relevant one; drug use is widespread and pervasive amongst people of all types. Psychiatrists have now classified drug addiction and abuse as a psychological as well as physical disease (2). The Substance Abuse and Mental Health Services Administration, a division of the United States Department of Health and Human Services, published a study in 2001 indicating that more than half of Americans over the age of twelve were current consumers of alcohol (3). Monitoring the Future, "an ongoing study of the behaviors, attitudes, and values of American secondary school students, college students, and young adults," released data at the end of 2002 showing that, by the senior year of high school, 53% of students had used some form of illicit drug. Of those represented by that percentage, 47.8% had used marijuana, 12% had used some form of hallucinogen, 4.3% had used Ecstasy, 3.6% had used cocaine and 1.6% had used heroin. (4) These percentages directly represent only about 50,000 students; however, those 50,000 were selected completely at random, with no geographical, racial, religious or socioeconomic bias, and thus can be considered indicative of a larger trend. In other words: these percentages indicate that drug use, regardless of a drug's legal status, is a very real aspect of all parts of our society.

The fact that the use of psychoactive substances is such a widespread phenomenon indicates that there must be some common cause behind it: that individual motivations for drug use can be reduced to some single attribute or set of attributes that is universal to users and perhaps humans in general. Further evidence of this is the fact that, despite the infinitely varied nature of their effects, psychoactive substances all have one thing in common: in whatever way, their consumption results in an alteration of reality for the user. Herein lie the aforementioned subconscious desires that drug use fulfills: first, the desire to be in control of one's reality; second, the desire not to be in control. While these ideas at first seem completely contradictory, we shall see that in fact they go hand in hand.

The desire to be in control of one's reality is not solely an attribute of drug users; it is an aspect of human nature in general that is found underlying almost all human action. This concept can thus be explained in terms of almost anything, but perhaps the best example of it is the practice of scientific inquiry. Regardless of whether one chooses to approach science from the realist perspective – that it is the use of empirical observation and experimentation to determine the "way the world is" – or from the pragmatic perspective – that it is the attempt, through empirical observation and experimentation, to develop ideas that are useful in terms of verifying the past and predicting the future – the fact remains that science is, in both these senses, an attempt to quantify that which we perceive to be external reality (5). It is the gradual and progressive attempt to reduce uncertainty about the world. As we progress to a greater and greater amount of certainty about the nature of the world around us, we progress toward a state in which there will be no event that we cannot predict, no phenomenon that we cannot explain. This is the essence of the desire to be in control: if we understand our perceived external reality, then we have mastered its every nuance.

So how does this idea relate to an explanation for drug use? First and foremost it must be stated that drug use is a conscious decision (excluding those cases where substance dependence is completely out of the individual's control, e.g. a child born addicted to heroin because his mother was an addict while she was pregnant). This fact, combined with the knowledge that all drugs will in some way alter reality, makes the explanation clear. The willful consumption of a drug is the user's choice to alter his perceived reality. In consuming a psychoactive substance, the individual is acknowledging the fact that alterations in reality do not always have to be the result of external, uncontrollable factors; they can, in fact, be the result of as simple a voluntary, conscious action as swallowing a pill. In this sense, the decision to use drugs is the ultimate fulfillment of the human need to be in control of perceived reality.

Directly linked to this idea is the notion that people who choose to use drugs must also be understood as having a strong desire not to be in control. When an individual chooses to consume a psychoactive substance, he is essentially attempting to relegate the cause of his thoughts and perceptions to the substance he has consumed. The nature of perceived reality, in this case, is only attributable to the individual insofar as the effects of the drug vary from person to person; primarily, it is attributable to the effects of the drug. One might say that the decision to use a drug can be equated to a contractual agreement that the individual makes with himself: "In consuming this substance, I agree to allow the causes of my thoughts and perceptions to be reducible to an entity other than myself for as long as the substance lasts." The aforementioned desire to be in control coincides with a desire not to be in control: it is the controlled, willful decision to abdicate that control.

While it is certainly the case that there are many social and psychological factors that contribute to an individual's decision to use drugs, it seems to me that all of these factors can be reduced to the pervasive human desire to be in control of perceived reality, in conjunction with the individual's desire not to be in control. What, then, can be said in terms of these arguments about the brain and behavior? It seems that, given the nature of the effects of psychoactive substances on perception, the fact that we cannot transcend our perception, and the fact that drug users seem to base their choice to use on those facts, I am left with the supposition I held at the outset of this class: that the brain = behavior, and nothing more.

References

1. The Vaults of Erowid
2. Internet Mental Health, an extensive resource for information about mental illnesses, their treatment, medications, and further research
3. source for the Substance Abuse and Mental Health Services study information
4. Monitoring the Future's study data
5. Philosophy of Science Forum (Bryn Mawr College, PHIL310)


Free Will: Does it exist and, if so, where?
Name: Clare Smig
Date: 2003-05-15 11:48:29
Link to this Comment: 5678


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

During our class discussions about mental illness, the I-function, deception of our brain, and numerous other topics, many students brought up the concept of free will. This paper will attempt to define free will--as its definition is crucial in determining its existence--and ascertain how it can be accounted for in the human brain. I will not discuss the existence of God or the soul, or whether free will can be accounted for or dismissed by either of these two concepts. Instead, I will present various arguments on whether or not free will can exist in the brain and assess these arguments based on logical reasoning and scientific evidence. Finally, assuming that we are our individual brains that allow us to think and behave as individuals (another argument not covered in this paper) and that the evidence and reasoning I provide supports free will in the brain, I assert that we are capable of exerting free will.

The debate for and against the existence of free will has been argued from many standpoints including mental illness, addiction, moral and societal responsibility, neurobiology, religion, and philosophy. Despite the variety of standpoints, most discussions work within the framework of two definitions: free will as the ability to make a choice between two or more genuinely open possibilities without force, or free will as the ability of humans to act as moral beings not subject to physically or divinely imposed necessity ((1), (2)). The former definition, however, does not specify who or what has the choice and who or what exhibits force. The latter connects free will with morality and, as I will argue later in the paper, this is not necessarily true. Moreover, the latter definition involves the issue of "divine" influence which, as stated earlier, I will not discuss. Therefore, for this paper, I define free will as our brain's ability to make a choice between two or more options without the constraint of forces outside of the human brain. These forces can include, but are not limited to, social, environmental, and "divine" factors outside of the brain.

Occasionally, conditions may limit the options presented to the brain, such as physical or mental illness, drugs, or our sense of perception. However, this does not mean that the brain is necessarily unable to exert free will. Instead, the brain simply has limited options and/or cannot act on the choice it has made. For example, a person who has broken an arm may want to move it but cannot. Mental illness is another condition that alters options. However, whether chemical imbalances cause a person to be mentally ill or that person has some sort of choice in the matter ((3), (4)), the brain has either chosen this pathway through free will or, if not, the person still has the ability to want to change the condition. In the case of chemical imbalances, however, the person is unable to ((5)). For example, someone who is depressed may want to feel better, but is unable to do so alone ((6)). These factors alter the options presented to the brain and limit its ability to carry out the chosen option, but do not limit its ability to choose the option. Therefore, these conditions do not deny free will.

Most of the scientific evidence, however, is not in support of free will. One experiment required subjects to record the time they intended to move and the time they were aware of moving. At the same time, an EEG (electroencephalogram) and movement-related cortical potentials (MRCPs) were recorded to determine the timing of brain activity. The results showed that there is some unconscious activity in the brain prior to the subject being aware (conscious) that a decision to act has been initiated in the brain. These results have been replicated several times ((7), (2)). However, this experiment only involves simple muscle movements such as moving a finger or raising a hand. That we initiate these movements unconsciously is not a surprise to most people. For example, as I type this paper I am not thinking of what finger I want to move and where; I merely do it unconsciously because it is a simple, trained muscle movement and, evolutionarily, it would be a waste of time and energy to think about it consciously. This experiment does not consider more complex tasks, such as doing a cartwheel or ice skating for the first time. These movements would involve conscious decision making about how one will act.

In addition to muscle movement, other tasks such as deciding one's religious beliefs or contemplating a novel are thought processes that one is conscious of as they occur. Moreover, even if there are always unconscious events that occur before or without consciousness, you still have the ability to exert free will while the action or thought is occurring, or after it has occurred, to decide to stop or change the action or thought. In fact, multiple experiments have shown that there is about a quarter of a second between conscious awareness of an impending action and its actual occurrence, which provides enough time to permit or cancel the intention ((15)). Although the article only discusses the data from one experiment, and another scientist claims that there is ambiguity in determining when a conscious intention arises, the experimental data clearly show an electrical check on the unconscious involved in muscle flexion ((15)). Furthermore, it makes sense that the conscious part of our brain has some check on the unconscious part, because we do not often find ourselves doing things that we would not normally intend to do.
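
The timing argument above can be made concrete with a little arithmetic. The sketch below is a minimal Python illustration, not data from the cited papers: the quarter-second awareness-to-action gap comes from the text, while the 550 ms onset of unconscious readiness activity is an assumption drawn from commonly cited reports of these flexion experiments.

readiness_potential_ms = -550   # unconscious readiness activity begins (assumed figure)
conscious_awareness_ms = -250   # subject reports awareness (quarter second, from the text)
action_ms = 0                   # the muscle actually flexes

unconscious_lead_ms = conscious_awareness_ms - readiness_potential_ms
veto_window_ms = action_ms - conscious_awareness_ms

print(f"unconscious activity precedes awareness by {unconscious_lead_ms} ms")
print(f"awareness precedes the act by {veto_window_ms} ms, time enough to veto")

On this reading, the unconscious initiates the movement, but awareness still arrives early enough for the conscious check described above.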

Finally, these experiments assume that consciousness and free will are the same thing. Even though the subjects were not aware of the decision until after the activity of making the decision in the brain was recorded, one area of the brain still exerted free will by making a choice. The area of the brain involving consciousness, however, was not activated until afterward. In class, we referred to this area of the brain as the "I-function" ((8)). I will not discuss this aspect of the brain; however, this muscle movement experiment clearly shows that it can be separate from the initial steps in the decision-making process of the brain ((9)). This separation would also explain why you may decide to stop eating, but then find your hand moving towards your food. In this case, the choice your brain has made has either been ignored or not received by the motor circuits initiating your hand movement. Once you become aware of your hand moving, however, you have the ability to stop it. Other experiments, in which subjects respond to motor stimuli that they do not report perceiving, or in which they perceive the stimuli after responding to them ((8)), also demonstrate separations between areas of the brain. As one neurologist states, "The self is not the entity that governs brain processes, but the outcome of those processes" ((9)). Therefore, even though you may decide to stop eating and then find your hand moving towards your food, this is merely a disconnection between the brain's concept of "self" and its decision-making section. This does not mean that consciousness is not involved in free will; it simply means that consciousness may not be activated until shortly after free will is exerted. Although this may raise some complications, it does not deny free will.

Many argue that free will exists because without it we would not have moral responsibility ((10)). Those against free will, however, argue that moral responsibility does not come from free will and that, in fact, we lose nothing if we do not have free will ((11)). If free will is our brain's ability to choose between two or more options without the influence of forces outside of the brain, I agree that moral responsibility does not come from free will. Moral responsibility is a social construct. Different societies have different moral standards. Our brains learn through socializing and past experience what these moral standards are, and that in order to thrive in a stable society we must consider them in exerting free will ((11)). Along with moral standards, our brain holds other recollections and knowledge about life and the environment that it uses when exerting free will over current decisions. This knowledge is within the brain and, therefore, is not an outside force exerting pressure on how a decision should be made.

On the other side of the argument, there are people who claim free will must be something in the brain that is not influenced by present or past circumstances. However, this would be an evolutionary disadvantage. We are a part of nature and a product of evolution. The laws of nature limit our free will. We cannot "will" ourselves to disappear ((12)). Therefore, if we were to ignore the laws of nature and all that we have learned and experienced when making a decision, this type of free will would have led to a failed species as a result of decision-making without common sense. Even with the type of free will found in the brain that is subject to the knowledge and experience stored in the brain, we still retain our individuality because of our individual genes and experience. Moreover, we remain rational beings capable of, as stated before, moral responsibility, because we know we must act this way to thrive in society. We are our brains and, therefore, since our free will is found in our brains, we can still be held accountable for our wrongdoings.

But to what part of the brain can we attribute this free will? If, according to my definition, free will is the ability to choose between two or more options without outside force, free will can be found in the parts of the brain responsible for decision-making. This is where we make choices and determine, based on past experience and knowledge in our brain, how we should act, speak, or think. One experiment used functional magnetic resonance imaging (fMRI) to analyze brain activity in people who were asked to ponder a range of moral dilemmas. The scanning consistently showed a greater level of activation in emotion-related brain areas during the personal moral questions than during the impersonal moral or non-moral questions. During the latter types of questions, areas associated with working memory, which has been linked to ordinary manipulation of information, were more active ((14)). Similar results were found in an experiment that studied a brain structure called the ventromedial prefrontal cortex. The study involved asking two groups of people to play a gambling game. Players were asked to choose cards from different decks to try to maximize their winnings. People with undamaged brains gradually started to play more from the less risky deck, and showed emotional stress whenever they took cards from the risky deck, even though they couldn't say why. People with damage to the ventromedial prefrontal cortex continued to bet from either deck, making bad decisions, and showed no emotional response as they played ((13)). From these two experiments and others ((16), (17)), it is justifiable to conclude that certain decisions may be made in the ventromedial prefrontal cortex and may be emotionally driven. This makes sense, because if we only made decisions based on logic we would have difficulty making ourselves and other people happy, a vital part of thriving in our society. Another article argues that emotion-related areas of the brain, such as the ventromedial prefrontal cortex, are not associated with decision making or emotional response. However, no experimental evidence is provided for these statements ((17)). Moreover, the card experiment showed that even seemingly unemotional tasks recruit neurons in emotional areas.
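
The logic of the gambling game can be sketched numerically. The payoff structure below is hypothetical (the studies cited above are not the source of these amounts), but it shows why a deck can look attractive card by card while being a losing proposition overall, which is the pattern the undamaged players' emotional stress seemed to track before they could articulate it.

import random

random.seed(0)

def draw_risky():
    # Hypothetical: big payoff on half the draws, bigger loss on the rest.
    return 100 if random.random() < 0.5 else -250

def draw_safe():
    # Hypothetical: modest payoff most of the time, small occasional loss.
    return 50 if random.random() < 0.9 else -50

trials = 10000
risky_avg = sum(draw_risky() for _ in range(trials)) / trials
safe_avg = sum(draw_safe() for _ in range(trials)) / trials

print(f"average per card, risky deck: {risky_avg:+.1f}")
print(f"average per card, safe deck:  {safe_avg:+.1f}")

With these invented numbers the risky deck loses about 75 units per card on average while the safe deck gains about 40, even though any single risky card is more likely than not to pay 100.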

Another study, on monkeys, stated that scientists were able to observe the complete decision-making process based on the activities of neurons in a brain region called the medial premotor cortex (MPC). This experiment involved determining where and how neurons were activated in the MPC when monkeys received stimuli at different voltages. However, based on the previous experiments on decision making, the MPC cannot account for the entire decision-making process of all decisions. Moreover, the experimenters did not look at neurons firing in other areas of the brain during the experiment; therefore, although they claim to have been able to attribute all of the decision making to the MPC neuron patterns, their data do not rule out that other neurons were also vital in this process ((18)).

From experience, it is clear that decisions take into account emotion, logical reasoning, memory and experience, compiled knowledge, and our own capabilities, among other factors. Therefore, the brain will call on some or all of the areas containing this information to make a decision. Knowing how much each area contributes and where exactly each is located is not necessary for demonstrating the existence of free will in the brain. Free will exists because you, as your brain, are able to decide what you want to do, according to or in spite of the information your brain considers. Therefore, you, as your brain, are able to exert free will. Although this paper has not touched on all of the arguments surrounding the issue of free will in the brain, I have represented the most significant and widely used arguments.


References

1) Notes (Part 1 and 4) on Free Will and Determinism- Prof. Norman Swartz

2) Cotton, Paul. "Neurophysiology, philosophy on collision course?" The Journal of the American Medical Association. March 1993: 1485-1586.

3) Depression is a Choice

4) Reasonable Accommodations

5) The 'Chemical Imbalance' in Mental Health Problems

6) Clinical Depression.co.uk

7) Physiology of Free Will

8) Grobstein, Paul. Department of Biology, Bryn Mawr College. Notes: February 6, 2003.

9) Malik, Kenan. "Science has stolen our soul: according to Daniel Dennett, we are evolved machines..." Book review. New Statesman. Feb 2003: 48-49.

10) Horgan

11) The Natural Selection of Autonomy

12) Mind and Body: Free Will and Causality

13) October 10, 1997, Hour 2: Decision Making

14) Brain imaging study sheds light on moral decision-making

15) Bower, Bruce. "Who's the boss? There is evidence that the conscious mind selects from among possible acts developed by the unconscious." Science News. April 1986: 266-267.

16) Vogel, Gretchen. "Scientists probe feelings behind decision-making." Science. Feb 1997: 1269.

17) B.B., "Emotional judgments seek respect." Science News. July 1999: 59.

18) Chicurel, Marina. "Neurons weigh options, come to a decision." Science. March 2002: 1995-1996.


Post-Abortion Syndrome: Myth or Mental Illness?
Name: Cordelia S
Date: 2003-05-15 11:56:16
Link to this Comment: 5679


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip


"This is an article about a medical syndrome that does not exist" (1). This is how Dr. Nada Stotland opened a paper for the Journal of the American Medical Association in 1992 on post-abortion syndrome (PAS), and this is how I will open my paper on the very same nonexistent disorder. Post-abortion syndrome - excessive guilt, depression, anxiety or psychosis after an abortion - has been at the center of key arguments in the US government over abortion laws in the past two decades. However, pro-life and pro-choice groups do not even agree on whether the disorder exists. I decided to use PAS as a model to investigate what actually defines a syndrome, how politics can affect the recognition of a syndrome, and how scientific research is manipulated to meet political agendas. I knew when I began this research that PAS is cited as a problem predominantly by pro-life groups, and suspected that the syndrome might be a pawn of politics. However, I could imagine valid scientific reasons for its existence; after all, hormonal changes that occur naturally after childbirth are known to cause depression and even psychosis (2). When researching PAS, though, I found nothing about hormones or about the brain. Instead, I found widespread lies, poorly done and misleading research, and emotionally charged propaganda designed to convince women that they are suffering from a disease which I am now sure does not exist.


Post-abortion syndrome was brought to the attention of the medical community in the mid 1970s, very soon after the Supreme Court's decision on Roe v. Wade in 1973 (3). The term "post-abortion syndrome" was coined by famous pro-life advocates Vincent Rue and Anne Speckhard, who defined the syndrome as a specific type of Posttraumatic Stress Disorder (PTSD) resulting from both legal and illegal abortions (4). PTSD is defined by the American Psychiatric Association's DSM-IV (Diagnostic and Statistical Manual of Mental Disorders) as horror, traumatic and intense fear, and helplessness lasting more than one month as a result of experiencing an event involving actual or threatened death or serious harm. In older versions of the DSM, the traumatic event had to be "an event outside the range of usual human experience that would be markedly distressing to almost anyone" (3). PTSD was a favorite media topic in the mid 1970s and 1980s, as it was first identified in Vietnam veterans. It became almost expected that people undergoing traumatic experiences would later have PTSD; thus, it made perfect sense for anti-abortion groups to begin looking for symptoms of PTSD in abortion patients (3).


Anti-abortion lobbyists persuaded President Reagan to undertake an investigation into the psychological consequences of abortion in the 1980s. According to a spokesperson for the American Psychological Association (APA), which undertook eight years of research on PAS, the anti-abortion movement in the United States had decided to follow the successful path of the anti-smoking movement, which had recently made large legislative gains by examining the effect of smoking on public health (3). However, even Reagan's Surgeon General, the adamantly anti-abortion C. Everett Koop, was unable to find evidence of PAS. In 1989, he announced to the president that the psychological consequences of induced abortion were minuscule (5). The APA has continued to assert the nonexistence of PAS, and has refused to include abortion as a possible PTSD stressor in the DSMs (6).


Since the federal government and the APA have thus both publicly stated that PAS doesn't exist, one would imagine that its existence would be sufficiently invalidated. However, several states require that a woman receive counseling on PAS before having an abortion (6), commercials warning women of the dangers of PAS play constantly on television (7), and the vast majority of pro-life websites provide evidence that makes it seem as if PAS does exist. A frequently cited research project done in Finland, for example, found that the suicide rate associated with birth was much lower than the suicide rate associated with abortion (8). By examining birth and death records, this study found that the suicide rate for the first year after the end of pregnancy was 5.9 per 100,000 live births and 34.7 per 100,000 abortions. These may at first glance seem like shocking statistics; however, wanted and unwanted births were not separated in this study. A healthy, economically prepared mother who wants her newborn child is significantly less likely to kill herself than a mother who did not plan for a pregnancy and who is uneducated, poor, unmarried, young, and unemployed. This study shows that among 15-19 year olds, where births would be more likely to be unwanted than in the rest of the reproducing population, the suicide rates associated with birth and abortion were almost exactly the same.
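
The statistical trap here is worth making explicit. The sketch below uses invented subgroup numbers (the Finnish study does not publish its data this way) to show how pooling wanted and unwanted pregnancies can produce a large gap between birth and abortion rates even if, within each category, the two outcomes carry identical suicide rates.

RATE = {"wanted": 5, "unwanted": 35}   # hypothetical suicides per 100,000 women

births    = {"wanted": 95000, "unwanted": 5000}    # births: mostly wanted
abortions = {"wanted": 1000,  "unwanted": 99000}   # abortions: almost entirely unwanted

def pooled_rate(group):
    # Rate per 100,000 when wanted and unwanted pregnancies are lumped together.
    suicides = sum(n * RATE[kind] / 100000 for kind, n in group.items())
    return 100000 * suicides / sum(group.values())

print(f"pooled rate after birth:    {pooled_rate(births):.1f} per 100,000")
print(f"pooled rate after abortion: {pooled_rate(abortions):.1f} per 100,000")

With these made-up inputs the pooled comparison prints 6.5 versus 34.7 per 100,000, a gap of the same flavor as the study's, produced entirely by the wanted/unwanted mix rather than by abortion itself.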


I found this type of methodologically unsound research repeatedly in my search for information supporting either side of the PAS argument. Errors included, but were not limited to: failure to separate wanted from unwanted pregnancies; failure to control for age, socioeconomic status, marital status, and other such factors; failure to consider mental state prior to abortion or birth; failure to look at birth as a comparison; failure to distinguish between wanted and unwanted abortions (unwanted as in therapeutic abortions of wanted pregnancies); and failure to distinguish between first trimester abortions and later term abortions. Noted by former abortion provider Suzanne Poppema, but ignored by virtually every research project, is the fact that "after all, the picketers screaming 'murderer' at women entering clinics are significant stress-inducers, too" (6). Many religious and pro-life websites cited research done by anti-abortion activist David Reardon, but neglected to mention that all of his subjects were members of a support group called Women Exploited by Abortion (7). This study found that "aborted women", as Reardon calls them, are nine times as likely as other women to commit suicide, and are nearly 20 times as likely to see a psychologist or psychiatrist. However, out of Reardon's 252 subjects, 65% also felt that their lives were very out of control at the time of their abortion, 47% reported physical complications from their abortions (as opposed to 0.06% of abortion patients reporting physical complications in Michigan in 1995), 97% felt at the time of interview that the fetus was a human child when they aborted it, and 19% were more than 13 weeks into their pregnancy at the time of their abortion, more than twice today's national average (7).


Many abortion providers and psychiatrists feel that arguing for the existence of PAS detracts from the ability to actually help women who are having a difficult time dealing with their abortion or unwanted pregnancy (3). Groups like Reardon's Elliot Institute and Rachel's Vineyard offer women post-abortion counseling which involves "naming ceremonies for your murdered child" and "masses for the unborn" (4). Though these pro-life organizations argue that PAS is a medical disorder, they never suggest getting medical attention for it. Rather, they suggest speaking out against abortion, writing anti-abortion legislation, or even suing an abortion provider as ways to "become whole again" (3). This demonstrates an obvious abuse of the women seeking help. They are lied to about the existence of a disease, and then used to advertise the agenda of the people who are supposed to be helping them.


Pro-choice websites, physicians, psychiatrists, and all of the scientifically approached research projects that I found when doing my research make it clear that some women do experience negative emotions after abortions. They also argue that some women experience negative responses after losing their jobs, getting the flu, having bad days, etc. These experts believe that the best predictor of a woman's mental state post-abortion is her mental state pre-abortion. In the APA's eight year study of PAS, they found this to be the case for the 5,295 women who were interviewed and tested for depression every year (7). It has also been found that religious women, women who feel that the fetus is a human, women who feel that they have been pressured into having abortions, and women who have had difficult times deciding whether or not to abort are more likely to experience guilt or depression after abortion (5). Interestingly, the APA found that religious Catholic women were more likely than nonreligious women to experience depression after abortion, but were also slightly more depressed on average before the abortion or unintended pregnancy (7).


A study by Dr. Brenda Major found that women were most likely to receive high scores on the Beck Depression Inventory (indicating high levels of anxiety and depression) after their abortion if they had rated their pregnancy as highly meaningful to them (9). She also found that those women who expected to cope well after their abortion had lower scores, and women who spoke freely about their procedure also had lower scores than women who did not. Women who were accompanied to the clinic by their partner or husband tended to be more depressed immediately after their abortion than women who were not, but this difference was not significant in a follow-up study 3 weeks later (9). Dr. Adler found that women who rated their decision to have the abortion as difficult several days prior to their abortion were more likely to have high scores on depression inventories 2-3 months after the abortion than women who rated their decision as less difficult. She also found that women who perceived that they were supported by friends and family in their decision, and women who had fewer internal conflicts over abortion in general, were most likely to be satisfied and relieved after their abortions (9).


Many studies have found that abortion is less emotionally taxing than birth or adoption (5, 7). The APA reported that PAS is rare and less common than emotional upset after birth of either a wanted or an unwanted child (5). Dr. Brewer found the risk of a major psychotic episode to be five to six times greater after childbirth (wanted or unwanted) than after abortion, and also found that fairly serious psychiatric distress was experienced by approximately 20% of women after childbirth (5). Dr. Schwartz reported that psychiatric problems following abortion occurred in only about 2% of women (5). Though there are understandable limitations on research about psychological consequences to the biological mother following adoption, one 1980 study (5) found that in the fifteen years before the Supreme Court's decision on Roe v. Wade, about 65% of parents who had given children up for adoption had initiated a search for their child. In the five years following the decision, this number had dropped to 37%, indicating that the availability of legal abortion had significantly reduced the number of biological parents experiencing trauma after adoption, as "searchers" have been identified as highly emotionally distressed individuals. Overall, many studies have indicated that the well-being of teenage mothers increased significantly after Roe v. Wade, and psychopathology decreased significantly (5).


All of the methodologically sound studies on post-abortion syndrome have led up to the statement of Dr. Nancy Russo that,
"The experience of having an abortion plays a negligible, if any, independent role in women's well-being over time, regardless of race or religion. The major predictor of a woman's well-being after an abortion, regardless of race or religion, is level of well-being before becoming pregnant... Despite a concerted effort to convince the public of the existence of a widespread and severe postabortion trauma, there is no scientific evidence for the existence of such trauma, even though abortion occurs in the highly stressful context of an unwanted pregnancy." (1).


While I searched rigorously for ANY mention of the effect of post-abortion syndrome on the brain, I failed to find even one. One of the only medical explanations given was by a doctor in 1958, who said that "woman's main role here on earth is to conceive, deliver, and raise children . . . when this function is interfered with, we see all sorts of emotional disorders" (3). No one mentions the brain when talking about PAS; in fact, I do not think the words "brain" or "nervous system" appeared in any of the articles I read. How can a psychiatric disorder exist independently of the brain? It cannot. Instead of talking about the brain, PAS believers talk around it, describing feelings and emotions. However, in the words of Dr. Nada Stotland, who opened this paper, "Negative feelings should ALWAYS be distinguished from psychiatric illness" (1). Of course women experience a wide variety of emotion after any response to an unwanted pregnancy; pregnancy, whether wanted or unwanted, is a major life event, and making a decision about how to resolve an unwanted pregnancy can be very stressful. However, the negative emotions of a small minority of women do not justify the invention of an entire mental illness to encompass them. As virtually every study I read said, the predominant emotion felt by most women after an abortion is relief (1, 2, 3, 4, 5, 6, 7, 9).

References

1) Summary of research on PAS and other anti-choice tactics.


2) Why do I have the baby blues? An investigation of postpartum depression, by Claire Albert, student paper on Serendip.


3) The Context for the Development of "Post-Abortion Syndrome", speech given by Ellie Lee.


4) Abortion Psychological Sequelae: the debate and the research, speech given by Ellie Lee and Dr. Anne Gilchrist.


5) Russo, Nancy Felipe. "Psychological Aspects of Unwanted Pregnancy and Its Resolution". Butler, J. Douglas and David F. Walbert, eds. Abortion, Medicine, and the Law. New York: Facts on File, 1992. An amazingly comprehensive book covering all aspects of abortion in today's society; this article covers abortion, adoption and birth.


6) The myth of post-abortion syndrome, Ms. Magazine article discussing flaws in research and false claims of evidence.


7) Post-abortion trauma, religious tolerance organization's fact sheet on PAS.


8) Suicide after pregnancy in Finland 1987-94: register linkage study, by Mika Gissler, Elina Hemminki, and Jouko Lonnqvist. Poorly done study; does not separate wanted and unwanted pregnancies.


9) Psychological Responses after Abortion, 1990 article in Science by such experts as Nancy Adler, Brenda Major, Nancy Russo, Henry David, Susan Roth and Gail Wyatt.


Affecting the Response To Pain
Name: Luz Martin
Date: 2003-05-15 12:27:06
Link to this Comment: 5681


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip


Adults are able to verbalize the intensity of their pain and can help monitor the effectiveness of treatment when there is damage to body tissue. Infants, however, cannot verbalize their experience, and interpreting their pain without verbal communication has led to debate about the feeling of pain itself. Similar questions have been raised about the experience of pain in the fetus, and there has been much emphasis on the safety of treating babies still in the womb through surgery. Many are concerned by the assumption that the fetus is unable to feel pain and that there is therefore no need to take precautions when surgery is considered.

Pain is a method used by the body to interpret the outside world. The nervous system is able to use physical cues to help protect the body from any harmful tissue damage. The nervous system keeps track of sensations through sensory neurons. Our skin is covered with sensory neurons that are responsible for acquiring information about the body's surroundings (6). Some of the nerve endings involved in the pain-sensing process are called nociceptors (6). Most of these nociceptors and other sensory receptors come from an area near the spinal cord (6). The information acquired by the sensory neurons is sent through intermediate neurons and is then passed on to the motor neurons that are involved in a physical movement (6). The information gathered by the sensory neurons is also passed on to the brain (6). In the brain, the information is interpreted and behavioral and emotional reactions are created (6). When a stimulus that the body is not expecting is sensed, there is a misalignment between the nervous system's expectations and the actual sensation. This unexpected sensation is interpreted as pain by the nervous system. This interpretation by the brain is also affected by past experiences of harm to the body.

There has been much debate about the experience of pain and the fetus. According to the International Association for the Study of Pain, the experience of pain is described as a sensory or emotional interpretation produced when there is the potential or actual occurrence of tissue damage (2). With this definition, it is argued that the fetus is unable to feel pain and that there is therefore no need to take the same precautions when treating a fetus as when treating a child or an adult.

There has been a greater interest in the early years of development with the idea in mind of finding clues about the experience of pain. It has been noted that a newborn develops sensory nerve cells that have a greater response rate than an adult's (4). The greater response rate results in sensory nerve cells that are more sensitive to a stimulus (4). The spinal response to a stimulus is also increased and lasts for a longer period of time when compared with an adult (4). These sensitive nerve cells cover a larger portion of a newborn's skin than they do in adults (4). The areas they monitor are called receptive fields (4). The nervous system keeps track of where a stimulus is received by using the receptive fields to help monitor the body's condition (4). With larger receptive fields, babies are unable to pinpoint the exact location of a stimulus (4). This altered version of the receptive field leads to a less accurate picture of the body: the nervous system cannot narrow down the possible location of a stimulus when each receptive field spreads over a larger area of skin.
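
The link between receptive-field size and localization can be put in simple numbers. In the toy model below, the field widths are invented for illustration (they are not measurements from the cited work): the nervous system knows only which field fired, so its best guess is the field's center, and the expected error grows in proportion to the field's width.

def expected_error_cm(field_width_cm):
    # For a stimulus landing uniformly at random within a field, guessing
    # the field's center gives a mean error of one quarter of its width.
    return field_width_cm / 4

for label, width_cm in [("adult-like field", 1.0), ("newborn-like field", 3.0)]:
    print(f"{label}: {width_cm} cm wide -> about {expected_error_cm(width_cm):.2f} cm mean error")

Tripling the field width triples the expected localization error, which is the intuition behind the newborn's blurrier map of the body.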

Since newborns have very sensitive sensory nerves, the same response is produced to any stimulus without regard to its intensity (4). A newborn may react in the same way to a pinch as to a soft touch (4). This has created some difficulty in understanding the experience of pain in newborns. The newborn's sensory nerves seem unreliable if the nervous system responds without regard to the intensity of the stimulus. The newborn will respond to non-harmful experiences as if they were potentially harmful (4). The response generated by the newborn's nervous system therefore does not help us determine whether the baby experiences pain. The intensity of the pain is expected to correspond to the type of stimulus. In this case the developing nervous system is on the lookout for anything that may cause harm to body tissue, without the need for prior experience to help determine how harmful the stimulus is.

Questions have been raised about the level of sensation that the fetus itself undergoes when surgery is used to address abnormalities in the fetus (1). Such surgery involves opening the mother's uterus to perform corrections on the fetus (1). Once corrected, the fetus is returned to the mother's uterus to complete the normal term of development (1). The use of fetal surgery is a way of increasing the survival rate of the fetus after birth (1). Left untreated, the affected organs would inevitably have prevented the baby from living after birth (1): the threatening abnormalities may produce effects such as respiratory failure, neurological damage, and heart failure (1).

The fetus is able to respond with reflex motions as early as 7.5 to 14 weeks (2). Although the fetus is able to respond with a reflex motion, the fetus needs the cortex as well as memory to experience pain (2). At 16-35 weeks the fetus displays patterns of reflexes, but the spinal cord is not yet fully developed (2). Usually the responses observed are exaggerated (2). The fetus displays receptor systems that have not yet matured (2). With memory, the fetus would be able to interpret the sensations as pain due to past experiences. The memories of past experiences would then lead to anxiety (2). Lloyd-Thomas and Fitzgerald suggest that the exaggerated response helps the fetus react to stimuli that it is not yet able to synthesize and respond to directly (2). Without memory, the fetus is unable to interpret the stimulus as pain, but the fetal nervous system does respond with the purpose of protecting tissue from damage (2).

Although the fetus and the newborn may not experience the feeling of pain, there has been much interest in the consequences of altering the developing nervous system. This issue has been raised due to the observation of long-term consequences for the body's pain systems after major tissue damage has occurred. An injury experienced early in development may affect the response to pain in childhood as well as adulthood (4).

It has been observed in adults that when a sensory nerve has been injured, the nociceptive system, or pain system, is altered (5). In normal situations, the pain experienced should only last for a short period of time. The pain normally goes away, signaling that the tissue damage has been repaired by the body (5). In cases where there is damage to tissue, there may be damage to the nociceptor itself. Many times the sensory neuron, or nociceptor, is damaged as a result of harm to the skin. This damaged nociceptor is no longer considered reliable by the nervous system. Without the ability to take in information from the environment, the damaged neuron is no longer incorporated into the nervous system. The pain system ignores the damaged nociceptor's signals of pain sent to the brain (4).

The only way that pain is sensed in the region where the injured nociceptor is located is by having a neighboring nerve cell that is still functional monitor the area (4). This unharmed sensory neuron becomes responsible for the area once monitored by the damaged sensory neuron. With this new configuration, the functional sensory neuron is monitoring a much larger area than it did before. Along with monitoring a larger area, the unharmed neuron protects the once-damaged area with a greater sensitivity to stimulus. This area continues to be on pain alert (4). The skin also continues to be sensitive to touch even after the tissue damage has been corrected (4).

Some suggest that because the fetus may not necessarily feel pain, the sensory neurons are in danger of malfunctioning, with serious consequences later in life (4). The neurons may be permanently altered if surgery is performed without taking precautions to prevent damage from occurring in the sensory neurons (4). Disabling the nociceptors may result in the spreading of nerve terminals in healthy sensors, thereby altering the areas covered by the sensory nerve (4). This altered region also creates an altered nervous-system map of the body. This leads to a larger receptive field similar to the one observed in newborns. With the larger receptive field, the fetus may develop a nervous system that is overly sensitive as well as unable to pinpoint the location of a stimulus. This change in the pain system may persist into childhood. Precautions must be taken to allow only the most minimal alteration to the nervous system when performing surgery.

The nervous system is left with an altered picture of the area the receptive field is monitoring (4). This leads to a less accurate picture of the specific area the pain is coming from (4). Oversensitive areas may develop as a result of this type of alteration (4). The area may be triggered by damage and remain sensitive even after the damage is gone (4). Oversensitive areas may ultimately cause agitation and discomfort later in life. This sensitivity remains intact over the years and seems especially unnecessary when the damage is due to surgery and might have been prevented. The oversensitive pain system may be avoided once it is acknowledged that the feeling of pain is only a small part of how the body responds to damaging stimuli.

The sensation of pain is a system that incorporates physical sensation as well as past experiences. The experience of pain through emotions may affect how well a person can function in daily life. Considering ways to prevent pain receptor damage in developing nervous systems may help avoid future pain. There is no better time to catch a problem than before it begins.


References

1) Annual Reviews Medicine, Fetal Surgery. Annual Reviews Medicine; 1995, by Flake, Alan W. and Michael R. Harrison.

2) British Medical Journal, For Debate: Reflex responses do not necessarily signify pain. British Medical Journal; 1996 (28 September), by Lloyd-Thomas, Adrian R. and Maria Fitzgerald.

3) The New England Journal of Medicine, Pain and its Effect in the Human Neonate and Fetus. The New England Journal of Medicine, Volume 317, Number 21: Pages 1321-1329, 19 November 1987. By K.J.S. Anand, M.B.B.S., D.Phil., and P.R. Hickey, M.D.

4) Medical Research Council News, The Birth of Pain. MRC News (London) Summer 1998: 20-23.

5) The Richeimer Pain Medical Group, Understanding Nociceptive & Neuropathic Pain. From The Richeimer Pain Medical Group, December 2000.

6) Association of the British Pharmaceutical Industry, The Anatomy of Pain


The Chinese Room Argument and Neurobiology
Name: Andy Green
Date: 2003-05-15 15:50:01
Link to this Comment: 5686


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

The study of artificial intelligence has much to share with neurobiology. Perhaps the best way to understand a mind, as Frank Dretske suggests, is to build one. (1) The understanding of what makes thinking possible in physical systems is a question that is most often posed to projects of artificial intelligence, but is just as relevant to understanding neurobiologically how thought occurs in physical brains. The question of the plausibility of thinking machines is tied deeply to the question of whether the mind is a subjective property of the brain which cannot be reduced to physical terms. An examination of some historical issues of artificial intelligence will thus be very useful here with regard to neurobiological issues.

The original question, "Can computers think?" was answered in a certain regard by Alan Turing in 1950. The Turing Test was designed as a threshold above which we ought to grant computers the ability of thought. The test is as follows: A questioner communicates by typing and receiving typed responses with two entities in separate rooms, neither of which he can identify. One is a computer, the other is a person. The person attempts to reveal to the questioner that he is not a computer. The computer pretends to be the person such that the questioner cannot differentiate between the two. If the questioner cannot determine which is the computer and which the person within some arbitrary time limit, say one hour, then we must grant the computer the ability to think. The Turing Test is, admittedly, not at all precise in delineating the minimum criteria for thought, but it works as a practical method of determining a point at which a computer's communication resembles a human's enough that it deserves "thinking" status. (2)
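
The protocol itself is simple enough to write down. The sketch below is a bare Python skeleton of the imitation game; the two respondent functions are placeholders of my own invention, standing in for a real person and a real candidate program.

import random

def person(question):
    # Placeholder for the human respondent typing replies.
    return "Ask me something only a person could answer."

def candidate_program(question):
    # Placeholder for the machine trying to pass as the person.
    return "Ask me something only a person could answer."

def imitation_game(questions):
    # Shuffle so the questioner cannot know which room holds the machine.
    room_a, room_b = random.sample([person, candidate_program], 2)
    # The questioner sees only the typed replies from the two hidden rooms.
    return [(q, room_a(q), room_b(q)) for q in questions]

for exchange in imitation_game(["What does coffee taste like to you?"]):
    print(exchange)

If, over the whole transcript, the questioner's guess at which room holds the machine is no better than chance, the machine crosses Turing's threshold.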

The Turing Test was attacked at its foundation by John Searle in 1980 with the famous and highly debated "Chinese Room Argument." The argument is as follows: Suppose we have a room, inside of which is a monolingual American. Through a hole in one side of the room, written questions in Chinese are passed in. Now the American is also in possession of a very thick manual which, without ever giving a definition of any of the Chinese words, tells the reader in great detail how, for any given series of characters forming a question, to arrange Chinese characters such that a coherent answer is produced. Searle's point is that a well-functioning "Chinese Room" would pass the Turing Test, but that the American inside would never actually understand Chinese. What the Chinese Room lacks, he says, and what all the computers that the Chinese Room models must lack as well, therefore, is meaning, or intentionality. The American understands syntax, but lacks semantics: his sentences are "about" nothing at all.(3)
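
Searle's manual is, in computing terms, nothing more than a lookup table over uninterpreted symbols. A toy version makes the point vivid; the entries below are invented, and a convincing room would need vastly more of them, but nothing in the program ever touches what the characters mean.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "当然会。",
}

def chinese_room(question):
    # Pure syntax: match an input pattern, emit an output pattern.
    # Nothing here is "about" anything, which is exactly Searle's point.
    return RULEBOOK.get(question, "请再说一遍。")

print(chinese_room("你好吗？"))

The program answers fluently within its table, yet there is no place in it where understanding could reside, any more than in the American with his manual.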

For such a rhetorically appealing and simple thought experiment, Searle's Chinese Room raises a surprising number of objections and replies. For most of these objections, however, Searle has a successful reply with which to save his argument. Some of those objections and Searle's respective rebuttals are as follows:

1. The Systems Reply: With perhaps the most obvious objection to the argument as stated, opponents reply that it does not matter that the American himself is incapable of understanding the Chinese characters, because it is the room as a whole which is at issue, and the room as a whole does possess intentionality. To this Searle responds that the system as a whole does not in fact possess intrinsic intentionality. To prove his point, he says that the Chinese Room can be reduced to the American himself, where the American has memorized all the syntactical rules for organizing the characters. Although the rules he follows, whether they be in a book or in his head, contain intentional symbols, he himself does not participate in this intentionality. Thus the intentionality is derived, not intrinsic.

2. The Robot Reply: Consider a computer inside a robot with cameras and sensory technology as well as motor skills. Wouldn't that computer be able to experience real world objects and thus make connections with those objects and thus have representations attached to its syntax? Searle again responds with a slight adaptation of the Chinese Room. Suppose the Room is placed within said robot. All that changes is that the rules the American uses to arrange characters are derived from cameras and sensory data instead of from a pre-written manual. The American still has no access to any intentionality.

3. The Brain Simulator Reply: This objection posits a computer which models exactly the neuron firings of a Chinese person who is actually understanding and responding to the questions in Chinese. Although such a computer is surely incredibly complicated, we must admit that it understands Chinese as well as any brain. To this apparent knock-down objection Searle once again has a modification of the room: Suppose that instead of arranging symbols in response to the Room's input, the American instead has a complex set of instructions of valves to turn on and off, which control water pipes that directly simulate the neuronal workings of a Chinese brain. Still, the American does not understand Chinese, and intentionality is lacking from the system. (4)

Now at this point the reader has probably begun to think A) that this paper has nothing to do with neurobiology, and B) that Searle's thought-experiment seems strangely invincible. The first point will shortly be remedied, but the second is more complex. When we add up all of Searle's modifications of the Chinese Room to produce a new and improved deluxe Chinese Room, it can be described as follows: It internalizes rules about language, which it learns from sensory data, and that data is processed and outputs are created through a system of neurons exactly akin to a brain. How is this model different, beyond superficial differences, from a human being? The answer is that it is not. And yet the system still has no subjective experience which it can pair up with the words it uses. The American inside of this human/robot has no qualia that match the sentences he arranges. What we have done here is to attempt to build a human out of physical, objective pieces, and we have failed. This seems to reflect deeply on a neurobiology which would like to reduce the mind and all its subjective experience to mere physical brain functions.

Now, for some, this is a reductio ad absurdum proof of the fallacy of the Chinese Room argument. Searle himself is unwilling to admit that the argument gives rise to any irreducibility of qualia, maintaining that he is not a dualist when it comes to the mind/body problem, but merely a "monist interactionist," or more recently, a "biological naturalist." But he is accused by friend and foe alike of some type of dualism, and the accusation certainly fits the argument he makes.

The reasons for treating an argument that results in dualism as an argument reduced to absurdity are manifold. Besides the classic problem of interaction between the mind and body under a dualistic system, dualism also poses the "problem of other minds." Indeed, if human minds have a certain subjective aspect to them that can experience qualia, then that aspect cannot be known publicly. Thus if Searle is correct that his walking, talking, learning, neuron-based Chinese Room has no intentionality, then the part of humans which is capable of intentionality is inaccessible to other minds. If we cannot credit Searle's new and improved Chinese Room with the ability to think, we cannot credit anyone with the ability to think.

Is this a problem? Solipsism, the idea that no minds exist except one's own mind, is quite a radical philosophy, but it coheres. Although it may hurt the reader's feelings, I cannot be sure that he or she has a mind, nor can he or she be sure that I have one. I cannot be sure, in fact, that I am not surrounded by perfectly functioning Chinese Rooms (or English Rooms in my case), computing answers to my questions and making small talk while lacking all subjective experience, and thus lacking semantics.

It is possible to agree with Searle that the Chinese Room lacks intentionality, and thereby hold a dualist view of mental states, yet still believe in the existence of other minds. All this requires is an assumption that other peoples' behavior implies that they also have minds. Whether this means I must also grant minds to computers that pass the Turing Test is a difficult question, but I leave that to computer scientists. Before I settle on this position of the necessary assumption of other minds, however, perhaps it would be wise to entertain a few other objections to Searle's Chinese Room which might succeed in saving the idea of a physical mind with intentionality.

Patricia and Paul Churchland formulate a "connectionist" reply to Searle's thought experiment. They propose that Searle's conception of how the brain works, and thus how the new and improved Chinese Room works, is flawed and simplified. Brains, the Churchlands argue, are not serial processes in which a single entity goes to work sorting out input and then producing output. Rather they are a vast network of parts working in parallel, just as the brain's neuronal processes work simultaneously in many separate physical locations. (5) In response, Searle formulates yet another addition to his Chinese Room, this time turning it into a Chinese Gymnasium, with many Americans working cooperatively and in parallel to produce the output of coherent Chinese conversation. Under this formulation, Searle argues, A) each individual American still does not understand Chinese, and B) the system as a whole does not understand Chinese. Searle is no doubt correct with regard to the individual workers. But as for the system as a whole, it is difficult to dismiss the idea that the entire group collectively understands Chinese, since the system does indeed receive sensory data, collectively process it, and produce seemingly meaningful behavior. Since the subject of experience here is split into many different possible parts, it is difficult to track down where the lack of qualia occurs. If the system is to have intentionality, it must as a whole have experiences which it connects to syntax. Whether or not the collective of Americans in the Chinese Gymnasium has qualia is far more difficult to determine than whether the single American in the Chinese Room does. The Churchlands have at least succeeded in muddying the waters. Robert French argues, similarly to the Churchlands, that Searle underestimates the complexity necessary for the Chinese Room to function properly. He writes that asking the Room questions about categories, neologisms, and subjective experiences would all require immense complexity for it to answer, complexity that might not be possible in our current models of artificial intelligence. (6)

These types of objections weigh heavily on Searle's argument, but they offer very little in terms of a positive explanation of how intentionality does arise. Merely stating that a very complex collective of non-intentional parts can have the "emergent property" of intentionality is not implausible, but nor does it satisfy our search for a detailed mechanism of intentionality in the brain. Our only detour around this problem, though, seems to be the original conclusion of this paper, a sort of generous solipsism that assumes other minds. If philosophy of mind, computer science and neurobiology are to avoid this outcome, they will have to create a much clearer picture of how intentionality might "emerge" from complex systems.

References


1)An Interview with Fred Dretske, Fred Dretske, author of "If You Can't Make One, You Don't Know How it Works" talks about epistemology and the mind.

2)The Turing Test Page, A detailed explanation of the Turing Test, along with plenty of relevant links.

3)Searle's Chinese Room Argument, A good introduction to Searle's thought experiment.

4)Larry Hauser's Chinese Room Page, Professor Larry Hauser enumerates the multitude of objections to the CRA, along with his own personal reasons for rejecting it.

5)Neural Representation and Neural Computation, Patricia Churchland and Terrence J. Sejnowski detail their Connectionist theory as well as many other issues of representation in the brain.

6)The Chinese Room: Just Say No!, Robert French argues against the CRA for complexity reasons.

7)The Chinese Room and Strong AI, Another interesting rejection of the Chinese Room

8)John Searle's Chinese Room Argument, More debate about the CRA

9)Electronic Book Web: The Chinese Room Revisited (http://12.108.175.91/ebookweb/discuss/msgReader$877), One last look at the differing views of the Chinese Room.

10)What is AI, A site dedicated to Alan Turing discusses all sorts of AI issues.


Decision-Making
Name: Erin Fulch
Date: 2003-05-15 16:34:42
Link to this Comment: 5687

<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

"When making a decision of minor importance, I have always found it advantageous to consider all the pros and cons. In vital matters, however, such as the choice of a mate or a profession, the decision should come from the unconscious, from somewhere within ourselves. In the important decisions of personal life, we should be governed, I think, by the deep inner needs of our nature."

-Sigmund Freud-

In the act of making a decision, choosing a meal, choosing a profession, or intentionally blinking, it may seem that an individual has exclusive control: that a decision is made with sole regard to the pro and con variables to which Freud refers. Basic human experience suggests that, under specific circumstances, an individual analyzes conditions, makes a decision based upon current and recalled experience, and reacts accordingly. A slightly more scientific analysis confirms that a given set of environmental conditions, or a stimulus within that environment, necessitates a specific response or an intentional lack thereof. Consequences are anticipated based upon prior experience, options are weighed, and a decision is made to act or not act accordingly. It would appear, initially, that an individual, the decision-maker, bases decisions on some nebulous, intuitive desire: the ever-lurking, ever-elusive free-will component of human behavior seems essential to this activity. To a well-seasoned student of neurobiology, however, the role of free will is known to be limited, unlocalizable, and, perhaps, absent in many cases. Such an educated individual recognizes that it is, in fact, the nervous system that makes decisions. If we take this assumption for granted momentarily, the mechanism of the brain's control over decision-making can be elucidated.

The analysis of a simple decision made by an organism less complex than Homo sapiens makes a highly complicated processing mechanism more tangible. Analysis of directed vision provides several key insights into the mechanism governing decision making at its simplest level. Visually-guided eye movement exemplifies a nearly universal mammalian activity mediated by the reward system of the brain. Neurons of the lateral intra-parietal area (LIP) of the primate parietal cortex are thought to carry information involved in transforming incoming visual signals into commands to move the eyes (1). Furthermore, it has been demonstrated that neurons in parietal area LIP signal the reward expected from an eye movement response when animals choose between alternative movements (2,3). LIP neurons of macaque monkeys were analyzed experimentally while the animals made saccades away from salient visual cues. The majority of LIP neurons were active in signaling the location of the visual cue, while most of these neurons were found to have weak saccadic activity in the absence of visual stimulation (4). Similar experiments have demonstrated that LIP neurons fire maximally before a narrow range of visually-guided eye movements terminating within a sharply defined neural response field, apparently responsible for encoding information in accordance with eye movements rather than the location of visual targets on the retina. The subsequent conclusion that intentional animal movement is demonstrated by this neural activity resulted from experiments in which primates were trained to shift gaze to one of two uniquely colored LEDs (light-emitting diodes) acting as fixation stimuli. The amount of neural activation was noticeably higher when an animal directed gaze toward a given LED (5). Thus, neural activity associated with a visual stimulus was altered by the specific eye movement an animal made.

Such investigations have led to further conjecture that LIP neurons play an active role in the processes utilized in choosing a specific movement for execution based upon the estimated gains and losses associated with specific behavior. Decision theory, derived from both evolutionary ecology and behavioral economics, has been applied to these studies. The principles of decision theory suggest that LIP neurons carry signals correlated with the probability that an eye movement response will yield a gain as well as the amount of gain expected to result from that motion, variables suggested by neurobiologists, economists, and psychologists to be used by animals and human beings alike in decision-making processes (5). When animals have been presented with multiple response options, the subsequent choice made as well as the neuronal activity is correlated with expected gain (2).
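
The expected-gain computation that decision theory attributes to these signals is simple enough to state in code. Below is a minimal sketch in Python; the response options, reward probabilities, and reward magnitudes are invented for illustration, and this is a cartoon of the decision rule, not a model of LIP neurons.

```python
# Minimal sketch of an expected-gain decision rule: score each
# response option by (probability of reward) x (reward magnitude)
# and pick the highest-scoring option. All numbers are hypothetical.

def expected_gain(p_reward, magnitude):
    """Expected gain of a response option."""
    return p_reward * magnitude

# Two candidate eye movements with assumed reward statistics.
options = {
    "saccade_left":  {"p": 0.8, "magnitude": 1.0},
    "saccade_right": {"p": 0.4, "magnitude": 3.0},
}

for name, o in options.items():
    print(name, "expected gain:", expected_gain(o["p"], o["magnitude"]))

best = max(options, key=lambda k: expected_gain(options[k]["p"],
                                                options[k]["magnitude"]))
print("chosen movement:", best)
```

On these numbers the rule favors the rarer but larger reward (expected gain 1.2 over 0.8), which is the kind of trade-off the recorded LIP signals are proposed to encode.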

As human beings, we believe ourselves to be far more complex than other primates, integrating emotion and values into our decisions. We believe that our actions are determined not merely by our brain's response to reward and punishment, but rather by a personal propensity towards a certain goal, chosen based upon our emotional tendencies and so-called higher intentions. In addition to animal studies suggesting that decisions are governed by the brain's mechanistic computation of the consequences of particular actions, as well as the value of those actions, in order to update reward expectations, scientists have attempted to reveal the contribution to decision-making of the brain regions responsible for emotion. Studies have shown that damage to the ventromedial prefrontal (VMF) cortex precludes the use of emotional signals that are necessary for guiding decisions in the advantageous direction. The amygdala, also integral to emotional processing, has been tested by studying whether damage in that region interferes with decision-making. Gambling task paradigms have been designed and used to simulate human decisions in terms of uncertainty, reward, and punishment (2). In the gambling task, subjects were asked to choose between decks of cards yielding high instantaneous gain but large future or long-term loss, and decks yielding low immediate gain but increased long-term gain. Indications of emotional state were obtained from skin conductance responses (SCRs). The development of anticipatory SCRs, typically generated before choosing risky cards, was correlated with advantageous decisions. When a normal subject is faced with a decision to select a card, the neural activity pertaining to this information is transmitted to VMF cortices, which activate the amygdala. Amygdala activation reconstitutes a somatic state that integrates the occurrences of reward and punishment related to a particular deck of cards (2,6). It was found that patients with damage to the amygdala, as well as those with VMF damage, were impaired in the gambling task and were unable to develop anticipatory SCRs during high-risk decisions; they were therefore unable to evoke somatic states after winning or losing money, precluding the enactment of such a state when deliberating a decision with future consequence (7). Furthermore, upon receiving rewards or punishments, patients with VMF damage were able to generate SCRs while amygdala patients failed to do so. The notion that bilateral damage to the amygdala is associated with decision-making impairments in the gambling task is also supported by the observation that amygdala patients demonstrate poor judgment and decision-making in their real-life social behavior (7). For example, subjects with lesions in the amygdala judged unfamiliar individuals to be more approachable and more trustworthy than did control subjects.
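
The deck structure of the gambling task can be made concrete with a toy simulation. The sketch below, in Python, is loosely modeled on the task described above; the exact payoffs and penalty schedule are invented for illustration and are not the published task parameters.

```python
# Toy simulation of the two deck types in a gambling task:
# "bad" decks pay more per card but carry periodic large penalties;
# "good" decks pay less per card but are penalized far less.
# Payoff numbers are invented for illustration.

def draw(deck, i):
    """Net payoff of the i-th card drawn from the given deck type."""
    if deck == "bad":
        return 100 - (1250 if i % 10 == 9 else 0)   # big loss every 10th card
    return 50 - (250 if i % 10 == 9 else 0)         # small loss every 10th card

for deck in ("bad", "good"):
    net = sum(draw(deck, i) for i in range(100))
    print(deck, "deck net outcome over 100 cards:", net)
# bad deck:  -2500 (tempting per-card gain, ruinous in the long run)
# good deck: +2500 (modest per-card gain, profitable in the long run)
```

Over many draws the "bad" decks lose money and the "good" decks gain it; normal subjects drift toward the good decks (and show anticipatory SCRs before risky choices), while VMF and amygdala patients do not.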

Thus, the brain mediates our decisions by receiving and sending us signals based on previous experiences. However, is it not possible that human beings retain control over the recognition of what is or is not an advantageous decision? It has been shown that the amygdala, once again, is responsible for the assignation of emotional value to stimuli (6). A considerable amount of the limbic system, that portion of the nervous system responsible for emotional reactions, is reactive to stimuli encoded hereditarily. Pre-programmed behavior patterns are located in neural circuits whose connections are established prenatally, as development occurs. These qualities are considered innate, and include those reactions which may be regarded as instinctual, including defensive decisions for the evasion of predators and the avoidance of danger. Additionally, we acquire emotions over the course of life: neutral stimuli take on a certain affective character as associations are formed between objects and situations with which we come into contact. Subsequently, any stimulus-producing environment we encounter at a particular instant must carry an affective load, more or less strong and more or less conscious (7).

A series of organized responses originating in the central nucleus of the amygdala can be classified into several types: behavioral, autonomic, and endocrine (7). Behavioral responses are those that refer to movements enacted, including responses related to the expression of emotions. Accompanying these movements are myriad physiological alterations which involve the autonomic nervous system. In concert with endocrine responses, the hormonal alterations produced by the adrenal medulla, these autonomic changes permit an organism to respond emotionally and appropriately to a given situation. An individual undergoing such an emotional experience may or may not be consciously aware of it: reactions incorporated in that experience are decidedly physical, taking place throughout the organism (6). These responses are perceived by the brain via nervous paths that carry the information from the periphery of the organism to the central nervous system. These physical changes are noted by the brain, which receives an endless flow of signals informing it of the state of the physical being (5). The signals that the limbic system emits as a result of the emotional assessment of stimuli are sent to the brain, informing it of the consequences of the emotional response which it triggered initially. Complex organisms, such as human beings, possess high levels of processing capacity and a cerebral cortex which is greater in volume and complexity than that of a simpler organism (6). The complex cortex enables the accumulation of information regarding past experience and assists in the creation of models of eventual reality. The cerebral cortex allows construction of a notion of the future based upon knowledge of the experienced world. The frontal lobes control this future planning, which is based in part on the value assigned to given stimuli by the amygdala and the subsequent responses generated by the limbic system.

Thus, Freud summarized the highly complex process by which human beings make decisions with ideal deference to the importance of the brain in that process. The deepest needs of our nature truly do govern our decisions. Choices are made to optimize an organism's condition, taking into account both past experience and future consequence. Decision theory suggests that we make decisions based on the value of the outcome of an event. However, as is seen in patients with damage either to the amygdala or to other areas of the brain associated with emotional input and output, it is the responsibility of the brain itself to ascribe value to certain choices. The expectation of either reward or punishment resulting from specific actions is stored by neurons. Although human intuition may lead us to believe that decisions are personal, governed by the desires of our heart rather than our mind, the making of decisions necessitates interplay between the systems of the human body, including those which are responsible for emotional responses. It is in the making of the most personal choices that failure can occur if the brain is damaged in those regions necessary to control decision, emotion, and behavior. If the communication between those regions breaks down, we are unable to formulate an understanding of past experience so that we may properly anticipate future consequence. Decision-making, therefore, exemplifies the biological mechanisms by which behavior is governed. No intangible component of the human heart dictates the choices made. Instead, it is by the precise integration of experience within the brain that human beings are able to make decisions and respond in such a way that our needs are optimally met.

References

1) Decision Making Parietal Cortex, an article from the Glimcher Lab

2) Different contributions of the human amygdala and ventromedial prefrontal cortex to decision-making, Damasio, Lee, and Bechara's article.

3) The Platt Laboratory

4) Damasio article

5) Glimcher and Platt press release

6) Liang, K.C., Melia, K.R., Campeau, S., Falls, W.A., Miserendino, M.J.D., & Davis, M. (1992). Lesions of the central nucleus of the amygdala, but not the paraventricular nucleus of the hypothalamus, block the excitatory effects of corticotrophin-releasing factor on the acoustic startle reflex. Journal of Neuroscience, 12: 2313-2320.

7) Emotional Participation in Decision-Making, Simon's article.


Synesthesia: Who and How
Name: Alexandra
Date: 2003-05-15 18:39:23
Link to this Comment: 5691


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Vladimir Nabokov, at an age younger than that of the title character in his novel Lolita, complained to his mother that the colors on the wooden alphabet blocks he was playing with were wrong. As a fellow synesthete, she understood what he meant. (1) (2) Synesthesia is the "involuntary physical experience of a cross-modal association." (1) In Nabokov's case, the type of synesthesia he was experiencing is called chromatic-graphemic synesthesia, in which certain letters trigger certain colors. (3) The word synesthesia originates from the Greek syn = together and aesthesis = perception. Synesthesia manifests itself with different senses and at different intensities. Though the neurological reality of synesthesia had been questioned by the scientific community, enough research has now been done documenting its existence, even if the hypotheses explaining how and why it occurs may contradict each other.

The senses involved in synesthesia, and also the intensities of the experience, vary. Synesthesia typically involves one sense triggering another sense. Determining the actual trigger of the synesthesia is not always easy. For instance, some synesthetes who experience colored speech perception report that words with the same initial sound but different letters, like photo and fish, trigger different colors, while words with the same initial letter but not the same sound, like photo and psychology, trigger the same color. (3) This is not true for some synesthetes or in certain instances. For example, the words sex and psychology may have the same color. (3) Synesthetic relationships are usually unidirectional, which means that while a sound may trigger a certain sight, the opposite does not occur. (1) Smell and taste are rarely triggers or responses of synesthesia. (1)

Also, interestingly enough, the correspondences between certain inducers and induced responses are both systematic and idiosyncratic. (4) Unlike synesthesia brought on by lysergic acid diethylamide (LSD) or other hallucinogens, the induced responses to certain stimuli remain constant. The letter A, for instance, is always bright red for one person. What makes this response idiosyncratic, however, is that, for a different person, the letter A is always pale orange. Thus, while the senses and the responses involved differ between synesthetes, all display a regularity of responses, which is usually unidirectional.

Some individuals, however, experience synesthesia between multiple senses or in both directions. This experience can be thought of as a more intense form of synesthesia. For instance, one of Simon Baron-Cohen's test subjects saw colors when she heard sounds and heard sounds when she saw colors. Unlike most synesthetes, who consider their condition to be a gift, her particular synesthesia could be thought of as a disorder, since it led her to avoid situations that were either too colorful or too noisy. (3) Gail Martino and Lawrence E. Marks, however, argue that weak synesthesia is common among humans and should be differentiated from what is commonly perceived as simply "synesthesia," which in fact should be called "strong synesthesia." (4) The majority of people, for instance, connect brighter colors with higher-pitched sounds. (2)

Because it is a physical condition, synesthesia has a clinical diagnosis. Perceptions are considered synesthetic if they can be categorized in a certain way. They are involuntary but elicited. (1) (5) They must be projected outside of the body and not experienced only in the mind's eye. (1) (5) They are durable and generic. (1) (5) That is, the responses do not change over time, and the responses are never pictorial or elaborated. (1) Responses are memorable, which means that a person may remember that someone's name was green before remembering, or instead of remembering, his or her actual name. (5) Lastly, synesthesia is emotional and noetic, meaning that the experience is accompanied by a sense of certitude and conviction about its reality. (1)

Estimates of the number of synesthetes in a population range from 1 out of 25,000 (1) to 1 out of every 2,000 (6). Although different people experience different types of synesthesia, certain traits characterize synesthetes. Genetics, as the anecdote about Nabokov suggests, play a part in synesthesia. A synesthete will often have a parent or sibling who shares the experience. The transmission rate from parent to daughter is close to 75%, and from parent to son it is 25%. (6) The difference in transmission percentages could be due to the fact that most synesthetes are women. Although agreement exists that women experience synesthesia more often, the reported gender ratios differ between researchers. Baron-Cohen's survey done in the United Kingdom reported a sex ratio of about 6:1 (female:male) (6). Richard Cytowic in the United States found a ratio of 3:1. (1) Other typical characteristics of synesthetes are left-handedness and excellent memories. (1) Many claim that their superior memories stem from their parallel sensations. For example, one may say, "I know it's two because it's white." (1)

While citing statistics about the typical synesthete does give an authoritative edge to the description of synesthesia, it is not enough to prove the neurological basis for, or the veracity of, the phenomenon. A growing body of evidence supports the idea that synesthesia, or as Marks would say "strong synesthesia," has a physiological basis. The fact that drugs like LSD and mescaline, and also some limbic epileptic seizures, produce synesthesia suggests the neurological basis of synesthesia. Also, synesthetic subjects of an experiment done by Baron-Cohen showed a 92 percent rate of consistency in their color-sound associations after one year, while control subjects exhibited only a 37 percent rate of consistency after only one week. (2)
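
The consistency measure behind such tests is simple to compute. The sketch below is a hypothetical illustration in Python, with invented items and responses rather than Baron-Cohen's actual materials; it scores the percentage of items that receive the same color in two sessions.

```python
# Sketch of test-retest consistency scoring for color associations.
# Items and responses are invented for illustration.

def consistency(session1, session2):
    """Percent of items assigned the same color in both sessions."""
    shared = session1.keys() & session2.keys()
    matches = sum(1 for item in shared if session1[item] == session2[item])
    return 100.0 * matches / len(shared)

first  = {"A": "red", "B": "blue", "Monday": "green", "5": "yellow"}
second = {"A": "red", "B": "blue", "Monday": "green", "5": "orange"}

print(f"consistency: {consistency(first, second):.0f}%")  # prints 75%
```

A synesthete's two sessions, even a year apart, would agree on nearly all items; a control subject's, even a week apart, on far fewer.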

Several brain imaging studies, while certainly intriguing, also show how synesthetes' brains function differently from those of non-synesthetes, and may show the "realness" of the phenomenon. A study published in 1996 in the journal Brain, by Dr. Eraldo Paulesu and Dr. Baron-Cohen, compared the PET scans of six synesthetes and six control subjects. (2) The subjects, who were all women, listened to sound cues while blindfolded. The researchers found that the synesthetes showed increased activation in visual areas of the cortex upon hearing the sounds, while the control group did not. (2) This suggests that sound cues, or indeed any of the inducers of synesthesia, do trigger responses in the areas of the cortex which interpret the synesthetic responses.

Although not directly related to synesthesia, the work of Dr. Mriganka Sur, the head of the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology, has implications for explaining the phenomenon. He cross-wired ferrets' brains, showing the possibility of a connection between the visual and auditory cortexes. His study demonstrates how the auditory area of the brain adapts to become an area associated with the analysis of visual symbols. (7) By removing all auditory input, Sur deprived the auditory cortex of input, and it then attracted the nerves connected to visual input. (7) This was possible because both sets of nerves come close to each other in the thalamus, which made the "short-circuit" possible. (7) While the visual cortex still processed visual information, the auditory cortex did as well. (7) Although this does not explain synesthesia on a neurological level, the "non-fixedness" of the two areas of the brain might relate to synesthesia. Also, the "short-circuit" which occurred in the thalamus might explain why synesthesia between these two senses is most prevalent.

While the studies done in connection with brain functioning and with color associations are definitely relevant, a study done by Dr. Vilayanur S. Ramachandran, director of the Center for Brain and Cognition at the University of California, San Diego, is perhaps the most elegant. His team created a random matrix of fives with twos scattered among them. The twos were not easy to see, and it took a normal person 1-2 seconds to realize that there were twos in the matrix, and even longer to realize that they were arranged in a pattern. (8) Ramachandran said, "But when you show this matrix to a synesthete, he says, 'Oh, I see a red triangle'-instantly. This shows that he's literally seeing those twos as red, the fives as green, and therefore, he's not making this up at random." (8) The difference in response times (both in noticing the twos and in noticing the pattern) and in the type of responses between the synesthete and the control subject would also show that "synesthesia is an authentic sensory phenomenon." (8)
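
A stimulus of this kind is easy to construct. The following sketch is a hypothetical Python illustration, not Ramachandran's actual stimulus; the grid size and the positions of the embedded twos are arbitrary choices. It prints a field of fives with twos placed along the outline of a triangle.

```python
# Sketch of a Ramachandran-style pop-out stimulus: a field of 5s
# with 2s embedded in a triangular arrangement. All positions are
# arbitrary, chosen only for illustration.

ROWS, COLS = 9, 15
grid = [["5"] * COLS for _ in range(ROWS)]

# Place 2s along the three sides of a triangle.
triangle = [(1, 3), (1, 5), (1, 7), (1, 9), (1, 11),   # top edge
            (2, 4), (3, 5), (4, 6),                    # left edge
            (2, 10), (3, 9), (4, 8),                   # right edge
            (5, 7)]                                    # apex
for r, c in triangle:
    grid[r][c] = "2"

for row in grid:
    print(" ".join(row))
```

To a non-synesthete the twos are camouflaged among the similarly shaped fives; for a grapheme-color synesthete, who sees them in a different color, the triangle pops out at once.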

While these sets of data verify the existence of synesthesia as a neurological phenomenon, the different hypotheses of researchers such as Cytowic, Baron-Cohen, Grossenbacher, and Ramachandran drive at the whys and wheres of synesthesia.

Richard Cytowic, the man who resparked the scientific study of synesthesia and the author of The Man Who Tasted Shapes, hypothesizes that, on the neural level, synesthesia results from activity at a lower level of the neuraxis that depends on the left brain hemisphere, in which the hippocampus plays an obligate part. (1) Although counterintuitive, synesthesia results from a decline in blood flow to the left hemisphere of the brain. (1) (2) He bases this on the test results of his subject MW, whose left-hemispheric blood flow dropped an average of 18 percent while he experienced synesthesia. (1) This suggests that synesthesia could be thought of as a more "primitive" response to sensory input. He suggests the importance of the hippocampus in the experience of synesthesia because it is necessary for the experiencing of altered states of consciousness similar to synesthesia. (1) Cytowic interprets his brain study and the role of the hippocampus as meaning that synesthesia is left over from a time before the sensory pathways separated. (2) In this sense, synesthesia may be, as Dr. Cytowic has said, "closer to the essence of what it is to perceive." (2)

Simon Baron-Cohen, on the other hand, explains synesthesia as the result of a breakdown in cortical modularity. (3) He suggests that adult synesthesia results from a failure to complete the modularization process during infancy. (3) This would necessitate the Neonatal Synesthesia hypothesis, which states that infants experience synesthesia until about 4 months of age, at which point their senses have become modularized so that they can no longer be considered synesthetes. (3)

Dr. Peter Grossenbacher, senior staff member at the National Institute of Mental Health, proposes that synesthesia results from the disinhibition of neural pathways that normally suppress "irrelevant sensory input and allow focused perception in a single sensory mode." (2)

Ramachandran's explanation of synesthesia seems to build on Grossenbacher's. He posits that certain individuals have a gene mutation which causes the disinhibition, or the "defective pruning," of the connections between adjacent modules in the brain. (8) He noticed that "low" synesthetes experienced multi-modal sensations in connection with numbers, while "higher" synesthetes experienced the sensation with days of the week or even with months. (8) His team realized that all of the inducers were sequences. In higher synesthetes, he proposes, it is the concept of numerical sequence that gets linked to synesthesia. (8) The type, or "level," of synesthesia depends upon where the mutated gene is expressed. (8) As he says, "If the gene is expressed in the fusiform, you get a lower synesthete driven by visual features of the grapheme: if it's higher up, near the angular gyrus, you get a higher synesthete in whom the color is evoked by the concept." (8) Although this genetic basis for synesthesia has not yet been proven, it seems like a feasible explanation which could be tested in the future.

Despite the differing opinions on the causes of synesthesia, it seems clear that strong synesthesia, though rare, is real and has a physiological basis. And although most people do not experience synesthesia, they can reap its benefits through an appreciation of the art of synesthetes. Though impossible to prove, Nabokov's synesthesia must have contributed to his writing style. His own synesthesia may have inspired his short story about "referential mania." In it he writes,

"Phenomenal nature shadows him wherever he goes.... His inmost thoughts are discussed at nightfall, in manual alphabet, by darkly gesticulating trees. Pebbles or stains or sunflecks form patterns representing in some awful way messages which he must intercept. Everything is a cipher and of everything he is the theme."

Clearly, whether or not synesthesia has a scientifically understood basis does not affect how real it feels to those who experience it. While the rarity of synesthesia makes it difficult to study, future research will hopefully determine whether it is best explained as dependent upon the limbic system, upon the breakdown of cortical modularity, or upon the disinhibition of neural pathways. Still, even if not completely understood, it has implications for how the exterior world is in fact as much a creation, if not more, of the interior world and of the brain.

References

1) Psyche Website, Vol. 2, Issue 10, contains Cytowic's "Synesthesia: Phenomenology And Neuropsychology: A Review of Current Knowledge"

2) New York Times, contains archived Science Times

3) Psyche Website, Vol. 2, Issue 27, contains Baron-Cohen's "Is There a Normal Phase of Synaesthesia in Development"

4) Ohio University Website, very useful, contains important theories on synesthesia and papers too

5) WearCam, wearable computers website (interesting if not amusing), also has information about synesthesia

6) Walter J. Freeman Neurophysiology Lab at Berkeley, website has interesting manuscripts

7) Rediff, contains interview with Dr. Sur about rewiring ferrets' brains

8) NeurologyReviews.com, contains article on Ramachandran entitled "The Mind's Eye-Neuroscience, Synesthesia, and Art"


The Truth about Reality: Auditory Hallucinations a
Name: Melissa Os
Date: 2003-05-16 11:44:06
Link to this Comment: 5701


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

The Ron Howard film A Beautiful Mind is about the battle that Nobel Prize winner John Nash had with schizophrenia (8). Nash's doctor at one point tells him that the problem with battling an illness such as schizophrenia with will power, without medication, is that the mind is where the problem is in the first place. In a way, this statement is at the heart of the study of neurobiology: the brain affects all that we do, so when the brain is ill, the battle may be the hardest one to win. With schizophrenia, one of the hardest of the brain's malfunctions to overcome is auditory hallucinations. Auditory hallucinations, which plague many schizophrenic patients, produce voices in the heads of those with the illness, voices that feel and are just as real to the patients as anything else. This brings about the question of what is real and what is not, and how anyone can tell. As with hallucinations of sight, who and what are to say whether they are real? By studying the hallucinations of schizophrenics, one can explore the space between reality and the imagined, and whether there is a difference.

Schizophrenia is a brain illness that affects about 1% of the population, impairing one's "ability to accurately interpret the world around oneself" (1). The word schizophrenia, a term coined by Eugen Bleuler in 1911, means "split mind"; however, this is in reference to a disintegration in thought rather than a split personality, a common misconception (7). The disorder affects one's thought processes, behavior, and communication skills. Delusions, auditory hallucinations, incoherent speech patterns, and a multitude of volatile behaviors are some of the typical symptoms of schizophrenia (6). A general characterization is that schizophrenia is a term that encompasses several disorders which involve disturbances in thought, perceptual, social, and emotional processes (7). Because these symptoms vary from patient to patient, and because the disease leaves the patient with a sense of a false reality, many people with schizophrenia remain undiagnosed for years, sometimes for the duration of their lives. Although many studies have been done, the cause of schizophrenia is still not fully known; thus there is no cure, only treatment through medication and therapy. Research has found various possible causes, including genetic factors (since people with the disorder are more likely to have children with the disorder) as well as environmental and social factors such as family disturbances (1). Some "stimulant drugs, such as amphetamines, cocaine or PCP (angel dust), can trigger episodes of illness" (1). In more detail, the cause of schizophrenia, or at least of some of its symptoms, is associated with abnormalities in the structure of the brain.

The brain of a schizophrenic, in most cases, has enlarged cerebral ventricles, while some have structural abnormalities in the temporal lobes (1). However, these assertions were not supported by direct evidence until recently. While many scientists and psychologists believed that schizophrenic disorders were linked to neurological defects, they could not be certain. Brain imaging changed that and gave researchers proof of abnormalities in the brain. Many of these images, CT scans and MRI scans, show schizophrenic patients with enlarged brain ventricles; however, this is a much debated topic. In truth, the connection between enlarged brain ventricles and schizophrenia cannot be determined outright, because "enlarged ventricles are not unique to schizophrenia - they are a sign of many kinds of brain pathology" (7). As with many functions of the brain, it is hard to decipher particular aspects of causation due to the complexities of the brain. Therefore, the causes and brain abnormalities in schizophrenic patients are not clear. However, the aspect that allows one to study these patients' perceptions of reality is their hallucinations.

Imagine hearing voices in your head that sound just like a person sitting next to you talking, threatening you, taunting you. People who suffer from schizophrenia often experience auditory hallucinations, in which they hear voices that are distinct and differentiated from their inner thoughts and voices (1). Not all schizophrenics experience these auditory hallucinations, or AHs; in fact, only 70% of schizophrenics experience AHs consisting of spoken speech. According to Dr. Ralph Hoffman, "other types of hallucinations...AHs of non-speech sounds such as music, as well as visual and tactile experiences - also occur among some patients, although much less frequently" (3). Since this is an occurrence in only some schizophrenics, auditory hallucinations have been connected to abnormalities in the "activation of speech perception areas of the cerebral cortex" (3). Many studies have been done to attempt to pinpoint the difference between the brains of schizophrenics who experience hallucinations and those who do not. This difference may be the difference between reality and imagination, or, more interestingly, the content of the gap (or space) of what the brain cannot perceive as speech or vision.

A recent study used magnetic resonance imaging (MRI) on patients with schizophrenia to determine the specific deficits in the temporal lobes. The MRI was done on 72 men with schizophrenia, 41 of whom had a history of auditory-verbal hallucinations and 31 of whom had no hallucinations. Thirty-two control subjects with healthy brains and no history of mental illness were also scanned. The results of this experiment concluded that "the utility of voxel-based morphometric methods in schizophrenic research...point towards disruption to a 'paralimbic' neural network, as underlying schizophrenic psychopathology in general, with abnormalities of the left insula specifically related to hallucinations" (2). Thus there are specific parts of the brain, the insula in this case, which are abnormal in the brains of schizophrenics with auditory hallucinations. Unfortunately for patients, this research is in its elementary phases, and "response to drug treatment is often incomplete or non-existent" (4). As might be expected, these hallucinations can be debilitating.
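
At its core, a voxel-based morphometric comparison of this kind reduces to a statistical test repeated at every voxel. The sketch below is a toy Python illustration on synthetic data, not the published study's pipeline; real VBM also requires registration, smoothing, and correction for the enormous number of comparisons, all omitted here. Only the group sizes echo the study.

```python
import numpy as np
from scipy import stats

# Toy voxel-based comparison on synthetic gray-matter maps.
# Group sizes echo the study (41 hallucinating patients, 32 controls);
# everything else is simulated.

rng = np.random.default_rng(0)
shape = (8, 8, 8)                                   # tiny "brain" volume
patients = rng.normal(1.0, 0.1, (41,) + shape)
controls = rng.normal(1.0, 0.1, (32,) + shape)
patients[:, 2:4, 2:4, 2:4] -= 0.15                  # simulated deficit region

t, p = stats.ttest_ind(patients, controls, axis=0)  # one test per voxel
print("voxels below p = 0.001:", int((p < 0.001).sum()))
```

Voxels where the groups differ reliably (here, the simulated deficit region) survive the threshold; in the real study, such clusters localized to regions including the left insula.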

Nonetheless, these aspects of schizophrenia, as well as the research being conducted on the brain, are intriguing to think about in terms of reality and what that really means. Auditory hallucinations can be seen in the context of vision. When thinking about vision, there is a place which our eye cannot see, a gap of sorts called the blind spot. The optic nerve carries information from the eye to the rest of the brain; where the optic nerve exits the retina there are no photoreceptors, so the brain gets no information from that part of the visual field and thus makes it up: it fills in the gap. It is in this gap that the brain completes the image, or creates one to replace what cannot be seen by the eye (5). Perception thus is subjective, perhaps something that is different for everyone. We believe certain images and colors to be true in our reality. This theory of the gap can be applied to auditory hallucinations. Maybe these patients are hearing something; maybe their brain is creating a sound which is just as real as the reality normal people experience every day. Another aspect of this gap may be that when the brain creates either a sound or a vision, free will is at play. Therefore, each individual's reality is unique, based on his or her own experience.
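
One way to picture "filling in" computationally is to treat the blind spot as a masked patch of an image and interpolate it from its surround. The sketch below is a cartoon in Python, not a model of the visual system; the image, mask, and iteration count are all arbitrary choices. It repeatedly replaces each missing pixel with the average of its neighbors until the patch blends into its surroundings.

```python
import numpy as np

# Cartoon of perceptual "filling in": interpolate a masked patch
# from its surround by iterated neighbor averaging.

truth = np.tile(np.linspace(0.0, 1.0, 20), (20, 1))  # smooth gradient image
img = truth.copy()
mask = np.zeros(img.shape, dtype=bool)
mask[8:12, 8:12] = True          # the "blind spot": no information here
img[mask] = 0.0

filled = img.copy()
for _ in range(500):             # relax toward a smooth interpolation
    avg = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
           np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4.0
    filled[mask] = avg[mask]

print("max error inside the patch:",
      float(np.abs(filled - truth)[mask].max()))
```

Because the surround varies smoothly, the interpolated patch is nearly indistinguishable from what was removed, which is exactly why we never notice the blind spot in everyday vision.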

"You can't come up with a formula to change the way you experience the world" (8). This is another quote from a Beautiful Mind, from the doctor of Nash when Nash claims to be able to solve his own illness. Experiencing the world is different for everyone and particularly for Nash or those like him. John Nash heard voices, people in his head created by his brain. They spoke to him and felt real to him, so real that even he could not differentiate the external world from the internal. The truth may be that there is no difference. Reality is what the brain perceives it to be and at times reality is different for different people. Filling in the space, creating images and sounds where there are none are the operations that the brain undergoes everyday. Auditory hallucinations are like the hallucinations in our eyes everyday when our brain fills in a picture. This may seem insignificant or obvious, however it just makes it more understandable that maybe we are not as different as we think. Maybe there is no such thing as normal and maybe reality is just what our brain perceives it to be – nothing more and nothing less and that can be a unique and individualized experience.

References


1) Schizophrenia General Information Homepage.

2) Bullmore, ET, XA Chitnis, AS David, J. Shapleske, A. Simmons, J. Suckling, SL Rossell, PWR Woodruff. "A Computational morphometric MRI study of schizophrenia: Effects of Hallucinations." Cerebral Cortex, 2002, pp. 1331-1341.

3) Treatment for Auditory Hallucinations, on Schizophrenia web page.

4) Berman, Robert M. "Transcranial magnetic stimulation and auditory hallucinations in schizophrenia." The Lancet, Vol. 355, March 5, 2000.

5) Biology 202 homepage, class notes on the blind spot.

6) PsychNet-UK, disorder information sheet.

7) Weiten, Wayne. Psychology: Themes and Variations. New York: ITP Company, 1995.

8) A Beautiful Mind. Dir. Ron Howard. Perf. Russell Crowe, Jennifer Connelly. Universal, 2002.


Synesthesia: Imagination or Reality?
Name: Tiffany Li
Date: 2003-05-16 12:35:50
Link to this Comment: 5705


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Synesthesia, the mixing of the senses, denotes the rare capacity to hear colors, taste shapes, or experience other equally startling sensory blendings. It is a condition which has been known to scientists for over a century, but brushed aside as an artifact of drug use or a mere curiosity. After decades of neglect, however, a revival of inquiry is under way. Synesthesia has been the object of numerous studies that try to establish its validity as a genuine involuntary sensory experience rather than a form of dementia or fraud. This question has plagued researchers in the field for years. Is synesthesia pure deceit, or does it have a real neurological explanation? If synesthesia is a neurological condition, does it depend on abnormal neural connectivity, or is it mediated by neural connections that exist in the normal brain?

Synesthesia can arise in one of three ways (1). The majority of synesthetes have developmental synesthesia (1). This type of synesthesia is said to begin early in childhood. The second type of synesthesia may occur after a brain injury or a seizure. This rare form is called acquired synesthesia (1). The last possible form is pharmacological synesthesia, which can ensue after the ingestion of hallucinogenic drugs such as LSD or mescaline (1). These three types of synesthesia open the door to many questions concerning the true nature of synesthesia. Is it the result of the I-function or some other portion of our nervous system? A common explanation of synesthesia has been that the affected people are simply experiencing childhood memories and associations (3). For example, a person may have played with refrigerator magnets as a child, associating the number 5 with red and 2 with blue. Another theory is that synesthetes are simply being metaphorical when they describe things, just as we would describe a shirt as being "loud" or cheddar as "sharp," except that they are particularly good at it (3). These theories predominantly support synesthesia as being created or imagined rather than a condition created by the nervous system. Although it might appear that simple at first, research has provided interesting information counteracting the idea of deceit or dementia.

One of the leading writers on synesthesia, Cytowic, derived several diagnostic characteristics to identify synesthesia. The first is that it is involuntary but elicited; it is a passive experience that is uncontrollable. The second characteristic is that synesthesia is projected: it is perceived to take place outside of the body, in an area immediately surrounding the synesthete. Synesthesia is also characterized by being generic, meaning that synesthetes' experiences are unelaborated. They simply see blobs, lines, spirals, and lattice shapes, feel smooth or rough textures, and taste agreeable or disagreeable tastes such as sweet, salty, sour, or metallic. They do not see prairies with sheep, for example, as we might when listening to a piece of music. The last characteristic is that synesthesia is durable. Although the inter-sensory associations may vary greatly among synesthetes, each individual's synesthetic concurrents (the induced sensory attributes) do not change over time. This was shown in a study which compared the consistency with which synesthetes and non-synesthetes assigned colors and names to 117 letters and words. After the lapse of a year, synesthetes displayed remarkable consistency, naming the same color for 92% of the items, as opposed to only 38% for non-synesthetes (1).

In addition to these characteristics, there are additional trends among synesthetes. There is a similarity of reports from different cultures and different times in history, which provides a strong case for the authenticity of synesthesia (4). Synesthesia runs in families, in a pattern consistent with either autosomal or X-linked dominant transmission (2). Both parents are able to pass the trait to a child of either sex, yet it has never been transmitted from father to son. Synesthesia can occur in more than one generation, and multiple siblings can be affected in the same generation (2). One of the most famous family cases is that of the Russian novelist Vladimir Nabokov: his mother and his son were both affected by synesthesia (2). Synesthesia occurs predominantly among women, although the reported ratios are inconsistent: Cytowic states a 3:1 ratio within the U.S., while Baron-Cohen states an 8:1 ratio in the U.K. Cytowic estimates synesthesia to occur at a frequency of 1 in 25,000 (2). Additionally, neuroimaging data (using PET) show different cortical blood flow patterns in women with synesthesia compared to women without (4).

The most common type of synesthesia is that of colored hearing, where a certain color is associated with a sound. There are many forms of synesthesia (2). The five senses - sight, smell, sound, taste, and touch - have ten possible synesthetic pairings. However, synesthetic relationships are usually unidirectional, meaning that in a particular pairing hearing may induce sight while sight does not induce hearing. This one-way relationship increases the possible combinations from ten pairings to twenty directed forms (2).

These cognitive, genetic, and physiological findings challenge the possibility of synesthesia being mere fraud or dementia. A neurological explanation appears more plausible based on the data provided. If synesthesia were imaginary, the synesthete should be able to control which senses it mixes, in addition to the times he/she wishes it to occur. However, it is apparent from Cytowic's research that synesthesia is involuntary. Synesthetes cannot control when their inter-sensory associations occur or what type of inter-sensory association they have. There are also differences in cortical activity between synesthetes and non-synesthetes, and a familial pattern, which reinforce the conclusion that synesthesia cannot be deceit or general dementia. Therefore the nervous system, rather than imagination, seems to play the central role in synesthesia.

Confirmation that synesthesia is real brings up a whole new set of questions. How do some people experience this strange phenomenon? How does the mixing of the senses occur? Does it depend on abnormal neural connections, or is it mediated by neural connections that already exist in the brain? The root of synesthesia has been debated by scientists and researchers for years. However, they have come to a general consensus that synesthesia has a definite genetic component.

The most widespread explanation is that synesthesia is caused by some kind of cross-wiring in the brain. This implies that there is either a physical cross-wiring between the sensory regions of the brain or an imbalance in the chemicals traveling between these regions (3). Often, neighboring regions of the brain inhibit one another's activity to minimize the cross-talk between different sensory systems. A chemical imbalance could reduce this inhibition, either by failing to produce the inhibitor or by blocking the inhibitor's action. This would elicit activity in the area previously inhibited, leading to cross-modal associations (3). Although this is possible, it is just as plausible that synesthesia is caused by the physical cross-wiring of neurons. There are numerous studies supporting this cross-modal theory. For example, Ramachandran and Hubbard conducted brain-imaging experiments in which they presented black and white numbers to number-color synesthetes. They observed that brain activation arose not only in the number area, as it would in normal subjects, but also in the color area (3). This implies that there is a genuine connection between sensory systems within synesthetes' brains. This leads us to the question of how these connections arise.

Maurer suggested that human infants are born with dense interconnections between cortical sensory systems and that synesthesia results from a partial failure of the normal pruning process that eliminates these connections (1). He argues that early in infancy, probably up to the age of four months, all babies experience sensory inputs in an undifferentiated way, postulating that all babies are synesthetic at some point in their lives (4). Evidence of undifferentiated connections between neural structures can be seen in the neonates of other species. For instance, the neonatal hamster has transient connections between the retina and the main somatosensory and auditory nuclei of the thalamus. The kitten has similar connections between visual, auditory, somatosensory, and motor cortex (4). This refutes Piaget's theory that the different sensory systems are independent at birth and gradually become integrated with one another over time (4).
Cytowic, on the other hand, proposes that synesthesia has less to do with cortical activity than with the limbic system. He argues that cortical metabolism plummets 18% below average activity while the limbic system displays higher activity (2). This reinforces the idea that the I-function is not involved in synesthesia, since the I-function is associated with high cortical activity. Additionally, it suggests that synesthesia does not arise from abnormal neural connections but instead from connections that normally exist in normal adults. Cytowic goes as far as proposing that synesthesia is a premature display of a normal cognitive process, in which synesthetes see the intermediate processing that occurs when the brain is integrating sensory information (3). An analogy: when we watch television, we see the terminal stage of the broadcast; a synesthete, on the other hand, would be able to intercept the transmission anywhere between the studio camera and the television screen (3). Therefore synesthetes see the processing which occurs in our brains. I personally find this speculation a bit too far-fetched, although I do agree that it seems more plausible that synesthesia is mediated by neural connections which already exist in the normal adult brain.

The reason I speculate that this is true is that hallucinogenic drugs can induce synesthesia in non-synesthetes (1). This strongly suggests that there must be normally existing inter-sensory connections in adult brains, rather than the formation of new connections between pathways as Piaget proposed, or abnormal differentiation of sensory systems as Maurer proposed. I contend, rather, that it is a mixture of both preexisting inter-sensory neural connections and a chemical imbalance. This is plausible because if synesthesia were caused by abnormal neural connections, drug-induced synesthesia could not occur, since the necessary neural connections would not exist. Also, Piaget's theory is invalid because neurons cannot form new connections as quickly as the drug acts. Therefore it appears that synesthesia is caused by a chemical imbalance which enables preexisting inter-sensory connections to operate. In non-synesthetes these inter-sensory connections must be inhibited, as described earlier.

The true nature of synesthesia as a genuine sensory experience is apparent. Perhaps the difficulty some have in accepting synesthesia as a real condition lies not in their inability to understand the proposed phenomenology, but in their refusal to take any subjective reports seriously (5). "Fundamentally subjective experiences tend to be discounted or rejected" by physicians who give credence only to objective data (6). However, I advocate that synesthesia is caused by normally existing inter-sensory neural connections which are triggered by a chemical imbalance that prevents their normal inhibition.


1) Mechanisms of Synesthesia: Cognitive and Physiological Constraints, Grossenbacher, Peter G. and Lovelace, Christopher T. "Mechanisms of Synesthesia: Cognitive and Physiological Constraints." Cognitive Sciences, Vol. 5, No. 1, January 2001.

2) Synesthesia: Phenomenology and Neuropsychology, Cytowic, Richard E. "Synesthesia: Phenomenology and Neuropsychology." Psyche, 2(10), July 1995.

3) Hearing Colors, Tasting Shapes, Ramachandran, Vilayanur S. and Hubbard, Edward M. "Hearing Colors, Tasting Shapes." Scientific American, April 15, 2003.

4) Is There a Normal Phase of Synaesthesia in Development?, Baron-Cohen, Simon. "Is There a Normal Phase of Synaesthesia in Development?" Psyche, 2(27), June 1996.

5) Synesthesia and Method, Korb, Kevin. "Synesthesia and Method." Psyche, 2(24), January 1996.

6) One's Own Brain as Trickster-Part II, Day, Sean A. "One's Own Brain as Trickster-Part II." Miami University, Oxford, Ohio.


Aggression: neurological basis
Name: Jen Hansen
Date: 2003-05-16 13:14:14
Link to this Comment: 5706


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

This past semester, the recurring idea in our neurobiology class has been the notion that 'brain=behavior'; this paper tests whether the behavior of aggression is a good example of this concept. Although behavior might be imagined as the product of interconnected assemblages of input/output boxes derived from the permeability of the neuron membrane, I believe that my research will demonstrate that further sophistication is necessary in order to understand aggression as a behavior associated with the brain.
Aggression is an action; it is one's intention to harm someone. It can be a verbal attack - threats, insults, or sarcasm - or a physical punishment or restriction. (1) Within a psychiatric context, aggression is defined as behavior deliberately aimed at inflicting physical damage on a person or property. (2) Despite the fact that aggressive acts are a major cause of suffering in society, scientific attempts to systematically classify human aggressive behavior for clinical purposes have been relatively few. (2)
This lack of research is in sharp contrast to the vast literature on animal aggression, which has been studied in detail as to its behavioral, neuroanatomical, and biochemical aspects. (2) This is most likely the reason why a review of the current literature shows that researchers often use references to animal research as it might apply to human behavior. However, the current trend in research is now turning towards investigations of aggression in humans to try to address the information deficiency. This paper analyzes the validity of three research articles that address whether there are neurological attributes to the behavioral characteristics considered to be traits of aggression, and relates them to the notion of 'brain = behavior'.

Article #1: Subtypes of aggression

This article begins by stating the need for investigation into animal research, and by defining the researchers' field of study within that context. The researchers point out that aggression, as such, does not constitute a diagnostic entity, and that the psychiatric disorders most closely associated with aggression are psychoses, substance abuse, conduct disorder, intermittent explosive disorder, neurological disorders involving the frontal and temporal lobes, and personality disorders. (2) The researchers also point out that there is currently no specific antiaggressive drug; rather, a variety of psychotropic drugs are administered for their secondary antiaggressive properties. (2) The focus of the article is on subdividing aggression as it occurs in children's behavior, and it is the researchers' contention that this carries implications for the treatment of other aggressive patients as well. (2)
As previously stated, animal research in this area is extensive; thus Vitiello and Stoff begin with a summation of current animal data. The various forms of animal aggression have been linked to specific areas of the brain: predatory behavior is associated with the lateral hypothalamus in cats; intermale aggression with the anterior hypothalamus in monkeys and the orbital prefrontal cortex in rats; territorial aggression with the frontomedial hypothalamus and medial amygdala in rats; maternal irritability with the amygdala and medial hypothalamus in cats; and fear-induced aggression with the amygdala and medial hypothalamus in cats. (2) The researchers hope to be able to establish similar connections for humans.
The researchers also explore animal research as it pertains to neurotransmitters. Here, they mention that aggressive behavior in animals is accompanied by decreased serotonergic and increased noradrenergic and dopaminergic activity. (2) Additionally, a decrease in a serotonin metabolite has been associated with increased levels of impulsive aggression. (2) They conclude that the animal evidence points toward qualitatively different forms of aggressive behavior, from phenomenological, physiological, neuroanatomical, and biochemical points of view. (2)
Vitiello and Stoff postulate that it is the existence of these subtypes of aggression in human behavior that has made it so difficult to scientifically define human aggressive behavior. Thus far, in adult psychiatry, two different lines of inquiry relevant to human aggression have been identified. (2) The one which seems most associated with brain activity is impulsive aggression, which is characterized by a lack of inhibitions, not necessarily by antisocial tendencies.
Such behaviors have been associated with specific neurological injuries to the temporal or frontal lobes and with substance abuse. (2) Typically this behavior is explosive, uncontrolled, and accompanied by anger or fear. (2) A decreased level of serotonergic activity, indicated by decreased levels of serotonin metabolites, has been associated with such behavior in an increasing body of research data. (2), (3) Serotonin is one of the neurotransmitters that has recently been associated with aggressive behavior. Serotonin originates in the midbrain region, where the cerebral hemispheres and thalamus-hypothalamus are bridged to the spinal cord, and is distributed throughout the brain. (4) The focus of research on the neurotransmitter serotonin is worth noting because of the role it plays in the 'brain=behavior' notion.
Vitiello and Stoff found correlations between bullying behavior (kicking, biting, hitting, and intimidation) in 9- to 12-year-old boys and the animal studies which pertained to predatory aggression. (2) Children engaging in such activities are often fearless and, therefore, less inhibited and less sensitive to punishment. (2) Data on the role of neurotransmitters in such behaviors are scarce. An interesting finding, however, was a negative correlation between CSF 5-HIAA (a serotonin metabolite) and both self-reported aggression towards others and expressed emotionality toward the child's mother. (2) This accords with reports of decreased serotonin activity in some forms of adult aggression. (2)
This also points to how animal studies indicate that environmental factors can play a part in hormonal and neurotransmitter production. When an animal is exposed to stressful conditions, or when its position in the social order of its group is altered, changes in brain activity occur. (2)
In their conclusions, Vitiello and Stoff state that because serotonergic dysfunction is so closely associated with impulsive aggression, drugs such as trazodone and buspirone have been used anecdotally. (2) They call for more intensive studies in this area, particularly since impulsive-affective aggression is usually accompanied by a high state of arousal, which has been known to impair cognitive functions, making a pharmacological approach more appropriate. (2) Their research appears to be valid, and their claims well supported by evidence.

Article #2: The role of serotonin

This article further investigates the relationship between a serotonin deficiency and aggressive behavior. Currently, serotonergic drugs are used to treat a wide variety of conditions such as migraine, depression, and anxiety. A serotonin deficiency has been associated with suicide, impulsive violence, depression, and alcoholism. A complex subject, serotonin (the neuromodulator 5-hydroxytryptamine, or 5-HT) is capable of multiple actions, mediated by at least 14 receptor subtypes. (5) Pharmacological studies have suggested that the activation of a certain receptor might lead to an increase in anxiety and locomotion and to a decrease in food intake, sexual activity, and aggressive behavior. As a result, the research team altered the 5-HT1B receptor, located in several areas such as the basal ganglia and central hippocampus, in mutant mice, as this is the rodent homolog of the human receptor. (5)
The researchers give a detailed account of the methods employed with the mice and of the methods used to verify that the changes in the receptors had taken place. The activity of the mice was then analyzed by observation in an open field against normal controls. Tests were performed in which the mutant mice were isolated and then, after a period of four weeks, exposed to a mouse which would be perceived as an intruder. (5) The latency of an attack on the intruder and the number of attacks were recorded as indices of aggression. The mutant mice attacked more quickly and more often than the control group; additionally, the intensity of the attacks was greater. (5) The researchers concluded that the increased aggressiveness was a direct result of the low serotonergic function artificially induced by altering the serotonergic system.
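The two indices used here are straightforward to compute from an observation log. The sketch below is a hypothetical Python illustration; the event times are invented, not data from the study. It derives attack latency and attack count from a list of attack times recorded after the intruder is introduced.

```python
# Sketch of scoring a resident-intruder test: attack latency
# (time to first attack) and total attack count. Times are in
# seconds from intruder introduction and are invented.

def aggression_indices(attack_times):
    """Return latency to first attack and number of attacks."""
    if not attack_times:
        return {"latency_s": None, "attacks": 0}
    return {"latency_s": min(attack_times), "attacks": len(attack_times)}

mutant_log  = [12.0, 30.5, 41.2, 55.0, 80.3]   # hypothetical mutant mouse
control_log = [95.0, 140.2]                    # hypothetical control mouse

print("mutant: ", aggression_indices(mutant_log))
print("control:", aggression_indices(control_log))
```

A shorter latency and a higher count, as in the hypothetical mutant log, are read as greater aggressiveness.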
The methodology of these researchers seems to have been very thorough, and even though it was an animal study, it has definite implications for human behavior. This further reinforces the connection between serotonin and human aggression.

Article #3: Advantages of low serotonin levels

This article shows that when it comes to subjects as complicated as the brain and human behavior, issues are rarely clear-cut. Thus, one gets a different viewpoint on the controversy concerning the role of serotonin in behavior.
For example, one in forty people suffers from some sort of obsessive-compulsive disorder. (6) Drugs which block the reuptake of the neurotransmitter serotonin have had some success in treating these disorders. According to Freedman and Stahl, press reports linking these drugs to increased violence and even murder have ignored scientific evidence showing that they can be safe and effective. (6)
A group of drugs known as SSRIs (serotonin-selective reuptake inhibitors), such as clomipramine and fluoxetine, have been shown to be the most effective pharmacological treatment for obsessive-compulsive disorder (OCD). (6) These drugs have effects on other amine systems as well, such as dopamine.
The researchers essentially review the various psychotropic drugs available and state which disorders they are used for. They state that federal reviews and sound, comprehensive analyses have failed to show any reliable evidence of fluoxetine-induced violence. (6) They recommend calm clinical curiosity and steady monitoring for side effects as a means of heading off any negative effects which may be incurred. (6)
Although I believe that serotonin-targeting drugs can aid in treating OCD, the researchers merely stated that studies exist which show these drugs to be safe, rather than giving details of the evidence which shows this. Considering the literature, which has shown a definite connection between low levels of serotonin and violence, particularly in animal studies, merely stating that this is not true struck me as insufficient, and it contributed little to my overall understanding of the subject.
By far, the first article was the most influential on my understanding of the subject, since it first reviewed animal research and then detailed human research, and showed what conclusions could be extrapolated. Also, the first article, while technical in detail, was more easily comprehended since it did not rely on long, multi-syllabic scientific jargon. Though a certain amount of jargon is necessary in writing an article intended for one's peers, it seemed as if the researchers in the last article were using such terms to obscure the fact that the article was low on content.
Nevertheless, whether or not low serotonin levels are conducive to increased aggression in every case is debatable and certainly worthy of more investigation. While human aggression is obviously influenced by environmental factors, how these factors interrelate and what effect they might have on brain function makes for a fascinating field of study, in which surprisingly little has been done. The first and second articles make a strong argument for a connection between serotonin levels and aggressive behavior. However, since it is clear that environmental factors can cause such a decrease in serotonin in animal populations, a pharmacological approach to such a deficit may be addressing only the symptom rather than the true causes of the aggression.
From these research articles, I find it difficult to deny that people are still far away from understanding the role of neurotransmitters in affecting aggression. Furthermore, many areas of the brain have been implicated in affecting emotion, including violence (aggression), but the results and claims have been numerous and vague. I think that as we learn more about the brain, much of which still remains a mystery, we can begin to explain aggression using the idea of 'brain=behavior'. Advances in science, and in neurobiology in particular, can be attributed to not examining problems from a single perspective; doing so results in narrow conclusions that are only applicable to one area. To form broader, well-rounded perspectives, one must study the problem from different scientific angles. Aggression cannot be simply explained by a serotonin deficiency; there may be other components not yet understood that will be revealed with the advances made in the scientific community, more specifically in neurobiology. My prediction is that as our technologies improve, discovering a more thorough picture of the brain's influence over aggressive behavior will be possible.


1) Anger and Aggression Information Web Page, a rich resource about aggression and anger

2) Vitiello, Benedetto; Stoff, David M. (1997, March). "Subtypes of aggression and their relevance to child psychiatry." Journal of the American Academy of Child and Adolescent Psychiatry, pp. 306-315.

3) Van Erp, Annemoon M; Miczek, Klaus A. (2000, December). "Aggressive Behavior, Increased Accumbal Dopamine, and Decreased Cortical Serotonin in Rats." The Journal of Neuroscience, 20(24): 9320-9325, a great research article

4) Serendip Web Paper '00, "The effect of the neurotransmitter serotonin on autistic symptoms," a good reference on serotonin

5) Science, "Enhanced aggressive behavior in mice lacking 5-HT(1B) receptor."

6) Freedman, Daniel X; Stahl, Stephen M. (1992, July). "Psychiatry." The Journal of the American Medical Association, Vol. 268, pp. 403-406.


Men and Women: Commonality in emotion
Name: Annabella
Date: 2003-05-16 13:19:37
Link to this Comment: 5707



Biology 202
2003 Third Web Paper
On Serendip

Man: A person (usually an adult male) regarded in terms of the qualities of courage, strength, or responsibility, etc., traditionally associated with adult males. An adult male human being, and senses principally based on this. (2)

Woman: The female human being; the female part of the human race, the female sex. The allusion to qualities conventionally attributed to the female sex, as mutability, capriciousness, proneness to tears. (2)


Over the course of human evolution there has always been great polemic as to whether or not men and women are truly different. Unfortunately, in most cases 'different' has been taken to mean inferior, leading to the development of sexism in which men were considered superior to women. Today, as society grows closer and closer to the goal of equality, we have developed a tendency to whitewash any differences that do exist between men and women, thus eliminating the discussion and discovery of similarities and differences between the sexes. Throughout this paper the similarities and differences between men and women will be investigated by looking at both physical and psychological issues, with the purpose of answering the question: are men and women really so different after all?


None would dispute that the most obvious difference between men and women is in their physical shape. Weight, shape, size and anatomy are not political opinions but tangible and easily measured traits. The physical differences between men and women provide functional advantages and have survival value. Men usually have greater upper body strength, build muscle easily, have thicker skin, bruise less easily and have a lower threshold of awareness of injuries to their extremities. Men are essentially built for physical confrontation and the use of force. A man's skull is almost always thicker and stronger than a woman's. The stereotype that men are more "thick-headed" than women is not far-fetched. (6) A man's "thick-headedness" and other anatomical differences have been associated with a uniquely male attraction to high-speed activities and reckless behavior that usually involve collisions with other males or automobiles. Men invented the game "chicken", not women. Men, and the males of a number of other animal species, seem to charge and crash into each other a great deal in their spare time.


Women, on the other hand, have four times as many neurons connecting the right and left sides of their brain. This finding provides physical evidence supporting the observation that men rely more heavily on their left brain, solving one problem one step at a time, while women have more efficient access to both sides of their brain and therefore greater use of their right brain. This may be why women are often better at skills that involve both hemispheres of the brain, like reading, which requires the ability to translate visual symbols into language. It may also be why men have more difficulty talking about their emotions, since talking is a left-brained activity, while emotions and feelings are generally right-brain activities. (6) Women can focus on more than one problem at a time and frequently prefer to solve problems through multiple activities at once. Men, on the other hand, become much better at more general types of activities. Men will be better at activities which require gross motor control and larger motions, and will be able to more easily see a physically larger picture, like a landscape or a night sky; the way the male brain develops naturally gives men a more 'general' view. (6) Nearly every parent has observed how young girls find the conversations of young boys "boring". Young boys express confusion and would rather play sports than participate actively in a conversation among five girls who are discussing as many as three subjects at once! (7) This is not a difference that dissipates over time. Women are more likely to spend free time with their 'girl friends' discussing the latest news and gossip, while men will gather to watch football or engage in a physical activity.


While it is relatively easy to categorize the physical differences between men and women, it becomes harder to see the psychological differences. One difference can be seen in the thought processes of men and women. Both sexes often arrive at the same conclusions, but the thought processes that are used are often radically different. As previously shown, women are more prone to multi-task (i.e., talking about three subjects at once). In general, women are labeled as 'global thinkers': they consider multiple sources of information and view elements of a task in terms of their interconnectedness. Women come to understand and consider problems all at once. (6) This could be construed as both a good and a bad thing. While it allows for a broader perspective, it also gives the opportunity to become overwhelmed with issues. For example, you can view these individual issues "like plates of spaghetti, where each strand of spaghetti touches every other noodle on the plate," and sometimes they can get knotted. (7)


This idea is backed by the Kraft-Roads Theory, which states that a woman views life as a vast network of roads, all interchanging, exchanging, crossing over, going in many directions, all running at the same time. Every event, circumstance, experience, encounter, meeting, relationship, and even each new emotion is stored as a new road or pathway being created within a woman's life perspective. (7) According to the Kraft-Roads Theory, a man views life as a single road. Everything that happens in his life happens on his one road. He can start, stop, change directions, back up, and even decide to take an entirely new road, but ultimately, he travels this one road. It is very interesting to consider that even with these seemingly vast differences in thinking, for the most part men and women usually arrive at the same conclusion.


After having assessed the differences and similarities in the thought processes of men and women, the next step is to investigate their memory. One main difference is that women's recall seems keyed to memories that have strong emotional components, and this applies to their recall system in general. A woman will access her memories by first recalling the emotions attached to them; usually, the stronger the emotion, the sharper the memory. From this it is safe to assume that memories with connected emotions are also recalled faster than those without. Men, on the other hand, tend to recall events using strategies that rely on reconstructing the experience in terms of elements, tasks or activities that took place. Profound experiences that are associated with competition or physical activities are more easily recalled. (4)


Closely connected to memory is the ability to solve problems. Men and women approach problems with similar goals, but with different considerations. While men and women can solve problems equally well, their approach and their process are often quite different. For most women, sharing and discussing a problem presents an opportunity to explore, deepen or strengthen the relationship with the person they are talking with. Women are usually more concerned about how problems are solved than merely solving the problem itself. Men approach problems in a very different manner. For most men, solving a problem presents an opportunity to demonstrate their competence, their strength of resolve, and their commitment. How the problem is solved is not nearly as important as solving it effectively and in the best possible manner. Men have a tendency to dominate and to assume authority in a problem-solving process. (6) Take for example a group of children on an Easter egg hunt. A group of boys will first establish a hierarchy based on demonstrations of ability; they might then send out 'scouts' or try to accumulate information about where the eggs are most likely to be hidden, and then proceed to hunt. A group of girls will not have a clearly established leader, but will explore together based on "collective intelligence." (8) In the end both boys and girls will end up with the same number of eggs, but once again they have achieved their goal in a radically different manner.


Now that we have seen how differences in problem solving affect group dynamics, how do these differences affect men and women performing individually? To explore this issue, a recent study was conducted to identify whether the differences in problem-solving abilities affect academics. A general population sample was given a survey that touched upon the topics of Physical Science & Technology, Life Science, Geography, History, Social Science, Religion, Art/Language & Literature, Performing Arts, and Sports/Hobbies & Pets. The survey concluded that women were smarter than men when it came to Art, Language, Literature, Geography, Sports, Hobbies, Pets and Life Science. Men scored better in History, Performing Arts, Religion, Social Studies, Physical Science and Technology. Overall, women were only 'smarter' by a 2 percent average. This survey seems to deny the general idea that either sex is smarter than the other, demonstrating that while each sex might have different strengths, neither is superior. (5)


As we study the differences between men and women, one attribute becomes consistent: the ability and manner in which emotion is processed. It seems as if women are wired to act and recall based on emotion, while men are geared towards action. It remains, then, to scrutinize the distinction between the ways men and women feel or sense emotions. There is evidence to suggest that a great deal of the sensitivity that exists within men and women has a physiological basis. It has been observed in many cases that women have an enhanced physical alarm response to danger or threat. (1) This may explain why women are more adept at 'reading' emotions than men are. In a way, a woman's lack of physical strength is compensated by her ability to predict the makings of a dangerous situation. Between people, dangerous situations are usually foreshadowed by certain emotions, and an enhanced ability to read these emotions is a type of defense mechanism that women have. This is not to say that men cannot sense emotions, only that they are oblivious to the finer indicators of emotion; men see the end product.


From all of the gathered evidence, one can easily jump to the conclusion that since women are more adept at reading emotions than men, women are also more emotional than men. While this might sound true, if we dig a little deeper, studies have shown that maybe this is not the case. Recently, psychologist Ann Kring conducted two studies, one to determine whether women are "more emotional" or just "more expressive", and the other to explore whether gender roles account for expressive differences between women and men. In both, women were shown to be more facially expressive of both positive and negative emotions. (3) However, when their heart rates and palm sweat were compared, both men and women showed relatively equal internal signals of feeling emotions. This is important since it supports the idea that while men might not outwardly express emotions as readily as women do, internally they are just as sensitive. What then is the cause of typical male stoicism? If sex is not what causes the difference in expressed emotion, we must look to an external source: maybe gender roles are responsible. In any society there are predominant stereotypes about sex and emotion. In today's society, feminine gender roles traditionally include such attributes as being nurturing, affectionate, warm and caring, while masculine characteristics are generally the opposite: aggressive, powerful and assertive. (3) It is easy to see how these stereotypes have influenced the opinion that women are more emotional than men.


In this paper the question has been raised: what are some of the differences between men and women? Through an investigation of the physical and psychological differences, the opinion has emerged that the main distinction between the behaviors and actions of men and women can be found in emotion. In general it is assumed that emotion plays a greater role in the thought processes and actions of women than in men. However, as we read through the evidence, the one main difference between the sexes can also be construed as their main similarity. The main evidence for this idea lies in the fact that psychology labels emotion as the main disparity between the sexes. For example, in memory, women will recall events based on the strength of the attached emotions, while men will recall memories attached to profound events associated with competition. But what are profound events, if not emotionally striking occurrences? Winning a race or a football game is a striking memory to a man because of the extreme emotions associated with the actions. Another difference that is usually attributed to emotion is a woman's tendency to outwardly express more emotions than a man. However, this myth was dispelled by a study in which various body functions (heart rate, palm sweat, breath rate) were measured and the outward reactions of the participants filmed, all while watching an emotional movie. While women outwardly expressed more emotion, on average both sexes' body rates were the same. This study demonstrates that while men will outwardly express less emotion than women, they feel just as much as women do. These points converge to suggest that what is perceived as being the greatest difference between men and women is actually a great source of commonality. The study of the differences between men and women is broad and unending. Many have, and many will continue to, expound on the topic. We have barely begun to touch upon a few of the issues in our investigation of physical and psychological differences. Yet, as we have seen demonstrated, once a closer look is given to the evidence, our greatest difference is often our greatest connection.

Web Resources

1)Emotion Definition Home Page, a page that give a simple, yet very educational definition of emotion
2)Oxford English Dictionary, an excellent online source for any type of definition. Especially useful for historical information on words.
3)Science A GOGO Page, An online science magazine, that published the most current studies and statistics that are being published.
4) Work Place Doctors, a forum where Doctors and medical professionals answer and debate weekly questions.
5)Stem Net Home Page, this is where the results of 'project 15' are posted. A study done on the academic differences in men and women.
6)Oregon Counseling, a rich resource in published medical papers, in particular those relating to psychology.
7)Kraft-Roads Theory Site, this is the home page of the Kraft-Roads theory. It gives a wonderful explanation of the theory, and has some very informative links.
8)Whitman College of Institutional Research Site, an informational source of statistics gathered primarily from surveys given to college students around the country.


Speaking of Happiness
Name: Kat McCorm
Date: 2003-05-16 13:35:45
Link to this Comment: 5708



Biology 202
2003 Third Web Paper
On Serendip

Speaking of Happiness:
The Interplay between Emotion and Reason

"And this is of course the difficult job, is it not: to move the spirit from it's nowhere pedestal to a somewhere place, while preserving its dignity and importance."

I cry. There is pressure behind my eyes, my skin turns blotchy and my lips tremble, and mucus clogs my airways, making it difficult to breathe. I hate crying in front of others: not because I want to hide how upset I am, but because the second that most people perceive my emotional state as fragile, they assume my reasoning and mental functions are also not sound. The outward expression of an inward instability is something we save for those whom we know and trust best. They do not view our emotionality as a weakness; they already know us to be strong. Crying is represented in our culture as a lack of control. When upset, the "ideal" is to keep a cool head (and a poker face), not allowing emotions to enter into the decision-making process. Control has become such a constant in American society that we are hardly aware of our continual fight to attain it or maintain our illusion of it. Feeling overcome by an emotion causes individuals to face their own inability to govern situational outcomes, and their own responses to these outcomes. However, I submit that without our emotional base, rationality would have no reason or foundation upon which to operate: emotion and reason are not mutually exclusive, but rather interfere constructively within the construct of the human brain.

A multitude of opinions are found on the subject: are emotions more a function of the heart or of the head? According to Antonio Damasio (1), emotions and feelings are an integral part of all thought, yet we as humans spend much of our time attempting to disregard and hide them. In the view of source (2), experience is the result of the integration of cognition and feelings. In either view, it remains indisputable that emotions are not what we typically make them out to be: the unwanted stepsister of our cultural sweetheart, reason. Reason in our culture denotes intelligence, cognition, and control. Emotions seem such a "scary" concept to our collective mind because they can be so overwhelming, and can cause us to lose the control we are so reluctant to relinquish. Consequently, the perceived division between emotion and reason has resulted in more polar divisions that we experience on a daily basis: the great schism between the humanities and the sciences, for example. However, as is pointed out by an NIMH study (3), this historical division between emotion and cognition is losing its utility as research progresses. The integration of the concepts is reflected in interdisciplinary interest, from neurobiology to psychology, and the implications are far-reaching. Increasingly, the rose-colored glasses of the psychologists are being related to the rose-colored thoughts of the biologists.

An early interpretation of the relationship between emotion, cognition and physiology was that of William James, who thought of emotions as results of physiological processes of the autonomic nervous system (6). According to his observations, first comes cognition, then a physiological response, and then an emotion. In response to an event such as the death of a friend, cognition first steps in to interpret what this means, then the body begins to cry, and because we are crying, we begin to feel sad. Or, if a person happens to bump into a bear in the woods, the person would cognitively recognize that bears are dangerous, and so they would start running and have an elevated heartbeat and sweaty palms. Then, in turn, the mind would interpret this physiological response as the emotion of fear. A later theory was proposed by Walter Cannon and Philip Bard. This theory (4) proposes that in response to a stimulus, a signal is sent to the thalamus, where the signal splits, half going toward the experience of an emotion and the other half toward producing a physiological response. This theory was based on observations that because emotions and physiological responses happen simultaneously, neither can be the cause of the other. Again, here we may use the example of the bear in the woods: when the person sees a bear, the reaction is virtually instantaneous; the signal from the thalamus splits, causing the person to be afraid and to run, all (apparently) within the same heartbeat. However, the Cannon-Bard theory does not even deal with the event of cognition as related to either physiological response or emotion. For this reason, most modern neurobiologists are not satisfied with either theory based on their own observations. The question remains: what physiological structure can we pinpoint as the source of our emotions? And perhaps most importantly, can this pinpointed structure place emotion within the brain?

Generally, if we grant it true that brain is equal to behavior, and we admit that our behavior is a result of both our emotions and our reason, it must be conceded that the brain must doubly reflect both reason and emotion, and must serve as host to both. A recent NIMH study further investigated this concept and attempted to trace emotional and cognitive memory to a physical structure in the brain. It was found that the amygdala has a large role in the storage and integration of both. There is much evidence that emotions and cognitive processes are in some way interdependent based on their common root within the structure of the amygdala. One example of this is the research of Monica Luciana (3). According to her findings, spatial working memory is influenced by goals and emotional states, as she tested through the use of dopamine and serotonin agonists and antagonists. When dopamine agonists were used (emulating an emotion of happiness), subjects' performance on a spatial abilities test improved. The use of serotonin agonists (emulating negative emotions) slightly impaired performance, as did dopamine antagonists. These results are consistent with the everyday practices of "psyching yourself up" before an exam or, more generally, "thinking positive". In the same study, depressed patients were found to have increased blood flow to the amygdala, and therefore increased amygdala activity. In reflecting on this, I observed that many of my own depressed periods occurred at times when friends and family described me as "thinking too much." Could this be related to the overactivity of the amygdala, a structure which serves as an integration center and "memory 'enabler'"? Shakespeare describes a similar experience in As You Like It (5), saying: "it is a melancholy of mine own, compounded of many simples, extracted from many objects, and indeed the sundry contemplation of my travels, which, by often rumination, wraps me in a most humorous sadness." His observation that his sadness was the result of rumination and contemplation supports the findings that the amygdala is a center for cognitive and emotional memory, and that its overactivity results in depression.

Considering that there is substantial evidence for the presence of emotion as a structure within the brain, does this necessarily imply that emotion and reason can be seen as mutually supportive? Formerly, I claimed that without emotion, cognition would have no base upon which to operate. This claim was a result of many of my own observations of emotion as a paradigm through which thought is filtered and develops. One example of this is the "downward spiral" of depression. Once triggered, depression establishes itself as the viewing window for all subsequent circumstances and events. The thoughts a person then has about the circumstances surrounding her develop out of depression, and therefore more depression ensues. Even if a person is conscious of this phenomenon, perception can be near impossible to change. This interplay between emotion and cognition is reminiscent of the blind spot dilemma: even though we are conscious of a blind spot, we cannot choose to see it. Emotion and reason are so interrelated that we cannot choose when and how to separate them out. However, our cultural and (dare I say?) individual obsession with control makes both the blind spot and the emotional paradigm hard to accept. But rose-colored glasses result in rose-colored thoughts, even if we choose to ignore it.

I love. I, the self, love something in the environment around me. But this is not just some ethereal feeling which cannot be placed or defined or seen, which can only be felt and described by musicians and poets. Love, traditionally, is placed on an invisible pedestal, floating mysteriously above us, our rationality, our flesh. But I love. I feel weak in the knees, butterflies in the stomach, head over heels in love. Somehow, we still connect this otherworldly experience with physical sensations. And yet, we are reluctant to bring love from its abstract home into our own bodies, our own minds, afraid that if we recognize our emotion as something totally contained within our brains, something totally human, we will wake to find some of the wonder, tenderness, and luster gone. Is this also reflective of some human insecurity? What is it within the human brain that wants so badly to appear in control that we can become convinced of the benefits of weeding emotion out of our brain and our behavior? Not until we can bridge the illusory gap between our emotions and our cognition can we understand fully either our brain or our behavior.

References

1) A.R. Damasio, Descartes' Error, 1994

2) Thinking, Emotions, and the Brain

3) From Neurobiology to Psychopathology: Integrating Cognition and Emotion, on the NIMH website

4) Laughing out Loud to Good Health

5) William Shakespeare (1564–1616), The Oxford Shakespeare, 1914, on the Bartleby website

6) Theories of Emotion--Understanding our own Emotional Experience


ADHD and Cocaine
Name: Kate Tucke
Date: 2003-05-16 14:39:01
Link to this Comment: 5710

Biology 202
2003 Third Web Paper
On Serendip

I have three brothers, all of whom were adopted. Two of them have the same birth mother, and both had cocaine in their bloodstreams when they were born. The doctors assured my parents that the cocaine would leave their systems and that they would be fine; at the time, my parents were told there would be no side effects. Now it seems that this may not be the case. Both boys have been diagnosed with AD/HD and are both on medication to control it. Doctors now believe there may be a link between cocaine use and attention deficit/hyperactivity disorder. In this paper I will explore that relationship.

ADHD affects 3 to 5 percent of children, almost 2 million American children. It does not have physical characteristics that can be used to identify it. Instead, there are certain characteristics that most people with ADHD exhibit. Some of these are inattention, hyperactivity, and impulsivity. To be diagnosed with an attention disorder, the person must have had the problem from before seven years of age, have had it for more than six months, and it must interfere with their life. If a person exhibits these symptoms but they do not interfere with jobs, school or relationships, then the person does not have an attention disorder.(1)

There are several treatments for ADHD. Medication has been hugely successful in helping people concentrate, but it is also helpful to seek counseling to learn how to manage the disorder through behavioral changes. ADHD drugs are controversial because they are stimulants. The drug most commonly prescribed is Ritalin. For many years it was thought to be less potent than other stimulants, particularly illegal ones. In one study comparing the effects of cocaine to Ritalin, however, researchers found that Ritalin is actually more potent in the brain than cocaine.

When taken orally, Ritalin does not produce a high. When injected as a liquid, however, it produces a feeling very much like using cocaine. Dr. Nora Volkow used PET imaging to determine how Ritalin affects the brain. Cocaine blocks the transporters that clear dopamine from synapses, causing an excess of dopamine to remain there. Researchers initially thought that while cocaine blocks about 50% of these transporters, Ritalin would do the same but be much less effective. They discovered, however, that a typical dose given to children is actually more effective than cocaine: it blocks 70% of the transporters. (2)
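To see why blocking reuptake raises synaptic dopamine, consider a minimal steady-state sketch in Python. This is my own toy model for illustration, not taken from Volkow's study: if dopamine is released at a constant rate and cleared in proportion to the fraction of transporters left unblocked, the steady-state level scales as 1/(1 - fraction blocked).

```python
# Toy steady-state model of synaptic dopamine under reuptake blockade.
# Assumes constant release R and first-order clearance through the
# unblocked transporters; all numbers are purely illustrative.
R = 1.0   # release rate (arbitrary units)
k = 1.0   # clearance rate constant via unblocked transporters

def steady_state_dopamine(fraction_blocked):
    # At steady state, release equals clearance: R = k * (1 - b) * C
    return R / (k * (1 - fraction_blocked))

baseline = steady_state_dopamine(0.0)
for drug, blocked in [("cocaine", 0.50), ("Ritalin", 0.70)]:
    level = steady_state_dopamine(blocked)
    print(f"{drug}: {blocked:.0%} blockade -> {level / baseline:.1f}x baseline dopamine")
```

Under these toy assumptions, 50% blockade doubles synaptic dopamine while 70% blockade more than triples it, which gives an intuition for the occupancy figures above.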

Since this discovery, the question that remains is: why isn't the drug abused more frequently? There is a very low reported incidence of Ritalin abuse. Researchers speculate that this may be because, when taken orally, the drug takes much longer to enter the system and therefore does not produce a "high" like cocaine.

There is a correlation between cocaine use and ADHD. Many cocaine users are later diagnosed with ADHD, leading doctors to speculate that these were actually cases that had gone undiagnosed; these people were self-medicating with cocaine. Being treated with medication for ADHD can help them stay off cocaine, as subjects report that their cravings for cocaine decrease once they start taking bupropion as treatment for ADHD.

Another link between cocaine and ADHD is in children exposed to cocaine prenatally. Studies have shown that children whose mothers used cocaine while pregnant are more likely to have problems with paying attention. (3) This leads to several questions. First, is it truly possible to determine this link? These studies are all based upon people's own reported amounts of cocaine consumption, and they could be untruthful for a variety of reasons. Second, are there confounding factors? In the same study, the researchers found that there is also a correlation between nicotine use and attention deficit in the child. Since many of the women in the study who used cocaine also smoked cigarettes, it is difficult to determine which one is the causal factor. Third, and most importantly, is cocaine a causal factor at all? These studies have been based on gathering information about the birth mother and the child and did not measure the biochemistry of the brain at any point in the process. Yes, there does seem to be a link, but it is not necessarily causal. It has already been shown that children with ADHD tend to have a parent with at least some signs of ADHD as well. If the disorder is genetic, then it follows that the children of cocaine users would have ADHD since, as we have seen before, in many cases cocaine users are undiagnosed cases of ADHD. It would be easy to assume that the use of cocaine while pregnant is the cause of ADHD, but the link is not so easy to establish. The correlation can be explained by the simple fact that ADHD tends to be inherited genetically.

But for a moment, let us examine the possibility that cocaine use does cause ADHD in children. What could be the cause of this? Cocaine causes a certain influx of neurotransmitters into the brain, producing a pleasurable effect for the user. If the user is pregnant, the cocaine crosses into the womb and the infant's forming brain receives the same effect. It seems possible that the brain could adjust itself to having that chemical effect and therefore produce less of the neurotransmitters involved than would otherwise be produced naturally. In other words, perhaps ADHD in these children is a side effect of the cocaine being removed from their brain. It could be a sort of lifelong withdrawal.

While this is certainly an interesting theory as to why children who were exposed to cocaine prenatally tend to have difficulties concentrating, I find it more likely that the problem is simply hereditary. It has already been shown that some people whose ADHD has gone undiagnosed into adulthood will use cocaine to medicate themselves. Without more scientific study, it would be impossible to assert that cocaine use was the sole cause of ADHD in the children exposed in the womb.

Another issue to consider is that of medicating children with a drug that closely resembles cocaine. What are the effects on the child, and will the child then go on to abuse it or other substances later on? Although Ritalin is a stimulant, it does not function to give people a "high" like we think of with cocaine. Researchers have speculated that this may be because it is taken orally and reaches the brain at a more gradual pace. Children who take Ritalin actually show a decreased tendency to try illicit drugs later in life. Perhaps this is because, unlike those cocaine users, these children do not feel the need to self-medicate: the need to try other substances is decreased because they have already been medicated with Ritalin.

Many parents, however, are quite skeptical about Ritalin and are unwilling to give it to their children. A simple search about ADHD on any search engine will turn up many websites dedicated to decrying the evils of Ritalin and of medicating children. There are several reasons for this, many of which actually have nothing to do with the links between Ritalin and cocaine.

Many parents simply do not like the idea of medicating their children for what they consider normal behavior. ADHD has been very controversial because it is involved in the so-called "normalizing" of children. Any child that acts out in class or is disruptive can be labeled with ADHD. These children who are labeled as different can then be normalized, simply by taking a drug. But is it good to make children conform to normality? Perhaps their spunk is simply part of their personality, and not a disorder which must be eliminated.

In this case, I must say that parents' fears are excessive, provided the children are properly diagnosed. The diagnosis of ADHD includes the stipulation that the behaviors considered part of the disorder somehow interfere with the child's functioning. A proper diagnosis would not label a child who simply has a short attention span as having ADHD unless it is somehow keeping them from having a normal life. Unfortunately, we once again encounter the word normal, which raises so many objections from the critics of ADHD. In this case, however, I do not believe it is improper to use it. If a child cannot learn in school because that child cannot focus on the task at hand, then there is a problem that should be solved. In many of these cases, the child is frustrated and unhappy as a result of the disorder. They do not understand why they act the way they do or why they cannot do tasks that they should be able to. In some cases, children develop very low self-esteem because they do not think that they have the ability to do what other children can. In these circumstances, I view medication as a very viable treatment. It can improve the quality of children's lives greatly and help their own happiness increase. I agree, however, with those who are concerned that ADHD is greatly overdiagnosed. We, of course, must be careful only to treat those children who need it. But this is the case with any disease, especially mental illness.

I find the fear about ADHD to be a reflection of a general attitude towards mental illness, rather than a reaction to that particular disorder. People often regard mental illness as less valid than physical illness. The same prejudice exists with other common disorders, such as depression and bipolar disorder. People are often unwilling to admit that a problem exists unless they have physical evidence, such as a broken arm. Unfortunately, although something may be "broken" in a person with a mental illness, it is impossible to see. That does not make it less valid, though. It would be horrible if psychiatrists were to stop prescribing antidepressants simply because there are no physical symptoms of depression. There are so many mental illnesses that can only be diagnosed by looking at the symptoms. It is simply a mistrust of the field of mental illness as a whole that leads people to discredit ADHD because there are no physical symptoms.

In my exploration of ADHD, I have come across several questions that have not yet been answered. There are those who believe it is wrong to medicate children who have it and those who believe it is essential. There are those who believe it is widely overdiagnosed and those who believe that recognizing the children who have it will greatly improve their lives. Both sides of the debate have some merit, though I am more inclined to agree with those who believe that ADHD is a mental illness and that medication can be used to correct it. The adult cocaine users are a confirmation of this; they show that there are people in society who do need to be medicated. These people have chosen to medicate themselves, though they most likely did not think of it that way. They were driven to use cocaine because of an imbalance in the brain, an imbalance that can be fixed by other, legal drugs now that we know what it is. Yes, Ritalin may function in a manner similar to cocaine, but the low levels of abuse make it a much safer alternative. If children with ADHD are left unmedicated, they are more likely to turn to drugs later on in life. (4) So although parents may dislike the idea of "drugging" their children, they really need to accept that sometimes drugs are a necessity, and that legal and safe drugs are a better alternative than watching their children turn to illegal ones later on in life.

References

(1)NIMH. "Attention Deficit Hyperactivity Disorder.", a summary of ADHD

(2)Vastag, Brian. "Pay attention: Ritalin acts much like cocaine".

(3)Bandstra ES, Morrow CE, Anthony JC, Accornero VH, Fried PA. "Longitudinal investigation of task persistence and sustained attention in children with prenatal cocaine exposure".

(4)Joseph Biederman, Timothy Wilens, Eric Mick, Thomas Spencer, and Stephen V. Faraone. "Pharmacotherapy of Attention-deficit/Hyperactivity Disorder Reduces Risk for Substance Use Disorder".


Imagine: A Day in an Autistic Brain
Name: C.Kelvey R
Date: 2003-05-16 15:29:45
Link to this Comment: 5711



Biology 202
2003 Third Web Paper
On Serendip

While many components of autism continue to mystify the scientific world, there is a common pattern within the brain that may explain autistic behavior. Understanding the unique pathways and coping techniques of the nervous system of an autistic person will facilitate understanding autistic behavior and possibly direct future research. The brain of an autistic person experiences excessive stress due to external and internal sensory inputs, which the nervous system cannot effectively integrate and process. As a result, autistic people express seemingly "maladaptive behavior" (1). These behaviors may include social phobia, compulsive behavior, communication difficulties and savantism (1). In truth, an autistic person's behavior is not "maladaptive" but rather, a logical reaction to the activity within a brain as it tries to integrate a collection of signals, process the information and then effectively communicate. While the cause and pathology of autism remains a mystery, the differences between the brains of non-autistic and autistic people further support the theory that autistic behavior is the result of the brain's inability to effectively integrate sensory inputs. Due to the stress the brain experiences from ineffectively integrating inputs, the autistic person creates an environment to control the activity of the brain. Therefore, instead of observing how inputs from the environment affect the brain, autism involves observing how the brain affects a person's attempt to limit the inputs from the environment.

Imagine you are autistic for a day. How would you behave, and how would your behavior be a reflection of the activity in your brain? How does your brain process interactions with animate and inanimate objects? What are the physiological differences in your brain compared to a non-autistic brain? As the rate of diagnosed cases of autism in children increases, with as many as 1.5 million adults and children in the United States having some form of the disorder, it has become increasingly important to understand autism (2). To develop an understanding of the stress in the autistic brain, imagine you are autistic for a day and see how the interactions throughout the day are processed within the brain, and what unique physiological differences may be further examined to explain your limited integration of inputs.

'The day begins with your mother coming into your room to wake you, giving you a hug and saying, "Good morning". You respond with a look of bewilderment and begin repeating "good morning" in a monotonous tone.' The social interaction between autistic children and adults demonstrates the complexity of communication. An autistic child often has difficulty interpreting adult communication, which is often indirect (3). Adults tend to increase the complexity of their communication by incorporating emotional cues into their speech. The autistic brain has difficulty processing and integrating the various cues that converge in social interactions. As a result of these dynamic interactions, people with autism may appear emotionally insensitive or unresponsive while internally they are struggling with their heightened sensitivity and emotional state (3). Even a seemingly simple statement such as "good morning" is expressed with facial expressions as well as variations in sound to convey a meaning (4). As a result, the autistic nervous system cannot process the various inputs, and the autistic child focuses on the words and repeats "good morning" while ignoring any response to the emotional aspect of the interaction. In addition, the expression of emotion requires the greatest convergence of information to the brain from different signals that originate both internally and externally to the nervous system (3). Therefore, the stress placed on the brain in order to process the information may result in the emotion center of the nervous system 'shutting down', and another person will interpret the expressed behavior as that of a child who seems to lack emotion. As a result, it is often easier for autistic children to show affection and connect with the basic personalities of animals, who do not emphasize verbal expressions using emotional cues (3).

Research on the corpus callosum may identify possible physiological causes of the difficulty autistic individuals have with incorporating verbal and emotional cues. The corpus callosum's large fiber pathways allow the left side of the brain to communicate with the right side (5). The middle and back parts of the corpus callosum are smaller in autistic individuals (5). Observations of autistic individuals confirm they indeed lack hemisphere specificity: background responses in both hemispheres are greater in response to stimuli in the left visual field than to stimuli in the right visual field (6). The effects of a lack of communication between the right and left hemispheres, and more notably of a possible malfunctioning of the right hemisphere, are demonstrated by the processing of emotion. Emotion is considered to be a 'whole-brained' activity, with the left side interpreting the "what" of emotion and the right side determining the "how" of emotion (7). If the right side is malfunctioning and unable to send the message of "how", then the cues of emotion are lost; the inputs are never integrated into a completed output that contains both the "what" and the "how". Furthermore, because language is typically located on the left side of the brain, an autistic person may repeat a basic verbal expression without integrating emotion into the expression.

Interestingly, the corpus callosum tends to be wider in women, particularly in the posterior regions, with CAT scans showing better connections between the hemispheres in women (8). Given that the prevalence of autism is four times greater in boys, there may be a structural difference in the brains of males that heightens their susceptibility to autism (7). The observation that women tend to be 'multi-taskers', integrating ideas and actions, may be correlated with a wiring of the brain that strengthens a particular region and thus helps prevent the onset of autism.

'You are then taken to day care, where there is a room full of toys and playmates. You may choose the toy car and then go to the corner and spin a wheel of the car.' In the environment of a playroom, an autistic child actively attempts to limit the sensory inputs to the brain. A doll or action figure is less attractive to an autistic child because it requires more pattern recognition related to animated thought and therefore collections of inputs that must be integrated (4). Instead, the repetition of a spinning wheel calms a stressed brain in a room full of inputs. Repetitive information is easier to process, and a stressed brain receives comfort from repetition because it reduces further stress in that area of processing (4). Just as word repetition allows an autistic person to process the words independent of the associated emotion, focusing on the wheel creates a simple field of vision with a repeating pattern and removes the potentially complex significance of a car. The corner is the location of choice because it provides seclusion. Significant stress collects in the auditory centers of the brain more quickly than in visual processing regions (4). Therefore, a quiet location, separated from the collection of incomprehensible and seemingly disorganized noise of the other children, is the most comfortable and calming location for an autistic child.

The difficulty autistic individuals have dividing their attention in a room full of visual and auditory inputs may be related to the two areas of the cerebellum that are abnormally sized in autistic individuals. Research conducted primarily by Dr. Eric Courchesne found a correlation between the abnormally sized areas of the cerebellum in autistic individuals and the difficulty autistic individuals have with detecting cues when given tasks that require attention to be divided between auditory and visual stimuli or between several different visual stimuli (9). Without being able to rapidly and automatically shift and distribute attention, only one or a few of the many stimuli may reach awareness (6). Furthermore, the task of integrating all of them into a coherent story is rendered very difficult (6). This difficulty with integrating visual and auditory stimuli suggests that the combination of many stimuli sending signals to the brain would lead to stress on the nervous system and the subsequent preventative or reactive behavior observed in autistic individuals.

'You are picked up from day care and are taken to do the grocery shopping with your father. While in the store you surprise him by calculating the grocery bill before the cashier.' Autistic people's ability to complete seemingly impossible calculations may be a coping mechanism to prevent sensory overload. To limit inputs, an autistic person often focuses on one particular action, saying or thought. This narrowed focus may be a habit developed to avoid pain, but the result is a unique skill (4). Isolating a specific function, such as mathematical computation, allows the autistic child to concentrate exclusively on a particular thought (4). The inclusion of additional factors that are not related to the problem corrupts the computation. Therefore, people who do not have the skill of isolating inputs have more difficulty developing a particular brain function, such as mathematics, to a high degree. In addition, solving a problem takes place in one part of the brain, while communicating the answer uses a different part of the brain. Therefore, autistic people are able to calculate without understanding or complicating the problem with "the language of mathematics" (4). To reduce the stress in the nervous system from too many sensory inputs, the autistic child develops the mental abilities to regulate the reception of environmental stimuli.

'At the end of the day, your family decides to take you to dinner at a restaurant, only to discover that they have to leave when you begin to scream.' This observed behavior is the reaction of an overloaded and stressed brain. The tantrums of an autistic child may be caused by the brain's inability to integrate sensory inputs and form consistencies within the brain. The inputs from an autistic person's senses cannot converge within the brain and develop a single picture, but instead remain fragmented. As fragmented pieces, each input sends signals that the brain does not dull, because the brain cannot create a consistent and coherent pattern and determine which signals are repetitive.

Thus, a non-autistic person entering a restaurant will smell food, hear a cash register, see crowds of people, and dull the inputs after creating a coherent story that each input is simply a cue related to the environment, a restaurant. An autistic person, on the other hand, is unable to connect the inputs, understand their relation and then simply 'dull the senses'. Instead, the nose smells the plates of food, the body feels the touch of strangers, the eyes see the waiter arriving at the table and the ears hear the ringing of the cash register, while the autistic individual has no idea where to focus or what the appropriate output is for the seemingly fragmented signals. An instinctual expression of stress in the brain that does not require organized communication is a tantrum. The idea of a pleasurable meal at a restaurant becomes pain for an autistic person.

Within the brain there is a constant attempt to give order to an ever-changing world. The term used to describe the activity of the nervous system that provides consistencies is the lateral inhibition network (LIN). The LIN attempts to clarify the world: instead of seeing every detail, the 'edges' are located and the brain fills in the irrelevant information. In an autistic child, the LIN is in essence defective, and therefore an autistic person cannot create consistencies or 'throw out' irrelevant information. An autistic child will focus on the wheel of the toy car because they are struggling to create a 'story' for the spinning object and therefore cannot move beyond it to the more complex car. In a social situation, they cannot create a coherent story that would enable them to dull certain inputs. As a result of a defective LIN, an autistic person has millions of fragmented signals, which overload the nervous system and result in behaviors that limit inputs to the brain.
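Lateral inhibition can be illustrated with a small numerical sketch in Python. This is my own toy example, not drawn from the sources: each point of a signal suppresses its neighbors, so uniform regions flatten out while 'edges' stand out.

```python
# Toy 1-D lateral inhibition: each receptor's output is its own input
# minus a fraction of its neighbors' inputs. Uniform stretches give a
# flat response; the step ("edge") in the middle is accentuated.
signal = [1, 1, 1, 1, 5, 5, 5, 5]  # a step edge between 1s and 5s

def lateral_inhibition(s, w=0.4):
    out = []
    for i in range(len(s)):
        left = s[i - 1] if i > 0 else s[i]
        right = s[i + 1] if i < len(s) - 1 else s[i]
        out.append(s[i] - w * (left + right) / 2)  # center minus surround
    return out

print(lateral_inhibition(signal))
# -> [0.6, 0.6, 0.6, -0.2, 3.8, 3.0, 3.0, 3.0]
# The dip and spike at positions 3 and 4 mark the edge: this is how an
# intact network "locates the edges" and lets the rest be filled in.
```

If this inhibitory sculpting is weakened, every input keeps its full strength, which is one way to picture the flood of undulled, fragmented signals described above.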

Interestingly, autistic brains have a reduced number of Purkinje cells, with a loss of up to 41% compared to normal brains (5). The Purkinje cells are considered inhibitory neurons. They selectively suppress and limit the excitatory impulses they receive from over 200,000 other cells and act as the sole output from the cerebellar cortex (10). While the competing 'voices' contribute to a high level of background activity, the Purkinje cells sculpt and compose this 'noise' into coherent 'musical phrases' the rest of the brain can clearly understand (10). A reduced number of Purkinje cells may not only limit the autistic person's ability to inhibit or dull certain signals to the brain but also inhibit the ability of the nervous system to create a coherent story. Furthermore, without being able to inhibit or integrate signals, there is a constant threat of overloading the nervous system in social situations similar to a dinner at a restaurant.
In addition, autistic people have elevated levels of beta-endorphins, which are linked to the perception of pain (11). Autistic people have a high tolerance to physical pain (11). However, this tolerance for external physical input may actually be the result of beta-endorphins that are released to alleviate the internal pain or stress of the nervous system. The observed pain tolerance may simply be the side effect of a process directed at a different target.

Whatever the cause, the basic pattern in an autistic person's brain, expressed through their behavior, originates from a collection of inputs from within and outside the body that the nervous system cannot integrate and effectively process. As a result, there is a sensory overload, which stresses the brain. To cope with this sensory overload, an autistic person attempts to limit inputs from the environment so they are processed gradually. Instead of seeing an autistic child as 'maladaptive' or 'unpredictable', attempts should be made by non-autistic brains to develop the coherent or predictable stories and understand the adapted patterns underlying autistic behavior. Understanding patterns within autism would not only remove various components that mystify both observers and scientists but also help direct future scientific research. If autistic children cannot 'naturally' coordinate subconscious thoughts into appropriate behavior, then can autistic children be taught to form those patterns and integrate information? In other words, how malleable is the I-function? The I-function is involved in coordinating subconscious thoughts into behavior and ultimately produces a reaction which may not be explained by a conscious thought. Furthermore, the I-function tries to provide a coherent story with a seemingly logical response. Therefore, could the I-function of autistic children be trained to coordinate subconscious thought and produce an 'appropriate' reaction? Could the brain be trained to see patterns and process two sets of stimuli? Can the synaptic pathways be consciously organized in order to develop an unconscious reaction? Furthermore, can autistic children be given tools before the habits of desperate communication or limited interactions develop? Identifying common patterns in the mystifying behavior of autism may not only be the link between an autistic child spinning a wheel or playing with the toy car but also may provide an understanding of how much further than the sky the brain can reach.

References

1) Learning Interrupted: Maladaptive behavior in the classroom

2) Autism

3) Insights on autism and other observations inspired by "Thinking In Pictures" by Temple Grandin, Part 2, an excellent philosophy/neurobiology website

4) Insights on autism and other observations inspired by "Thinking In Pictures" by Temple Grandin

5) Difference in brain size may hold key to autism research

6) Neuroanatomical and Neurophysiological Clues to the Nature of Autism

7) Brain Hemispheres, Emotions and ASD

8) Male and Female Brains

9) The Cerebellum and Autism

10) Purkinje World

11) Sensory Abnormalities


Absence Seizures
Name: Laurel Jac
Date: 2003-05-16 16:08:50
Link to this Comment: 5713



Biology 202
2003 Third Web Paper
On Serendip

Absence seizure disorder is rare and almost always occurs in children between the ages of five and fifteen. Teachers of children with absence seizures are usually the first to spot the problem: when a child experiences a seizure, she exhibits a blank stare and does not remember anything that happened during the trance-like state, so these children are frequently noted as having attention problems.(1) They lose their train of thought, they do not answer when called upon, and they are easily distracted.(1) All of these characteristics are byproducts of the child's disorder. Usually the children do not realize there is a problem. If left untreated, the seizures can lead to learning disabilities, but in most cases, absence seizures do not have lasting side effects. It is important to treat the disorder because while experiencing a seizure, the child may lose some muscle control and collapse.(2)

Absence seizures occur rapidly, lasting less than thirty seconds. There are no warning events that signal their onset. They occur almost exclusively in children and early adolescents.(1) In most cases, absence seizures cease as the child ages. While experiencing a seizure, the child exhibits a blank stare, and when the seizure ends, the child is not disoriented and resumes normal activity. During absence seizures, chewing movements, rapid breathing, or rhythmic blinking may be observed.

Seizures are caused by sudden, large discharges of electrical impulses from neurons. Neurons communicate by sending electrical discharges, and seizures occur when an abnormal amount of electrical discharge is released. Different types of seizures are defined by the location in the brain in which they occur as well as the intensity of the discharge. Diagnosis of any type of seizure disorder (epilepsy) can be made after an electroencephalogram (EEG) has been administered to measure the electrical activity within the brain. Abnormal activity can be detected this way; however, a normal EEG does not rule out epilepsy. Some diseases can cause seizures, so blood tests may also be taken. A cranial CT scan or a cranial MRI can help to observe the structures within the brain.(2)
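
As an illustration of the kind of signal an EEG picks up, here is a minimal sketch of my own; all numbers are invented, and the 3 Hz rhythm merely alludes to the spike-and-wave pattern classically associated with absence seizures. A brief high-amplitude burst in a synthetic trace is flagged by a simple amplitude threshold:

    import numpy as np

    rng = np.random.default_rng(0)
    fs = 100                                   # sampling rate in Hz (invented)
    t = np.arange(0, 30, 1 / fs)               # 30 seconds of synthetic "EEG"

    # Background: low-amplitude noise standing in for a normal trace.
    eeg = 0.2 * rng.standard_normal(t.size)

    # A 5-second, high-amplitude 3 Hz oscillation standing in for a seizure discharge.
    burst = (t >= 10) & (t < 15)
    eeg[burst] += 2.0 * np.sin(2 * np.pi * 3 * t[burst])

    # Flag one-second windows whose mean amplitude greatly exceeds the background.
    window = fs
    for start in range(0, t.size, window):
        if np.abs(eeg[start:start + window]).mean() > 0.5:  # arbitrary threshold
            print(f"abnormal activity around t = {start // fs} s")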

Absence seizures are treated with anticonvulsant medications, which help to limit the number of seizures experienced.(2) Responses to the medications vary, and the general side effects include drowsiness, dental problems, and allergic reactions. Many over-the-counter medications and herbal remedies interfere with the proper functioning of anticonvulsants.(5) Some common anticonvulsants are valproic acid, ethosuximide, and lamotrigine.(3) Once treatment has begun, it usually lasts for about two years,(3) at which point the child will be taken off medication and monitored for continued abnormal brain activity.

A few months ago, my eight-year-old cousin Sophie was diagnosed with absence seizures. Her grade school teacher was the first to suggest that she might be having seizures. Sophie is a very cheerful, somewhat ditzy little girl, always animated and engaged. She was considered "spacey". Her older brother, being a typical brother, teased her for spacing out all the time. He would emphatically wave his hand in front of her face, and say "Sophie, earth to Sophie!" With the diagnosis of absence seizures, her moments of day-dreaming were explained. Her brother has only recently stopped feeling guilty for teasing her so much. Now that Sophie is taking medication which prevents the seizures, her episodes have almost completely ceased.

It is still too early to tell if she has suffered any damage from having experienced seizures for a few years without receiving treatment. Sophie's physicians explained to my aunt and uncle that the necessary medications may dull Sophie's vibrant personality. Some children lose affect while taking the treatment.(4) Fortunately, Sophie's personality is intact.

Apart from disease-related causes, the etiology of absence seizures is unknown.(6) There seems to be a genetic connection, but the exact process of heritability of such disorders is still being researched. Recent genetic studies in rats have aided exploration of the connection between GABA and absence seizures. When γ-vinyl GABA is administered into the thalamic relay nuclei, the thalamic relay neurons are affected, which leads to the exacerbation of absence seizures.(7)

A recent study published by the Department of Pediatrics at the University of Arkansas for Medical Sciences compared children with absence seizures and children with ADHD. A common behavioral trait used for diagnosis of both disorders is inattention and staring. Misdiagnosis of either condition can be quite damaging to the patient: a child with absence seizures who is misdiagnosed with ADHD will have the appropriate treatment delayed, meaning that the seizures will continue. The study employed tests that were administered by the parents of the children. Despite the confusing overlap of many behaviors used in diagnosing both ADHD and absence seizures, researchers found that, when rated by parents, children with absence seizures were recorded as having a low occurrence of failing to complete homework assignments and of not remaining on task. Children with the inattentive type of ADHD were noted to have a very high frequency of these two behaviors. This distinguishing factor is important so that the child receives the proper diagnosis, and treatment can begin as soon as possible.(8)

During a seizure, there is a lapse in consciousness, yet at the end of the seizure, the patient does not remember or realize that anything has happened. The I-function of the brain is temporarily on hold, inactive. The increased electrical activity in the brain causes the heart rate to increase and can, in some cases, cause a twitching of the nerves in the eyes and mouth, none of which registers with the patient.(4) The I-function does not know what is going on in the brain. Perhaps there is a problem with the connection between the brain and the I-function in those with absence seizures, or any epileptic disorder. The person does not know what is happening even though they do not appear unconscious. Perhaps future research will lead to more information on the interaction between the I-function and the rest of the brain, and, ultimately, the nervous system.

References

1) Intelihealth
2) Absence Seizure Treatment
3) Information on Absence Seizures
4) Epilepsy Foundation
5) E-medicine Absence Seizure Information
6) Absence Seizure Information
7) Genetic Studies in Rats
8) Williams, J., Sharp, G. B., DelosReyes, E., Bates, S., Phillips, T., Lange, B., Griebel, M. L., Edwards, M., & Simpson, P. "Symptom difference in children with absence seizures versus inattention," Epilepsy and Behavior, vol. 3, no. 2, June 2002, 245-248.


PMS: Is It Really a Figment of the Female Imagination?
Name: Alanna Alb
Date: 2003-05-16 16:41:16
Link to this Comment: 5718



Biology 202
2003 Third Web Paper
On Serendip


Is Premenstrual Syndrome (PMS) a common and complex issue within the female community? The answer is a definite yes. However, is PMS well understood by our highly advanced modern society, especially by the women who suffer from it? The answer to that question, unfortunately, is a definite no. While the common signs and symptoms of PMS have been recognized, medical professionals still do not understand what exactly causes these symptoms, and they still do not know exactly how the fluctuation of female hormones can severely impact brain activity. Multiple theories have been developed, but we are still left with many unanswered questions. Various medicinal and nutritional treatments have been suggested to alleviate the symptoms of PMS, and while these have greatly improved the quality of life for women all over the world, the issue remains that PMS is not treated as the real disorder that it is. Society often views PMS from a very negative outlook, assuming that women who complain of such symptoms are delusional or are just "making it up", since these symptoms only seem to come at that "time of the month" and just seem to magically disappear near the end of the menstrual cycle. Even as I was researching this topic, I encountered great difficulty in finding substantial resources from which to write this paper; I believe that this comes from a combination of a lack of understanding of what PMS really is and society's reluctance to accept it as a true disorder. There really does not seem to be a lot of available information about PMS. As a sufferer myself of severe PMS, I desire to shed some new light on this taboo topic in the hopes that it will increase others' understanding and awareness of it.

What exactly is PMS? In my research I struggled to put together a clear, straightforward definition from all of the confusing and varied definitions given by different sources. From what I have read, the general consensus seems to be that PMS is a biopsychosocial disorder with distinct accompanying symptoms that begin right after ovulation and continue right up until the beginning of menstruation. The long list of PMS symptoms frequently includes: breast tenderness and swelling; weight gain; an uncomfortable "bloated" feeling; swelling of hands and feet; headaches; nausea; food cravings (for chocolate and sugary and salty foods); increased irritability and mood swings; depression; sadness or uncontrolled crying; changes in sleep or appetite; feelings of being disoriented, forgetful, or confused; feelings of anxiety or loss of control; clumsiness; dizziness; fatigue; acne; and, on rare occasions, suicidal thoughts (8). No two women are alike in the combination and intensity of the symptoms they experience, and these may also vary on a month-to-month basis. This high degree of variability makes it very difficult for researchers and the women themselves to pinpoint the exact cause of the symptoms, and also to give a clear-cut definition of PMS and what the disorder entails.

It has been theorized that the drastic fluctuations in hormones that normally happen with the onset of ovulation, and the body's response to such changes, are primarily responsible for causing PMS. The ovarian hormones estrogen and progesterone have been found to have a powerful effect on brain function, because the brain is extremely sensitive to the fluctuations of ovarian hormones (1). Blood flow to the brain, brain cell growth, and the functioning of neurotransmitters are all affected by these hormones (1). Before the onset of menstruation, estrogen and progesterone are no longer at a tolerable balance for some women's bodies; for example, the level of estrogen may increase while the level of progesterone decreases. PMS occurs when the body responds negatively to the changes in these female hormone levels. It is not yet known why some women experience PMS in response to their menstrual cycles while other women do not.

The parts of the brain where estrogen receptors are located are the cerebral cortex, hypothalamus, hippocampus, amygdala, and limbic forebrain system. The cortex controls judgment, attention span, concentration, moods, perceptions, and interpretation. The limbic area of the brain is primarily responsible for memory, appetite, sleep, and emotions. Research has strongly suggested that the functioning of the female brain is highly dependent on estrogen levels (4); thus, if the levels of estrogen were to suddenly drop or increase, it makes sense to conclude that brain function would be altered, and many women would experience the overwhelming effects of PMS as a result. Research has also indicated that the neurological pathways between the targeted brain centers are capable of remodeling themselves in order to accommodate fluctuating estrogen levels; when these hormonal levels return to normal, the neuronal pathways also go back to their original states (4). This could provide a reasonable explanation as to why PMS symptoms seem to fade away at the beginning of menstruation, when hormones begin to return to their normal balance.

Physical symptoms aside, the most infamous symptoms associated with a woman suffering from PMS are extreme mood swings: sadness, depression, anger, and rage. When estrogen levels change, neurotransmitters in the brain can be severely affected. One of these neurotransmitters is serotonin, which is responsible for controlling mood, and serotonin appears to play a significant role in PMS. When serotonin activity is altered by estrogen changes, mood is drastically altered as well. That is most likely why many women feel so emotional before their period, with feelings of depression and anger often going to extremes. The prevalence of depression in PMS has caused researchers to seriously question its role in PMS symptoms (7). Previous studies focused only on the hormones themselves; however, when PMS sufferers responded positively to the antidepressants given to them for their depression, serotonin was pinpointed as a primary factor in the onset of PMS (8). Serotonin-enhancing drugs such as Zoloft and Prozac have been found to greatly improve the extreme moodiness that many women suffer from.

Serotonin may be responsible not only for mood swings but also for food cravings (6). When serotonin levels plummet, women automatically reach for foods like chocolate, chips, cake, cookies, and candy. Since they are high in sugar and fat, these foods will quickly raise serotonin levels. Unfortunately, the downside is that the "sugar high" disappears just as quickly as it comes, leaving women in the same state they were in before. As difficult as it may sound, it is recommended that women satisfy their cravings with foods that are high in complex carbohydrates and low in sugar, salt, and saturated fat, and eat six small meals a day (as opposed to three big ones) to help offset hunger pangs. Eating more healthily will help to relieve the physical and mental discomfort associated with PMS.

Researchers have also discovered that particular vitamin and nutrient deficiencies may be to blame for the symptoms of PMS. America's high-sugar, high-fat consumption has made the modern diet full of nutritional holes. This, plus the fact that many women today lead very hectic lives that do not allow them the time to sit and eat a proper meal, explains why the modern woman seems to suffer from many nutritional deficiencies. A donut or candy bar may be the quickest item to grab before rushing out of the door, but it is not the healthiest. Women who suffer from PMS have been found to be lacking in B vitamins, especially vitamins B2, B6, and B12, and vitamins A, C, and E are also often lacking in their diets. Deficiencies in calcium, magnesium, and iron are also blamed for food cravings, fatigue, and mood swings. Doctors who are understanding and sympathetic toward the signs and symptoms of PMS have strongly suggested that eating more healthily and supplementing the diet with nutritional supplements may help to improve symptoms. When all else fails, oral or injected contraceptives or prescription antidepressants may be the last alternative.

Studies involving brain imaging techniques have displayed changes in activity in particular areas of the brain; this is significant, because it is concrete evidence that PMS has a physiological basis and is not just a psychological disorder, which many people mistakenly believe it to be. One particular study has been conducted over a series of years, in which women with PMS were scanned before their period, during the worst time of their cycle, and then a week after their cycle. The images revealed a significant difference between brain activity before the menstrual period and after. When a participant was not suffering from the effects of PMS (after her period), her limbic system, temporal lobes, and prefrontal cortex showed normal activity. When PMS was present (right before her period), her limbic system exhibited a great deal of activity, while her temporal lobes and prefrontal cortex showed very little activity (2). One of the study participants, a 25-year-old woman named Andrea, had her brain scanned right before her period and one week after. The images from right before her period showed decreased prefrontal and temporal activity, as well as increased cingulate gyrus (limbic) activity. The images from one week after showed fuller prefrontal activity and a decrease in cingulate gyrus activity (2). All of the women in the study exhibited very similar brain images. This is strong evidence that hormonal fluctuation with the onset of ovulation and menstruation directly impacts brain activity, which in turn causes women to feel out of control and exhibit the behaviors so commonly seen in premenstrual women.

It is important that we pay serious attention to these research findings, because they will help us to better understand the mechanisms responsible for PMS, to find better treatments, and to increase support for women suffering from the disorder. Stories have been documented of women whose PMS symptoms were so extreme that they were subject to angry outbursts and aggressive behaviors. One woman actually attacked her own husband with a carving knife during an argument (2).

I find it incredibly shocking that, with all of the available evidence, PMS is still not seen as a true physiological disorder by society. Many seem to assume that since a woman who complains of PMS does not seem to exhibit any tangible signs or symptoms of the disorder, there is really nothing wrong with her; that it is just "all in her head". While it is true that PMS cannot be identified like a broken arm or an infected wound, it is still very much real, and it causes millions of women terrible discomfort and suffering. Suffering women are still not receiving the support and help that they need to relieve the intensity of the symptoms. Only recently have health care providers recognized PMS as an actual disorder. Many women have still not told their doctors about their symptoms or even discussed PMS with their doctors, even if their daily lives are severely affected by the disorder (5), and this is probably because they think that their concerns will be dismissed as trivial women's issues. Women are encouraged to see female physicians about problems concerning PMS (3), because women physicians tend to be much more sympathetic than male ones. Women may also be reluctant to discuss PMS with their doctors because the treatment options suggested to them are sometimes very extreme and scary to hear. Some doctors will automatically suggest a hysterectomy or removal of the ovaries to correct the problem, when a simple change in diet and an increase in nutritional supplementation may be all that is needed to alleviate severe symptoms.

There are no easy answers to provide here. PMS is a disorder that medical professionals do not entirely understand or acknowledge, and the debate continues as to whether PMS is an actual disorder. For now, all we can do is continue research on PMS and make a great effort to increase society's awareness and understanding of this disorder that affects so many women. Women should be able to come forward and ask for the help and information that they need without having to be afraid that their concerns will be ignored. We also need to provide much more support and assistance to the women who suffer from the disorder, so that they may be able to recognize their symptoms and receive the appropriate treatment they need in order to lead better lives.

References


1) Always Home Page, New Studies Lead to a Better Understanding of PMS

2) Brain Place Home Page, Images of PMS: Is it Real? You Bet!

3) Cool Nurse Home Page, PMS

4) Document, The Female Brain Hypoestrogenic Continuum from the Premenstrual Syndrome to Menopause: a Hypothesis and Review of Supporting Data

5) Health A to Z Home Page, Is It More Than PMS?

6) Health A to Z Home Page, The PMS and Food Connection

7) Tampax Home Page, Possible Causes of PMS

8) Serotonin Home Page, Serotonin and Premenstrual Syndrome


IS THERE HOPE?: An unconventional understanding of
Name: Nicole Meg
Date: 2003-05-16 16:50:27
Link to this Comment: 5719



Biology 202
2003 Third Web Paper
On Serendip

Stress is a ubiquitous part of daily life. Different people are exposed to different levels of stress and also respond to the same level of stress in different ways. For many who are mentally ill, stress seems to correlate with the onset of illness or with the worsening of a preexisting illness. There has long been a debate among health professionals about whether nature or nurture is the cause of mental illness, but perhaps the dichotomy of this argument is fallacious. Rather than a stressful event triggering the onset of a genetically predisposed illness like schizophrenia, or a genetically inherited characteristic of the illness making one's nervous system more susceptible to stress, there appears to be a positive feedback mechanism that combines both nature and nurture factors. While writing my previous two papers on schizophrenia and bipolar disorder, I encountered a vague commonality between the two mental illnesses: the suspected existence of some third factor, beyond the recognized nature and nurture components, which may orchestrate this interaction of environment with genetic predisposition to trigger mental illness. This made me wonder whether there is an "awareness of self" element, which may be called an I-function, capable of influencing its own biological fate by intervening to control at least part of the body's reaction to stress, once a person is trained to harness this ability. Exploration of stress and its relationship with the inner workings of the brain may shed some light on these issues.

It is impossible to define exactly what induces stress from person to person, or what guise it may assume, but the effects of stress seem to be physiologically similar regardless of the source. When a person perceives a threat, their limbic system immediately responds through activity of the autonomic nervous system, which, together with a complex network of endocrine glands, automatically regulates metabolism. The sympathetic nervous system (SNS) turns on the "fight or flight" response. (c) The hypothalamus secretes corticotropin-releasing hormone, which causes the pituitary gland to secrete adrenocorticotropic hormone, which in turn causes the adrenals to secrete cortisol, resulting in the experience of heightened alert. (b) (d) (k) (l) Collectively, these and related processes of the SNS create a state of metabolic overdrive in which the body is prepared to battle stress.

The stress cascade once served an evolutionarily beneficial purpose, as its fight or flight response prepared our ancestors for violent threats like a saber-toothed tiger attack or other impending danger. Today the same metabolic change may be triggered before a job interview or exam, during a traffic jam, or even when paying bills, causing the body to react as if these events were life-threatening even though they clearly are not. In a normally functioning person, once the stressful situation has passed, the hippocampus signals a stop to cortisol production and the parasympathetic nervous system (PNS) releases a different array of biochemicals to return the body to homeostasis, or metabolic equilibrium. (c) However, malfunction of this negative feedback mechanism may result in excess cortisol production. Unchecked cortisol release over an extended duration has been shown to damage and even destroy nerve cells in the hippocampus, which supports memory. (a) Because the hippocampus is the part of the feedback mechanism that signals when to cease cortisol production, a damaged hippocampus allows hormone levels to become unmanageable, further compromising memory and cognitive function. (d) Thus the positive feedback cycle of stress and degeneration continues.
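
The feedback logic described above can be caricatured in a few lines of code. This is a toy model of my own, not a physiological simulation, and every parameter is invented for illustration; it shows how weakening the hippocampal shut-off signal turns a brief stressor into a lasting cortisol elevation:

    def cortisol_trace(feedback_strength, steps=300, dt=0.1):
        """Toy HPA-axis model: stress drives cortisol production, while the
        hippocampus provides negative feedback proportional to cortisol.
        All parameters are invented for illustration only."""
        cortisol = 0.0
        trace = []
        for i in range(steps):
            stress = 1.0 if 50 <= i < 100 else 0.0    # a transient stressor
            production = 0.2 + stress                 # baseline + stress drive
            feedback = feedback_strength * cortisol   # hippocampal shut-off signal
            clearance = 0.1 * cortisol                # metabolic clearance
            cortisol += dt * (production - feedback - clearance)
            trace.append(cortisol)
        return trace

    healthy = cortisol_trace(feedback_strength=0.5)
    impaired = cortisol_trace(feedback_strength=0.05)   # weakened hippocampus
    print(f"healthy:  peak {max(healthy):.1f}, final {healthy[-1]:.1f}")
    print(f"impaired: peak {max(impaired):.1f}, final {impaired[-1]:.1f}")

One could push the caricature further by letting high cortisol erode feedback_strength over time, which would reproduce the self-reinforcing stress-and-degeneration cycle the paragraph describes.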

The stress cycle plays a role in a wide range of diseases, conditions, and psychiatric problems such as psychosis, affective illness (including manic-depression and major depression), alcoholism, cardiovascular disease, and impairment of the immune system. (a) Stress and its biological effects on the brain seem to affect the same areas where dysfunction leads to serious mental illness, particularly schizophrenia. Not surprisingly, studies have shown that excessive cortisol production, damage to the hippocampus, and impairment in certain types of memory related to the hippocampus commonly occur in patients with schizophrenia. (a) Brain scans of patients with schizophrenia have shown smaller hippocampal volumes than in people without the disease, and other studies have demonstrated deficits in memory and in the ability to coordinate and carry out tasks, functions associated with the hippocampus, to be characteristic of schizophrenics. (a)

A bountiful dossier of research evidence suggests that a stressful environment may both trigger a mental illness to be expressed and intensify symptoms after onset of the illness via a positive feedback mechanism. Brain scans of newly diagnosed patients with schizophrenia have led researchers to believe, given the extent of the observed deformities of the hippocampus, that the deformity existed before the onset of illness, possibly as a result of genetic predisposition or impaired fetal development due to insults in the womb. (j) Such an impaired hippocampus could prevent the subject from effectively handling even relatively low levels of stress, thus allowing a mildly stressful event to exacerbate the problem via a cycle of stress and further damage to the hippocampus.

According to the "two-hit" theory regarding the origin of schizophrenia, genetic vulnerability or impaired fetal neurological development in the womb, due to nutritional inadequacy or viral exposure, may leave an individual with a vulnerable nervous system. This sets the stage for schizophrenia, but a second event in adolescence or early adulthood leads to the development of the illness. This "second hit" may be a major life event: an episode of environmental stress. (a) For a patient with schizophrenia, events like the death of a parent or other loved one or a change in living location can trigger acute anxiety, depression, and psychotic episodes. Even seemingly mild stressful events such as a job interview or a date can have a devastating effect on an individual with a vulnerable nervous system. (a) One groundbreaking study found that 46% of patients who experienced their first bout of schizophrenia had undergone some stressful life event in the preceding three months. (a) Perhaps schizophrenic people lack a coping mechanism for stress as an adverse effect of their illness. Research suggests that patients with schizophrenia are more affected by stress, physically as well as emotionally, than controls, showing different changes in heart rate under stress and a greater overall risk of cardiovascular disease. (a) Attributes of schizophrenia, such as difficulty in filtering out what is happening in the outside world, misattribution of internal thoughts and feelings, and an inability or lessened ability to interpret social cues, may work to amplify stress.

So far I have explored how genome and environment may interact in a positive feedback fashion to cause mental illness. When considering such an explanation, one might infer that becoming mentally ill is inevitable given a certain combination of genomic and environmental stress factors; but if there exists a third factor that could be controlled, it may be possible to break this positive feedback cycle. When we think of curing illness, we tend to rely heavily upon finding a medicinal fix. But the biochemical problem may be rooted in an unhealthy lifestyle or inadequate behavioral coping mechanisms, things that cannot be abolished with 12.5mg of clozapine. The key to schizophrenia treatment may lie within the identity of this third factor. The elusive third factor could be the I-function: a person's awareness of disturbances in their own nervous system. If one could become aware of the activity of one's nervous system and could generate a way to break the positive feedback cycle associated with the stress cascade, then one may experience some degree of recovery.

The challenge is to prevent the SNS from remaining chronically aroused by stimulating the negative feedback response of the PNS. (m) This may require techniques that work to activate the relaxation response, an integrated psycho-physiologic response originating in the hypothalamus that leads to a generalized decrease in arousal of the central nervous system. (c) (e) At least 37 studies conducted as early as the 1970s found great clinical success in the cultivation of the relaxation response. (f) While the fight or flight response occurs involuntarily upon exposure to perceived stress, two voluntary steps can be consciously taken to elicit the relaxation response: one is the repetition of a word, sound, prayer, phrase, or muscular activity, and the other is a passive return to repetition when any other thoughts intrude. (g) Progressive muscle relaxation, meditation, autogenic training, yoga, and repetitive physical exercise may be used to consciously elicit the relaxation response and ultimately to harness the I-function to help an individual modify, to some degree, the behavior of their nervous system. (g) Other therapies also cultivate the I-function, for instance Cognitive Behavioral Therapy (CBT) and Stress Inoculation Training (SIT), which help people recognize the inappropriate or negative thought patterns and behaviors associated with their illness so that the patient can consciously replace them with positive ones. (i)

Assuming the memory problems of schizophrenia are related to stress, these and other stress management techniques could potentially prevent the onset or lessen the severity of schizophrenia, delay relapse in those already ill and reduce overall anxiety. Many of these strategies have repeatedly been proven successful. (i) Studies indicate that cells within a damaged hippocampus can regenerate when stress or cortisol is reduced. (a) Rather than relying solely upon the search for an immediate medicinal cure, treatment possibilities may be found within the ability to harness one's own I-function.

Implicating the I-function as a factor in the expression of a mental illness like schizophrenia suggests that genetic predisposition does not necessarily enslave one to an imprisoned life dictated by a genetic code and environmental setting. Rather, it suggests that careful manipulation of external environment and conscious acceptance of stressful situations can evoke some degree of control over the illness, and it introduces hope to an otherwise bleak prognosis. The widely reported success of stress management treatments for mental illnesses has shown that it is possible to become in tune with the I-function. However, it would be naive to say that stress management guarantees a happy ending. Once most mental illnesses are fully expressed, extensive biological damage may make it impossible to reach these patients without the aid of medications, and, even then, some may be incapable of such therapy. The key here is to address the illness as soon as symptoms become apparent in order to train the individual to develop a sense of self-awareness so that they can control their response to future stressful events.


References


a) Schizophrenia and Stress

b) New Cause of Weight Gain

c) The Franklin Institute Online, Stress Affects Your Brain's Structure and Function

d) Serendip Home Page, The Cortisol Conspiracy and Your Hippocampus

e) Mandle C.L. et al., Journal of Cardiovascular Nursing 10 (3): 4-26, 1996

f) Diet and Body Page, The Relaxation Response

g) The Relaxation Response

h) Meichenbaum, D. (1996). Stress inoculation training for coping with stressors. The Clinical Psychologist, 49, 4-7.

i) The National Alliance for the Mentally Ill, Psychosocial Treatments

j) Washington University in St. Louis School of Medicine, Brain Scans of Schizophrenic Patients Versus Controls

k) Nature Online, Stress and Immunity

l) University of Michigan Research News Online

m) Relaxation Response Therapy


Schizophrenia: Cause and Effect
Name: Adina Caza
Date: 2003-05-16 18:42:19
Link to this Comment: 5720



Biology 202
2003 Third Web Paper
On Serendip

Schizophrenia is a severe mental illness that affects one to two percent of people worldwide. The disorder can develop as early as the age of five, though it is very rare at such an early age (3). Most of the diagnosed men become ill between the ages of 16 and 25, whereas most of the women become ill between the ages of 25 and 30. Even though there are differences in the age of onset between the sexes, men and women are equally at risk for schizophrenia (4). There is as yet no definitive answer as to what causes the disorder. At the moment, it is believed to be a combination of factors, with present studies leaning toward the genetic aspect of the problem, trying to locate one or more genes which may be responsible. However, not all researchers are convinced that genetic make-up is the only problem, and some are looking into other possible causes such as pre-natal viruses and early brain damage, which cause neurotransmitter problems in the brain (3).

These problems are what cause the symptoms of schizophrenia, which include hallucinations, delusions, disordered thinking, and unusual speech or behavior. No "cure" has yet been discovered, although many different methods have been tried. Even in these modern times, only one in five affected people fully recovers (4). The most common treatment is the administration of antipsychotic drugs, though new research may lead to the use of other kinds of drugs. More testing still needs to be done in order to find the best possible medication, as new drugs with seemingly opposite effects are competing to be the most useful. Other treatments that were used previously, and are occasionally still given, are electro-convulsive therapy, which runs a small amount of electric current through the brain and causes seizures, and large doses of Vitamin B (3).

Due to neurological studies of the brain, antipsychotic drugs have become the most widely used treatments. These studies show that there are widespread abnormalities in the structural connectivity of the brains of affected people (2). It was noticed that in brains affected by schizophrenia, far more neurotransmitters are released between neurons, which is what causes the symptoms. At first, researchers thought that the problem was caused solely by excesses of dopamine in the brain. However, newer studies indicate that the neurotransmitter serotonin also plays a role in causing the symptoms. This was discovered when tests indicated that many patients showed better results with medications that affect serotonin as well as dopamine transmission in the brain (8). A new drug called clozapine has recently been developed. It increases the amount of dopamine released, and even though it lowers the occupancy of the D2 dopamine receptors in the brain, it also lowers the extrapyramidal side effects common to antipsychotic drugs (5).

New tests and machines have also enabled researchers to study the structure of schizophrenic brains using Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS). The different lobes of affected brains are continually examined and compared to those of normal brains, showing several structural differences. The most common finding is the enlargement of the lateral ventricles, the fluid-filled cavities within the brain. The other differences, however, are not nearly as universal, though they are claimed by researchers to be significant. There is some evidence that the volume of the brain is reduced and that the cerebral cortex is smaller (2).

Tests showed that blood flow was lower in the frontal regions of afflicted people when compared to non-afflicted people. This condition has become known as hypofrontality. Other studies illustrate that people with schizophrenia also often show reduced activation in the frontal regions of the brain during tasks known to normally activate them (1). Even though many tests show that frontal lobe performance is impaired, and although there is evidence of reduced volume in some frontal lobe regions, no consistent pattern of structural degradation has yet been found (2).

There is, however, a great deal of evidence that the temporal lobe structures in schizophrenic patients are smaller. Some studies have found the hippocampus and amygdala to be reduced in volume. Also, components of the limbic system, which is involved in the control of mood and emotion, and regions of the Superior Temporal Gyrus (STG), which is a large contributor to language function, have been found to be notably smaller. Heschl's Gyrus (which contains the primary auditory cortex) and the Planum Temporale are diminished. The severity of symptoms such as auditory hallucinations has been found to depend upon the sizes of these language areas (2).

Yet another area of the brain that has been found to be severely affected is the prefrontal cortex. The prefrontal cortex is associated with memory, which would explain the disordered thought processes found in schizophrenics. Tests done on humans and animals in which the prefrontal cortex had been damaged showed cognitive problems similar to those seen in schizophrenic patients. The prefrontal cortex has one of the highest concentrations of nerve fibers carrying the neurotransmitter dopamine, and scientists have learned that the relatively new antipsychotic drug, which increases the amount of dopamine released in the prefrontal cortex, often improves cognitive symptoms. They also found that the prefrontal cortex contains a high concentration of dopamine receptors that interact with glutamate receptors to enable neurons to form memories. This means that dopamine receptors may be especially important for reducing cognitive symptoms (7).

While these drugs do help control the symptoms of schizophrenia, they do not get rid of the disorder, and they are accompanied by sometimes severe side effects. It is becoming clearer every day just what damage schizophrenia does to the brain, but researchers are nowhere near finding all of the answers. Different researchers are still arguing over the conclusiveness of the data that does exist. Other scientists are trying to discover the cause of schizophrenia. Is it caused by various genes, by a virus, or by trauma? This too is still a mystery. The only thing that is truly known is that the disorder is debilitating and that it affects nearly every portion of the brain. Obviously, much more research still needs to be done to help those who suffer from it.

However, recent studies have shown that there is a substantial amount of evidence supporting a genetic link. The disease has a high heritability, but researchers have as yet not been able to come up with a model of the inheritance, which has led them to believe that there might be several interacting susceptibility loci. The scientists conducted a study on twenty-two extended families with a genome-wide search for loci conferring susceptibility to schizophrenia. The results of this test provided researchers with evidence of a link between the disease and chromosome 1 (1q21-1q22). The results also confirmed a reported link to chromosome 13q32. Obviously, the test needs to be conducted several more times with different family groups to confirm or deny these findings (6).

I think that much more research needs to be done in finding the gene(s) responsible for the disease. Due to the higher probability of being diagnosed with the disease in those with afflicted family members, I am led to believe that the primary cause of the disease, that affecting the majority of people, is in fact genetic. Now that the human genome has been mapped out, research into the genetic factors of this and other diseases will hopefully go faster and farther. However, I do not doubt the possibility of the other suspected causes triggering the disease, but it is my under-informed opinion that these are separate and isolated causes. This is why I also think that more research should be done in finding out which, if any, pre-natal viruses are responsible for degenerating the brain. If the viruses were to be identified, more research could be done in finding a vaccine which could then be administered to pregnant women to lower the risk of the baby contracting the disease. Unfortunately, there is not much that can be done about early trauma; that will be in the hands of the child's caregiver. Our ever-increasing knowledge and the help of continually advancing technology will provide us with more answers and fewer questions, and bring us closer to being least wrong.

References


1) E-Mental Health

2) E-Mental Health

3) National Institute for Mental Health

4) Psychiatry 24x7

5) Science Magazine

6) Science Magazine

7) Society of Neuroscience

8) Health-Center, http://www2.health-center.com/mentalhealth/schizophrenia/causes


Love and Stochastic Resonance
Name: Geoff Poll
Date: 2003-05-16 19:08:09
Link to this Comment: 5721



Biology 202
2003 Third Web Paper
On Serendip

We start with 100 billion neurons, connected over 100 trillion times in every combination possible, making millions of circuits and subsystems. (6) This is the human nervous system (NS), and it accounts for every second of our lives. The NS works through a complex network of these neurons, the smallest functional building blocks of the NS and the brain. While each neuron receives inputs and creates outputs, the system is immediately complicated when we find that these two functions are not always linked to one another. Neurons are known to fire spontaneously and will do so quite often, even in the absence of any input. Take this to the level of the billions of neurons there are in the brain and we end up with noise: non-useful signals that act as a nuisance for the precise mechanisms ruling the nervous system. (2)

We start over, now with over 6 billion people scattered across the face of the earth. They live their lives, make choices, fall in love, act within social norms, rebel against their families, die, and continue the cycle into the next generation without a thought about neuronal function. It is hard to blame them; human behavior is complicated enough without physiological functions that seem to serve no purpose and behavioral unpredictability that has no known counterpart in any physiological mechanism of the NS.

While it is this very unpredictability that gives us an identity as humans and sets us apart as individuals, there is just something about neurons firing without a purpose, and about the random nature of human behavior that remains unexplained, that begs an attempted explanation.
We start on a journey to find a purpose for spontaneous neuronal firing. For this we go back to 1981, the debut year of a physical theory called Stochastic Resonance (SR). SR was originally introduced to account for the Earth's major fluctuations between warm and ice ages. It was discovered that the climatic shifts followed a relatively stable pattern, recurring every million years or so. It was subsequently discovered that another phenomenon, having to do with shifts in the Earth's orbit, followed a similar time pattern. (2)

Establishing a link from the correlation was not as easy as mapping out a linear relationship on an x/y axis, because the Earth's ecosystem is a complex system, meaning that it has many factors, both small and large, influencing its outcomes at all times, and it does not follow any kind of linear pattern. In any case, the orbit shifts alone were not enough to put the Earth into an ice age. (2)

The answer came with the discovery of the possible functionality of all the noise in the Earth's dynamic system. The noise of this ecosystem resembles that of the brain, getting in the way of the standard inputs and outputs, much as static on a TV screen scrambles the reception of the signal bringing the picture.

The Earth's noise consisted of smaller climatic changes which, if anything, got in the way of signals that the ecosystem would otherwise have been tuned into, like the changes in orbit. SR presented scientists with an interesting alternative: they put together a mathematical model where the noise represented the Earth's internal driving mechanisms and the orbit shifts were external forces. (2)

The equation explained how a relatively small force might exert a great amount of influence on a complex system like the ecosystem of the Earth and, together with internal noise, the yearly fluctuations in solar radiation due to short-term climate changes, set up a stable system. (2)

This idea can be thought of in terms of its "double-well" diagram. The diagram is made up of two semi-spheres facing upward, each representing one of the two major climate states, warm and cold (imagine a lowercase w). The two shapes are connected at their upper, middle limit, and a particle representing the Earth sits at the bottom of one of the wells. Now imagine the particle moving as a function of the Earth's short-term climate changes. The movement takes on a variable quality, but keeps the particle in constant motion. It bounces around the bottom of the well and never goes over the middle barrier separating it from a major climate shift. After a million or so years the orbit of the Earth shifts, adding to the Earth's always-active internal forces, and that is enough to cause an ice age. (2)

This theoretical model represents a stable state in which noise has a direct role in the dynamic system: that of enhancing an otherwise weak signal. This is the idea of SR, 'stochastic' describing the random nature of the background activity (noise) and 'resonance' the amplitude enhancement of the target signal. In most models there is an optimal level of noise at which SR will emerge. In the double-well model, we can imagine either too little activity, with the particle sitting at the bottom of the well, or too much, inducing randomness in the shifts between major states so that no stability is achieved. We see that the Earth exists in a stable state not in spite of the noise, but because of it. (2)
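
For readers who prefer equations to diagrams, the double-well model sketched above is commonly written as an overdamped particle in the potential U(x) = -x^2/2 + x^4/4, nudged by a weak periodic force and by noise. The simulation below is my own sketch, with arbitrary amplitudes, frequencies, and noise levels; it uses a crude synchronization score (how well the particle's current well tracks the forcing) to show that the score peaks at an intermediate noise level:

    import numpy as np

    def sr_coherence(noise, A=0.3, w=0.05, dt=0.01, steps=200_000, seed=0):
        """Particle in a double well, weakly tilted by A*sin(w*t) plus noise.
        Returns the average agreement between the particle's well (the sign
        of x) and the periodic forcing: a crude synchronization score."""
        rng = np.random.default_rng(seed)
        forcing = np.sin(w * dt * np.arange(steps))          # the weak signal
        kicks = np.sqrt(2 * noise * dt) * rng.standard_normal(steps)
        x, score = -1.0, 0.0                                 # start in the left well
        for s, kick in zip(forcing, kicks):
            x += (x - x**3 + A * s) * dt + kick              # force = -dU/dx + signal
            score += np.sign(x) * s
        return score / steps

    for D in (0.02, 0.1, 0.5, 2.0):
        print(f"noise {D:>4}: synchronization {sr_coherence(D):+.2f}")

With too little noise the particle never leaves its well, with too much it hops at random, and in between the hopping locks onto the weak signal; that non-monotonic peak is the signature of SR.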

The jump from shifts in the Earth's climate to states of the brain is not an impossible one to make, but it should be made with caution. Like the Earth, which has many small factors affecting its smaller-scale activity all the time, the brain is made up of neurons firing multiple times every second. To make sense of this firing and the possible role it plays in the larger-scale processes of the brain, we need to briefly examine the mechanics of the neuron. (2)

The model of the neuron we will work with is called the Leaky Integrate and Fire (LIF) neuron. Its name almost completely describes its functionality. The neuron is a tiny cell that receives informational input, through shifts in membrane permeability, from other neurons. The model 'integrates' because it collects input toward a certain threshold level of membrane potential, after which it fires an action potential (AP) and resets; it is 'leaky' because, between inputs, the accumulated potential constantly decays back toward its resting level. (2)

Our relatively simple model fails to account for a couple of things. The first is that each neuron fires APs one at a time, but the neuron is always receiving input from at least 1,000 other neurons, and most of these signals do not carry any significant information. Most of the input it receives represents noise, a nuisance to the very sensitive, information-driven system. The neurons creating this noise are not firing for any reason that we know of; they are just firing. (2)

With the introduction of SR, there has been a shift in the thinking about this "spontaneous" firing, and while there is as yet no explanation of the mechanism that causes a neuron to fire in the pattern (or non-pattern) that it does, scientific applications of the phenomenon of SR demonstrate convincingly how crucial this nuisance is to the nervous system, and may even provide a first direction toward some of the more mysterious questions to which we seek answers.
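
One such application can be sketched directly on the LIF model described above. In the toy simulation below, which is my own illustration with invented parameters, a periodic input is deliberately kept below threshold, so without noise the neuron stays silent; adding a moderate amount of noisy 'spontaneous' input makes the neuron fire preferentially near the signal's peaks, so the weak signal becomes readable in the spike train:

    import numpy as np

    def lif_spikes(noise_std, tau=20.0, threshold=1.0, amp=0.8,
                   period=100.0, dt=0.1, steps=100_000, seed=1):
        """Leaky integrate-and-fire neuron driven by a subthreshold periodic
        input plus Gaussian noise. Returns the signal phase of each spike.
        All parameters are invented for illustration."""
        rng = np.random.default_rng(seed)
        kicks = noise_std * np.sqrt(dt) * rng.standard_normal(steps)
        v, phases = 0.0, []
        for i in range(steps):
            t = i * dt
            drive = amp * (1 + np.sin(2 * np.pi * t / period)) / 2  # peaks below threshold
            v += (drive - v) * dt / tau + kicks[i]   # leaky integration of input + noise
            if v >= threshold:
                phases.append((t % period) / period)  # where in the cycle we fired
                v = 0.0                               # reset after the action potential
        return phases

    for sigma in (0.0, 0.05, 0.3, 1.5):
        ph = lif_spikes(sigma)
        if ph:
            locking = abs(np.mean(np.exp(2j * np.pi * np.array(ph))))
            print(f"noise {sigma}: {len(ph)} spikes, phase locking {locking:.2f}")
        else:
            print(f"noise {sigma}: no spikes (the signal alone is subthreshold)")

The phase-locking score (a standard vector-strength measure, my addition here) is highest at the moderate noise level: zero noise transmits nothing, and too much noise drowns the signal.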

In order to understand how noise might play a role in the stability of neural systems, we look to a human ear that can no longer hear sound of any sort. A healthy auditory pathway detects sound through tiny hairs that sense the pressure waves coming in through the ear canal and trigger APs through the auditory nerve, eventually creating the perception of sound. If the ear receives enough damage, these hairs no longer exist. Without them, hearing aids, which amplify sound, no longer have any effect. (2)
An interesting new technology called the cochlear implant involves inserting 22 electrodes into the cochlear nucleus of the inner ear, acting as a crude substitute for the hairs that are gone. These electrodes are attached to the remaining 10,000 neurons in the cochlear nucleus and trigger action potentials related to the various frequencies entering the ear. Patients with these implants receive a crude representation of the sounds in the world. (2)

The way SR comes into this model is through one of the limitations of the implant. Beyond the obvious drawback of replacing neurons able to distinguish thousands of sounds with only 22 electrodes, scientists studying audition have hypothesized that even the 22 signals are not being perceived at their full strength because of the missing background noise that the thousands of lost neurons would have filled in. Healthy sensory neurons in the inner ear will fire even when there is no sound or other input coming in. This is hypothesized to be an effect of SR on the healthy perception of sound. (2)

A group of scientists in Los Angeles studied this phenomenon with a group of patients with cochlear implants and a group with healthy hearing. They used a mechanism that would send different frequencies of sound into the ear with or without different levels of white noise. They found that in the hearing-impaired patients, optimal levels of the white noise significantly enhanced their ability to hear lower-frequency sounds and to distinguish between slighter frequency changes at the higher levels. (9)

The same manipulation also improved, though not as drastically, the hearing of subjects without any impairment. This suggests that although our bodies seem to make use of SR, the auditory system, and probably many others, are not working at their optimal level. (9)

This does not mean that we should begin to integrate white noise into the background of dinner parties in order to enhance our picking out of the higher and lower end frequencies being passed around. As we move toward the more pragmatic and everyday effects of SR, we will begin to understand it as a phenomenon just as variable and dynamic as the systems it works within.

Taking the next step away from neuronal function and toward everyday usage were Usher and his colleagues, who tested people's speed and accuracy in answering single-digit multiplication problems while listening to random tones played at varying frequencies. Even though the noise in this case was an auditory input, what Usher tested was memory of multiplication tables, or math ability; in either case, these are cognitive abilities distinctly different from listening. (7)

Usher proposed a global model in which the randomized auditory input would be interpreted as noise and would provoke noise-like firing of neurons all over the brain, especially in the part functioning to compute the math problems. He was indeed able to find an optimal level at which subjects improved their efficiency in answering the problems. (7)

If Usher's model seems familiar, it should. Most of us work better or worse under loud or quiet conditions. He most likely thought of the experiment while listening to music in an attempt to focus or think clearly. This kind of focus, which music is able to bring out in some of us, and which randomized tones may very well produce too, has to do with so many of our daily functions. Studying is the first that comes to mind, and under the SR model, listening to loud music for concentration makes more sense than silence. But it has to do with individual baselines; remember, SR hinges on an optimal level of neuronal noise. The most likely scenario is that this baseline varies from person to person depending on the time of day and the weather, among the many other variables that probably come into play.

This is the basis of human behavior. SR gives a foundation for the way we are able to focus and exist in stable states, through finding an optimal level of noise, however that may be achieved. It also gives an idea of where the variability in our behavior comes from, noise being just one of the many aspects affecting our dynamic NS.

So much of our day-to-day life has to do with focusing, at least in terms of neurons. For the student this meant listening to music to be able to concentrate on reading a text. For an athlete the focus might look very different. Some of athletes' best performances occur when they lose themselves completely, and this may very well be attributed to an effect of SR occurring both in the brain, for their mental focus, and in the physiological systems connecting the NS to the rest of the body, where they will not feel anything but the endorphins or adrenaline rushing through the body. As with SR in other systems, there is an optimal level, and if the athlete has too much adrenaline or anxiety, she may lose her focus, or at least its stable form.

But if our brains are capable of creating this kind of focus, why do we not have access to it at will? In some ways we do. We can surround ourselves with certain well-known environments or train ourselves to keep better contact with our minds, but no one can be in control all the time. Evolutionarily, there is an advantage to being unpredictable. This is observed in animal behavior, much of which carries inherent variability, even if only on small scales. (1) People, on the other hand, are notoriously bad at being unpredictable, at least when they are aware of it. Subjects asked to produce random results in number sequences or with playing cards ended up considerably more predictable than random. (10)

That is not to say that we do not act randomly; it is very much a part of us. Again, athletes who lose themselves in the game after practicing enough variations of certain situations will be able to create unpredictable play without thinking. (1) This kind of variability, coming from an effect like SR that we cannot always harness, comes with a price: even good athletes have off days, and they cannot choose when those will be, though they can live their lives as consistently as possible, trying not to induce any great state shifts in the stable system.

Our societies are ruled by these laws. State shifts are advantageous and promote evolution. Creativity in society stems from individuals' ability to be variable, but so does criminal deviance.

But we still have not accounted for falling in love. While it would not be wise to make too much of SR in terms of love, the effects do fit nicely. Imagine a bunch of neurons being able to focus on only one thing. For the student it was the text, the athlete had the basketball, and for the lover there is the object of attraction. If you could trace every particle in your body up to that moment, you could probably make sense of it, but for the scope of this paper we will leave something up to good old chance, and to the thought that if there were not some inherent variability, whether through the role of noise or that of other factors, we would all fall in love at the same time with the same people. Instead, all of our input goes in and combines with the daily ups and downs, and every million years or so comes an orbital shift. Hopefully it is worth the wait.

References


1) Variability and Brain Function and Behavior, Professor Grobstein's article, good stuff

2) Stochastic Resonance Thesis, Master's thesis from U. of Melbourne. Very useful.

3) Cochlear Implants, description of cochlear implants

4) SR slide show, helpful stochastic resonance slide show

5) SR in cat cortex neurons, pdf on pyramidal V1 slices

6) Self Organized Criticality and SR, interesting pdf on the SOC-SR link

7) SR in memory retrieval, memory retrieval pdf, interesting study

8) Visual Perception of SR, cool-looking demos of SR in visual perception

9) Cochlear Implant Study, pdf of a good study

10) Ward, Lawrence. Dynamical Cognitive Science. Cambridge, MA: MIT Press, 2002.


What does sleep do to us?
Name: Kate Shine
Date: 2003-05-16 19:44:14
Link to this Comment: 5722


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Before one really starts to dwell upon it, sleep seems like the most natural and mundane of states. Of course, it is natural, so like any common biological phenomenon we do not usually question it. But what exactly is this thing that our bodies insist upon doing for a full third of our lives? Surprisingly, scientists can offer no consistent explanation of the meaning or purpose of human sleep. Empirical study of the neurology of sleep creates even further complications. But however murky the topic first seems, sleep and sleep disorders can shed light on the very nature of behavior, consciousness, the I-function, and the self.

A common misconception about sleep is that it is merely the absence of almost all but the most necessary activity in the brain and body. This idea leads to the assumption that sleep is necessary to conserve energy. There is evidence that the nervous system does slow down during the first stages of sleep (known as non-REM or NREM sleep), but during REM sleep many systems increase to a waking or beyond waking level, and there are even cells which fire five to ten times more during REM sleep than during a waking state. (6) This stage of sleep is also known as "paradoxical sleep," for obvious reasons.

This theory that the purpose of sleep is to conserve energy by slowing the metabolic processes does have some merit. Smaller animals do tend to sleep more than larger animals (9), and this makes sense, as they have less ability to store energy relative to the energy they must expend. Additionally, mammals have a metabolic rate 7-10 times higher than that of cold-blooded animals due to their need to regulate temperature endothermically (6), and only warm-blooded animals (including mammals and some birds) have what can be considered true sleep cycles. Hibernation is a condition that is widely accepted as reducing the metabolic costs of thermoregulation for certain mammals, and it is similar to states of very deep NREM sleep. (6,10)

But there are many problems with this explanation. The metabolic rate of humans, for example, only decreases about 10% during sleep, and most animals would reap the same benefits by simply resting while awake. Temperature goes through a regular daily cycle of highs and lows, and although the low correlates with a time when animals usually sleep, it persists even in the absence of sleep. (6) So why would nature create an intricate state such as sleep and endanger a prey animal by making it oblivious to almost all external stimuli? Finally, this hypothesis does not account at all for the "paradoxical" REM sleep state.

The strong drive for sleep indicates that it is a more important function than can be explained simply by the mediocre metabolic gains. When rats are deprived of sleep they will die after about two and a half weeks, but the precise cause is disputable. Some think it is caused by an inability to regulate temperature and metabolic rate (8), but others think it could be caused by the collapse of the immune system. (10) And as humans who generally (in this part of the world) have plenty of food and live in regulated indoor climates, we have all either experienced or observed the deleterious effects of little or no sleep on a person's mental condition. How does this symptom factor into the equation?

Humans are programmed to receive about eight hours of sleep per day, specifically at night. Most people would not fare well spreading this sleep out at intervals throughout the day, for example one hour in every three, as has been shown through experiment. (6) Serotonin, a neurotransmitter associated with drowsiness, is only secreted in the brain during hours of darkness. (6,9) It takes ten to twelve days for the body to reset this rhythm in response to a change in the alternation of light and dark. (6) This is one component of a function known as the "circadian rhythm," created by an internal clock which has a period of 24.5 to 25.5 hours (11,6) in humans and is located in the suprachiasmatic nucleus of the hypothalamus. (12,6) Another factor, known as the homeostatic component, ensures that individuals deprived of sleep will sleep more deeply and for a longer period the next night. Interestingly, if NREM sleep is lost it will be made up, and so will REM sleep (9), selectively. Clearly we have evolved to sleep not intermittently or randomly, but in a single nightly block, one including both NREM and more active REM sleep.
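
The interplay described above, a roughly 25-hour internal clock plus a homeostatic debt that deepens sleep after deprivation, is commonly formalized as a "two-process" model of sleep regulation. The sketch below is a minimal toy version with made-up constants (rise and decay rates, thresholds, and amplitudes are all invented for illustration), meant only to show how the two components together produce consolidated sleep blocks.

```python
import math

# Toy two-process-style model (all constants invented for illustration):
# S = homeostatic sleep pressure, rising while awake, decaying while asleep;
# C = circadian oscillation with a ~24.5 h internal period, shifting both
# the falling-asleep and waking thresholds over the day.
PERIOD_H = 24.5

def circadian(hour):
    return 0.1 * math.sin(2 * math.pi * hour / PERIOD_H)

S, awake = 0.3, True
for hour in range(72):
    S = S + 0.08 * (1.0 - S) if awake else S * 0.75
    if awake and S > 0.7 + circadian(hour):
        awake = False   # pressure crossed the upper threshold: sleep onset
    elif not awake and S < 0.2 + circadian(hour):
        awake = True    # pressure dissipated below the lower threshold: waking
    print(f"hour {hour:2d}  pressure {S:.2f}  {'awake' if awake else 'asleep'}")
```

Because the circadian term moves both thresholds together, sleep onset in this toy tends to lock to a particular phase of the internal clock, and a night of lost sleep simply leaves S higher, producing the deeper, longer recovery sleep that the homeostatic component predicts.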

There are many other theories about the physiological function of sleep, and some also have evidence supporting them. One idea is that sleep requires animals to put themselves in a safe place and remain quiet to avoid predation. Although this is true, wakeful rest would again seem more sensible than sleep for this purpose. Another theory is that the body restores its resources and grows during sleep. This is supported by the well-known fact that the production of growth hormone increases significantly in stage four of NREM sleep, whenever during the day it occurs. But protein synthesis in the body actually decreases overall during sleep (6), and this theory still does not account for REM sleep. One intriguing hypothesis recently put forward by Joel Benington and Craig Heller at Stanford University argues that the purpose of sleep is to restore glycogen, a concentrated energy source, in the brain. (8)

They propose a simple feedback mechanism that would account for the homeostatic nature of sleep. They believe that the glycogen used by the brain is stored in the glial cells, and that it cannot be replenished during the active periods of waking or it would interfere with brain activity. They pinpointed a specific chemical messenger, called adenosine, which they believe builds up as the glycogen is depleted and causes drowsiness and eventually sleep. Evidence for this has been shown in experiments in which analogues of adenosine caused sleepiness and NREM sleep in rats that had already had enough sleep. The more of the chemical that was given, the deeper and more synchronized the EEGs showed the rats' sleep to be. It is widely accepted that stimulants such as caffeine block adenosine receptors in the brain. (8,6)

Benington and Heller also have an explanation for how REM sleep fits into this cycle. They say that adenosine causes neurons to lose positive potassium ions and become less excitable, and this is why they fire in synchrony in NREM sleep. They propose that REM sleep is a way for neurons to regain the positive charge once it has been too far depleted, without waking, so the cycle can be repeated again and again throughout a single night's sleep. However, there is no conclusive evidence that glycogen is depleted in the brain during the day, and the true function of glial cells is not even known. Scientists are also skeptical about the focus on a single neurotransmitter, since many chemicals and hormones have been tied to the instigation of sleep. It would be rare for the nervous system to have only one way to do anything, especially to regulate a process as seemingly important as sleep.

In order to understand the true nature of sleep we must understand what exactly happens during REM sleep and why it is different from NREM sleep or wakefulness. On an EEG readout, brain activity during REM sleep does not show the large slow waves indicating neuronal synchronization and low activity as during NREM sleep; rather it looks busy and chaotic, almost indistinguishable from a waking readout. The eyes dart about rapidly as if seeing things, and this is why the stage is so named. But REM sleep is also characterized by almost complete atonia, or paralysis of most major muscles. This odd combination of activity and paralysis occurs about every ninety minutes in the middle of the deepest stage of NREM sleep, and the episodes get progressively longer in the early hours of the morning.

Dreaming has often been linked to REM sleep, and many of the hypotheses about its function stem from this fact. When people are woken from REM sleep they will remember a dream about 85% or 90% of the time. But people will also remember dreams about 70% or 75% of the time when woken from stages 1 and 2 of NREM sleep, and about 50% when woken from stages 3 and 4. (6) It could be argued that these reports are simply memories carried over from REM sleep, but even about 25% of people left alone to relax in a dark room will report having "dreams" during times in which they were awake. (6) Creating visual stories is obviously something the brain is prone to do in any state when left to its own devices.

However, if these dreams are written down after awakening, the REM dreams will almost always be longer, up to about two written pages. While NREM dreams seem to consist of simple visualizations or desires, REM dreams have consistent themes and discernible storylines. There are also many common traits found in REM dreams that can give us clues to the atmosphere of the mind during this state. Although in waking memory we often recall our dreams as a smooth stream of fluid visual events, when we actually read the accounts of dreams it seems that they are really more a series of still and often fragmented visual images and emotions. In the dream we do not recognize this, but if in a dream we go back to look at an object or person again, it will often have shifted form or identity.

Ulric Neisser of Cornell University believes that mental images are created, whether in wake or in sleep, because of the brain's natural tendency to anticipate outcomes, but that they only last for a second or two. (10) For example, before we turn on a blender, we have a brief image of the blender in motion. This seems to be evidence for the brain's tendency to experiment that we discussed on the first day of class. It is constantly collecting data and formulating hypotheses stemming from these observations. But during sleep the various systems creating input to this function are generating their own information rather than reporting from the external world. According to Antrobus, during sleep the processing areas dealing with things like color and speech are less synchronized, so the actual data coming in is less sensical (10), and less easily remembered. But there is always the constant consciousness of self during sleep: the sense that one is there and that everything happening somehow relates. I gather that this is evidence that the I-function, the storyteller function of the mind that incorporates a sense of self, is very active during REM sleep.

Investigation of sleep disorders provides even further insight into the boundaries between the two types of sleep and waking consciousness, because these disorders often dissolve the barriers enough for us to peek over the edge. Partial sleep arousal disorders, including sleepwalking and night terrors, occur during NREM sleep. Although both often used to be considered symptoms of severe psychological impairment (10), they are now considered harmless and even a normal part of development in children. (4) Sleepwalkers arise during sleep and often speak or perform mundane, usually harmless activities. Some even safely drive cars to specific locations. However, these activities are usually not remembered by the individual upon waking, and if they are, they resemble vague memories rather than plots in a unique storyline. These activities seem autonomic, resulting perhaps from established pattern generators in the brain that do not require "thought" in the way we usually conceive of it.

S.R.E.D., or sleep-related eating disorder, is another partial arousal NREM disorder related to sleepwalking. Individuals cannot control their eating while asleep, and although they may be partially "conscious," and about 15% to 20% of the time will report that they were awake and remember the episode, they have no control over their actions. (1) They seem to have lost both what would typically be thought of as the sense of self and free will, the ability to make decisions. Obviously "they" are making decisions, because their nervous system is creating the actions, but it seems the I-function is disabled during this stage of NREM sleep. But who has the eating disorder, if not the "self"?

RBD, or REM behavior disorder, is like a version of sleepwalking that occurs during REM sleep. It is much rarer because it requires a complete loss of atonia, which usually paralyzes people during the intense dreaming state. But the behaviors observed during RBD episodes are usually much more vigorous, violent, and dangerous than those observed during sleepwalking. The morning after an episode, sufferers of RBD will often remember a dream that can be linked to the actions they performed, but they will have no idea that they actually performed those actions in reality. RBD is strongly linked to other neurological disorders, especially Parkinson's disease. About 40%-65% of males with RBD will go on to be diagnosed with Parkinson's (5,1), which leads to the assumption that in both disorders a specific part of the brain dealing with the regulation of muscle function is affected. It is also interesting that while Parkinson's disease is not an overwhelmingly male disorder, RBD is. (5,1) Are male dreams simply more violent, so male sufferers are more likely to seek treatment, or is there another variable at work, possibly having to do with the I-function?

Many imaginative ideas have been put forward to explain the meaning of REM sleep specifically. A popular theory is that memories are sorted through and consolidated into long-term memory during this stage of sleep. The Nobel Prize winner Francis Crick deemed it a way to "take out the trash" of the brain. (7) However, there is no convincing evidence that sleep improves memory. Psychologists since Freud have argued that dreams are a way for the mind to express its most repressed desires through symbolic imagery, which is somewhat plausible, although the universal images Freud proposed are not. REM dreams can be very violent and emotional, as displayed in RBD, but they are not always or even often this way. Krueger argues that we dream to exercise important but rarely used brain circuits (10), Maurice says we circulate the fluid in our eyeballs (10), and Maier thinks that dreaming gives the body a chance to listen to the internal organs and immune system and heal itself. (10) This last argument is most intriguing, because sleep does often seem to be a healing process when one is sick, but why the I-function would need to know about our inner ills is unclear.

Possibly the best-evidenced theory about REM sleep concerns its role in development. Animals that are more helpless at birth have much longer REM sleep cycles than animals that are more self-sufficient at birth, such as whales or dolphins. (9) Human babies have much longer REM sleep cycles than adults, and elderly people have a very low portion of REM sleep. In addition, children aged three to five, although they have a large portion of REM sleep, can rarely remember or articulate any sort of dream. By age seven they usually can remember specific dreams with self-centered plots and visualizations. (6) There is some basis for the idea that babies and children may have so much REM sleep simply to reduce pressure on their underdeveloped and overworked homeostatic regulating systems, because temperature does decrease during REM sleep. (6) But it also may be that they are developing their I-functions and their abilities to hypothesize and visualize personal stories. This would also indicate that humans might not be unique in this type of intelligence, as even mammals commonly considered stupid have the same mechanism. Of course our I-functions could be structured differently, or may simply not be the unique source of intelligence.

The investigation of sleep, including its dual forms and multiple purposes, tears away the layers of what we consider our normal waking reality and gives us a peek inside the mechanism of the human nervous system and the self. If the I-function is considered the storyteller of the nervous system, then it must be very active during the intricate dreams of REM sleep. However, we often equate the I-function with the self and with the ability to control, but where has this ability gone in sufferers of RBD? Obviously we need the accurate input of the unconscious to make real, conscious decisions. But do we really even need the I-function? Sleepwalkers and sleepeaters perform complicated and perhaps even personally significant actions without the I-function. But we wouldn't want to blame them for their actions. They may have some sense of self, but no ability to control their stories. We only consider the real self to exist when waking, consistent reality is combined with the I-function. But both the unconscious and the I-function need a time to rest and to run free of reality. It seems to me that NREM sleep is a time when the body and nervous system have a chance to rest and run regularly, free of the pressures of the world and the ever-vigilant I-function, and that REM sleep is a time when the I-function has a chance to play, free of the overly regular constrictions of reality. Any other benefits of sleep may stem from these purposes.


References

1) 2)"Way #19: MINIMIZE SLEEP", a Rabbi argues that the need for sleep can be overcome

3)Sleep Foundation Website , "Living with Narcolepsy,"

4)Dr. Greene website, a doctor answers mothers' questions about partial arousal states

5) National Sleep Foundation homepage , information about RBD

6) Sleephomepages website , very extensive web resource about every aspect of sleep

7)"Why We Sleep (or can't)", , article in the Harvard Mahoney Neuroscience Institute Letter

8)"Why Sleep?", , article in BioScience

9) from Center for Sleep Research through Encarta Encyclopedia online

10)"Why do we sleep? Why do we dream?", , part of Going Inside( website, author John McCrone

11)"Good sleep, good learning, good life", article part of Supermemo website

12)"Circadian Rhythms?", , part of Helioshealth website

13)"Sleep Terrors", part of Helioshealth website


Schizophrenia
Name: Arunjot Si
Date: 2003-05-16 21:07:52
Link to this Comment: 5723


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Schizophrenia is one of the most devastating mental disorders – a disorder where the dimensions of reality and imagination intermingle, leaving the victim in a lonesome world of his own making. As family and friends watch helplessly, a brilliant life is overtaken by strange demons, potential entrapped in an absurd web which the victim cannot see, let alone escape from. Unfortunately, due to the complexity of this disease, medications have only the potential to control and reduce the symptoms. The hope of a panacea for schizophrenia remains bleak given our present-day knowledge of its pathophysiology. This paper summarizes the current understanding of schizophrenia and promotes an approach that keeps in mind the logistical limitations of science – combining an alleviation of symptoms with a concomitant focus on reintegrating the afflicted into society. Facilitating such behavior modification and societal acceptance will allow affected individuals to attain their optimum potential, improving both themselves and the community.

In order to formulate optimal management, one needs to elucidate schizophrenia's neurobiological basis. Even though we've made some breakthroughs in understanding the disease, the precise etiology remains elusive. Schizophrenia, whose name translates literally from its Greek roots as "split mind," represents a class of disorders characterized by a fundamental disturbance of thought process, emotion and behavior leading to delusions, hallucinations and social withdrawal (11). Almost two million Americans suffer from this illness at some point during their lifetime, which is almost one percent of the population (3). It mostly affects young adults, striking both men and women equally between the ages of eighteen and twenty-five years, with onset uncommon after thirty years and rare after forty years of age (7).

While we do not yet know what "causes" schizophrenia, many pieces of the puzzle are coming together. Despite the controversy, there is increasing evidence of subtle though crucial differences in the neurobiology of afflicted individuals, both anatomical and biochemical. Brain imaging has allowed scientists to search for anatomical differences between the brains of normal and schizophrenic individuals. Schizophrenics and even their unaffected first-degree relatives have been shown to have a smaller than average thalamus (3). This may explain some of the symptoms found in schizophrenics, as this region of the brain is known to be the hub of sensory and motor signals, making it pivotal in information processing. Another popular area of study is the size of the ventricles, which are cavities in the brain filled with cerebrospinal fluid. The ventricles are noted to be enlarged in schizophrenics, which indicates loss or shrinkage of brain tissue (12). The temporal lobe is most consistently found to be decreased in volume, especially the hippocampus and amygdala (6).

Researchers have established that people with schizophrenia generally have a neurochemical imbalance. Because of this, researchers have begun studying the neurotransmitters that allow communication between brain cells to find the possible cause. Biochemical studies have focused on dopamine, a neurotransmitter that regulates behavioral functioning and has been found in excess within the frontal lobe of schizophrenic brains. Because lowering the dopamine level reduces the symptoms of schizophrenia, the "dopamine hypothesis" has dominated the scientific front for many years. Support for the neurotransmitter theory has come from experience with antipsychotic drugs that improve the symptoms of schizophrenia; these drugs are dopamine antagonists, like the phenothiazines, and actually block dopamine usage in the brain. Conversely, drugs that stimulate the dopamine system, like amphetamine, aggravate schizophrenia.

A relatively recent concept proposes that schizophrenia's genetic predilection first manifests itself through different neuroanatomical organization during fetal life. At this stage of development, the brain is wiring up; nerve cells grow and divide, building their initial connections with each other. In vivo MRI studies have shown certain groups of nerve cells migrating to the "wrong" areas when the brain is first taking shape, leaving certain regions of the brain permanently shrunken or miswired. Various stressors, such as viruses during the second trimester, lack of oxygen, and hypo- and hyperperfusion of the brain, have also been implicated in such neurobiological changes (4).

Of course some theories hearken back to the blueprint of fetal life and examine differences in our genetic makeup. Certain traits/alleles have been loosely associated with an increased risk and family history of schizophrenia. Family, twin and adoption studies suggest that 65-85% of susceptibility to schizophrenia may be attributed to genes (10). The odds that any given individual will develop schizophrenia are 1 in 100 (or 1%). But if you have a sibling or parent with the disorder, the odds increase dramatically, perhaps to 1 in 10. If an identical twin (who has essentially the same genetic blueprint) develops schizophrenia, the other twin's odds of developing the disorder during their lifetime jump to about 1 in 2, or fifty percent (12). Conversely, adopted children whose biological parents have schizophrenia show an increased incidence even though they have little or no contact with their biological parents (6).
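
Those figures can be restated as relative risks, the standard way familial aggregation is quantified in genetic epidemiology. A back-of-the-envelope calculation using only the numbers quoted above:

```python
# Lifetime risk figures quoted in the text (approximate estimates).
general_population = 1 / 100   # 1%
sibling_or_parent  = 1 / 10    # about 10%
identical_twin     = 1 / 2     # about 50%

# Relative risk: how many times the baseline risk a given relative carries.
print(f"sibling/parent affected: {sibling_or_parent / general_population:.0f}x baseline")
print(f"identical twin affected: {identical_twin / general_population:.0f}x baseline")

# Note the gap: even a genetically identical co-twin reaches only ~50%
# concordance, which is the text's argument for an environmental contribution.
```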

To probe the genetic contribution, various genome scans of schizophrenics have been done with the help of DNA markers. The results do not support any single gene leading to the phenotype of schizophrenia. However, several genes have been implicated in an increased risk of developing the disease (10). Chromosomes 6p, 5p and 8p have all been cited in various studies (8).

Though genetic and neurobiological factors have a very significant role in this devastating illness, environmental stressors are often what tip the scale. It is not guaranteed that if one identical twin becomes schizophrenic, the other will. Thus one can assume that other aspects, such as environment, play a role as well. "Your genetic code may predispose you to a disorder – it may set the 'stage', but other environmental factors will need to be present for the disorder to actually develop" (12).

Various stressors may be present in the form of traumatic early childhood relationships, which may leave an indelible mark on the child's psyche. During adolescent years, severe illness or death in the family appears to be a common precipitating event of this mental illness. Family discord and marital conflict between parents is another stressor which, compounded by lack of communication and lack of family and societal support, can become an instigator in triggering problems in an already fragile mind. Even in families with little marital conflict, parents may sometimes confuse their children by offering two competing messages, called a "double bind." Holding a coveted object in one hand while stating that the child may not have it, yet indicating by body language that the child may have it, conveys not only a double message but confusion. If this confusion is consistent, it may be conducive to schizophrenia (13).

Though we certainly are still far from attaining a cure for this disorder, various treatments available today have replaced the electric shock therapy of the 1950s. Modern antipsychotic drugs include the neuroleptics, which are clinical antagonists of the D2 dopamine receptors; examples are Haldol, Thorazine and clozapine (9). However, these medications need to be used for a prolonged period and their side effects are substantial. Atypical neuroleptics like Clozaril, Risperdal, and Zyprexa are more popular because they have fewer side effects.

Antipsychotic drugs act mostly by blocking biochemical discrepancies such as dopamine surges. These medications reduce the psychotic symptoms of schizophrenia and usually allow the patient to function more appropriately. The medications can be initially sedating, making them especially useful for individuals who are agitated. Their main effect, however, is in diminishing the hallucinations, agitation, confusion and delusions of a psychotic episode. Antipsychotic medications should eventually help an individual with schizophrenia deal with the world more rationally.

Although most schizophrenic patients need antipsychotic medications, the drugs by themselves are far from sufficient. Medications tend to be more beneficial for preventing "positive" symptoms of schizophrenia such as hallucinations and delusions, but are less effective for its "negative" symptoms such as a flat affect and apathy. Psychiatric care and social relationships are just as important. Behavioral techniques, including social skills training, are extremely useful. Schizophrenic patients need to be coached, prompted and corrected as they rehearse behavior modifications and observe their models. Various medical and paramedical sessions need to be utilized for optimal management and support. Family intervention significantly reduces relapse rates at 12 and 24 months. As per one review, seven families would have to be treated to avoid one additional relapse in the family member with schizophrenia. Behavioral and compliance therapies have been helpful in improving adherence to antipsychotic medication (9).

A major obstacle in treating schizophrenia, however, is denial, not only by the patients but by their families as well. To the patients, their world of delusions and hallucinations is very real. Their imaginary friends or persecutors actually exist for them. So first and foremost, they and their families need to be convinced that there is a problem. Another major cause of non-compliance is that most of the antipsychotic medications are needed for prolonged periods and have long-term side effects. Drowsiness, slurred speech, tremors, decreased libido, weight gain and tardive dyskinesia (for example, facial tics and abnormal movements of parts of the body) are a few of these side effects (2). These medications, while treating the illness, change the persona of the individual and may make social interactions more difficult.

Besides treating the patient and providing family support, societal reinforcement is crucial. Mental illness in many ways is worse than a physical abnormality. Society does not see what the problem is, and this may be interpreted as a sign of weakness, family pathology or societal ill. Educating the masses not only about schizophrenia but mental illness in general is a major step in reintegrating these individuals into the society so that they are treated no longer as outcasts but as equals. Family psychoeducation reduces distress, confusion and anxiety within the family and this in turn helps the person to recover.

Acceptance, understanding and perseverance seem to be the key to rehabilitating these tormented individuals. Friendship and trust need to be built in order to persuade patients to participate in therapy sessions and continue medication. Brian Grazer's award-winning film, A Beautiful Mind, is a great example of how acceptance and support allowed John Forbes Nash, one of the most brilliant intellectuals of our time, not only to live with his demons, but also to make ingenious contributions to his field, resulting in the emergence of a Nobel Laureate (14).

Thus, rather than push these patients into closets as though we were ashamed to be associated with them, let us teach society about mental illness. "Mental Illness Awareness" sessions should be held especially in places where young people are involved, e.g. colleges and schools. Let us organize community awareness so that the unreal reality of these very volatile minds is understood. These young intellectual minds have a lot of potential. Understanding and support may help tap this potential and channel it into very useful projects and tasks. In a culture where terms such as "crazy" and "loony" are thrown about loosely and often in jest, increasing one's knowledge and awareness of mental illness will lead to enhanced societal sensitivity.

With ongoing biochemical and genome research, it is hoped that improved drugs will soon be available, or gene therapy once further genetic mapping is done. However, due to the complex genetics of schizophrenia, it may be a while before the specific genes responsible are identified, and even longer before treatments based on these findings become possible. Meanwhile, we need to move beyond the stigma of mental illness. After all, schizophrenia is not the direct result of any individual's action or failure, but a brain-equals-behavior phenomenon. We need to provide education, acceptance and support so that even while these sufferers may not be able to wholly eradicate their hallucinations, they can learn to cope and live despite them. I'd like to close with a scene from A Beautiful Mind, where Professor Nash tells the officer sent to evaluate the prospective Nobel Prize recipient, "you see I...I am crazy. I take the newer medications and I still see things that are not here. I just choose not to acknowledge them." Even as his wife later asks him if anything is wrong as he sees his illusory roommate, he says "Nothing – nothing at all. Let's go" (14). In other words, let's go on with life...


References

1) Schizophrenia Homepage

2) Schizophrenia Treatment Side Effects May Cause Patients To Discontinue Medication, by Dr. J. Hellewell

3) MRI Scans Reveal Subtle Brain Differences in People with Schizophrenia, by V. White

4) Reality vs. Fiction: The Mysterious Illness Schizophrenia, by N. Megatulski

5) Schizophrenia: Volume Matters, by X. Bosch

6) Study Shows First Significant Genetic Evidence for Schizophrenia Susceptibility, by J. Blouin et al.

7) Diagnostic and Statistical Manual of Mental Disorders. Fourth Edition. Washington D.C.: American Psychiatric Association, 2000.

8) Kendler, Kenneth et al. "Clinical Features of Schizophrenia and Linkage to Chromosomes" American Journal of Psychiatry, 2000.

9) Lawrie, Stephen and McIntosh, Andrew. Clinical Evidence. June 2002.

10) Levinson, Douglas et al. "Genome Scan of Schizophrenia" American Journal of Psychiatry, 1998.

11) Littrell RA. "The Neurobiology of Schizophrenia" Pharmacotherapy. Lexington: 1996.

12) Nairne, James. Psychology: The Adaptive Mind. 1997.

13) Lefton, Lester. Mastering Psychology 1992.

14) Grazer, Brian. "A Beautiful Mind." 2001.


Life, The Universe, and Everything:
Questions o

Name: Sarah Feid
Date: 2003-05-17 12:02:46
Link to this Comment: 5725

<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

Introduction

The question of consciousness is an ancient one. The origins, nature, and location of the mind have all been scrutinized and theorized about at great length, resulting in many disparate hypotheses which are often in direct opposition to each other. The speculation of the mind about itself tends to fall into one of three categories: substance monism, dualism, or pluralism. Monism and dualism are particularly relevant to the philosophy of the mind, and each of these umbrella philosophies will be explored in this paper, as will their respective usefulness to the study of mind.

The Philosophies

Substance monism is the belief that all existence is composed of a single substance. Hobbes' philosophy of the mind, a physical monism related to modern materialism, states that all mental realities are properties of the physical reality which only seem to be causal, and that therefore the only true substance is the physical. Berkeley, in contrast, believed that physical reality was simply a collection of thoughts, and that therefore the mental is the only true substance of existence. Spinoza and Russell, however, held that the physical and mental realities were both "modes" of one universal, more basic, neutral substance. (1) (2) Dennett calls consciousness a "virtual captain," an emergent or summative property of the complex human nervous system. (3) Physical monism is considered the most widely-held view today (4), especially in scientific circles where everything revolves around empirical evidence.

Substance dualism is the belief that two substances exist which are fundamentally different in nature; in the philosophy of the mind, these two incongruent substances are the mental and physical. Descartes taught that the immaterial mind and material body, while independent by nature, could interact in a causal relationship; the point of connection between the two was taught to be the pineal gland. Contrary to this idea, Leibniz and Malebranche espoused forms of dualistic parallelism, stating that the physical and mental realities coexist and are synchronized, but that any actual interaction between them is an illusion. Hart holds that there is a problem with our idea of interaction and causation which prevents us from understanding the nature of the dualist ontology, and Chalmers posits that mental-physical interaction is the only truly difficult problem in philosophy of the mind. (4) (5) Eccles denies that mental substance is immaterial, rather claiming that it is a material of a different world. The interaction mechanism between them consists of what he calls dendrons in the cerebral cortex and psychons in the mind, with each psychon linking to a single dendron. (3) Dualism is often espoused by those who seek to reconcile science with religion, as it openly allows both for empirical evidence and blind faith simultaneously.

The Usefulness

Since the nature of science is to deal with the physical world, no philosophical theory dealing with the spiritual or non-physical can be scientifically tested. Nevertheless, "evidence" supporting such a theory is often presented in philosophical apologias. Often, the arguments are of a claimed non-physical nature, such as logical reasoning; occasionally, they are combined with neurological findings to round out the theory. This raises the question, what really can be used to support such theories? And which theory seems most helpful, useful, or promising as an aid to a full understanding of the human experience?

Physical monism seems useful in that it allows complete concentration on exploration of the physical world. A physical monist, unconcerned with the possibility of another realm of existence's influencing the reality he observes, is likely to be intent on discovering a physical mechanism for all he sees. He believes that every aspect of every interaction he observes can be explained and understood in terms of this world. Such a belief can spark curiosity as eternal as the universe itself – literally. If the monist is correct, then the search will eventually end, with everything explained and existence itself an open book. Since this end does not seem anywhere in sight, however, that fact is arguably irrelevant at this time with respect to the usefulness of the theory for complete understanding of life, the universe and everything.

However, what if monism is not realistic? If the empirical universe is in fact constantly interacting with some immaterial force or forces, monism does science a disservice. By claiming a lack of such interaction, monism backs research into a corner in which it could struggle fruitlessly with equations and observations that do not and cannot match. If the mind really is another substance altogether, monistic research will enter an infinite loop in its observation – constantly insisting that it can be located physically in the brain, and naturally unable to do so.

At this point in time, substance dualism carries with it an inherent blind spot. If the mind is considered a non-testable thing, dualists may not bother trying to test it or locate it. The philosophy, especially in its more religious strains, has an archaic feel reminiscent of medieval medicinal techniques and diagnoses. How different are the statements, "You're sick because you're possessed," and, "You're conscious because there's a spirit in you"? The "it's a mystery" explanation is a common one in many religions, and can often feel dissatisfying, condescending, and ignorant. One is hardly inspired to engage in research to illuminate fundamental questions of existence when one is told that doing so is impossible. In terms of its usefulness as a theory in contributing to the eventual understanding of the universe, it may be seen as a dampening influence rather than an inspiration. It assigns an indefinite nature to interesting and important things, like mentality, and offers no solution for arriving at a comprehension of those things.

However, what if dualism is realistic? If there really is a spiritual/non-physical aspect to the universe, dualism provides immediate understanding of that fact. Rather than waiting until purely physical explanations for things are exhausted and then having recourse to another realm, dualism accounts for the non-physical from the beginning. It can also be a source of inspiration for those who believe that working to understand the physical realm is no less fascinating because of its lack of solitude. It can also be incredibly useful for inspiring alternative methods of investigation and discovery. Spirituality often encourages meditation and other practices which are seen to accomplish unorthodox healing. This sort of phenomenon gives science an idea of what the mind is capable of, and essentially gives it things to look for in the brain.

Conclusion?

Physical substance monism is often espoused by scientists on the basis of a lack of evidence for anything else. Since things like alternate realities, non-physical substances, and transcendent states of existence cannot be tested, they are easily dismissible. Some adopt monism because their understanding of the complexity of the nervous system, physics, and evolution progresses to such a level that it seems unnecessary to look further for the explanations of all we experience. Indeed, the universe can seem incomprehensibly large and complex, and the realization of how little is known can inspire almost religious awe.

But can monism or dualism be supported by science, or a lack of science as the case may be? The very nature of a non-physical reality indicates that it cannot be tested by physical tools, so the fact that there is no physical evidence in support of a mental/spiritual realm (which in and of itself is an arguable statement) cannot possibly be used to argue against its existence. Even evidence supporting purely physical explanations of certain events cannot logically rule out possible interaction with non-physical elements. It may possibly be, for example, that the order observed in the universe is the result of a physical-mental/spiritual interaction, and that we simply take it for granted that that is purely physical. How can we really know? This calls into question the ultimate validity of citing a system we do not fully understand as a source of information.

It seems that both theories involve a certain amount of faith. Dualists have faith that their mental observations are sufficient evidence of another aspect of the universe. If the mind is really an emergent or summative physical property, then the inability of the mind to describe itself and its functions in terms of the brain is as normal as the inability of a face to see itself – it must use abstractions, analogies, reflections. Monists, on the other hand, must have faith that the scientific method can accurately assess the physical world – and only the physical world. If observations of this world are fundamentally tainted by aspects of a mental/spiritual world, then the scientific method is crippled in its ability to differentiate the two, and cannot really offer evidence either way. We don't know, and yet quite often we believe anyway; what is that but blind faith? Such faith is integral to both sides of this debate, and it may be that it is fundamentally necessary to the human condition. Just as the eyes generate orderly perception by filling in edges, perhaps faith is necessary to perceive the world around us, filling in the gaps between what we think we observe and what truly underlies it.

References

1) Dictionary of Philosophy of Mind Definition of monism. Very useful and thorough resource hosted by Washington University in Saint Louis.

2) The Internet Encyclopedia of Philosophy Definition of monism. Another useful resource, hosted by the University of Tennessee at Martin.

3) John Eccles on Mind and Brain An interesting paper about the intersection of Eccles' philosophy and theosophic thought.

4) Dictionary of Philosophy of Mind Definition of dualism.

5) Consciousness and Its Place in Nature A paper by Chalmers, hosted by the University of Arizona.


The "Critical Period"
Name: Stephanie
Date: 2003-05-17 23:55:54
Link to this Comment: 5726


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

What is the Critical Period Hypothesis and why is it important?

The Critical Period Hypothesis proposes that the human brain is responsive to language only during a limited time. This can be compared to the critical period seen in the imprinting of some species, such as geese. During a short period of time after a gosling hatches, it begins to follow the first moving object that it sees. This is its critical period for imprinting. (1) The idea of a critical period for language acquisition is influenced by this phenomenon. After observing that children seem to learn language much more quickly and with more ease than adults, it was hypothesized that childhood is the time when language acquisition (unconscious, usable knowledge of language patterns) is possible (as opposed to language learning, which is the conscious memorization of language rules).

The Critical Period Hypothesis has been taken as common knowledge by the average person. We often go by the "you can't teach an old dog new tricks" philosophy of language learning: expose children to as much of the target language as possible while they're still young, because after that, it's too late. This informs how we think about English language instruction in the United States. In voting booths around the country, people are deciding how best to teach the numerous children in the U.S. who do not speak the language that will help them gain job opportunities and respect later in life. More and more national and local legislation is being made concerning the teaching of English, and science can help people make the best decisions about how to deal with this social issue. Stephen Krashen's theories of second language acquisition have been influential in the field of education, but are too often ignored by the general voting public. His work deserves a closer look because what we know about the brain seems to support his hypotheses. Yes, there will be holes in his theory, but he seems to be a little less wrong than those who influence the majority of people making decisions about ESL policy.

Does anybody have any ideas about why the "critical period" exists?

The phenomenon of the nearly effortless language acquisition of most children can be explained by three different hypotheses. The first is that as time progresses, the neuronal pathways harden and can no longer be modified. Another explanation is that the input received as an adult is not the type of input that appeals to the part of our brains that learns language. The final explanation is that there exists a part of our brains that stands between language input and the language-learning area of our brains. (2) Stephen Krashen calls this the affective filter.

There are more ways to explain the "critical period," but I am choosing to focus on these three. I feel that they are the most influential and also have the most effect on the way that the language classroom is run.

Well, what do we need to know to figure out which theory makes the most sense?

The LAD is important to understand if one wants to understand theories of the "critical period." Noam Chomsky suggests that the human brain contains a language acquisition device (LAD) that is preprogrammed to process language. (3) (4) Chomsky noticed that children learn the rules of grammar without being explicitly told what they are. They learn these rules through examples that they hear, and amazingly the brain pieces these samples together to form the rules of the grammar of the language they are learning. This all happens very quickly, much more quickly than seems logical. Chomsky's LAD contains a preexisting set of rules, perfected by evolution and passed down through genes. This system, which contains the boundaries of natural human language and gives a language learner a way to approach language before being formally taught, is known as universal grammar. This knowledge is unconscious; it is not connected to what we in this class have called the I-function. Retrieval of it for use in conversation flows naturally, without much conscious effort.

Okay, the existence of an LAD sounds probable, but Universal Grammar sounds kind of strange. Is it possible?

The common grammatical units of languages around the world support the existence of universal grammar: nouns, verbs, and adjectives all exist in languages that have never interacted. Chomsky would attribute this to the grammar that each normal human brain contains, despite the diversity in experience; all languages were formed from a fundamentally similar source. The numerous languages and infinite number of word combinations are all governed by a finite number of rules. (5) Charles Henry suggests that the material nature of the brain lends itself to universal grammar. Language, as a function of a limited structure, should also be limited. (6) Universal grammar is the brain's method for limiting and processing language.
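
The idea that a finite rule set can govern an unbounded number of word combinations is easy to make concrete. Below is a toy context-free grammar, invented purely for illustration (the rules are not any linguist's actual grammar of English), expanded by a short recursive generator; because one rule can re-invoke another, a handful of rules yields infinitely many possible sentences.

```python
import random

# A toy context-free grammar: a handful of rules, unbounded output.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],   # recursion lives here
    "VP": [["V", "NP"], ["V"]],
    "N":  [["goose"], ["child"], ["song"]],
    "V":  [["hears"], ["follows"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of words."""
    if symbol not in GRAMMAR:            # terminal word: emit as-is
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

random.seed(4)
for _ in range(3):
    print(" ".join(generate()))
```

Each run stitches together sentences like "the child that follows the goose hears the song" from just five rule families, which is the sense in which infinitely many combinations can be "governed by a finite number of rules."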

So, the brain contains an LAD, which holds Universal Grammar. So why can't everyone access it all the time?

The Critical Period Hypothesis says that as the brain matures, access to the universal grammar is restricted, and the brain must use different mechanisms to process language. This is influenced by the common view that the young brain learns things more easily and better. Many of the teaching methods that we see are based on this assumption. Believing that the mature brain can no longer absorb things unconsciously, language classes often focus on memorization of rules.

The Input hypothesis says that the simplified speech that we receive in childhood, with its short sentences and focus on meaning rather than form, is the type of information that can be unconsciously processed by the LAD. In childhood there is a special focus on meaning. Often adults modify their speech for children. An example of this is baby talk. These child-aimed statements modify traditional use of language to convey the feelings and concepts that the speaker intends to get across. (7)

The Affective Filter Hypothesis says that under certain conditions information is not able to reach the LAD. A part of the brain acts as a filter, allowing the learner to access the LAD more or less easily. Anxiety is the number one force strengthening the affective filter. Classrooms where children are put on the spot and forced to perform language acts that they are not ready to perform are situations that build up the affective filter and stop students from dealing with the language with their LADs. (7)

Which explanation makes the most sense and how can it be put to use?

The rigid neural pathway hypothesis does not work to explain the critical period because research is showing that the brain remains plastic long after childhood. (8) (9) The comparison to the imprinting found in geese also does not hold up, because the part of the brain that is modified during critical period imprinting in birds is unique to those animals. (1)

The input hypothesis and the affective filter hypothesis make more sense. This concept fits well with our box design of the brain. The LAD can be pictured as one box and the affective filter as another. Information must pass through the affective filter before reaching the LAD; language information does not need to pass through the I-function. First language learning is largely unconscious. As children learning language, we do not consciously commit to memory the basic rules of our L1.

The critical period phenomenon is a result of factors other than the rigidity of the brain after puberty. Stephen Krashen has proposed the best explanation for it that is currently available. Krashen's theory of SLA makes use of both the input hypothesis and the affective filter hypothesis. (10) The teaching approach that he and Tracy Terrell designed takes these hypotheses into account. Many other teaching methods attempt to go through the I-function, having people memorize the rules of grammar. The natural approach recognizes that first language learning is unconscious.

A danger of relying so heavily on the Critical Period Hypothesis is that children are taught as if their LAD were freely accessible. Although I would argue that the LAD is accessible and malleable during childhood as well as adulthood, ignoring the other factors that are also involved in language acquisition is hurtful to the success of a child's language acquisition. Many state policies on ESL are based on the assumption that children can learn enough English in one year to get by in an English-only classroom. In this English-only classroom, children will supposedly absorb more English just from the exposure. This is based on the idea that a child's brain is like a sponge waiting to absorb language and that we need to expose children to as much English as possible as early as possible. There is no recognition of the fact that the same things that limit adult second language learning can also inhibit the language learning of children, and that these inhibitors need to be considered in the curriculum.

Have we figured out the best way to teach language?

As with other things, the study of language acquisition is an exercise in getting progressively less wrong. Because we know so little about how the brain performs this function, we have a lot to be wrong about. The conclusions that we've reached only bring up more questions: Is the LAD a specific region of the brain? Do we have as much control over the affective filter as Krashen would lead us to believe? Is it possible to consciously acquire language knowledge and transfer it to the LAD? This is an important issue that linguists, biologists, educators and politicians need to work on together. The public cannot afford to stay ignorant about these matters, either. It could be your next-door neighbor, your student, or your relative who is affected by the ESL policies that are currently being enacted without proper knowledge of the subject. This is the danger of lacking a willingness to admit that the currently accepted answer can become less wrong.

References

1) Learning Who is Your Mother, The Behavior of Imprinting, by Silvia Helena Cardoso, PhD and Renato M.E. Sabbatini, PhD

2) A concept of 'critical period' for language acquisition, Its implication for adult language learning, by Katsumi Nagai

3) Universal Grammar [Part 1], forum area of the Gene Expression website

4) The Biological Foundations of Language, Does Empirical Evidence Support Innateness of Language?, by Bora Lee

5) Evolution of Universal Grammar, by Martin A. Nowak, Natalia L. Komarova, and Partha Niyogi

6) Universal Grammar, by Charles Henry

7) Krashen, Stephen & Tracy Terrell. The Natural Approach: Language Acquisition in the Classroom. Oxford: Pergamon Press, 1983.

8) Brain Plasticity, Language Processing and Reading, Society for Neuroscience

9) Brain Reorganization, Society for Neuroscience

10) A Summary of Stephen Krashen's "Principles and Practice in Second Language Acquisition", by Reid Wilson


Memory, a geographical study
Name: Katherine
Date: 2003-05-21 17:49:56
Link to this Comment: 5728


<mytitle>

Biology 202
2003 Third Web Paper
On Serendip

We carry it with us, but it has no form; it gives us information, but we cannot photograph it, record it, or touch it with our hands. What is it? It is our memory. We typically think of memory as a central component in our mental functioning, without having a physical aspect of its own. But memory is as much a function of the body as it is of the mind. (And can we in fact separate the two as Descartes claimed?) There is a fundamentally physiological foundation to memory. The physical structure of the brain is intimately tied to the function of our sensations, memories, and thoughts. However, a functional memory is not composed merely of isolated images that exist in a vacuum. The topography of memory seems to depend, rather, on the nature of connectivity, both between moments and through time. How does the brain create this unity? Perhaps the exploration of this question, as well as of the physical foundations that memory holds in the brain, would lead us to a better understanding of consciousness, memory, and its breakdowns.

What are the implications of the physical anatomy of memory? It will be useful to look first into the way in which memories are established in the brain. It is reasonable to assume that memories are stored in both cortical and subcortical structures. This is because decorticated animals can learn many behavioral tasks, and simple animals with rudimentary nervous systems can learn and show evidence of "memory." Richard Thompson and colleagues at the University of Southern California have demonstrated that the cerebellum plays a role in the conditioning of certain responses (1). Since many memories depend on sensory processing, and since sensory processing is carried out in multiple systems, it is reasonable to think that different types of information are stored in different places in the brain.

The two main types of memory are short-term and long-term memory. Short-term memory is memory of information that has just been presented, and involves the prefrontal cortex as well as the sensory association cortex. In 1995, Oyachi and Ohtsuka performed an experiment suggesting that short-term memory for the location of objects requires neural activity in the posterior parietal lobe. In the experiment, the activity of the human posterior parietal cortex was temporarily disrupted by means of transcranial magnetic stimulation: an alternating current is passed through an electromagnetic coil placed against the skull, and the resulting magnetic field briefly disrupts the activity of the neural circuits located beneath the coil. The investigators found that disruption of the right posterior parietal cortex impaired performance of a task that required people to move their eyes toward the location of a stimulus presented two seconds earlier; thus it affected short-term memory that depends on neural circuits in the visual association cortex. Short-term memory involves other brain regions as well, such as the prefrontal cortex: damage to this region, or temporary deactivation by cooling the cortex, disrupts performance on several tasks using visual, tactile, or auditory stimuli.

The second type is long-term memory. There are three main steps involved in the formation and filing of long-term memories. First, the information is registered (perceived via sensory channels); then it is encoded (further processed for identification and association); and finally it is stored in various anatomical sites in the form of "engrams." An engram is a unique pattern of neurons that become functionally connected by being activated at the same time.
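To make the idea of an engram concrete, here is a minimal sketch in Python (a toy Hopfield-style model assumed purely for illustration; it is not drawn from any of the sources cited here). A pattern of co-active model neurons is stored as a matrix of connection weights, and a degraded cue then recalls the whole pattern:

    import numpy as np

    # Toy Hopfield-style "engram" (illustrative model only): neurons that
    # are active together are wired together by an outer-product rule, and
    # the stored pattern can then be recalled from a degraded cue.

    rng = np.random.default_rng(0)
    n = 100                                  # number of model neurons
    pattern = rng.choice([-1, 1], size=n)    # activity pattern to store

    W = np.outer(pattern, pattern).astype(float)  # co-activation weights
    np.fill_diagonal(W, 0.0)                      # no self-connections

    cue = pattern.copy()                     # degraded cue: 30% of units flipped
    flipped = rng.choice(n, size=30, replace=False)
    cue[flipped] *= -1

    state = cue.astype(float)
    for _ in range(5):                       # update until the network settles
        state = np.sign(W @ state)
        state[state == 0] = 1.0

    print("fraction of the engram recovered:", (state == pattern).mean())  # 1.0

With a single stored pattern, one update step settles the degraded cue back into the complete pattern; the memory lives in the pattern of connections rather than in any single neuron, which is roughly what the term "engram" is meant to capture.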

Professor Eric R. Kandel, of the Howard Hughes Medical Institute (2), has dedicated many years to discovering what molecular changes take place in cells when an organism learns a new behavior. Kandel and colleagues studied the sea snail Aplysia, a simple invertebrate relative of the octopus with relatively few nerve cells. The research team taught the animal to retract its gills in response to a stimulus, then analyzed which nerve cells were involved in the action. They discovered that the snail's learning could change that circuit. This work showed for the first time that learning and memory involve changes in the strength of synapses, and that learning could thus be mapped as a definite series of biological signals.

Kandel went on to find that the formation of long-term memories requires the growth of new synapses between neurons, and thus that there must be genes that promote the growth of these new synapses. The research team traced the formation of long-term memory to the nucleus of the neuron, showing that the series of biological signals that leads to a memory ends with a molecule called CREB. This molecule turns on many genes that stimulate the growth of synapses, thus allowing for the formation of persistent long-term memories. The researchers found in 1990 that blocking CREB stopped all the noted events, suggesting that CREB was pivotal to the physical and genetic basis of long-term memory. Following this, the Kandel lab also found that the formation of long-term memories is normally prevented by a series of inhibitory processes that determine whether short-term memory will be converted into long-term memory. One of these inhibitors is a molecule called CREB-2, which blocks the action of CREB (now called CREB-1), thus blocking long-term memories from forming. The removal of CREB-2 allows for the immediate rise of long-term memory. In the sea snail, then, the removal of CREB-2 was necessary, in addition to the presence of CREB-1, for long-term memory to form.
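The logic of these findings can be summarized as a simple two-factor gate. The sketch below is a deliberate simplification of my own, not Kandel's actual model: it encodes only the claim that long-term memory formation requires CREB-1 to be active and the repressor CREB-2 to be out of the way.

    # Hypothetical boolean gate (my simplification, not Kandel's model):
    # long-term memory forms only when CREB-1 is active AND the repressor
    # CREB-2 has been removed.

    def long_term_memory_forms(creb1_active: bool, creb2_present: bool) -> bool:
        """Gate deciding whether short-term memory converts to long-term."""
        return creb1_active and not creb2_present

    for creb1 in (False, True):
        for creb2 in (False, True):
            print(f"CREB-1 active={creb1}, CREB-2 present={creb2} -> "
                  f"long-term memory: {long_term_memory_forms(creb1, creb2)}")

Running it prints all four combinations, with long-term memory forming only when CREB-1 is present and CREB-2 is absent, mirroring the snail experiments described above.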

Donald O. Hebb (1904-1985) (3) worked on the neurological events underlying memory and learning. His theories rest on the picture of neurons in the brain as richly interconnected, each receiving input from many synapses on its dendrites and cell body. The resulting neuronal loops contain neurons whose output signals may be either excitatory or inhibitory, and the loops run from the cortex to the thalamus or other subcortical structures, such as the hippocampus, and back to the cortex. Each neuron is believed to both send and receive thousands of outputs and inputs; the number of possible neuronal loops is therefore immense. Hebb proposed that each psychologically important event, whether a sensation, perception, memory, thought, or emotion, can be conceived as the flow of activity in a given neuronal loop. He explained synaptic modification as the basis of memory, referring to the way the synapses in a particular path become functionally connected to form a cell assembly. He assumed that if two neurons, A and B, are excited together, they become linked functionally: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (Hebb, D.O. Organization of Behavior. New York: John Wiley, 1949, p. 62). Hebb believed that one cell could become more capable of firing another because synaptic knobs grew or became more functional, increasing the area of contact between the afferent axon and the efferent cell body and dendrites. He assumed that any cell assembly could be excited by others, which provided the basis for thought, or the making of ideas; the essence of an "idea" is that it occurs in the absence of the original environmental event to which it corresponds. Hebb's theory thus attempted to explain psychological events through the physiological properties of the nervous system. "Hebb synapse" is the term now used for synapses that undergo change during learning, which Hebb proposed must become more "efficient" through changes at the synaptic junction.
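Hebb's postulate translates naturally into a learning rule. The following sketch is a toy rate-based reading of it (assumed for this discussion; Hebb stated his principle in words, not equations): the weight from cell A to cell B is increased only on trials where the two fire together.

    # A toy rate-based reading of Hebb's rule (assumed for illustration;
    # Hebb himself gave no equations): the weight from A to B grows only
    # on trials where the two cells fire together.

    eta = 0.1   # learning rate
    w = 0.0     # synaptic weight from neuron A to neuron B

    a_fires = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # presynaptic activity per trial
    b_fires = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0]   # postsynaptic activity per trial

    for a, b in zip(a_fires, b_fires):
        w += eta * a * b   # strengthened only by co-activation

    print(f"final weight: {w:.2f}")   # 0.60 after six co-active trials

After the ten trials above, the weight reflects exactly the six co-activations; in this minimal sense, A has become "more efficient" at firing B.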

Our understanding of the way memory works may be aided by the study of memory abnormalities. One of the most dramatic and intriguing phenomena caused by brain damage is anterograde amnesia, which appears at first to be the inability to learn new information. On closer inspection, we see that the basic abilities of perceptual learning, stimulus-response learning, and motor learning are intact, but that complex relational learning is gone. The term anterograde amnesia refers to an inability to remember events that occur after brain damage. A severe form of anterograde amnesia, Korsakoff's syndrome, is in the United States usually a result of chronic alcoholism, and stems from a deficiency in thiamine (vitamin B1) (4). A person with Korsakoff's can remember events that occurred twenty years ago, but not what happened twenty minutes ago. Often the person will fill in "false" information in order to construct a complete situation in their mind or to answer a question, and frequently the person is not aware that they are fabricating their own information, or even that they have memory problems at all.

A famous case of anterograde amnesia is that of a patient known as H.M., who has been studied extensively because his amnesia is relatively pure and quite severe. He developed amnesia after surgery performed to treat seizures originating in his temporal lobe. The surgery removed much of his hippocampus, which subsequently prevented him from laying down new long-term memories (5). He can recall early memories well and has normal intellectual and language ability, but he has been unable to learn anything new since the operation. He cannot identify by name people he has met since the operation (performed when he was 27, in 1953), and he would not be able to find his way back home were he to leave his house. From H.M.'s case, various conclusions about the anatomy and physical nature of long-term memory have been drawn: with the physical structure of his hippocampus removed, the chemical transactions needed for long-term memories to be established in the brain became impossible. Unlike some of those affected by memory disorders, H.M. is aware of his problem, and often describes his feelings in this way:

"Every day is alone in itself, whatever enjoyment I've had, and whatever sorrow
I've had. Right now, I'm wondering. Have I done or said anything amiss? You
see, at this moment everything looks clear to me, but what happened just
before? That's what worries me. It's like waking from a dream; I just don't
remember."

The key is that H.M. is unable to make connections and associations between his memories, and thus they cannot form a useful tapestry in his mind. He has lost the perception of depth in time. It thus becomes clear that associations and connections are necessary for a fully functional memory. We saw hints of this idea in Hebb's theory of linked neurons and the neuronal loop.

Another example that illustrates this need for a "linking" ability comes from the psychologist A.R. Luria (1902-1977). He spent many years studying the memory of a man referred to as "S.," who had an extraordinary ability to remember and retain information. His memory was so vast, in fact, that S. had to practice the ability to forget. More importantly, however, S. felt burdened not by the sheer vastness of his memory but by his inability to form organized linkages and connections between memory images. Luria said that S. became especially overwhelmed when trying to understand poetry through its metaphors, explaining that "each expression gave rise to an image; this, in turn, would conflict with another image that had been evoked." (6) Interestingly, the sensations S. experienced, such as a sound at a specific pitch, were able to form associations with other sensory understanding. He would often see sounds and numbers as images, and described this when presented with a tone of 2000 cycles per second: "It looks something like fireworks tinged with a pink-red hue. The strip of color feels rough and unpleasant, and it has an ugly taste, rather like that of a briny pickle...you could hurt your hand on this." The unique experiences S. had with his memory remind us of the Borges story about Funes, the man who could remember each moment in perfect isolated detail but was tormented by the fact that he could not arrange these memory images into a living and connected flow. Luria described S. as having an imagination so strong that there was often little difference to him between the things he imagined and reality.

The idea of memory as a connected flow is especially important because it stands in analogy to the form of our consciousness. We do not exist in isolated moments, but in a constant "stream of consciousness," in which events, thoughts, and sensations are perpetually linked to other events, thoughts, and sensations. We exist in a single stream, rather than in separate states at separate times. There is therefore a certain unity of consciousness, strung together by the motion of time. Problems with memory occur when there are breakdowns in this "time unity," when there is a breakdown in flow and connectedness as perceived by the brain. When this happens, it becomes difficult to maintain unity both over time and within each event. We see the nature of connectedness in memories themselves, when one sensation can be the gateway that opens onto a sudden stream of other memories. We know that the transmission of signals depends upon the connections between neurons in the brain, and we have strong evidence for the physical placement of memory functions in areas of the brain such as the hippocampus. It is only by combining these observations and theories that we can come to a more complete understanding of memory, and of the way we move through its terrain.


References

1) Website for the University of Southern California; information on Thompson's research.

2) Website for the Howard Hughes Medical Institute; information on Kandel.

3) Informational page about Hebb, hosted by the University of Southampton.

4) Website for the Alzheimer's Society; information about Korsakoff's syndrome.

5) Information on the hippocampus and H.M.'s case, from the Center for Psychology at Athabasca University.

6) Website containing excerpts from A.R. Luria's text, The Mind of a Mnemonist, p. 120.

7) Kandel, E.R., J.H. Schwartz & T.M. Jessell. Essentials of Neural Science and Behavior. New York: McGraw-Hill, 1996.


A "Biased" Look at Near Death Experiences
Name: Patricia P
Date: 2003-05-26 16:07:40
Link to this Comment: 5729



Biology 202
2003 Third Web Paper
On Serendip

Is there anything that awaits us after we die? Is there any credible proof of the soul? Can it exist without the body? And finally, the most challenging question: can near death experiences be a part of solving the puzzle? In the past three decades, the phenomenon of the near death experience (NDE) has brought people of religious, philosophical, psychological, and scientific backgrounds out of every corner of the world to propose their theories about these paranormal out-of-body experiences. Few, even among skeptics, would dispute that people who claim to have had near death experiences believe they have experienced something greater than we can explain, and that completely sane individuals are moved to entirely new beliefs about this life and a life after it. What is widely disputed is why. Without knowing very much about the subject, I naturally assumed that the most compelling argument could be made from a standpoint more scientific or psychological in nature.
Primarily, it is important to analyze exactly what the most common NDE account consists of. Secondly, it is essential to look at all the different theories that have evolved through intense medical, psychological, and neurological research. And thirdly, it is necessary to consider some vital components (along with specific accounts) of the near death experience that make any vantage point but a spiritual one seem incomprehensible, even to the most rational thinker. The perspective of a person who is unbiased and devoid of any personal tie to a near death experience is a perspective that many people share on this often overlooked phenomenon, and a perspective that I am ready to offer. I was shocked not only by the personal accounts of those who had experienced NDEs firsthand, but by the loopholes strewn across what I would have considered the more 'dependable' scientific explanations of near death experiences. It makes more "sense" that the near death experience is in fact a transcendental experience.
The debate surrounding the near death experience began in the early 1970s (with the exception of the subject's appearance in Plato's Republic and a few other poorly documented religious or philosophical texts). The general public and both the scientific and religious worlds displayed the most interest with the publication of American psychiatrist Raymond Moody's Life After Life in 1975. (2) From this best seller spawned the interest of other psychologists, medical specialists, neurologists, and theoreticians, who have conducted a myriad of studies on the phenomenon. Although there are variations reported by different researchers, the basic template for the near death experience was introduced by Moody: the individual "(1) feels at peace; (2) hears a noise; (3) is separated from the body and can sometimes observe his/her own body and watch events taking place (e.g. resuscitation); (4) enters darkness, like a void or tunnel; (5) meets other people - often relatives and associates who have already died; (6) sees a light that becomes brighter; (7) experiences a life review; (8) reaches a form of border or barrier (e.g. a river, mist, fence, door); and (9) is finally made aware that there is a need to return to physical life." (2) However, this outline is a mere skeleton compared to the thousands of accounts of actual experiences; nearly 7 million people have reported hauntingly similar near death experiences. (1) Most NDE accounts focus on the "love" and "clarity" of the experience, which are hard to quantify, and personal accounts emphasize a sense of peace and euphoria. Some people are greeted at the end of the tunnel by religious figures, by relatives, and even by pets. The commonly reported reenactment of one's life is frequently described as occurring without judgment (which contradicts the popular perception that these experiences may be a religious person's affirmation of previous beliefs) and as a reflective and profound experience. In fact, most who have near death experiences are moved to great lifestyle changes and take on new philosophies, including an appreciation of, and almost an anticipation for, death. A conversation with someone who has had a near death experience is often drawn back to the presence of the light, and its indescribable nature, as being "loving, timeless, and spaceless." (6) The landscape is described in many cases as incomparably beautiful and pure. All accounts are unique, but carry a common thread of pure bliss and oneness. Yet there are several theories that valiantly attempt to account for this 'bliss,' along with the perceived light at the end of a tunnel, and several intricate explanations for why people may have these "hallucinations."
One major theory, the lack-of-oxygen theory, holds that the NDE euphoria is a result of a lack of oxygen to the brain (hypoxia). This is compared to the calming, pleasing feeling one gets just before fainting. However, the NDE is unique in that it lasts much longer than a fainting spell. It is also important to note that while someone has fainted, they may feel euphoric, but they are unable to account for things taking place around them. In situations of oxygen deprivation or excess carbon dioxide, people are known to hallucinate, but these hallucinations are typically fearful, disorienting, and far less pleasant. If an NDE were an episode created by a lack of oxygen, it would be radically uncharacteristic in being peaceful, calming, orderly, and much longer than momentary bliss. Those who have experienced both hallucinations from lack of oxygen and NDEs also vehemently insist that the two are 'unmistakably' different. (4)
Connected to the latter is the hallucination theory, which differs from the lack-of-oxygen theory in that the experience is produced by a type of high rather than a blissful low. The body, upon recognizing the trauma of death, responds by releasing endorphins, which act on the central nervous system to suppress pain. This could be thought of as similar to the "runner's high," when the runner suddenly reaches a point of extreme pain and stress followed by an effortless burst of power. Substances more complex than endorphins have been studied as well, including LSD and ketamine. The psychologist Dr. Ronald Siegel was at the forefront of producing drug-induced hallucinations with LSD, but these were only similar to, not identical with, the NDE. (3) Although ketamine was also found to reproduce many of the key features of the NDE, including the travel through a dark tunnel into the light, many of the volunteers became paranoid, which is very uncharacteristic of the near death experience.
Neurologists predominantly support the temporal lobe theory. There is seemingly strong evidence for it, in that traits of the NDE are found in patients with certain kinds of temporal lobe damage or temporal lobe epilepsy. However similar the experience seems, the same flaw persists: patients reacting to a form of temporal lobe stimulation are characteristically scared, sad, and lonely. (3) This makes all the difference, as we see with many of the theories that have attempted to mimic the experience. The main problem is that the subjects are missing, for lack of a better word, the love that is described as such an integral part of the experience.
Psychologists and theorists have also been hard at work on explanations. Darwinians may be more apt to favor an evolutionary account, the theory that the human race fits the role of the "fittest" by "being able to better adapt to the inevitable ending of their lives." (3) This is incomplete for several reasons; mainly, it offers only a weak explanation of why we might benefit from near death experiences, and it cannot in any way account for why a person would have them.
The depersonalization theory argues that the body is under incredible stress, and that the tortured mind therefore attempts to separate itself from the pain. (3) This would explain why the common experience of seeing one's body from a bird's-eye view would be pleasurable to the psyche; however, it does not account for why it happens. Nor does it touch on the increased awareness, the clear memory, or any of the strong spiritual connection.
The memory-of-birth theory is also entertaining to consider. It is based on the idea that the near death experience is in fact a memory of the birth trauma: the tunnel is the birth canal, the white light is the outside world, and the doctor is the being that greets them. (3) Unfortunately, while there is no medical proof today to disprove the afterlife theory, there is medical evidence that makes this one implausible, along with its being illogical for other simple reasons. First, a child traveling through the birth canal is effectively blind. Second, the travel through the birth canal is not only traumatic (falling far short of the NDE's euphoria and "love") but is also experienced under great pressure against the walls of the mother's body, making it impossible to look down a long canal to the outside and see the doctor. (3)
What I found to be the strongest theory opposing the near death experience as an actual spiritual voyage was the dying-brain theory. This theory is unique in that it turns the strengths of the afterlife theory into support for the idea that these must be physical processes of the dying brain. For example, what is so strong about the afterlife theory is that all of these people's experiences are similar, down to very minute details, and would be hard to fabricate consistently if the experience were untrue. However, Dr. Susan Blackmore, in Dying to Live, sees this as a greater advantage to the theory that these are brain processes (since all of our brains are built similarly) arising in an attempt to accept and survive the dying process. (3) She claims an actual voyage might show more differences than those described by people who have experienced NDEs. This argument is compelling in casting doubt on the idea of a spiritual journey, but it in no way explains why the brain would create such an elaborate scene. And the appeal to the physical and chemical similarity of our brains becomes quite easy to make once we realize that this general argument could be used to explain away anything we experience as humans collectively.
It cannot be overlooked that people of different cultures experience culturally specific variations in their NDEs. For example, a Christian may be greeted by what they perceive as Jesus Christ, while a nondenominational person may be greeted only by relatives, and so on. This would perhaps undermine a strictly Christian theory, or a theory derived from any one religion, but it does not make the case that these experiences are not out-of-body, or that they are medical rather than spiritual. Additionally, almost all experiences involve the person feeling that they are not judged, contrary to much Christian expectation of the afterlife, and several people return with the belief that there is no hell. This afterlife may not match any one religion, but could plausibly be a truth beyond the individual religions, something that takes on different forms. Although popular culture in America may not find this satisfying, it is important to note that there are believers for whom the entire near death experience coincides with their image of the afterlife. A Roman Catholic remarked that it was 'beyond any denomination,' while an Anglican explained that 'all religion is basically the same.' (2) This does not mean that all who have experienced an NDE would agree, but up to this point, and considering that these people are eyewitnesses, these are some of the most conclusive accounts we have so far.
Before we ask ourselves to believe that these are actual voyages into another realm, or possibly heaven, it is important to recognize that by accepting even the existence of the experience, we question whether we can be conscious separate from our body (considering that at the time of the event, the individual is pronounced dead). In many cases the 'degree' of death is arguable, but in most of the cases it is rationally implausible for the person to have any sort of elaborate thought at all while possessing little to no vital signs. It becomes easier to accept this event as a true out-of-body experience once we consider that conscious thought may exist outside the body regardless of its origins. (That is, they may not be standing at the gates of the afterlife, but they are imagining they are standing somewhere, without the vital signs sufficient to be considered alive!)
No theory can be respected until it attempts to explain the case of Pam Reynolds. Although there are several similar accounts, this "eyewitness" evidence is unusually strong. Pam's case was quite unique in that she underwent an operation for a "giant basilar artery aneurysm": a weakness in the wall of the artery at the base of her brain had caused it to balloon out much like 'a bubble on the side of a defective automobile tire.' (5) The location of the aneurysm, along with its size, demanded a unique procedure known as hypothermic cardiac arrest, or "standstill," requiring her body temperature to be lowered to 60 degrees Fahrenheit, her heartbeat and breathing stopped, her brain waves flattened, and the blood drained from her head. As the operation began, Pam began to make observations that she would later recount with astounding detail.
She recalled that, with brighter and more focused vision, she could see her doctors and nurses in the room. "I thought the way they had my head shaved was very peculiar." (5) She went on to describe the instruments used in her operation in alarming detail. She overheard a nurse comment that her veins and arteries were particularly small. After this she was pulled toward the light and experienced a meeting with past relatives, but upon returning she noted the music playing in the operating room: "Hotel California." Throughout all of this she was beyond unconscious, and legally pronounced dead; yet her accuracy on these facts, along with several others, has astounded those studying the subject around the world.
Most methods of reducing this complex psychological phenomenon to simpler laws and processes seem to rely on a divide-and-conquer approach. However, no explanation seems able to account for the magnitude or fullness of the experience shared by so many. The scientific and medical explanations fall short almost as much as the psychological ones. Even with so many advances in medicine and science, I find it hard to conceive of an understanding we could arrive at that would nullify the fact that someone pronounced dead is able to accurately describe people, objects, and events in the room. Acknowledging this lends credibility to those who have changed their lives as a result of such an out-of-body experience.
I claimed earlier that I was prepared to offer an unbiased perspective on the validity of a near death experience. Later, I realized I had assumed this because I am not a psychologist, a neurologist, a doctor, or a scientist. But I am not unbiased. We cannot be unbiased as human beings if we admit that, for example, we would be deeply disappointed to discover that Pam Reynolds's story was an elaborate hoax. We are devoted to the search for purpose and meaning in our lives, and it is phenomena such as these that give hope. It is this hope that makes every person biased. Questioning whether these experiences are real metaphysical experiences brings on a plethora of other questions. It is easier, once we look at all the NDE debate has to offer, to build a sturdier bridge toward accepting the soul's existence independent of the physical body. We are closer to accepting our divinity as spiritual but not denominational. We are closer to all the answers of the universe if we have discovered a "window," an experience we can trust as true but not explain. For nearly 7 million people, there has been a period in their lives when they were pronounced dead and then proceeded to enter either a "mass hallucination" or a plane of bliss that awaits us all. If we look at the situation with a new kind of skepticism, we come to the conclusion that when questioning what exists after death, the most qualified person to answer may be someone who has been pronounced dead. The burden of proof rests in the hands of science, not the supernatural.

References

1) Brushes With Death: Scientists Validate Near-Death Experiences, ABC News look at near death experiences.

2) An Introductory Analysis of the NDE (Near Death Experience), analysis of near death experiences.

3) Scientific Theories of the NDE, scientific theories of near death experiences.

4) Near-Death Experience: Reductionist Arguments "Explaining" Near-Death Experiences (NDEs), flaws in the reductionist argument about near death experiences.

5) Pam Reynolds's Story, the story of Pam Reynolds.

6) 17 Near Death Experiences: Accounts from Beyond the Light, personal accounts of near death experiences.

