

Biology 103 Web Paper Forum



Selective Advantages of the Mutant CFTR Gene
Name: Erin Myers
Date: 2002-09-26 13:41:55
Link to this Comment: 2910

Selective Advantages of the Mutant CFTR Gene

Cystic Fibrosis is the most common lethal genetic disease among Caucasian people.  In the United States (4), Canada (6) and the United Kingdom (9), approximately 1 in every 2,500 children is born with cystic fibrosis, and 1 in 25 Caucasians is a carrier of the mutant gene that causes Cystic Fibrosis (4).  A disease this lethal, which until recently killed its victims long before they could reproduce, should have died out long ago through natural selection.  But the survival of the mutant CFTR (cystic fibrosis transmembrane regulator) gene for over 52,000 years implies that the mutation carries a selective advantage (2).
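The two frequencies above are consistent with simple Hardy-Weinberg arithmetic.  As an illustrative sketch (not part of the original paper, and assuming random mating, so the carrier frequency is roughly twice the mutant-allele frequency):

```python
# Hardy-Weinberg sketch: a carrier frequency of 1 in 25 implies
# a mutant-allele frequency q of about 1/50, and an affected-birth
# frequency q^2 of about 1 in 2,500 -- matching the cited figures.
carrier_freq = 1 / 25            # heterozygote (Nn) frequency, ~2q
q = carrier_freq / 2             # approximate mutant-allele frequency
affected = q ** 2                # homozygote (nn) frequency
print(round(1 / affected))       # -> 2500
```

The close agreement between the carrier and incidence figures is what makes the question sharp: an allele this common is hard to explain by new mutation alone.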

 

Cystic Fibrosis is an autosomal recessive disease.  For a person to have Cystic Fibrosis, both of his or her parents must carry the mutant gene (ignoring the minute chance of spontaneous mutation).  The affected child has inherited two mutant genes, one from each parent, and has no normal CFTR gene.  When two carriers have a baby there is a 25% chance their baby will not carry a mutant CFTR gene (homozygous dominant), a 25% chance their baby will have cystic fibrosis, carrying two mutant CFTR genes (homozygous recessive), and a 50% chance their baby will be a carrier of one mutant CFTR gene (heterozygous), as illustrated in the diagram below.

 

                          MOTHER
                   normal (N)          mutant (n)

FATHER    N        non-carrier NN      carrier Nn

          n        carrier Nn          Cystic Fibrosis nn
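The 25/50/25 split in the diagram can also be checked by enumerating the cross directly.  A minimal sketch (not from the original paper; the genotype labels follow the diagram above):

```python
from itertools import product
from collections import Counter

# Enumerate every egg/sperm combination for two Nn carriers and
# count the resulting genotypes.
mother = ["N", "n"]
father = ["N", "n"]
offspring = Counter("".join(sorted(pair)) for pair in product(father, mother))

# NN = non-carrier, Nn = carrier, nn = affected
for genotype, count in sorted(offspring.items()):
    print(genotype, f"{count}/4")   # NN 1/4, Nn 2/4, nn 1/4
```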
Cystic Fibrosis affects the respiratory and digestive systems.  The CFTR protein normally forms channels in cell membranes, and through these channels flow chloride ions.  In the lungs this flow washes away bacteria, mucus and other debris.  In the intestines it washes away pathogens and brings digestive enzymes into contact with food.  In the sweat glands these channels recycle salt out of the glands and back into the skin before it is lost to the outside world (2).  In a person with Cystic Fibrosis these channels do not function, and the body cannot perform these tasks.  Thick mucus builds up in the lungs, a prime breeding ground for bacteria.  In the small intestine the enzymes that break down fat cannot reach the food, and digestive problems arise.  On a hot day a person with Cystic Fibrosis is at risk of dehydration (10).

 

For years scientists have been studying the possible benefits of the mutant CFTR gene.  In 1967 A.G. Knudsen, L. Wayne, and W.Y. Hallett published an article in the American Journal of Human Genetics (5).  They collected data on the numbers of live offspring of the grandparents of CF children.  They found that the mean number of offspring for grandparents of CF children (4.34) was higher than that for grandparents of a control group (3.43), with a standard error of only 0.30, and concluded that being a CF heterozygote was associated with successful childbirth and thus selectively beneficial.

 

In more recent years scientists have identified an advantage mutant CFTR carriers may have in surviving cholera.  The lethal strain of cholera, Vibrio cholerae, produces a toxin that binds to the cells of the small intestine, opening all of the transmembrane chloride channels and pumping out considerable amounts of chloride ions and water, about five gallons a day (1).  If the salt and water are not quickly replaced, the infected person dies of dehydration.  Sherif Gabriel, a cell physiologist at UNC Chapel Hill, experimented with cholera infection in mice that carried the CF mutation.  Not surprisingly, the intestines of mice with cystic fibrosis secreted no fluid when infected with cholera; they lacked chloride channels.  The surprising discovery was that mice carrying one mutant CFTR gene secreted only half as much fluid as the non-carrier mice.  Gabriel concluded that in humans infected with cholera who carried a mutant CFTR gene, half as much fluid secretion may have been enough to flush the toxin from the intestines without succumbing to diarrhea, dehydration and death (2).  This selective advantage during the many European outbreaks of cholera may explain the high frequency of the gene mutation in Caucasian people of European descent.  This argument, however, has recently been challenged on the basis of time.

 

Not enough time has passed since cholera reached Europe in the 1800's for the frequency of mutant CFTR carriers to have become as high as 1 in 25.  Gerald Pier of Harvard Medical School agrees that heterozygote advantage caused the high frequency of mutant CFTR genes in the Caucasian population, but argues that the advantage is resistance to typhoid fever.  His team found that Salmonella typhi, the causal agent of typhoid fever, invades gastrointestinal cells by attaching to the normal CFTR protein as the first step in eventual infection of the bloodstream.  Blockage of these chloride channels also blocks the entry of Salmonella typhi into the cells, and Salmonella typhi cannot attach to the most common mutant form of CFTR.  The widespread incidence of typhoid fever over much of European history would account for the high frequency of the CFTR mutation in Caucasian people of European descent.  On the other hand, Michael Swift of New York Medical Center has proposed that CF heterozygotes are more resistant to asthma (3).  All of these hypotheses explain a heterozygote advantage for Cystic Fibrosis carriers, but they fail to explain why it would not be just as beneficial in other parts of the world.

 

The reason the frequency of CFTR heterozygotes (4) in the Caucasian population, at 1 in 25, is so much higher than in Hispanics (1/46), Blacks (1/60), and Asians (1/150) may be a comparative disadvantage that outweighs the advantages in the indigenous regions of those populations.  Physiologist Paul Quinton of the University of California at Riverside argues that the cystic fibrosis mutation may not have survived outside Europe because in hot climates it entailed an additional disadvantage.  He asserts that the salty sweat associated with Cystic Fibrosis is more of a disadvantage in hot climates than protection against diarrhea is an advantage.  Experiments have shown that CF carriers have saltier sweat than people with two normal alleles.  The human body is built to conserve salt, which until recently was a precious commodity.  Humans in hot climates who spent much of their time pursuing prey could not afford to lose as much salt as a carrier would, so carriers in that environment would be more susceptible to dehydration (2).

 

It seems that one way or another, dehydration is the variable behind the frequency of cystic fibrosis.  In the cooler climate of Northern Europe the mutant CFTR gene protected people from many fatal diseases that cause diarrhea and dehydration (cholera, typhoid, E. coli), but in the warm climates of the Americas, Southern Europe, Africa and Asia this advantage was outweighed by the threat of heat-related dehydration.  It is interesting to see the advantages of diversity even in something as small as a gene.

 

 

 

Internet Sources

1) How Cholera Became a Killer, the one deadly strain of Vibrio cholerae

2) Hidden Benefits, the 52,000-year survival of the mutant gene that causes CF

3) Cystic Fibrosis and Typhoid Fever, rejection of the cholera hypothesis

4) US Population Frequency, statistics of the frequency of CF affected and carriers

5) Selective Advantage of CF Heterozygotes, 1960's study of live births among CF carriers

6) Canadian Cystic Fibrosis Foundation, tons of basic facts with search option

7) WebMD - Cystic Fibrosis, symptoms, cause, treatment, references

8) Cystic Fibrosis Research Directions, more sophisticated fact sheet

9) United Kingdom Cystic Fibrosis

10) Scientific American CF article, genetic defects underlying the disease


The Atkins' Diet: Friend or Foe?
Name: Kathryn Ba
Date: 2002-09-26 22:09:44
Link to this Comment: 2918


Biology 103
2002 First Paper
On Serendip

In a society that is continually obsessed with being thin and dieting, the following quote might seem like an intriguing promise:
FORGET THE FIGHT AGAINST FAT! BREAK THE SUGAR-STARCH HABIT TODAY AND ENJOY STEAK, EGGS, CHEESE, EVEN WINE AS YOU GET HEALTHY AND LOSE WEIGHT WITH SUGAR BUSTERS! (1).
Sounds good, doesn't it? All the dieter must do is eliminate carbohydrates and he or she will lose weight, right? If this sounds too good to be true, you are not alone in your hesitation. An increasing number of nutritionists and doctors warn that diets that eliminate carbohydrates in favor of protein and fat are not effective and may be dangerous to one's health. As Sheila Kelly, a clinical dietician, says, "It's a seductive concept. Watch the pounds melt away while you eat all of the high-fat foods you want. Even better, don't bother watching your caloric intake or worrying about regaining your weight. All you have to do is avoid 'poison' carbohydrates" (2). The Atkins' Diet, one of the best known of the low-carbohydrate diet programs, promotes the idea that carbohydrates are an overweight person's barrier to losing weight (3). This essay will examine what exactly the Atkins' Diet calls for and the mounting body of evidence against low-carbohydrate diets.

In 1972 Dr. Robert Atkins published Dr. Atkins' Diet Revolution, and in 1992 he published Dr. Atkins' New Diet Revolution, an updated version of his first book. Atkins' books promote a controlled-carbohydrate diet and provide the dieter with a four-step program for losing weight (3). The first step is a 2-week "induction" period, during which one attempts to reduce his or her carbohydrate intake to less than 20 grams a day. During the remaining three steps the dieter incrementally raises his or her carbohydrate intake, but never surpasses his or her "critical carbohydrate level." Noncarbohydrate foods are permitted whenever the dieter is hungry, and Atkins also recommends large amounts of nutritional supplements (4). By following these four steps, the dieter will induce ketosis, a process Atkins describes as equivalent to fat burning: when a person's body does not receive enough carbohydrates to burn for energy, it turns to fat for its energy. He says, "There is nothing harmful, abnormal or dangerous about ketosis" and that it is a natural process within the body. The dieter will only have to wait about two days for ketosis to begin, which, according to Atkins, explains why a dieter following the Atkins' Diet sees results so quickly. For whom is this diet safe? Atkins suggests that overweight people over the age of 12 can benefit from his diet (3).

Why does Atkins claim that carbohydrates are responsible for weight gain and play a role in the failure of many diets? A basic understanding of what carbohydrates are is necessary to answer this question. Carbohydrates are nutrient-rich starches and sugars that affect blood sugar, or glucose. Muscle and liver glycogen stores are fueled by carbohydrates, as is the brain (5). For this reason, carbohydrate intake in children is particularly important because it affects learning ability (6). Carbohydrates are the body's primary source of fuel, and, according to Atkins, if a person attempting to lose fat is simultaneously eating carbohydrates, he or she will use those carbohydrates as energy and the excess fat will remain. He says that a person must eliminate carbohydrates in order to induce the fat-burning process of ketosis, as outlined above (3).

The importance of carbohydrates in the diet is reflected in the government's "Food Pyramid." The foods at the base of the pyramid are considered the staples of a healthy diet: refined carbohydrates such as bread, rice, and pasta. At the top of the pyramid, to be avoided or limited, are fats and oils (7). The American Heart Institute and the National Institutes of Health recommend that a balanced diet include 250 to 300 grams of carbohydrates a day, roughly 12 to 15 times the induction-phase amount recommended by Atkins (2). Ross Feldman, an exercise physiologist, says "there is no evidence that eating a diet rich in carbohydrates is associated with obesity" (5). Why does such a large discrepancy exist? Many sources do agree on one thing: the Atkins' Diet may temporarily help with weight loss, but it may also pose significant health risks.

The most immediate health risk of the Atkins' Diet is dehydration. After carbohydrates are significantly reduced, ketosis begins and the dieter initially loses liver glycogen. This store of carbohydrates is depleted because the body does not have enough glucose to maintain blood sugar, so it turns to liver glycogen. Glycogen is stored along with a large amount of water, and when the body converts glycogen to glucose, that water is lost from the body. This, rather than Atkins' claim that the initial weight loss is fat, explains much of the initial weight loss on the Atkins' Diet (1). The large water loss poses the risk of dehydration, but it is not the most potentially severe consequence of the diet. The high fat content may put the dieter at risk for coronary heart disease, hyperlipidemia (high blood fat), and hypercholesterolemia (high blood cholesterol). The high protein content may put extra strain on the kidneys, which can lead to electrolyte imbalance and decrease the kidneys' ability to absorb calcium, which in turn could lead to the early stages of osteoporosis (5). A University of Kentucky study based on a computer analysis of a week's worth of sample Atkins' menus reported that a dieter is at risk of cancer, among other serious risks (4).

One might wonder why the Atkins' Diet has been successful even though studies have exposed these serious risks. Perhaps the most obvious reason is that the rapid weight loss provides the dieter with rapid reinforcement for his or her weight-loss effort. The dieter might assume that the weight loss is fat reduction, as Atkins would have us believe, while ignoring possible health risks. But how long will the dieter enjoy the weight loss? Actually, no one knows. There have been no long-term studies on the effectiveness of the Atkins' Diet, even by Atkins. Atkins makes only vague claims for his diet's long-term worth, arguing that the permitted low-carbohydrate food is so delicious that dieters would have little difficulty following his diet for an extended period of time. The longest period of successful weight-loss maintenance that Atkins cites is six to twelve months, and one study estimated the weight regain from the Atkins' Diet to be 96%. At any rate, many sources agree that a long-term study of the effectiveness of the Atkins' Diet is clearly needed (1), (3), (4), (7).

Without results from a long-term study one cannot safely assume that low-carbohydrate diets, such as Atkins', are effective. The dieter is ultimately responsible for his or her own decision to ignore health risks in favor of shedding extra weight. As Keith Anderson, spokesperson for the American Dietetic Association, says, "We need to know much more before people start making claims...Shouldn't diet doctors prove safety first, rather than write books and then say 'OK, prove harm?'" (7). Instead of opting for an extreme dieting method, such as Atkins', one might benefit more from using common sense. There is no magical weight-loss program; a routine of exercise and a healthy, realistic, balanced diet is a dieter's best bet.

References

1) Cornell Cooperative Extension: Food and Nutrition (http://www.cce.cornell.edu/food/expfiles/topics/levitsky/levitskyoverview.html), article entitled Low-Carbohydrate Diets: Heresy or Hype

2) HealthAtoZ.com, article entitled Low-Carb Diets Unhealthy Trend

3) Atkins Nutritionals: Home, the Atkins homepage

4) Quackwatch Home Page, a critical article about low-carbohydrate diets

5) DiscoverFitness.com, an article about fad diets

6) CNN.com, an article entitled 'Extreme eating' may equal extreme problems

7) ABCNEWS.com, an article entitled The Low Fat Legend


The Rise of the Machines:

The Controversy o
Name: Laura Bang
Date: 2002-09-28 13:55:05
Link to this Comment: 2964


Biology 103
2002 First Paper
On Serendip

     Robby, Gort, Rosie, T-800, C-3PO - what do these names have in common? They are some of science fiction's most memorable androids - artificial intelligence robots resembling humans - from Hollywood's imaginings of our experiments with creating artificial intelligence. (6) There are several branches of artificial intelligence (abbreviated 'AI'), but the focus of this paper is the branch of AI that is trying to imitate human life. If successful, these artificial humans would have a major impact on our way of life and on how we view ourselves.

     There are many different definitions of AI. Artificial intelligence, according to John McCarthy of Stanford University's Computer Science Department,

"is the science and engineering of making intelligent
machines, especially intelligent computer programs.
... Intelligence is the computational part of ability to
achieve goals in the world. Varying kinds and degrees
of intelligence occur in people, many animals, and some machines." (1)

Scientists who work in the field of AI are primarily working to make intelligent machines, not androids or other machines that attempt to fully imitate human intelligence and behavior. They are first seeking to create intelligent computer programs that can interact with their users. (3)

     Yet Hollywood and the science fiction genre in general most frequently portray the branch of AI dealing with the creation of artificial humans (6), and the primary definition of artificial intelligence in Merriam-Webster's Dictionary is "the capability of a machine to imitate intelligent human behavior." (4) One can conclude that this definition indicates the creation of androids and other artificial humans because it mentions behavior as well as intelligence, and behavior is a humanistic characteristic, not just a quality of an intelligent machine.

     We as real humans are fascinated by the idea of an AI machine that can very closely imitate humans (at least this is the case in the U.S.). All five "Star Wars" movies are in the top fifteen of the 250 top-grossing movies in the U.S. (7) These movies all feature "droids" who have decidedly human characteristics, the most memorable of which is C-3PO, who also happens to look somewhat like a human coated in metal. Eleven of the top fifty science fiction movies (by popular vote, not the top-grossing) have humanistic robots or androids as key characters (7), and two of the American Film Institute's top 100 movies of all time are science fiction movies featuring humanistic robots, specifically C-3PO from the original "Star Wars" and HAL from "2001: A Space Odyssey." (8)

     In most science fiction movies and books featuring AI robots, the focus is either on their lack of emotions or on their heightened bad emotions (such as jealousy, hatred, etc.). These AI machines are almost always seen as less than human, however closely they are able to imitate humans, as portrayed most recently in 2001's "AI: Artificial Intelligence." (6) The real humans have trouble understanding the artificial humans they have created. In this sense, the controversy of AI somewhat resembles the controversy of cloning.

     Clones are a small step in creating AI humans because the clones were not conceived naturally - they would never exist if we did not cause their conception, thus artificially creating them. If a human were successfully cloned, how would the clone feel, knowing that he/she was not created naturally, knowing that he/she is a DNA replica of someone else? And, perhaps more importantly, how would the naturally conceived humans treat the artificially conceived humans?

     How would you treat someone if you found out that he/she was an AI human - how would you treat that person if he/she looked exactly like a human and your first impression was of a human, but then you found out that he/she was entirely built by humans?

     The controversy over the creation of AI humans has been compared to the revolutionary stir caused by Charles Darwin's publications of The Origin of Species and The Descent of Man. (2) There are some who are excited by the prospect of AI humans; there are some who are scared that machines will enslave the real humans, like in the movie, "The Matrix" (7); and there are some who are still not sure what to believe. Some claim that "... because computers lack bodies and life experiences comparable to humans', intelligent systems will probably be inherently different from humans." (3) Others call AI research "incoherent ... impossible ... obscene, anti-human, and immoral." (1)

     The repercussions of creating AI humans are manifold, but one of the most frightening is how our view of ourselves would change. If we are able to create true AI humans, then are we as real humans any different than machines? Are our minds more complex than computers or can humans really be imitated by AI? "... [A]ny brain, machine or other thing that has a mind must be composed of smaller things that cannot think at all. ... Are minds machines?" (3)

     There is so much controversy about AI research, and this "fairy story is hardly past its 'once upon a time.'" (5) Current technology has barely begun to take its first tentative steps toward creating AI, and if AI humans become a reality at any point in our future, there will be still more questions to be answered. If we can create artificial life, then what are we? How do we know we are not someone else's AI "project"? If we manage to create the "perfect" AI human, then do we believe that they have souls and therefore are able to continue in an afterlife? And - much more disturbing to religion - what about God? If we believe that God created all living things, and then we create artificial life - which is still life - then do we become gods? It is these last troubling thoughts that frighten people the most about AI and also put the pressure on AI researchers - if they succeed, they will essentially overthrow God.

"With relief, with humiliation, with terror, he understood that he too was a mere appearance, dreamt by another."
~ from "The Circular Ruins" by Jorge Luis Borges (9)

References

1) What is Artificial Intelligence?
    John McCarthy, Computer Science Department, Stanford University
    Last updated: 20 July 2002

2) Artificial Intelligence and the Human Mind
    Joseph M. Mellichamp, University of Alabama
    last updated: 4 May 2002

3) AI Topics
    AI Topics for students, teachers, journalists, and others interested in AI
    Provided by the American Association for Artificial Intelligence (AAAI), 2002

4) Merriam-Webster Online Dictionary

5) AI Magazine, Volume 13, Number 4 (Winter 1992)
    "Fairytales" - Allen Newell

6) Official Movie Site for Warner Brothers' "AI: Artificial Intelligence"

7) Internet Movie DataBase

8) The American Film Institute's (AFI) "Top 100 Movies of All Time"

9) Borges, Jorge Luis. Labyrinths. New Directions Publishing Corp., New York: 1964.

10) Transcript of a chat I had with an AI robot online.


Williams Syndrome
Name: Roseanne M
Date: 2002-09-28 14:05:09
Link to this Comment: 2965


Biology 103
2002 First Paper
On Serendip


When I was 14 years old, my baby boy cousin was born. I was thrilled to have another cousin since I only had 2, both much older than I. However, as the years passed, I noticed that my cousin looked like neither my aunt nor my uncle; he had puffy eyes and thin lips that resembled nothing of his parents. Soon he was 3 years old and still unable to speak, aside from mumbling words or beats to songs, and this continued for the next few years. Compared to other children his age, he was lighter in weight and very active - active to the extent of being violent and hurting others around him. It was obvious that smacking other children was his way of showing affection in order to make friends; he didn't realize what he was actually doing, since he smiled and laughed while the other child cried. However, after noticing that he could not make friends this way, he would get rather irritated and run crying to his mom. My cousin grew more and more aggressive and impatient, and above all, because he still could not speak, my aunt and uncle could not send him to a 'regular' nursery school. When he turned 5, I asked my parents if he would ever learn to speak and what the consequences were. 'He has Williams Syndrome,' my parents answered, 'it is very rare with no cure.'

It has been 7 years since my cousin became a part of my life, and only recently did I learn what exactly was 'wrong' with him. Because this is such a rare disorder that many people have never heard of, I thought it would be a great opportunity to research its symptoms further and create awareness among those who have never heard of Williams Syndrome before. My cousin is now 7 years old, with features and characteristics much like those I have researched below.

Williams Syndrome is caused by a deletion on one of the two copies of chromosome #7 that removes the gene for elastin (a protein which provides strength and elasticity to vessel walls) (3). Named after cardiologist Dr. J.C.P. Williams of New Zealand, the syndrome was first recognized in 1961 (2), when Dr. Williams noticed a series of patients with similar distinctive physical and intellectual characteristics. It was soon discovered that Williams Syndrome is a very rare genetic disorder, occurring in about 1/25,000 births - the Williams Syndrome Foundation only hears of 75 cases a year (1) (4). The disorder is present at birth, but its facial features become more apparent with age. These features include a small upturned nose, a long philtrum (upper lip length), a wide mouth, full lips, a small chin, and puffiness around the eyes. Blue- and green-eyed children with this syndrome can have a prominent "starburst" or white lacy pattern on their irises (1). A person with the disorder has a 50% chance of passing it on to his or her children. There is no cure for Williams Syndrome.

Those with Williams Syndrome have some degree of intellectual handicap. Children with Williams Syndrome experience developmental delays in milestones such as walking, talking and toilet training. After my cousin started to walk (at the age of 3), he walked on his toes instead of from heel to toe and 'tiptoed' wherever he went, which he still continues to do. Distractibility is often a problem, but it improves as they grow older. They also demonstrate particular intellectual strengths and weaknesses: their strengths can be speech, long-term memory, and social skills, while their weaknesses can be fine motor skills and spatial relations. People with Williams Syndrome have extremely social personalities (2). I can recall a time when my family was at a restaurant and my cousin suddenly jumped from the table to say hello and wave at the other children there. He does this to people of all ages, colors, and sexes. His friendly gestures put a smile on everyone's face. People with Williams Syndrome have unique and expressive language skills and are extremely polite. They are unafraid of strangers and show a greater interest in contact with adults than with children their own age (2).

People with Williams Syndrome can also have significant and progressive medical problems. The majority have some type of heart or blood vessel problem. Typically, there is narrowing of the aorta (producing supravalvular aortic stenosis, SVAS) or narrowing of the pulmonary arteries (3). There is an increased risk of developing blood vessel narrowing or high blood pressure over time.

Children with Williams Syndrome may have elevated blood calcium levels. Children with hypercalcemia can be extremely irritable and therefore may need dietary or medical treatment (2). In most cases the problem resolves naturally during childhood; however, the abnormality in calcium or Vitamin D metabolism may continue for life. Along with these abnormalities, many children have feeding problems because of low muscle tone, a severe gag reflex, and poor sucking and swallowing. Because of this, most children have a lower birth weight than their brothers or sisters, and their weight gain is slow (2). My cousin has a younger brother, now 4 years old, who looks more 'plump' than he does; I am assuming they will be the same height in a year or two, since my cousin with Williams Syndrome is small for his age and not as tall as the average 7-8 year old. People with Williams Syndrome are, as a result, smaller than average when fully mature.

Although my cousin may have 'weaknesses' and 'differences' compared to the average person, he shines in his own unique qualities, which should be considered before categorizing him as 'the one with a disorder.' From this research I can conclude that, despite the possibility of medical problems, most people with Williams Syndrome are healthy and lead active, full lives.


References

1) The Williams Syndrome Foundation, the umbrella organization for Williams Syndrome foundations, groups, and sites

2) The Lili Claire Foundation, an organization started by one family that dealt with Williams Syndrome

3) Medical Site on Williams Syndrome, a detailed medical site on Williams Syndrome

4) The Williams Syndrome Foundation UK, the UK Williams Syndrome Foundation


Euthanasia: Should humans be given the right to pl
Name: Mahjabeen
Date: 2002-09-28 16:28:26
Link to this Comment: 2973


Biology 103
2002 First Paper
On Serendip


Should humans be allowed to play the role of God? Legalizing euthanasia would do just that! The power to play with people's lives should not be handed out under a legal and/or medical disguise. Thus euthanasia should not be legalized.

The term 'euthanasia' comes from the Greek for 'easy death'. It is one of the most hotly debated public policy issues today. Also called 'mercy killing', euthanasia is the act of purposely causing or helping along someone's death instead of allowing nature to take its course. Basically, euthanasia means killing in the name of compassion. In practice, however, it promotes abuse, gives doctors the right to murder, and contradicts many religious beliefs.

Whether one agrees or not, past experience as well as the present continuously points out that euthanasia promotes abuse. Dr. J. Forest Witten warned that euthanasia would give a small group of doctors "the power of life and death over individuals who have committed no crime except that of becoming ill or being born, and might lead toward state tyranny and totalitarianism." (1)

An example of Dr. Witten's statement was seen in Pennsylvania in 1947, when forty-seven-year-old Ellen Haug admitted to having killed her ailing seventy-year-old mother with an overdose of sleeping pills. Her excuse was that she couldn't endure her mother's crying and misery, that her mother had suffered too long, and that she herself was on the verge of collapse: "if something had happened to her, what would have become of her mother?" (2) Her reasoning was not only vain, it was very selfish. Ellen was not putting her mother out of her misery; she was ridding herself of a responsibility, merely taking advantage of the chance to call her cold-blooded murder euthanasia. Likewise, a recent Dutch government investigation of euthanasia came up with some disturbing findings. In 1990, 1,030 Dutch patients were killed without their consent; 22,500 deaths were caused by withdrawal of support; 63% (14,175 patients) were denied medical treatment without their consent; and twelve percent (1,701 patients) were mentally competent but were not consulted. These findings were widely publicized before the November 1991 referendum in Washington State and contributed to the defeat of the proposition to legalize lethal injections and assisted suicide. (3) Euthanasia is at the moment illegal in most parts of the world; in the Netherlands it is practiced widely even though it remains illegal. The Dutch case is an ideal example of how euthanasia has promoted abuse in the past, and as the old proverb goes, we should "learn from past mistakes to avoid future ones".

Euthanasia gives physicians, who are only human, the right to murder. Doctors are people whom we trust to save and cure us; we regard them as people trained to preserve our lives. Euthanasia, however, gives doctors the opportunity to play God, and many seize it. A perfect example of such an opportunist is Dr. Jack Kevorkian, better known as "Dr. Death," who took advantage of his patients' sorrows and tragedies and killed them. Kevorkian helped more than 100 people commit suicide, and not all of his patients were terminally ill. In the late 1980s he built a "suicide machine" that allowed a person, by pressing a button, to dispense a lethal dose of medication to himself or herself. Kevorkian was eventually sentenced to ten to twenty-five years in prison for second-degree murder after providing a lethal injection to a seriously ill patient. (4) Dr. Jack Kevorkian, however, is not the only example of a doctor who tried to "play God".

One can also learn a great deal from the mass murder that took place in Germany during World War II: over 100,000 people were killed in the Nazis' euthanasia program. During the war, doctors were responsible for selecting the patients who were to be euthanized, carrying out the injections at the killing centers, and generating the paperwork that provided a medically credible cause of death for the surviving family members. Organizations with benign names such as the General Ambulance Service, Charitable Sick Transports, and the Charitable Foundation for Institutional Care transported patients to the six killing centers, where euthanasia was accomplished by lethal injection or, in children's cases, slow starvation. (5) Throughout the past and the present, euthanasia has given doctors an excuse to get away with their crimes; it has given mere humans the power to play God.

The physician's role is to make a diagnosis and sound judgments about medical treatment, not to judge whether a patient's life is worth living. Physicians have an obligation to provide sufficient care, not to withhold food and water until the patient dies. Medical advances in recent years have made it possible to keep terminally ill people alive far beyond any previous length of time, even without hope of recovery or improvement. The American Medical Association (AMA) is well known for its pro-abortion campaigns and funding; ironically, the AMA also funds many hospices and other palliative care centers and takes a firm stand on the value of life. The AMA has initiated the Institute for Ethics, designed to educate physicians on medical alternatives to euthanasia during the dying process. (6)

Besides promoting abuse and giving doctors the right to murder, euthanasia also contradicts religious beliefs. It contradicts more than one religion and is considered gravely sinful. The Roman Catholic Church, for instance, has its own position on euthanasia. The Vatican's 1980 Declaration on Euthanasia said in part, "No one can make an attempt on the life of an innocent person without opposing God's love for that person, without violating a fundamental right, and therefore without committing a crime of the utmost sin." It also says that "intentionally causing one's own death, or suicide is therefore equally wrong as murder, such an action on the part of a person is to be considered as a rejection of God's sovereignty and loving plan."(7)

Similarly, Rabbi Immanuel Jakobovits warns that a patient must not shrink from spiritual distress by refusing ritually forbidden services or foods if they are necessary for healing; how much less, then, may he refuse treatment simply to escape physical suffering. Because self-destruction leaves no possibility of repentance, Judaism considers suicide a sin worse than murder. Therefore euthanasia, voluntary or involuntary, is forbidden. (8)

Islam, too, finds euthanasia immoral and against God's teachings; the very concept of a life not worth living does not exist in Islam. There is no justification for taking a life to escape suffering, while patience and endurance are highly regarded and rewarded values. The Holy Quran says, "Those who patiently persevere will truly receive a reward without measure" (Quran 39:10) and "And bear in patience whatever (ill) may befall you: this, behold, is something to set one's heart upon" (Quran 31:17). The Holy Prophet Mohammad (PBUH) taught, "When the believer is afflicted with pain, even that of a prick of a thorn or more, God forgives his sins, and his wrong doings are discarded as a tree sheds off its leaves." When means of preventing or alleviating pain fall short, this spiritual dimension can be called upon to support the patient, who believes that accepting and withstanding unavoidable pain will be to his or her credit in the hereafter, the real and enduring life. (9) Euthanasia is thus contradictory to most religious beliefs and certainly unacceptable to those who believe in God and the sanctity of life.

Euthanasia should not be legalized. It is by no means a solution to human suffering. Though euthanasia is a controversial subject, it is evident that it disrupts the normal pattern of life and leads toward a more violent and abusive society. Life is a gift, not a choice, and practices such as euthanasia violate this vital principle of human society.


References


(1) Humphry, Derek and Wickett, Ann. The Right to Die: Understanding Euthanasia.
(2) Humphry, Derek and Wickett, Ann. The Right to Die: Understanding Euthanasia.
(3) Anti-Euthanasia Homepage.
(4) Cavan, Seamus. Euthanasia: The Debate Over the Right to Die.
(5) Humphry, Derek and Wickett, Ann. The Right to Die: Understanding Euthanasia.
(6) American Medical Association Homepage.
(7) Humphry, Derek and Wickett, Ann. The Right to Die: Understanding Euthanasia.
(8) Humphry, Derek and Wickett, Ann. The Right to Die: Understanding Euthanasia.
(9) Euthanasia and Islam.


Turner's Syndrome-A Woman's Disease
Name: Melissa Br
Date: 2002-09-29 15:00:13
Link to this Comment: 2993



Biology 103
2002 First Paper
On Serendip

Imagine that you are 13 years old. All your friends are growing: they are getting taller; they are starting to menstruate; they seem to know exactly what to say at the right moment. You, on the other hand, are conspicuously shorter than your peers; you don't have your period, and you seem to blurt out whatever comes to your mind. You would probably feel awkward and begin to develop low self-esteem. This could be the life of a teenage girl with Turner's Syndrome.

Turner's Syndrome is a chromosomal problem that affects one in every 2,000 females (1), so in the tri-college community there may be at least one woman with Turner's Syndrome (TS). Although you may not know someone with TS, it can safely be assumed that you have unknowingly encountered someone with the condition, given its frequency. Turner's Syndrome is named after Dr. Henry Turner, who described some of its features, such as short stature and increased skin folds in the neck (1). TS is sometimes also called Ullrich-Turner Syndrome after the German pediatrician who, in 1930, also described its physical features (1).

Why does TS affect only women? TS arises from an abnormality in the sex chromosome pair. The human body has 46 chromosomes, grouped into 22 pairs of autosomes (all chromosomes that are not sex chromosomes) and one pair of sex chromosomes, which determines whether a girl has TS. Men have an XY sex chromosome pair, where the X chromosome comes from the mother and the Y chromosome from the father. Women have an XX pair, with one X chromosome coming from each parent. A female baby with TS, however, has only one X chromosome, or is missing part of one X chromosome (1). She may receive only one X chromosome because either the egg or the sperm lost a sex chromosome during meiosis, the cell division that produces sex cells. Alternatively, part of one X chromosome may be missing, leaving a deficiency in the amount of genetic material (4).

TS is diagnosed by looking at a picture of the chromosomes, known as a karyotype. This technique was not developed until 1959 (1), so it was not available to Dr. Turner and Dr. Ullrich in the 1930s; these doctors defined the condition by the physical features a TS sufferer may have. Some of these are lymphoedema (puffiness) of the hands and feet, a broad chest with widely spaced nipples, droopy eyelids, a low hairline and low-set ears. There are also clinical ailments associated with TS, such as hearing problems, myopia (short-sightedness), high blood pressure and osteoporosis. People with TS may also have behavioral problems and learning difficulties (1), (3).

In spite of the physical, social and academic problems that a woman with TS may have, she can still be successful in life. Women with TS have become lawyers, secretaries and mothers. It may be more challenging for a woman with TS to accomplish her goals, but they are not impossible. TS is a "cradle to grave" condition, meaning it is lifelong and must be treated throughout the sufferer's life span (1). Once a girl or woman has been diagnosed, she should come under the care of an endocrinologist, a doctor who specializes in hormones.

There are various medical methods that can be used to make a girl's life as normal as possible. Girls can reach an average stature by undergoing growth hormone treatment before growth is completed; oxandrolone, an anabolic steroid, can also be used to promote growth. Oestrogen is given when the girl is about 12 or 13 to produce physical changes such as breast development and to ensure proper mineralization of the bones. Progesterone should also be given at the appropriate time to start menstruation (1), (3).

Sufferers of TS may also have problems such as heart murmurs or narrowing of the aorta, which may require surgery. Women with TS are more prone to middle ear infections; if these recur frequently they may lead to deafness, so a consultation with an ear, nose and throat specialist can be helpful. Some of the health concerns of women with TS are shared by all women: high blood pressure, diabetes and thyroid gland disorders afflict women generally, but they occur at slightly higher rates in women with TS than in non-sufferers. Osteoporosis may also start earlier in TS sufferers because they lack oestrogen, so HRT (Hormone Replacement Therapy) may be considered to delay its onset (1), (3).

Women with TS are further challenged socially because they can be disruptive; they blurt out whatever comes to mind and have difficulty learning social skills. A recent study indicates that how disruptive a woman with TS is may depend on whether her X chromosome comes from her mother or her father: a woman whose X chromosome came from her mother has more trouble learning good social skills than one whose X chromosome came from her father. The study suggests, in effect, that the X chromosome from the mother inclines the girl to misbehave while the X chromosome from the father tells her to control herself (2).

A girl's disruptive behavior may make her feel uncomfortable in social situations, and her discomfort increases if she has difficulty speaking clearly; visits to a speech therapist, however, can improve her ability to speak well. Such behavior can be particularly detrimental in school. Furthermore, people with TS usually have learning disabilities, so they may find school less appealing. Parents can present teachers with a leaflet entitled "TS and Education, An Information Leaflet for Teachers," which helps teachers better instruct the child in class and make learning a less burdensome activity (1).

School is where children and teenagers spend most of their time. For girls with TS, school becomes less welcoming during the pubescent years, when social, physical and academic skills are increasingly important, and negative experiences can bring about low self-esteem. Young women with TS should join a support group where they can find allies and express their feelings; alternatively, a more reticent girl can keep a journal in which to privately record her concerns about life with TS. Parents who notice that their daughter is being adversely affected by her inability to "fit in" with her peers should seek professional help (3).

There are many challenges faced by women with TS. Some require extensive medical assistance, while others require only small alterations to daily life. TS is not an ailment that is intermittent or curable: a woman with TS lives with the syndrome every day for the rest of her life. It is important to remember that TS is not transmitted from person to person; it is a syndrome born of chance, the random possibility that a female embryo may not have two complete X chromosomes. Because TS does not affect men, it can be overlooked despite the frequency with which women are born with it, for we live in a patriarchal world. We, as women, should be allies in highlighting the diseases that only women have.

References

1) Turner Syndrome Support Homepage, gives information about Turner's Syndrome to those interested in TS.
2) Bizarre Facts in Biology, unusual biological information from recent studies.
3) TeensHealth, provides information about health problems faced by teenagers.
4) Endocrinology and Turner's Syndrome, gives information about how endocrinology is helping those affected by Turner's Syndrome.


Instinctive Behavior
Name: Amanda Mac
Date: 2002-09-29 15:50:53
Link to this Comment: 2994



Biology 103
2002 First Paper
On Serendip

Perhaps it can be said that the distinguishing factor between humans and animals is that animals act out of instinct and humans out of will. What are instinctive behaviors, and do humans ever act out of instinct rather than their own will? This paper will examine innate activity and consider whether this is an appropriate distinction between animals and humans.
Ethologists, those who study animal behavior, believe that every species has routine movements that appear to be automatic and that relate to its structural systems (1). Konrad Lorenz, one of the leading scholars in this field, terms these patterns "fixed action patterns" (2). Further defining instinctive behavior, ethologists identified particular characteristics, including inherited structural systems and adaptive functions (1).
Inherited structural systems are highly correlated with innate activity; many behaviors of animals are sufficiently unvarying to serve as characteristics of a species, much like its bodily structures. For example, the web-spinning movements of spiders are a direct use of the spider's bodily construction; likewise, the burrowing habits of marine worms employ operations of structure (3). Movements typical of instinctive behavior include eating, care of the body surface, escape from predators, social behavior, and sexual interaction. Most of these innate activities involve the particular use of a physical structure that is specific to each species.
Instinctive behavior involves more than simple responses to an external stimulus; it involves sequences of behavior that run a predictable course, lasting seconds, minutes, hours or even days. As an example, consider a particular species of digger wasp, which finds and captures only honeybees. With no previous experience, a female wasp will excavate an intricate burrow, find a bee, paralyze it with a careful and precise sting to the neck, navigate back to her hidden burrow, and, when the larder has been supplied with the correct number of bees, lay an egg on one of them and seal the chamber. The female wasp's whole behavior is designed so that she can function in a single specialized way. Ethologists believe that this entire behavioral sequence has been programmed into the wasp by its genes at birth (3), demonstrating the close link between heredity and instinctive behavior.
Given that instinctive behavior is supposed to be hereditarily based, and therefore shaped by the forces of natural selection, it follows that most outcomes of instinctive activity contribute to the preservation of the individual or the continuity of the species; instinctive activity tends to be adaptive, meaning that it fits a living organism to its surroundings. There are two different types of adaptation: one involves the accommodation of an individual organism to a sudden change in environment; the other occurs during the course of evolution and is hence called evolutionary adaptation (1). The development of monotremes and marsupials illustrates evolutionary adaptation. When Australia became a separate continent some 60 million years ago, only monotremes and marsupials lived there, with no competition from the placental mammals that were emerging on other continents. Although only two living monotremes are found in Australia today, the marsupials have filled most of the niches open to terrestrial mammals on that continent (3). Thus these animals developed changes in their genetic structures over time, creating different innate behaviors.
Overall, one of the main distinctive features of instinctive activity is the ability to react to an external stimulus correctly the first time (and every time thereafter) the animal encounters it. This feature distinguishes instinctive behavior from what ethologists call learned behavior: actions an animal acquires through conditioning. Will, which can be defined as the power of choosing one's own actions (4), may be related to learned behavior; in order to choose, one must have a sense of what the outcome will be, which makes the choice learned rather than instinctive.
The physiological adaptations that made humans more flexible than other animals allowed for the development of a wide range of abilities and an unparalleled adaptability in behavior. The brain's great size, complexity, and slow maturation, with neural connections added through at least the first twelve years of life, mean that learned behavior largely modifies stereotyped, instinctive responses. Behaviors rooted in heredity and adaptation are thus reshaped in each individual into learned actions. Scientists believe that each new infant, with relatively few innate traits yet a vast number of potential behaviors, must be taught to achieve its biological potential as a human (3). Many human actions are therefore learned behaviors acquired by a brain that is genetically structured to absorb learned information.
While animals act mostly out of instinctive behavior and humans, with their particularly designed brains, act mostly out of learned behavior (or, as I have related it, will), this is not a sufficient characteristic to distinguish humans from all other animals. Ethologists do believe that some human features are instinctive, such as raising the eyebrows as the eyes widen in social interactions, but this field remains unsettled. There are many arguments claiming that all behaviors within the animal kingdom are learned, and others holding that most are instinctive. The difference between learned and instinctive behavior is therefore not one that can classify animals apart from humans.


References

(1) Encyclopedia Britannica Homepage, an online reference guide.

(2) Nobel Prize Homepage, an autobiography of Konrad Lorenz.

(3) Microsoft Encarta 2000, "Animal Behavior."

(4) Flexner, Stuart Berg, ed. The Random House Dictionary of the English Language, 2nd unabridged ed., s.v. "will." Random House: New York, 1987.


PMDD: Fact or Fiction
Name: Margaret H
Date: 2002-09-29 16:17:37
Link to this Comment: 2995



Biology 103
2002 First Paper
On Serendip


"PMS, PMDD, or whatever label you put on it, is, has been, and probably always will be one big excuse for being grumpy and nasty," posts Marianne E (1). A faceless Internet user posting her thoughts on a web forum, Marianne shares an opinion with many other Americans. Many people, mostly men, feel that female sexual disorders exist purely as a defense for a bad mood, and a handful of women and a few members of the medical community might agree with Marianne. However, a significant amount of research and medical opinion contradicts her assertion. As many women can attest, PMDD, or Premenstrual Dysphoric Disorder, can be a fact of life.

It is estimated that 70-90% of women will experience some form of premenstrual discomfort at some point during their fertile years. Of those women, 30-40% can be diagnosed as having Premenstrual Syndrome. Narrowing the field even further, 3-7% of those women have Premenstrual Dysphoric Disorder (2).
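As a rough back-of-envelope check, the percentages above can be chained together, since each figure applies to the group named just before it. The little helper below is purely illustrative (the source gives only the three ranges, nothing more):

```python
# Chain the nested prevalence ranges quoted above.
# Assumes each percentage applies to the immediately preceding group.

def chain(fractions):
    """Multiply a sequence of fractions into one overall fraction."""
    result = 1.0
    for f in fractions:
        result *= f
    return result

# lower bounds: 70% discomfort, 30% of those PMS, 3% of those PMDD
low = chain([0.70, 0.30, 0.03])
# upper bounds: 90%, 40%, 7%
high = chain([0.90, 0.40, 0.07])

print(f"PMDD among all women: {low:.2%} to {high:.2%}")
# roughly 0.63% to 2.52% of all women of reproductive age
```

On these figures, then, only about 0.6-2.5% of all women would meet the PMDD criteria, which helps explain why firsthand accounts of the disorder are so rarely heard.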

In general terms, PMDD can be considered a severe form of Premenstrual Syndrome, or PMS. Because the two disorders share many of the same symptoms, it can be difficult to distinguish between them. A simple answer exists in terms of severity: a woman with PMDD experiences the same ailments as a woman with PMS, only the woman with PMDD suffers to a far greater degree. The medical community has attempted to provide clinical descriptions to help specify these disorders. A PMDD website maintained by the drug company Lilly describes PMDD as a combination of psychological and physical effects occurring from one to two weeks before a woman begins her period (3). Furthermore, all of the symptoms associated with the onset of a woman's period can be separated into three categories: PMD, or Premenstrual Discomfort; PMS, or Premenstrual Syndrome; and PMDD, or Premenstrual Dysphoric Disorder. The most common symptoms associated with Premenstrual Discomfort are physical changes: bloating, weight gain, acne, dizziness, headaches, breast tenderness, cramping, backaches, food cravings, and fatigue. The symptoms associated with Premenstrual Syndrome tend to be psychological changes: sudden mood swings, unexplained crying, irritability, forgetfulness, decreased concentration, and emotional over-responsiveness. Premenstrual Dysphoric Disorder consists of symptoms more commonly associated with chronic depression: sad, anxious, or empty moods; feelings of pessimism or hopelessness; emotions such as guilt or worthlessness; insomnia; oversleeping; change in appetite, resulting in weight gain or loss; suicidal thoughts or attempts; uncontrollable rage or anger; lack of self-control; denial; anxiety; and frequent tearfulness (4).

PMDD is often confused not only with PMS but also with depression. As previously mentioned, what separates PMDD from PMS is that PMDD symptoms are severe enough to inhibit a woman's day-to-day living: PMDD affects her work environment, personal relationships and family life. What separates PMDD from depression is the sudden disappearance of most symptoms shortly after a woman's period begins. To further complicate matters, if PMDD is left untreated for several years, the symptoms may override the menstrual cycle, occurring during ovulation or at any other time in the cycle (5).

Because PMDD shares symptoms with many other disorders, debate exists over how to classify it. The fourth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV) lists PMDD in its index, calling it a depressive disorder (6). However, lack of information and understanding of exactly how PMDD works prevents it from being classified in an official mental illness category. Basic research links the onset of PMDD to neurological and hormonal differences in some women's bodies. A study completed by the National Institute of Mental Health linked PMS with abnormal levels of estrogen and progesterone (7). In the article introducing the study as it was published in the New England Journal of Medicine, Dr. Joseph Mortola wrote, "premenstrual syndrome is probably the result of complex interaction between ovarian steroids and central neurotransmitters" (7). A Psychiatric News bulletin describes how PMDD specifically works: "In a press release on the advisory committee's recommendation, Lilly said that although the etiology of PMDD is not clearly established, it 'could be caused by an abnormal biochemical response to normal hormonal changes.' Routine changes in estrogen and progesterone associated with menses may, in vulnerable women, induce a serotonin deficiency that could trigger the symptoms of PMDD." (8)

Some women's bodies cannot effectively handle the hormonal shifts that occur from week to week in a menstrual cycle. Lilly suggests that these women lack the level of serotonin, a neurotransmitter, needed to make smooth hormonal and emotional transitions. Antidepressants with strong effects on serotonin levels have had the most successful results; the medical community has dubbed these drugs SSRIs, or Selective Serotonin Reuptake Inhibitors (9). The FDA has approved only two SSRIs for the treatment of PMDD: Sarafem and Prozac. Both contain fluoxetine, which is thought to correct the serotonin imbalance in women who experience PMDD.

Three options exist for the treatment of PMDD (9). Doctors may take a medicinal approach, administering antidepressants, antianxiety drugs or hormones. Health care providers may also focus on the psychobehavioral aspects of the disorder, including stress management, psychotherapy, and relaxation. The third option is nutritional modification, including dietary restrictions, extra vitamins, regular exercise, and herbal remedies. A woman is encouraged to speak with her gynecologist to find the most appropriate method of treating her PMDD.

Many factors contribute to PMDD's status as a controversial topic. Little is known about the disorder: the American Psychiatric Association has not formally accepted PMDD as a mental illness, and it is listed merely as a disorder. Some doctors have found homeopathic remedies most effective, which undercuts the case for fluoxetine drugs. Furthermore, since such a small percentage of women suffer from PMDD, it is entirely possible never to hear a personal account; yet after hearing just one woman's story, it becomes much more difficult to doubt the legitimacy of her experience. With continued research, the medical field may be able to bridge the divide between those who see PMDD as fact and those who see it as fiction.

References

1) It Sure Feels Real; forum response to the article "Women Behaving Badly?" by Neil Osterweil.

2) USA Today Health Section, "PMS and PMDD Cause Serious Suffering," by A.J.S. Rayl.

3) PMDD informational site, maintained by the drug company Lilly.

4) Essay: "PMS and PMDD - an Expose," by Anthea.

5) PMDD informational site, maintained by the drug company Lilly.

6) ABCNews.com, "The PMDD Debate: A Real Condition, or Just PMS by Another Name?"

7) Medicine and Biology article, "Estrogen, Progesterone Implicated in Provoking PMS," by Kenneth J. Bender, Pharm.D., M.A.

8) Psychiatric News, "FDA Panel Recommends Fluoxetine for PMDD."

9) PMDD Facts for Health informational website.


The Biology Behind "Rolling:" Trends, Effects and
Name: Emily Sene
Date: 2002-09-29 16:43:31
Link to this Comment: 2996



Biology 103
2002 First Paper
On Serendip


MDMA (3,4-methylenedioxymethamphetamine), or ecstasy, belongs to a category of substances called "entactogens," which literally means "touching within." "It is a Schedule I synthetic, psychoactive drug possessing stimulant and hallucinogenic properties." (6) It is composed of chemical variations of the stimulant amphetamine or methamphetamine and a hallucinogen, usually mescaline.

A slide show of the chemical process that occurs when MDMA is introduced to the body can be found on the website www.dancesafe.org (2). Basically, the group of brain cells affected by ecstasy is the serotonin neurons. Each of these cells has multiple axon terminals, which release serotonin to the rest of the brain. The exchange of serotonin from cell to cell occurs in the gap between the axon terminal of the serotonin neuron and the dendrites of the next neuron; this region is called the synapse. Serotonin is critical to many brain functions, including the regulation of mood, heart rate, sleep, appetite, and pain, among others. As a result, it is extremely important that the neurons release the proper amount at the right time.

After the serotonin is released into the synapse, it comes in contact with receptors on the dendrite of another cell. When a molecule attaches to one of these receptors, it sends a chemical signal to the cell body. Based on information from all the receptors put together, the cell body decides whether or not to fire an electrical impulse down its own axon. If a certain amount of receptor binding occurs, the axon will fire, causing the release of neurotransmitters into the synapses of other cells. This is how brain cells communicate and regulate the amount of neurotransmitters present at any given time. Research has shown that the amount of serotonin receptor binding influences your mood. When more receptors are active, you are happier.

Along with binding to the receptors on the dendrite, serotonin molecules also bind to "reuptake transporters" on the axon's membrane. The transporters are responsible for reducing the amount of serotonin in the synapse when the cell body decides there is enough receptor binding. One way to picture the system is as a revolving door: serotonin enters on one side, and the transporters spin around and deposit it on the other. Molecules can only move from the synapse back into the axon.

When MDMA is present in the brain, enormous amounts of serotonin are released into the synapse. This increases serotonin receptor binding, which changes the electrical impulses sent throughout the brain. MDMA also causes serotonin that has been removed from the synapse by the reuptake transporters to be re-released from the axon. In a sense, the revolving doors are frozen in the open position and the synapse becomes flooded with serotonin, which flows freely into the receptors and is recycled over and over again. This alteration in normal brain function produces the effects associated with ecstasy: euphoria, an enhanced sense of pleasure and self-confidence, peacefulness, and empathy. It is fairly common for a pill of ecstasy to be laced with other drugs, which can alter the experience; if speed is present, for example, teeth-clenching, depressed appetite, and insomnia occur in addition to the effects of MDMA. Ecstasy begins to work on the brain after 20 to 40 minutes, with peak effects after the first hour.

After a few more hours, the MDMA begins to be broken down by the body and the reuptake transporters resume normal functioning. They remove much of the serotonin from the synapse within approximately three hours, although enough remains to maintain the full effects; most of the serotonin is gone by the end of the fourth hour. An enzyme called monoamine oxidase (MAO), also present in the brain, aids in the breakdown of serotonin.

The state in which serotonin levels begin to return to normal after the brain has been under the influence of MDMA is called "coming down." There are fewer activated receptors because so much serotonin has been released over the past few hours that most of the brain's supply has been used up; at this point, there may be even less serotonin circulating than before the MDMA was introduced. Some people take more MDMA at this point to make coming down easier, but eventually the drug has no effect at all because there is virtually no serotonin left in the brain. Serotonin levels may remain depleted for up to two weeks while the brain rebuilds its supply.

One negative side effect of ecstasy stems from this recovery period. Persistently lowered serotonin levels have been linked to depression. If MDMA is present in the brain on a regular basis, serotonin is never fully replenished before it is released all at once again. It normally takes the brain a long time to produce new serotonin because synthesis involves a complicated series of metabolic reactions. Under normal conditions the process never needs to be accelerated, because the brain would never release such large quantities of serotonin without the influence of MDMA. As a result, when levels are depleted so rapidly, the brain needs time to recover, and this is when people experience depression.(2)
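The imbalance described here — massive drug-driven release versus slow resynthesis — can be illustrated with a toy numerical sketch. Nothing below comes from the sources; the pool size and all rates are invented purely to show the shape of the depletion-and-recovery curve.

```python
# Toy model: a serotonin "pool" (arbitrary units, capped at 100) that is
# drained rapidly while MDMA is active and rebuilt slowly afterwards.
# All rates are illustrative assumptions, not measured values.

def simulate(hours, mdma_hours, pool=100.0,
             synthesis_per_hour=0.5, release_per_hour=25.0):
    """Return the pool level at the end of each hour."""
    levels = []
    for h in range(hours):
        if h < mdma_hours:
            pool -= min(pool, release_per_hour)   # massive drug-driven release
        pool = min(100.0, pool + synthesis_per_hour)  # slow rebuilding
        levels.append(pool)
    return levels

levels = simulate(hours=14 * 24, mdma_hours=4)
print(f"after the 4-hour high: {levels[3]:.1f}")        # nearly empty
print(f"one week later:        {levels[7 * 24 - 1]:.1f}")
print(f"two weeks later:       {levels[14 * 24 - 1]:.1f}")
```

With these made-up rates the pool is almost exhausted after four hours and takes on the order of two weeks to refill, matching the recovery window the paper describes.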

The long-term effects of ecstasy use are still relatively unknown. One current theory is that ecstasy is a neurotoxin, meaning it causes permanent brain damage and psychiatric disorders later in life. Much of the current press on this issue is exaggerated, but studies have shown that MDMA damages the serotonin axons of lab animals. So far, studies of frequent human users (75 uses or more) have shown reduced brain serotonin levels and reduced serotonin uptake. No signs of cognitive or psychological problems were noted. Evidence regarding the memory loss associated with ecstasy use is inconclusive. It is still unknown whether this is due to neurotoxicity or to temporary changes in brain chemistry that correct themselves with time.(2)

Ecstasy is not a new drug. It was first synthesized in 1914 by a German pharmaceutical company that believed it had uses as an appetite suppressant. The drug first appeared in America sometime around 1970 and remained legal until 1985. Although most ecstasy use today is recreational, the drug was first utilized by a small group of psychotherapists as a tool to treat post-traumatic stress disorder. However, its effectiveness was outweighed by its unknown and unpredictable side effects. Shortly after its use in psychotherapy was discontinued, a new market for MDMA emerged: ecstasy entered the illicit drug culture around 1980. It was not widely used until it was picked up in the club scene and began appearing at raves.(6)

The popularity of ecstasy has continued to increase in recent years. A Harvard University study of 14,000 students at 119 U.S. four-year colleges revealed that the prevalence of ecstasy use increased 69% between 1997 and 1999. A smaller sample of ten colleges showed that this trend remained consistent in 2000.(1) The study concluded that "ecstasy use is a high risk behavior among college students which has increased rapidly in the past decade."

References


1) "Increased MDMA Use Among College Students: Results of a National Survey"

2) "DanceSafe: Promoting health and safety within the rave and nightclub community"

3) "Ecstasy.org aims to gather and make accessible objective, authoritative and up to date information about the drug ecstasy (principally MDMA). The site is non-profit making and is maintained by volunteers."

4) "NIDA: National Institute on Drug Abuse"

5) "Drugs.com: the internet's drug information resource"

6) "Narconon: Northern California Drug Information Web"


Talented Emily
Name: Jennifer R
Date: 2002-09-29 18:09:07
Link to this Comment: 2997



Biology 103
2002 First Paper
On Serendip

"The house is mte witout you." That is a sentence from a letter my twelve-year-old sister wrote me last week. Emily has what doctors refer to as "auditory dyslexia," which in simple terms means that her brain doesn't properly process what she hears. Emily was officially diagnosed four years ago, after five years in the Special Education (SPED) program in the Boston Public Schools and a long battle my parents fought with the city. It took those five years for doctors to figure out that Emmy was not going to start reading like everyone else. With them behind her, Emily, who is currently in the 5th grade at a new school, reads at a 3rd grade level, having overcome a two-and-a-half-year disadvantage in one year.
When I found out my sister had auditory dyslexia, I did not know anything about it. I was baffled by what was wrong with her. I immediately took an interest in finding out more about what exactly it was and how she got it. Just from the dictionary I learned that "dys" means 'difficulty' and "lexia" means 'words.' In more complicated scientific terms, dyslexia is one of several distinct learning disabilities. It is a specific language-based disorder of constitutional origin characterized by difficulties in single-word decoding, usually reflecting insufficient phonological processing abilities. These difficulties in single-word decoding are often unexpected in relation to age and other cognitive and academic abilities; they are not the result of generalized developmental disability or sensory impairment. Dyslexia is manifested by variable difficulty with different forms of language, often including, in addition to problems reading, a conspicuous problem with acquiring proficiency in writing and spelling. (1)

Dyslexia is a complicated disorder and is not always easily detected. At first doctors thought that Emily had attention deficit disorder (ADD) because it was so common among kids her age. With daily medication, ADD is treatable; Emily, however, had something that medicine was not going to fix. There are different types of dyslexia, and telling them apart matters both for the prognosis and for finding the right treatment. Emily's doctors said that kids with the visual form often recognize individual letters but have trouble getting them in the right order. As well as visual dyslexia, many experience an auditory form of the condition, making it hard for them to recognize different sounds, hold information in short-term memory, and process language at speed. (2) This explains why word problems and sentence formation were Emmy's biggest problems.

Many dyslexics also have behavioral issues, mainly due to low self-esteem at an early age. However, a study done in London argues that auditory dyslexics tend to be innocent and therefore vulnerable; they have no behavioral problems other than those caused by the frustration of their disability. (3) This was the problem with the Boston Public School system; it placed all learning-disabled children in the same class without considering each individual's needs. Emily was put into a classroom with thirty-five children, more than half of whom had severe behavioral problems. With nothing being accomplished in the BPS system, my parents started to search for different alternatives.

Like most forms of dyslexia, auditory dyslexia does not have a definite cause, but doctors and scientists study dyslexia in depth every day. According to a study published in the July 15 issue of Biological Psychiatry, dyslexia is caused by a genetic flaw in the part of the brain used for reading. While non-impaired reading is concentrated in the back region of the brain, where letters and sounds are integrated, the researchers found that this area is disrupted in children who are dyslexic: their brain activity during reading is concentrated in the frontal region, which governs articulated speech. (4)

It has also been proposed that dyslexia is genetic, which means doctors might one day be able to diagnose it in its earliest stages, allowing treatment and prevention early in life. In Emily's case, learning disabilities definitely run in the family, although not all as severe as dyslexia: her father and older brother both have attention deficit disorder. Some social scientists argue that learning disabilities are common in dysfunctional families and are the product of bad parenting; numerous studies prove that theory wrong.

Emmy always hated school, ever since day one. It was always a struggle for Emmy to do her homework after school; even reading a few sentences from a reading book made her mad. At the age of seven, she was a wild, free-spirited little girl, but as soon as she entered the classroom she would hide under a protective shell my family refers to as "attitude." Never meaning to be unkind, she did it automatically because she was scared of what her classmates would say when she got something wrong. She never raised her hand in class or volunteered to participate in anything. In the school situation, a dyslexic child may find he or she is experiencing failure but is not able to understand why. This frequently results in low self-esteem and a severe loss of confidence, which can lead to the child being reluctant to go to school. At this stage something has to be done, and this is when a lot of parents seek specialist help and advice. (5)

At home however, she was a natural born actress. It was clear that Emily had other skills that made her who she was. She needed to be in an environment where she could exercise her other talents and explore new options.

As first grade came to an end, my parents had to decide whether or not to keep her back a year. With this decision looming, they got her tested for learning disabilities. Test after test came back negative... "All Emily needs to do is work on reading a little extra every night" is what psychologists told my parents. My mother wouldn't accept what the doctors told her; after two years of special reading programs and daily after-school extra help, she resorted to what most people would not have the time to do. She hired doctors from Children's Hospital in Boston to perform the same tests on Emily. When those tests came back, it was evident that Emily had auditory dyslexia and needed help. At that time, Emily was in the third grade and just barely reading at a first-grader's level. The doctors were clear about what Emily needed: a new school. She was put on a waiting list at the Carroll School in Lincoln, Massachusetts, where the focus is on learning-disabled children.

Doctors were confident that if Emily received the proper teaching methods for her dyslexia, she would catch up to her level in no time at all. Given the proper help, in most cases a dyslexic child can succeed at school at a level roughly equal to his or her classmates. Moreover, dyslexic children often have talents in other areas, which can raise their self-esteem if they receive lots of praise! Artistic skills, good physical coordination, and lateral/creative thinking are often areas in which they may excel. (6)

In the 4th grade, Emily was accepted to the Carroll School after attending their summer program, and she has flourished there ever since. Currently, Emily is a fifth grader at the Carroll School, where she is heavily involved in school activities and participates in class discussions, a practice that was foreign to her until now. At the Carroll, classes are small, with six to eight students. All teaching is direct and multisensory, and technology is integrated into the learning process. (7)

Doctors who work with Emily at the Carroll estimate that she will need only two more years there in order to catch up to the level she should be at. Emily's self-esteem has skyrocketed, and she feels confident in whatever she does. At her new school, she participates in school productions, learns something new about the Internet and computers every day, and is captain of the girls' basketball team. She has brought her classroom skills out into the community as well.

References


1) Dyslexia Home Page

2) Dyslexia Home Page

3) Dyslexia Home Page

4) Dyslexia Home Page

5) Dyslexia Home Page

6) Dyslexia Home Page

7) Dyslexia Home Page


Fad Diets: Seduction and Deceit
Name: Anne Sulli
Date: 2002-09-29 18:55:26
Link to this Comment: 2999



Biology 103
2002 First Paper
On Serendip

Americans have long been plagued with the serious problem of obesity. As the country obsesses over weight loss and the newest diet plans, the population ironically continues to gain body fat. The basic premises of healthy living seem simple: eat a balanced diet while remaining physically active, and burn more calories than those consumed. Americans are even given specific guidelines—outlined in the food pyramid—as to the appropriate quantities of each food. Why, then, is obesity one of the leading health risks confronting Americans? It may be that the seemingly "simple" and healthy road to weight loss is actually an arduous and long-term process. It is therefore enticing to replace sensible diets and exercise regimens with what are known as "fad diets"—diets that promise quick and easy results. These diets have achieved enormous popularity despite the copious research documenting their dangers and ineffectiveness. The following exploration will hopefully elucidate many of the mysteries and myths surrounding "fad diets."

Although they may assert very different "truths" about human biology and resulting dietary needs, most fad diets share several common characteristics. The majority claim to provide revolutionary information and insight but are, in fact, simply replicas of older fad diets (2). Many make the sweeping claim that a specific food or group of foods is the "enemy" and should be banned from one's diet (2). This is a myth—no single food is capable, on its own, of causing weight gain or loss (2). Fad diets usually promise immediate results and offer lists of "good" and "bad" foods (5). They are usually not supported by scientific research or evidence. Rather, the information they provide is derived from a single study, or from an analysis that ignores variety among human beings (5).

The popular diet commonly known as "The Zone" falls into the category of fad diets. This plan was created by Barry Sears, Ph.D., author of The Zone, in 1995. Sears' principal argument is that human beings are genetically programmed to function best on only two food groups: lean proteins and natural carbohydrates (3). He claims that the cultivation of grains is a modern development, and that our genetic makeup has not yet evolved to require such foods. Essentially, carbohydrates cause excessive weight gain and are responsible for America's obesity epidemic (3). Consumption of carbohydrates, according to Sears, stimulates insulin production—a process that converts excess carbohydrates into fat (3). He argues that America's phobia of fat has inspired a diet that is counterproductive; the solution, in his view, is to replace complex carbohydrates with fat (2). Critics of this diet argue that Sears' theory regarding insulin production is an "unproven gimmick" (4). The diet is potentially dangerous because scientific research observes a strong correlation between animal fat—which contains more carcinogens from industrial waste than any other product—and cancer (4). Sears also ignores both the problem of cholesterol and the fact that vegetarians have a smaller chance of developing heart disease and cancer (3).

A second well-known fad diet is called Sugar Busters!. This plan, created by H. Leighton Steward and associates, labels sugar as the enemy because it releases insulin and is then stored as body fat (6). Sugar Busters! demands that both refined and processed sugars be abolished from one's diet (this includes potatoes, white rice, corn, carrots, and beets) (6). The resulting diet is a high-protein, low-carbohydrate plan that poses the same threats as "The Zone." Sugar is not, in fact, naturally toxic, and it is dangerous to eliminate complex carbohydrates, which are a good source of fiber (6). Again, this plan calls for the complete elimination of a certain food, ignoring the fact that the human body needs a multitude of foods to remain healthy (6). Other fad diets include the Protein Power Lifeplan (5) and Dr. Atkins' New Diet Revolution, which also malign carbohydrates (5). Both of these diets promote high-fat foods, which increase one's risk for heart disease, cancer, high cholesterol, and liver and kidney damage (5).

Fad diets are clearly extreme and often irrational plans that lack valid evidence and scientific research. Aside from being unhealthy, they are often ineffective as well. High-fat diets may promote short-term weight loss, but most of the loss is caused by dehydration (4). As the kidneys work to eliminate the excess waste products of fats and proteins, water is lost. High-fat diets are also low in calories, causing the depletion of lean body mass with little fat loss—another reason for immediate weight loss (4). Fad diets argue that the human body responds to carbohydrates in a way that causes weight gain. If Americans are gaining weight, it is due to the quantities they consume—the excessive calories, not the carbohydrates themselves, encourage obesity. If fad diets work, in spite of being extremely unhealthy, it is because one's calorie intake decreases (The Zone's recommended diet calls for fewer than 1,000 calories a day) (1). There is nothing miraculous about the foods which these diets prescribe. Furthermore, these diets are extremely difficult to maintain, since they often ban certain products and require the repeated consumption of others—making long-term weight loss impossible.
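The calorie argument here can be made concrete with back-of-the-envelope arithmetic. Assuming the common rule-of-thumb figure of roughly 3,500 kcal stored per pound of body fat (an approximation, not a figure from the sources above), any diet that drops intake below maintenance produces weight loss, regardless of which foods it bans:

```python
# Rough energy-balance arithmetic: weight loss follows from the calorie
# deficit, not from banning any particular food group.
# 3500 kcal per pound of fat is a common rule-of-thumb approximation.

KCAL_PER_POUND = 3500

def weekly_loss_lb(maintenance_kcal, diet_kcal):
    """Estimated pounds lost per week at a constant daily deficit."""
    daily_deficit = maintenance_kcal - diet_kcal
    return daily_deficit * 7 / KCAL_PER_POUND

# A 2,000 kcal/day maintenance intake vs. a ~1,000 kcal/day fad-diet plan:
print(weekly_loss_lb(2000, 1000))  # → 2.0
```

By this estimate, the sub-1,000-calorie plan loses weight simply because of the deficit — exactly the paper's point that the restricted foods themselves are not "miraculous."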

A proper diet should place long-term health before immediate results. Fad diets do just the opposite; long-term use of these plans may pose serious health risks. They tend to be low in calcium, fiber, and other important vitamins and minerals (2). As previously stated, fad diets are usually high-fat diets. This presents a host of dangers: increased risk for heart disease and atherosclerosis (a hardening of the arteries), and an increase in low-density lipoproteins (LDL), which carry cholesterol to the body's tissues, are among the most serious (2). Furthermore, a drastic reduction in carbohydrates causes the body to believe that it is being starved (7). Continued practice of these extreme diets may cause irreversible damage to the liver and kidneys. The liver converts proteins into the necessary amino acids, and urea and nitrogen are the two by-products of this process (7). Excessive protein in the body thus places great stress on the kidneys and liver, overworking both (7).

The obvious health dangers posed by fad diets, combined with their failure to encourage long-term weight loss, would logically deter people from embracing these "gimmicks." They continue, however, to remain the preferred substitute for healthy diet and exercise plans. What is so appealing about fad diets? Our world is set up in a way that encourages obesity. Modern transportation and technology have rendered physical activity unnecessary (1). In addition, Americans have access to an enormous variety of delicious, and often unhealthy, foods. It clearly requires great effort to maintain a healthy weight. Rather than suffering the long and difficult process required by sensible diet plans, most are content to embrace the "easy fix"—the fad diet (1). It is, after all, human nature to seek the easy route, the short cut. When someone knows one person who lost weight quickly, he or she is likely to ignore the warnings in the quest for fast results.

It is important to note that it is entirely possible for fad diets to prove effective for certain individuals. Each person's body is different, operating and reacting to certain diets in various manners. Although fad diets are, in general, dangerous and ineffective, it is crucial to note that they may work for the particular individual whose body happens to respond positively to such extreme constraints. Similarly, some of these diets show signs of a rational philosophy. Sugar Busters!, for example, advocates caution against sugar products. This argument is indeed valid (it is only when the plan is taken to the extreme that it becomes dangerous). This idea reveals perhaps the most significant gap in fad diet theory—that which involves the great diversity in human genetic makeup. Fad diets operate under the assumption that the body functions and responds to certain foods in a standard and fixed way. Diversity, however, is the most basic principle in human biology. What works for one person may be completely ineffective for another. The fact that fad diets blatantly disregard this most fundamental truth renders them unreliable and ineffective.


References

1) Pros and Cons of Fad Diets

2) Fad Diets: What you may be Missing

3) Key #1 Follow The Zone Diet

4) Debunking the Zone Diet

5) Popular Diets: The Good, The Fad, and The Iffy

6) Is the Sugar Busters Diet For You?

7) Protein Fad Diets: Knowledge Does not Always Alter Behavior


Children and Bipolar Disease
Name: Heather D
Date: 2002-09-29 20:15:52
Link to this Comment: 3001



Biology 103
2002 First Paper
On Serendip


For the past 11 years I have been working with children at my church, and I have found it disturbing that over the past several years the number of students with major behavioral and learning disorders has skyrocketed. We had students with a wide range of problems, from obsessive-compulsive disorder (OCD) to attention-deficit hyperactivity disorder (ADHD), from conduct disorder (CD) to oppositional defiant disorder (ODD). Eventually, one out of every five children we had showed some form of these disorders. Though almost all of these children were undergoing some form of treatment or therapy, a few never seemed to get better. One student in particular seemed to get worse as he received more treatment. At first he was diagnosed with ADHD because he could not concentrate on one particular task. However, when he started receiving treatment for ADHD (including a heavy dose of Ritalin), his behavior became more erratic and, at times, violent. Then he was diagnosed with ODD, but the same problem occurred when he started that treatment. Finally, this spring, he was diagnosed as bipolar, and now that he is receiving the right treatment, he can finally live a somewhat normal life.

It is now estimated that upwards of one million children in the United States suffer from early-onset bipolar disorder, and that more than half are not getting the help they need (1). Though this statistic may be shocking, it is also evidence of a much-needed change in the way we think about the illness. Originally, bipolar disorder was thought to be strictly an adult disease. Children with it were labeled with learning disabilities, and often simply as "bad kids," when in reality they were suffering from a serious and frightening disease. Bipolar disorder is becoming more commonly recognized in children and is only now being researched. As researchers learn more about these children, they are realizing that the disorder can be even more frightening in children than in adults.

"Typically adults with bipolar disorder have episodes of either mania or depression that last a few months and have relatively normal functioning between episodes, but in manic children we have found a more severe, chronic course of illness. Many children will be both manic and depressed at the same time, will often stay ill for years without intervening well periods and will frequently have multiple daily cycles of highs and lows. These findings are counterintuitive to the common notion that children would be less ill than their adult counterparts," states Barbara Geller, MD, head researcher from Washington University School of Medicine in St. Louis (2).

This rapid cycling is what has made it hard for doctors to associate these children with bipolar disorder instead of typical hyperactive disorders.

Another major problem with bipolar disorder in children is that no clear treatment path has been established. While it is known that the medicines used for hyperactive children do not work at all, and can actually make the disorder worse, it is not known how other medications affect the bipolar child. Lithium, traditionally used for most adult patients with the disorder, has been successful in only a small number of bipolar children. Mood stabilizers are much more effective in children, but because there are so many varying types, it takes a long time to find the "right" drug for the child. These stabilizers are only half of the drug cocktail these children need, though. There is also the need for an antidepressant that will not send these kids flying into mania, and for a medication that calms their manic rampages without sending them into a nasty depression (3).

Many people now say that these children simply need psychotherapy and that overmedicating the child is worse than the actual disease. However, studies show that if the child is not medicated, most therapy is wasted and has no long-term value for the child, because the disorder itself leaves them unable to process it. Also, if the child goes completely unmedicated, he or she can develop much more serious symptoms later on, such as delusions, hallucinations, borderline personality, narcissism, or antisocial personality (3). With the threat of failing in school and even suicide, the need for medication is clear.

I guess the question that follows from this research is how we find the right balance for our children between medication and therapy, one that allows them to get the most out of their lives. As of now there is not even a test to properly diagnose bipolar disorder in these children, because the standard adult test often does not apply to them given the rapidness of their cycling. More research must be done to ensure these children a more normal life, because with the genetic nature of bipolar disorder, this disease is only going to spread further and affect more people who will need this help.

References

1) Time Magazine Homepage, an article on children with bipolar disorder

2) "Child Psychiatry Researchers from Washington University School of Medicine in St. Louis Report Bipolar Disorder in Children Appears More Severe than in Most Adults."

3) Child and Adolescent Bipolar Foundation Homepage


Sugar, a Trick or a Treat?
Name: Anastasia
Date: 2002-09-29 21:10:46
Link to this Comment: 3004



Biology 103
2002 First Paper
On Serendip


As parents walk the streets with their children going door-to-door trick or treating, do they realize the severity behind this celebration of collecting refined sugar? As enthusiastic citizens donate king size Snickers to the cause, do they believe they are making a five year old's dreams come true, or are they aware that cavities and weight gain will result from their kindness? As children dump out their night's accomplishments onto the kitchen floor do they realize that consuming all that candy could result in diabetes? Halloween, although fun, could lead to future problems for all participants. Why aren't there police patrolling the streets trying to stop all the madness that occurs on this one night? How could there be a holiday celebrating the decay of humans everywhere? If sugar is really that bad for you, why do children and adults everywhere enjoy it so much? The truth must be out there somewhere.

The sweet truth behind sugar is that it really isn't as bad as the "experts" make it out to be (1). Sugar is a compound of carbon, hydrogen, and oxygen belonging to a class of substances called carbohydrates. Sugars fall into three groups: the monosaccharides, disaccharides, and trisaccharides. The monosaccharides are the simple sugars, which include fructose and glucose (2). Although our bodies require glucose for energy, we do not need to consume simple sugars in order to obtain it. Complex carbohydrates, for example cereals, breads, fruits, and vegetables, provide vitamins, minerals, and fiber in addition to glucose for energy (1). Carbohydrates are the body's main source of energy (3). From this information it is clear that sugar plays an important role in the body's ability to function. Knowing this, it is time to break through some of the popular myths surrounding sugar consumption.

The most common myth surrounding sugar is probably that it causes hyperactivity. Hyperactivity is excessive physical activity of emotional or physiological origin, usually seen in young children; it is one of the components of attention deficit hyperactivity disorder. The cause of ADHD is unknown, although there appears to be a genetic component in some cases. Intake of sugars, preservatives, and artificial flavorings is no longer considered to be a factor. In most cases, sugar and carbohydrates seem to have calming effects on children (1). The excitement of birthday parties or holidays like Halloween, where children receive candy, may itself account for the behavior often blamed on sugar. It has been shown that people with ADHD have less activity in the areas of the brain that control attention. Treatment usually includes behavioral therapy and emotional counseling combined with medications (4).

A second myth needing correction is that sugar causes diabetes. Diabetes is a chronic disorder of glucose (sugar) metabolism caused by inadequate production or use of insulin, a hormone produced in specialized cells in the pancreas that allows the body to use and store glucose. The lack of insulin results in an inability to metabolize glucose, and the capacity to store glycogen (a storage form of glucose) in the liver and to transport glucose across cell membranes is impaired. Diabetes is the result of many factors, including genetics and lifestyle (1). In order to lead a healthy life, diabetics control their sugar intake to maintain a healthy weight and take medication if required to control blood sugar levels. It is true that diabetics are unable to utilize the sugar in their diet, but sugar itself is not the cause of the disease.

Myth number three lingers in many of the conversations that occur between the Weight Watchers walls and among many body-conscious individuals. It seems that many misinformed dieters believe that sugar causes weight gain. Correcting this one idea may be the key to their success. When our body takes in more energy (calories) than it can use, it stores the unused energy as fat, which leads to weight gain. No one individual food alone causes weight gain, since all foods contain calories. Sugars contain about the same number of calories as most other carbohydrates and proteins. Also, it is interesting to note that since more "sugar free" items have taken over the shelves in supermarkets, there has been a rise in obesity numbers in the United States. "Overweight and obesity are among the most pressing new health challenges we face today," says U.S. Department of Health and Human Services secretary Tommy G. Thompson. "Our modern environment has allowed these conditions to increase at alarming rates and become a growing health problem for our nation" (5). The key point to understand is that sugar is part of a healthy diet when eaten in moderation. Rather than imposing restrictions on certain groups of foods, it is more important to enjoy all foods while "emphasizing whole grain products, fruits, vegetables, and lean sources of protein and dairy products" (1).

Many individuals claim that their sugar intake is driven by a sugar addiction. Not possible, say many sugar experts (6). The term may be used loosely to explain away a so-called "sweet tooth," but an addiction is defined as an emotional or physical dependence, or both, characterized by symptoms of withdrawal. That doesn't happen with sugar or any other carbohydrate. Therefore, by this definition, it is impossible to be addicted to food. Foods containing sugar and carbohydrates may instead be viewed as comfort foods; emotional eaters may find enjoyment in them when experiencing sadness or frustration.

It is commonly believed today that foods high in sugar are bad for you. Chocolate, for example, since it is considered candy, is thought of as empty calories with no nutritional value. Recent studies suggest, however, that certain forms of chocolate have health benefits. This guilty pleasure contains some fats that may be good for the body. According to a Hershey study, some milk chocolate products contain conjugated linoleic acid, also known as CLA, a fatty acid believed to fight cancer in animals. A second study, by the Nestle Research Center, found that a chemical in dark chocolate might help lower cholesterol: ten men fed the dark chocolate experienced a 15% drop in their low-density-lipoprotein (LDL) cholesterol levels. Another study, at the University of California, Davis, found evidence of phenolics in chocolate. Phenolics are the same chemicals found in red wine that help lower the risk of heart disease; they reduce the oxidation of LDL, preventing it from creating plaque in the arteries (7). From these studies and results, it is fair to say that chocolate and many other foods of its kind can be beneficial in a person's diet.

To make a long story short, sugar has been a scapegoat for years. Sugar is not a "bad" food but rather essential to human life. It is important to remember, however, that maintaining a healthy lifestyle means eating in moderation. If your energy needs are low, go easy on the amount of sugar you consume, as well as the amount of fat, and try to favor nutrient-dense foods, which provide other nutrients besides sugar or fat. But don't be scared to eat sweets once in a while. Dress up on Halloween and don't be afraid to bring the biggest pillowcase you can find to collect the largest haul of candy possible. Sugar, once considered a trick, has really proven to be a treat.

References

WWW Sources

1)The Sweet Truth About Sugar, challenges the myths

2)Sugar, a great definition

3)Sugar Is Sweet By Any Name

4)Attention Deficit Hyperactivity Disorder

5)Obesity Problem Getting Worse in USA

6)Sugar and Artificial Sweeteners

7)Chocoholics


Multiple Personality Disorder
Name: Diana La F
Date: 2002-09-29 21:40:04
Link to this Comment: 3006


<mytitle>

Biology 103
2002 First Paper
On Serendip

When you were growing up, did you have an imaginary friend? Did Mom and Dad have to set a place for Timmy at the table and serve him invisible food, or did all your aunts and uncles have to pet your imaginary puppy when they came over to the house? That's just pretend, though: kids having fun. So is a child pretending to be someone else, forcing their parents to call them Spike, convinced they have a Harley even though they're only five. But what if this were an adult, someone who should "know better," convinced that they are someone else? If this were to happen, society would label them crazy or delusional. Or, maybe, this adult suffers from Multiple Personality Disorder.

Multiple Personality Disorder (or MPD) is a psychological disorder in which a person possesses more than one developed personality. These personalities have their own ways of thinking, feeling, and acting that may be completely different from those of another personality (1). To be diagnosed with multiple personality disorder, at least two of the personalities must recurrently take control of the person's behavior (2). This results in an abrupt change in the way the person acts; essentially, they become another person in either an extreme or complete way (3).

MPD was first recognized in the late nineteenth century by Pierre Janet, a French physician. The disorder was later brought to greater public awareness by The Three Faces of Eve (1957), a movie based on the true story of a pristine housewife who was diagnosed with MPD when she couldn't explain why she would suddenly become a very sexual person and not remember it. The eighties and nineties brought what was seen as an overdiagnosis of MPD (1).

MPD is known as Dissociative Identity Disorder (DID) in the psychiatric world (1). The reason for this change of label is that the term "multiple personalities" can be misleading (4). A person with MPD/DID is one person whose mind comprises separate, autonomous parts. They are NOT many people sharing one body (5). Although these "personalities" may seem very different, it is important to understand that they are separate parts of the SAME person (4). It is also not correct to say that someone with MPD/DID has "split personalities," as this denotes schizophrenia. A person with schizophrenia does not have connected thoughts and feelings; they are "split" (1). A person with dissociation, however, has memories, actions, identities, etc., that are unconnected: certain thoughts and feelings may be linked to some memories but not to others. Everyone experiences this once in a while. Daydreaming, getting lost in a book or a movie, zoning out: these are all moments of dissociation (4). Just because someone has MPD/DID does not mean they cannot function in everyday life (2). Indeed, they usually have this disorder so that they CAN function.

As many as 20 personalities (perhaps even 37) have been reported in a single person (3). About 1% of the population has some form of MPD/DID; in fact, possibly up to 20% of patients in psychiatric hospitals have MPD/DID but are misdiagnosed. With these statistics, MPD/DID can be put in the same category as anxiety, depression, and schizophrenia as one of the major mental health problems of the present day (4).

Although the causes of MPD/DID are not completely understood, childhood neglect and abuse of some sort appear to be the major causes (4). The abuse usually occurs early in life, before the age of nine, and is commonly repeated and prolonged (2). In response to this abuse, children may detach parts of their mind and create new personalities to separate themselves from their pain (3). After long-term abuse, these new "personalities," this dissociation, may become second nature, and these children may use the technique to separate themselves whenever they feel anxious or threatened. Due to its ability to keep a sane, functioning part of a person's mind intact when all else seems hopeless, MPD/DID can be seen as a very effective escape technique (4). It is a very healthy, sane, and safe way for these people to survive an unhealthy situation (2).

MPD/DID can be treated. The first treatment usually used is psychotherapy, to try to help the person integrate the personalities more (1). After that medications, hypnotherapy, and adjunctive therapies are also used. In fact, if treatment is started and completed, MPD/DID may have the best prognosis of any disorder (6).

Everyone has different facets to their own personalities. Without this fact we would not be the complex beings that we are. A person with MPD/DID, however, may have very distinct facets that work independently of one another, sometimes not even knowing that the others exist. These various facets work together to keep the person whole. MPD/DID is a highly evolved psychological survival technique that is not to be looked down upon. Without it, the people who "suffer" from it may not be able to function in everyday life as well as they do, if at all.

References

1)Infoplease Education Network, an interesting educational network with many resources

2)MPD/DID information site, Site put together by a lady with MPD/DID

3)Medical Index
, interesting site with a great amount of information on many medical conditions

4)MPD/DID resource page, site with a lot of information on MPD/DID

5)The International Society for the Study of Dissociation
, another site with a lot of information on MPD/DID

6)Sidran Institute of Traumatic Stress Education & Advocacy, site with abundant information and resources to traumatic disorders and treatment


Being Made Hole
Name: Christine
Date: 2002-09-29 22:12:25
Link to this Comment: 3007


<mytitle>

Biology 103
2002 First Paper
On Serendip

Are you unable to deal with life's little miseries? Feeling stressed? Lethargic? Depressed? You could take a vacation. Or you could try meditation. Or yoga. Or you could try to achieve a permanent high by drilling a hole into your skull. Trepanation, the drilling of a hole in the skull, is one of the oldest surgical procedures; some trepanned skulls date back to 3000 B.C. The oldest skulls have been found in the Danube Basin, but trepanned skulls have been found in virtually every country, even in America, with the highest concentration in Peru and Bolivia (1). The word trepanation is derived from the Greek, meaning "auger or borer"; more specifically, trepanation means "an opening made by a circular saw of any type" (1).

Trepanation has been performed over the centuries for various reasons, including as a means to liberate demons or spirits from possessed heads. It was also performed for therapeutic reasons, such as for epilepsy, headaches, infections, insanity, and a whole range of maladies. A third reason for trepanning was religious: the rondelles, or disks of bone from the skulls, were collected and used for charms and talismans believed to have power to protect the wearer from illness and accidents. Nowadays the procedure is believed to help the individual expand his or her consciousness and initiate a spiritual awakening that leaves the trepanned individual forever changed. Devotees of trepanation swear that a hole in their head gives them greater energy, improved concentration and mental capacity, elimination of stress-related diseases, and relief from other ailments that "come packaged with adulthood," leaving them feeling like kids again (2). Can drilling a hole into your head really hold such miraculous restorative powers that cure such a host of life's ills? Are solid-skulled humans one hole away from nirvana (5)?

Those who wish to be trepanned would have difficulty finding a surgeon in the United States to perform such a procedure; in fact, none will. Trepanation is performed in America only to relieve acute pressure on the brain, usually caused by a blow to the head (3). Any legitimate medical practitioner refuses to perform or recognize trepanning as a therapeutic practice, although a few international black market neurosurgeons will do so for the right price. Doctors interested in neurosurgery are required to take five to seven years of intense training to learn the techniques that make trepanning safe, and the notion of trepanning for recreational purposes has been called "quackery," "horseshit," "pseudoscience," and "nonsense" (4). Risks of drilling a hole into the skull include meningitis, blood clots, stroke, epilepsy, and the risk of a bone fragment embedding in the brain during the drilling (7).

However, the desire for a permanent high overrides the risk factors, and those who wish to be trepanned bypass the medical community and do the procedure themselves, usually with fellow supporters standing by in case of an emergency. Almost all of the information available on the procedure is based on first-hand accounts, including a video entitled "Heartbeat in the Brain", where devotee Amanda Fielding had her whole self-trepanation carefully recorded. Ms. Fielding wears old clothes and tapes sunglasses to her face so the blood will not impair her vision as she works. She starts by shaving her head and applying a local anesthetic to the spot to be trepanned, the ideal location being where the skull sutures have ossified, as there is less of a chance there is a blood vessel in that area. An incision is made with a scalpel, and then she starts in with the electric drill (6). In order to have the therapeutic effect, the hole needs to have between a one-quarter and one-half inch diameter. As soon as the skull is penetrated, the bleeding is prodigious. The skull piece is removed, the mess is cleaned, and the hole is bandaged. As the wound heals, skin grows over it, leaving behind a small indentation (7).

The miraculous restorative powers attributed to trepanation have their origin in an alleged mechanism called "brainbloodvolume," coined by the founder of modern trepanning, Bart Hughes, a Dutch librarian. Mr. Hughes was nearly Dr. Bart Hughes, but was thrown out of medical school in Amsterdam in the 1960s because he failed part of his medical exams and because of his advocacy of marijuana use. While in Ibiza, Mr. Hughes was taught that standing on his head for extended periods of time would get him intoxicated, and at a later date, after ingesting the drug mescaline, the mechanism of brainbloodvolume became clear to him. "[I realized] that it was the increase of brainblood that gave the expanded consciousness. An improvement of function must have been caused by more blood in the brain which meant there must have been less of something else. Then I realized that it must be the volume of cerebrospinal fluid was decreased" (8). Mr. Hughes believes that gravity and age rob an individual of the creativity and energy once possessed during childhood. Babies have high brainbloodvolume because the soft spot (the fontanel) on the head gives the brain room to pulse. As a baby grows, the soft spot hardens and the brain no longer has room to expand. The hardening of the skull, combined with gravity, saps the blood from the head, making the brainbloodvolume plummet (2). Trepanation supposedly reverses the blood loss by expanding the blood vessels in the brain, allowing them to supply more oxygen and glucose to brain tissue as well as speedily remove toxins (7).

Given the circumstances and conditions in which the mechanism of brainbloodvolume was conceived, could it hold merit? Two researchers at the U.S. Health Service conducted a study on cerebral circulation and concluded that the mechanism is far too complex to understand at the present time. However, they also drew two tentative conclusions: first, that the necessary level of cerebral circulation is maintained by uninterrupted fluctuations of cerebrospinal fluid (CSF), and second, that the limits placed on the speed of CSF volume fluctuations by the physical and neural characteristics of the brain are fundamental to protecting the central nervous system from mechanical injury due to fast and unexpected shocks (9). These two tentative conclusions indicate that "the mechanisms of cerebral circulation are maintained by a complex and delicate balance that, far from deficient, can only operate if left unaltered" (7).

Along with the two researchers, other well-respected medical practitioners have vehemently opposed trepanation. They state that brain flow, not brain volume, is related to brain function, and there is no evidence that drilling a hole into the skull will increase blood flow to the brain. Furthermore, since trepanation only affects the skull, nothing they are doing will affect the brain. That is, trepanners do not touch the dura, the compartment that has cerebrospinal fluid in it, so the changes they are claiming to happen cannot be anatomically possible. Rather, doctors and scientists believe that the experienced benefits of the procedure are most likely due to the placebo effect. While dozens of people around the world are being trepanned, it is safe to say that trepanation will not become a trend in today's society; rather, it will appeal only to the radical portion of the population. As the fields of medicine and psychology learn from their past mistakes, medical procedures of the past are abandoned and believed to be better off forgotten. The primitives do not always know best. Perhaps the holes in the heads might really make trepanned individuals feel good after all - just not for the reasons they believe.


References

1)Trephination, an Ancient Surgery

2)You Need it Like...A Hole in the Head, by Michael Colton

3)Brief History of Trepanation, from the International Trepanation Advocacy Group website

4)Cutting the Cranium, by Willow Lawson

5)The Hole Story, by Jon Bowen

6)The People With Holes in Their Heads, by John Mitchell

7)The Therapeutic Benefits of Trepanation - Try to Have an Open Mind, by Daniel Witt

8)The Hole to Luck, interview with self-trepanner Bart Hughes

9)Hemodynamics of Cerebral Circulation, by Yu Moskalenko and A. Naumenko from the International Trepanation Advocacy Group website


Albinism
Name: Brenda Zea
Date: 2002-09-30 00:17:46
Link to this Comment: 3011


<mytitle>

Biology 103
2002 First Paper
On Serendip

Most people have a very biased and stereotyped view of people with albinism. Many see albinos as persons with white hair, white skin and red eyes. This is a common myth that has perpetuated itself because the truth about albinism is not widely known. One in 17,000 people in the United States has a form of albinism. (1) There are many different types of albinism, depending on the amount of melanin in a person's eyes. While some people have the fabled red or violet colored eyes, most albinos have blue eyes. Even fewer have hazel, brown or gray eyes. These discrepancies between reality and the red-eyed albino myth are the reason that most albinos do not even realize that they have a form of albinism. (1)

The two most common types of albinism are oculocutaneous albinism (also known as type-1 or tyrosinase-related albinism, which affects hair, skin, and eye color) and ocular albinism (which affects mostly the eyes, while the skin and hair may show slight discoloration). (1) Most albinos have serious vision difficulties. Their eyes do not have the correct amount of melanin, and during the fetal and infant stages of life this causes macular hypoplasia (underdevelopment of the fovea in the retina), as well as abnormal nerve connections between the brain and the eyes. (2) Many are considered legally blind, or have such poor eyesight that they must use strong prescription bifocals; a few, however, have good enough visual acuity to drive a car. While limited eyesight can be a problem, many albinos have multiple sight deficiencies. Albinism can also come with nystagmus or strabismus. Nystagmus is a condition in which the eyes tend to jump and jerk in all directions, while strabismus means that the eyes do not focus together as a "binocular team. An eye may cross or turn out." (2) This often results in crossed eyes or 'lazy eye'. (1) Albinos may also experience photosensitivity (sensitivity to light) or astigmatism (a distorted field of view). When the eye does not have enough pigmentation, it cannot keep out excess light, making these individuals incredibly sensitive to bright light. (1)

People with extremely rare forms of albinism, such as Hermansky-Pudlak syndrome, can experience problems with bruising, bleeding, and susceptibility to diseases that affect the bowels and lungs. (1) Of the rarer forms of albinism, Hermansky-Pudlak syndrome is the most frequent. Because persons with this syndrome can develop other physical problems in addition to their eye conditions, their life span is not as long as that of other albinos.

While Hermansky-Pudlak syndrome carries these added risks, 'normal' albinism can create problems of its own. If an albino person spends too much time out in the sun (this occurs mostly in albinos from tropical countries), they can develop skin cancer. While most of these cancers are treatable, they can only be treated if the facilities are available. (3)

Fortunately, albinism is not very common in most cultures because it is either caused by a rare recessive gene (both of your parents must carry the gene in order for you to be albino) or, in an even rarer case, by genetic mutation. The most common type of inheritance is "autosomal recessive inheritance". (1) Only about 1 in 70 people carries a gene for oculocutaneous albinism (OCA), and even when two carriers have children, each parent has only a 50% chance of passing the gene on. This means that for each pregnancy there is only a 25% chance that the child will inherit both albinism genes. (3)
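The 25% figure follows directly from the autosomal recessive inheritance described above, and can be checked with a short enumeration. The sketch below (illustrative only, not from the source) lists the four equally likely allele combinations a child of two carrier parents can inherit, where "a" stands for the recessive albinism allele:

```python
# Illustrative sketch: Mendelian odds for a child of two carrier (Aa) parents.
# "A" is the normal allele, "a" the recessive albinism allele.
from itertools import product

mother = ["A", "a"]  # carrier: one normal allele, one albinism allele
father = ["A", "a"]  # also a carrier

# Each parent passes one allele at random, giving 4 equally likely combinations.
outcomes = list(product(mother, father))

p_affected = sum(1 for child in outcomes if child == ("a", "a")) / len(outcomes)
p_carrier = sum(1 for child in outcomes if set(child) == {"A", "a"}) / len(outcomes)

print(p_affected)  # 0.25 -- child inherits both albinism alleles
print(p_carrier)   # 0.5  -- child is a carrier like the parents
```

Two of the four combinations make the child a carrier, and only one gives both recessive alleles, which matches the one-in-four chance per pregnancy cited in the essay.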

As albinism is such a rare occurrence, people who are albino are often met with hostility and misunderstanding. Often albino children are teased at school and find it hard to fit in (this is especially true when the child is from a normally dark-skinned race – they stand out from their peers). A real eye-opener for the entire country came when in 1998, Rick Guidotti published a photo-journal in the June edition of Life Magazine. He was one of the first people to portray albinos as normal, fashionable people. While this helped the albino community a little, there is still not a wide acceptance of albinism as old stereotypes thrive even in modern culture. (4) (5)

References

1)NOAH – National Organization for Albinism and Hypopigmentation, A national organization about albinism and albinos

2)Lowvision.org, A small site with interesting albinism facts and photos

3)The International Albinism Center at the University of Minnesota, An extensive website about albinism

4)Albinism Website, A small website about albinos in pop-culture

5)Rick Guidotti Homepage, A famous high-fashion photographer changes his image and focuses entirely on albinos and how to represent them to the world


Ocular Histoplasmosis Syndrome: The Science Versu
Name: MaryBeth C
Date: 2002-09-30 01:33:41
Link to this Comment: 3012


<mytitle>

Biology 103
2002 First Paper
On Serendip

Ocular Histoplasmosis Syndrome (OHS) is a growth of abnormal blood vessels under the retina, induced by exposure to a particular fungus, Histoplasma capsulatum. Though the manifestation of this syndrome in the eyes is rare, a significant portion of the population has been exposed to the fungus. As the syndrome develops, the part of the retina responsible for close, sharp vision deteriorates; without treatment, this can eventually become complete blindness aside from peripheral vision.

It is extremely rare for the histo fungus to affect the eyes. Most commonly the fungus manifests itself in the lungs, creating a lung infection that appears like tuberculosis (3). This infection, unlike the ocular infection, is easily treatable with a prescribed anti-fungal medication. Though fungal infections from Histoplasma capsulatum are more unusual in the eyes than in the lungs, OHS is the most common cause of blindness in adults aged 20 to 40 (5). In contrast to the lung infection, when the fungus reaches the eyes the damage is irreversible and often difficult to detect and diagnose.

This progression, however, is not readily detectable in a routine eye check-up and requires a specific test involving close examination and pupil dilation. The examiner can detect damage to the macula, the central part of the retina, by presenting the patient with an "Amsler grid" and judging how the patient sees it and whether the patient's vision has been affected (2). The examiner may also notice tiny histo spots or swelling of the retina. Once the disease has begun to develop, it is only treatable through surgical means, more specifically laser photocoagulation of the retinal cells. This photocoagulation process only prevents future vision loss and does not correct what has already been lost. The surgery is also only effective if the eye's fovea has not been damaged and only if the surgeon is able to eliminate all destructive cells in the retina.

Such was the progression of the disease in my uncle's eyes about ten years ago. His histoplasmosis went undetected and eventually grew into partial blindness. However, my uncle's experience defied the typical progression in some ways. First, the destructive cells were never detected, though he regularly visited an eye doctor. The deterioration continued until it again defied the typical OHS case, and he became completely blind in his right eye; generally, the histo cells only affect the center of the retina, the macula. In addition, the laser photocoagulation surgery did not stop the progression of blindness, but only delayed it. As in common cases of ocular histoplasmosis, he did retain peripheral vision in his left eye. My uncle, however, also defied the odds of OHS sufferers and, though he had some of the most extensive progression of the infection, continued to live his life the way he always had. He became an active and often victorious member of our local Blind Golfers Association and continued to play basketball, watch sports, read as best he could, and compete in his gym's activities.

Doctors later speculated that the histo fungus could have been picked up in any of Bud's travels, through the "Histo Belt" across the central United States, or many years earlier in his travels to China and Japan. Though his travels in Asia were many years earlier, some of the doctors have suggested that the fungal cells could have remained dormant through the years until they surfaced in the early 90s. This incubation period is much different from that of the lung affliction, whose symptoms appear within two weeks. Research and information concerning our understanding of histoplasmosis are constantly changing; when my uncle was first diagnosed, much of the information he was given was speculative and the surgery he received was still experimental.

Researching the histo fungus, histoplasmosis, and ocular histoplasmosis syndrome raised more questions than it answered. Why the difference in symptoms? Why the difference in incubation time? Why the eyes and lungs? Why is the lung infection easily treatable while the eye infection is so difficult? One thing we may conclude, however, is that everyone should be tested for this infection, as it is the most common cause of blindness for young and middle-aged adults and, while incurable, is easily delayed.

References

1)Ocular Histoplasmosis Syndrome, Useful for a general overview

2)Effectiveness of Laser Surgery, Procedure and statistics

3)Frequently Asked Questions, Information on the lung infection

4)Frequently Asked Questions, More general information

5)Histoplasmosis , Some new information


The Science of Shyness: The Biological Causes of
Name: Adrienne W
Date: 2002-09-30 01:41:37
Link to this Comment: 3013


<mytitle>

Biology 103
2002 First Paper
On Serendip

Although many people are unaware of its existence, social anxiety disorder is the third most common psychiatric disorder, after depression and alcoholism, according to the Medical Research Council on Anxiety Disorders (1). The Diagnostic and Statistical Manual of the American Psychiatric Association defines social anxiety disorder, or social phobia, as "A persistent fear of one or more social or performance situations in which the person is exposed to unfamiliar people or to possible scrutiny by others...The avoidance, anxious anticipation, or distress in the feared social or performance situation(s) interferes significantly with the person's normal routine" (2). Although those who suffer from social anxiety disorder (SAD) are often perceived as shy, their condition is much more extreme than shyness. Unlike shyness, it is not simply a personality trait; it is a persistent fear that must have deeper roots than environmental causes.
As an anxiety disorder, SAD is classified alongside panic disorder, obsessive-compulsive disorder, posttraumatic stress disorder, and generalized anxiety disorder (3). The question is: what causes this behavior to occur? Is it simply a result of environment, or are there biological reasons? Although present knowledge of SAD is incomplete, several causes are suspected: "a combination of genetic makeup, early growth and development, and later life experience" (4). It is my hypothesis that, in addition to environmental causes, there are also biological causes of SAD. According to current research, there is compelling evidence that brain chemicals and genetics contribute to the development of SAD.
Jerome Kagan, Ph.D., has researched the genetic causes of SAD at Harvard. In his study of children from infancy to adolescence, he found "10-15% of children to be irritable infants who become shy, fearful and behaviorally inhibited as toddlers, and then remain cautious, quiet, and introverted in their early grade school years. In adolescence, they had a much higher than expected rate of social anxiety disorder." This evidence suggests that some people are born predisposed to SAD, which indicates that biological factors, not simply environmental ones, contribute to its development. Kagan also discovered a common physiological trait in these children: they all had a high resting heart rate, which rose even higher when the child was faced with stress. Again, this physiological trait points to biological causes of SAD. In the same study, Kagan found evidence linking SAD with genetics: the parents of the children with SAD had increased rates of social anxiety disorder as well as other anxiety disorders. Other research also suggests that SAD has genetic causes. According to the American Psychiatric Association, "anxiety disorders run in families. For example, if one identical twin has an anxiety disorder, the second twin is likely to have an anxiety disorder as well, which suggests that genetics-possibly in combination with life experiences-makes some people more susceptible to these illnesses" (3).
Evidence of anxiety is also apparent in the animal kingdom, which suggests that it is not simply the result of nurturing, it is an inherent attribute. In the book Fears, Phobias, and Rituals, Isaac Marks found that birds avoided prey that had markings similar to the "vertebrate eye," eye-like markings on other animals, such as moths. In his experiment, these eye-spots were rubbed off of moths. As a result, they were less likely to be eaten and more likely to escape from a predator. Marks concluded that the birds feel scrutinized by the gaze of another animal and thus avoid the "eyes," much like humans with social anxiety avoid situations in which they feel scrutinized or avoid eye-contact. His research suggests that biological factors influence a form of social anxiety in animals.
In addition to genetic causes, there is also evidence that SAD is caused by chemical disturbances in the brain. Four areas of the brain are probably involved in our anxiety-response system: the brain stem, which controls cardiovascular and respiratory functions; the limbic system, which controls mood and anxiety; the prefrontal cortex, which makes appraisals of risk and danger; and the motor cortex, which controls the muscles. These parts are supplied with three major neurotransmitters: norepinephrine, serotonin, and gamma-aminobutyric acid (GABA), all of which play a role in the regulation of arousal and anxiety. Research shows that "dysregulation of neurotransmitter function in the brain is thought to play a key role in social phobia. Specifically dopamine, serotonin, and GABA dysfunction are hypothesized in most cases of moderate to severe SP." Researchers continue to investigate whether neurocircuits play a role in the disorder; if this hypothesis proves true, it will clarify that there are genetic causes of SAD (1). However, the neurobiological information alone makes clear that there are biological causes of SAD.
Although research continues to be conducted on the causes of social anxiety disorder, it is apparent that there are genetic and neurobiological causes. Of course, psychological modeling, or environmental circumstances may also be a factor in the development of SAD; however, there is compelling evidence that chemicals in the brain also cause the anxiety. Research has also concluded that those who suffer from SAD are likely to have a family member with SAD or another anxiety disorder, which supports the hypothesis that there are genetic causes to SAD, as well.


References

1) www.socialfear.com; Provides information on the neurobiological causes of social anxiety.
2) www.socialanxietyinstitute.org/dsm.html; Provides the American Psychiatric Association's DSM-IV definition of social anxiety disorder.
3) www.psych.org/public_info/anxiety.cfm; Public information from the American Psychiatric Association.
4) http://socialanxiety.factsforhealth.org/whatcauses.html; A website that provides information on research conducted on the causes of social anxiety.


Information About Menopause
Name: Diana DiMu
Date: 2002-09-30 02:01:14
Link to this Comment: 3014



Biology 103
2002 First Paper
On Serendip

While many people may find the topic humorous, or even frightening, the subject of menopause is one I had many questions about. My interest and curiosity in this subject stem from my first-hand experience with it: my mom. After much suspicion that she was going through "the change," my sisters and I recently discovered that she had stopped having her period for the last four years. Much to our surprise, we realized that many of our hunches (such as "Hot Flashes" and "Mood Swings") were correct; they were indeed some of the symptoms associated with the periods before and during menopause. I learned that she was taking progestin, a hormone supplement, as well as certain vitamins, to help against the symptoms associated with menopause. Suddenly her violent mood swings and recent irritability began to make more sense. My mom explained that for the first time in her life she had feelings of "blueness" or depression. Despite the realization that my mother was menopausal, I still did not understand what menopause actually is. What are some of its symptoms? Are they treatable? If so, how? Are there any dangers associated with menopause? If so, how can they be prevented or treated? Through my research I would like to take a closer look at these questions to gain a greater understanding of my mom's situation and help others who might also come across it with their own families and friends.

Many of the symptoms and effects of menopause are not actually a result of menopause but are associated with the period of change leading into menopause. The changes and effects are broken down into three stages: perimenopause, menopause, and post menopause:

Perimenopause:
Perimenopause is the period of gradual changes that lead into menopause. These changes often affect a woman's hormones, body, and feelings, and they can stop and start again over anywhere from a few months to a few years. This period is also known as the "climacteric" period. During this process, the ovaries' production of the hormone estrogen slows down. The hormone levels in a woman's body fluctuate, causing changes often similar to (although much more intense than) the changes associated with adolescence.

Menopause:
Menopause occurs when a woman has her last period. A woman's ovaries stop releasing eggs. This is usually a gradual process; however, it can happen all at once.

Post Menopause:
Post Menopause is simply the time after menopause. Women often have many health concerns that result from menopause (2).

I would like to focus mostly on the period known as perimenopause because of its many symptoms, which often serve as metonymies for menopause on the whole. After looking at many of these symptoms I will take a more focused look at one of menopause's most well known symptoms and how it can be treated. I will also examine some of the other methods of treatment for menopause, as well as some of the dangers associated with menopause and its treatment.

Perimenopause can begin as early as age thirty; however, the average age is fifty-one. Some of the symptoms associated with perimenopause are as follows:
- Irregular menstrual periods
- Achy joints
- Hot flashes
- Temporary and minor decrease in ability to concentrate or recall information
- Changes or loss in sexual desire
- Extreme sweating
- Headaches
- Frequent urination
- Early wakening
- Vaginal dryness
- Mood changes or "swings"
- Insomnia
- Night sweats
- Symptoms/conditions commonly associated with pre-menstrual stress (PMS)

Perimenopause can be any one or a combination of the above symptoms. The symptoms are often very unpredictable and disturbing, especially if a woman does not know they are related to menopause (2). These symptoms usually last between two and three years, though in some cases they can last between ten and twelve years. It is highly important to note that women in perimenopause have reduced fertility but are not yet infertile. There is still a chance of pregnancy during perimenopause, even if a woman's menstruation is highly sporadic (1).

One of the symptoms most commonly associated with perimenopause is "hot flashes." Hot flashes are sudden or mild waves of upper body heat that can last anywhere from thirty seconds up to five minutes. They are caused by rapid changes in hormone levels in the blood (2). The part of the brain that controls body temperature is the hypothalamus. During perimenopause, the hypothalamus can often mistake a woman's body temperature as too warm. This starts a chain reaction to cool her body down. Blood vessels near the surface of the skin begin to dilate and blood rushes to the surface of the skin in an attempt to cool down the body's temperature. This often causes sweating, as well as producing a flushed red look to the woman's face and cheeks (1). Some women experience a rapid heartbeat, tingling in their fingers, or a cold chill after the hot flash. Seventy-five out of one hundred women have hot flashes. Half of them have at least one hot flash a day, while twenty have more than one a day. Most women experience hot flashes for three to five years before they taper off. Although some women may never have a hot flash, or only have them for a few months, others may experience them for years. There is no way to tell when they will stop. Many women suggest keeping a journal to record what triggers a hot flash so that an attempt to prevent the next one can be made (2). Some suggestions by the North American Menopause Society to help combat hot flashes include: trying to wear light layers of clothing, sleeping in a cool room, deep breathing and/or meditation, and regular exercise to fight stress and promote healthy sleep (1). However, prescription hormone treatment is the most common treatment for hot flashes. Replacement of estrogen that is lost during menopause is the most effective treatment against hot flashes. Hormone replacement therapy is also a common treatment for many other symptoms of menopause (1).

Hormone Replacement Therapy (HRT) can come in the form of pills, patches, implants or vaginal creams, to restore estrogen and other hormones that decrease during perimenopause and menopause in a woman's body. While many women find HRT extremely helpful, there are still many side effects to its use. Some women experience pre-menstrual stress (PMS), others experience vaginal bleeding, bloating, nausea, hair loss, headaches, weight gain, itching, increased vaginal mucus, or even corneal changes which may affect a woman's ability to wear contact lenses. Some more serious side effects put women at higher risk for breast cancer and heart disease. Some women use progestin, a hormone therapy without estrogen, which is a better replacement therapy for women at risk of blood clots. Progestin is, however, a less effective means of birth control.
Many women prefer to use non-hormone therapies to reduce the symptoms of perimenopause and menopause. Regular exercise is a strong recommendation to combat stress and help promote healthy sleeping patterns. A diet high in fruits and vegetables and low in saturated fats is also recommended. Many women try eating soy products to help combat hot flashes (3). Soy contains phytoestrogens, plant chemicals that produce effects similar to estrogen. Others suggest reducing caffeine, alcohol, spicy foods, and even hot beverages (2). Herbal remedies and homeopathy are also quite common solutions for women who wish to avoid using hormones to treat menopause. There are many over-the-counter vaginal creams as well. Menopause Online suggests an increase in the amounts of vitamins E and B6. Research on Vitamin E shows that it can help prevent heart attacks, Alzheimer's disease, and cancer. Vitamin B6 is involved in the production of brain hormones (neurotransmitters). It is often low in people with depression or those taking estrogen in the form of birth control or hormone replacement therapy. A lack of B6 and folic acid has been associated with osteoporosis. An increase in B6 has been shown to help fight heart disease and reduce the symptoms of PMS (3).

Breast Cancer and Heart Disease:
The risk of developing breast cancer increases with age. By the time a woman turns 60, one out of twenty-eight women develops breast cancer. Studies have shown that hormone treatment for ten to fifteen years may slightly increase a woman's chances of developing breast cancer.
Before the age of fifty, women are three times less likely to have a stroke or heart attack than men. Ten years after menopause, women are at the same risk as men. Whether this directly correlates to hormone replacement therapy is not clear (2).

Osteoporosis:
Loss of estrogen can lead to osteoporosis, or loss of bone mass. Women may lose between two and five percent of their bone mass per year for up to five years after menopause. Bones can become brittle and more susceptible to breaking. Bone loss begins at age thirty, which is why it is important for women to build bone mass early with weight-bearing exercise like walking, running, or weight lifting. It is also important to take a calcium supplement to help aid in developing bone mass. At least 1,000 mg of calcium are recommended per woman per day, and 1,200 mg are recommended after menopause. Estrogen replacement therapy can also help in developing and retaining bone mass. There are currently newer non-hormonal medications that are effective as well (2).

What then is the best way to treat the symptoms of menopause? I am not sure whether there is enough conclusive evidence to determine how harmful the use of hormone replacement therapy is. It is currently found to be an effective treatment with varying degrees of side effects. Loss of hormones like estrogen can result in loss of bone mass, as well as leaving a woman's body more susceptible to diseases like breast cancer and heart disease. How much of an effect does hormone replacement therapy have on these diseases, and how helpful or harmful is it? This is something I would like to conduct further research on before I give a "better" hypothesis. Before concluding, I'd like to take a closer look at one more aspect of menopause that is often overlooked or misjudged: psychological changes.

Psychological Changes:
Although there is no scientific study to support that menopause contributes to true clinical depression, many women do suffer from "feeling blue" or being discouraged. During perimenopause, a woman's hormonal rhythm changes. These hormonal changes often contribute to mood swings. For many women, the hormone changes of menopause coincide with other stresses during midlife. In addition, many women experience changes in their self-esteem and body image. Many women can react to menopause by feeling overwhelmed, angry, out of control, or even numb. It takes someone in the medical profession to determine whether a woman is clinically depressed or just feeling the effects of menopause. Often women can combat their feelings of sadness with herbal remedies like Saint John's Wort, or with changes in their lifestyle to reduce stress. Oftentimes, irritability is closely linked to disturbances in a woman's sleeping pattern, which can be treated by addressing hot flashes, among other things. Stress-reducing techniques like meditation and deep breathing are effective for some, while regular exercise, a healthy diet, getting enough sleep, and pampering yourself are all positive ways to help combat stress and sadness. Many women recommend talking to friends and family about menopause. Some even take this a step further and form self-help groups where women can speak to each other about their common experiences with menopause. Often, realizing there is another woman out there who understands what you are going through is beneficial to feeling less depressed and overwhelmed by menopause (1).

I think menopause, like depression, is something about which the public has many preconceived notions and which is not necessarily well understood. I think it deserves more research and open acknowledgement as a legitimate and substantial occurrence in a woman's life, one that merits more respect and understanding. It should not be something that needs to be hidden or made the butt of a joke. There is still much research to be done concerning menopause and its treatment. I think once women feel they can openly address menopause, they will feel less stress and anxiety towards it.

WWW Sources
1) North American Menopause Society, Menopause Guidebook: Helping Women Make Informed Healthcare Decisions Through Perimenopause And Beyond.
2) Menopause - Another Change in Life.
3) Menopause Online.
4) National Osteoporosis Foundation.
5) National Center for Homeopathy.
6) National Breast Cancer Foundation.


The Socialization of Human Birth as Protection for
Name: Chelsea Ph
Date: 2002-09-30 02:14:58
Link to this Comment: 3015



Biology 103
2002 First Paper
On Serendip

The Socialization of Human Birth as Protection for Bipedalism

The topic of human birth is quite an interesting one. For example, why do we give birth the way we do? Why is labor so incapacitating to human females, and how has natural selection been a factor? I have investigated the way in which the process of human pregnancy has evolved over time, and found a strong link between the biological and the sociological. As humans evolved from quadrupeds to bipeds, the birthing process evolved from a private process to a social process. The socialization of human birth allowed bipedalism to flourish. If birth had remained private, the disadvantages of bipedalism with regard to the continuation of the species would have eventually necessitated a revision of the trait.

Comparing our birth process with that of our primate relatives gives a very logical argument for why human birth became a social process. "The baby monkey emerges facing toward the front of the mother's body so she can reach down with her hands and guide it from the birth canal...the human infant must undergo a series of rotations to pass through the birth canal without hindrance" (1). The sheer complexities of human birth naturally dispose it toward being a social act. Because of the necessities of bipedalism, the pelvis of a human female is much narrower than that of other primates, meaning that numerous physical complications arise, and birth is physically more painful.

Growth of brain and cranial size among hominids also added to the difficulty of labor and delivery. The human brain triples between birth and adulthood, whereas the brain of other primates only doubles. "What humans seem to have accomplished is the trick of keeping the brain growing at the embryonic rate for one year after birth. Effectively, if humans are a fundamentally precocial species, our gestation is (or should be) 21 months. However, no mother could possibly pass a year old baby's head through the birth canal. Thus, human babies are born 'early' to avoid the death of the mother." (2). Humans have maintained a gestation length comparable to that of chimpanzees (the gestation for chimpanzees is approximately 230 to 240 days), despite the fact that the young are born in such different stages of development relative to their adult selves.

Another very practical argument for the necessity of socialization to bipedal survival is the fact that a human female is physically unable to assist herself or the baby during birth if something goes wrong. "I suggest that early hominid females who sought assistance or companionship at the time of delivery had more surviving and healthier offspring than those who continued the ancient mammalian pattern of delivering alone. Thus, the evolutionary process itself first transformed birth from an individual to a social enterprise..." (1). In cases when the baby is breeched, or with other complications arising when the baby is in the birth canal, assistance from another can be the difference between life and death.

Another danger in birthing alone is that most women feel the need to push during contractions before their cervix is properly dilated (10 cm), especially in the case of a longer labor or a breech. This can result in the baby's head becoming trapped in the birth canal, necessitating a rapid delivery to keep the child from attempting to breathe (as it will once its body is exposed to cooler temperatures), but increasing the risk of internal tearing of the mother's cervix and/or uterus, with heavy bleeding, damage to other organs, and death (3). Experience must have quickly taught early hominids that assisted birth was best.

Though the term "midwife" was not coined until the Middle Ages, the role it describes is almost as old as bipedalism. Further support for this argument comes from references made to women in this capacity in Greek and Roman times, in medical documents, Egyptian papyri, the Bible, and Hindu scrolls. The documents describe these women as having an invaluable, and more importantly established, part in human society, already subject to its own set of rules (4).

Beyond midwifery, there are many factors that have been working to change the process of human birth. One factor is the development of more effective medicines for pain, tools such as forceps to use during delivery, and the advent of written records so that future generations could expound more easily upon the work of others (which is how the practice of caesarian section became such a viable option), even across certain geographical boundaries. Another was the changing diet and its effect on the human body. I would argue that as these new factors came into play, natural selection began gradually to be overshadowed.

As man was able to control food sources (and consequently became less mobile) more effectively through farming, new foods became staples in the human diet. "Increased consumption of carbohydrate-rich foods, decreased mobility, and nursing at infrequent intervals all interact to make this possible, enabling women to conceive within 10-15 months of the last birth. Weaning earlier is made possible by the availability of appropriate infant foods in the form of cereal grains and, in some places, milk from domesticated animals. Ultimately the birth interval is reduced to approximately 2 years resulting in population increase." (2).

As the success of human birth and the ability to conceive more frequently in a lifetime became greater, the obstacles bipedalism presented were surmounted. Increased birth rates meant increased variation, providing a larger pool of genetic traits to be selected for or against. Early hominids used their intelligence to compensate for deficiencies in speed and agility. Birth evolved from a private to a social process in order to increase the rates of survival for both mother and child. With time, this socialization led to the development of various techniques and technologies capable of compensating for the physical limitations on birth in bipeds.

References

1) Bernard Bel, a quote from "Evolutionary Obstetrics" (In W. R. Trevathan, E.O. Smith & J.J. McKenna, eds., Evolutionary Medicine. New York: Oxford University Press, 1999, pg. 183-207), from Bernard Bel's "New Directions". An interesting site with intelligent arguments concerning all aspects of health, including the "medicalization" of human birth.

2) Glenn Morton's Creation/Evolution Page, Morton, G.R. "The Curse of a Big Head." Arguments as to the correlation between increased brain size and human sweat glands, pains during childbirth, and need for clothing. Inspired by Genesis 3:16-21, when God punishes Adam and Eve for eating from the tree of knowledge. The argument is supported by fossil record and other biological/anthropological evidence, and is, on the whole, not bad.

3) Glenn Morton's Creation/Evolution Page, a quote from Wenda R. Trevathan's Human Birth, (New York: Aldine de Gruyter, 1987), p. 92 from G.R. Morton's "The Evolution of Human Birth", an article providing information in support of the theory that human birth has not evolved significantly since Homo rudolfensis.

4) Parkland School of Nurse Midwifery, a concise and informative page on the history of midwifery.


The Health Benefits of Fasting
Name: Will Carro
Date: 2002-09-30 04:14:07
Link to this Comment: 3017



Biology 103
2002 First Paper
On Serendip

There has been much contention in the scientific field about whether or not fasting is beneficial to one's health. Fasting is an integral part of many of the major religions, including Islam, Judaism and Christianity. Many are dubious as to whether the physiological effects are as beneficial as the spiritual ones promoted by these religions. There is a significant community of alternative healers who believe that fasting can do wonders for the human body. This paper will look at the arguments presented by these healers in an attempt to raise awareness of the possible physiological benefits that may result from fasting.

Fasting technically commences within the first twelve to twenty-four hours of the fast. A fast does not chemically begin until the carbohydrate stores in the body begin to be used as an energy source. The fast will continue as long as fat and carbohydrate stores are used for energy, as opposed to protein stores. Once protein stores begin to be depleted for energy (resulting in loss of muscle mass) a person is technically starving. (1)

The benefits of fasting must be preceded by a look at the body's progression when deprived of food. Due to the lack of incoming energy, the body must turn to its own resources, a function called autolysis. (2) Autolysis is the breaking down of fat stores in the body in order to produce energy. The liver is in charge of converting the fats into a chemical called a ketone body, "the metabolic substances acetoacetic acid and beta-hydroxybutyric acid" (3), and then distributing these bodies throughout the body via the blood stream. "When this fat utilization occurs, free fatty acids are released into the blood stream and are used by the liver for energy." (3) The less one eats, the more the body turns to these stored fats and creates these ketone bodies, the accumulation of which is referred to as ketosis. (4)

Detoxification is the foremost argument presented by advocates of fasting. "Detoxification is a normal body process of eliminating or neutralizing toxins through the colon, liver, kidneys, lungs, lymph glands, and skin." (5). This process is precipitated by fasting because when food is no longer entering the body, the body turns to fat reserves for energy. "Human fat is valued at 3,500 calories per pound," a number that would lead one to believe that surviving on one pound of fat every day would provide a body with enough energy to function normally. (2) These fat reserves were created when excess glucose and carbohydrates were not used for energy or growth, not excreted, and therefore converted into fat. When the fat reserves are used for energy during a fast, the chemicals from the fatty acids are released into the system and are then eliminated through the aforementioned organs. Chemicals not found in food but absorbed from one's environment, such as DDT, are also stored in fat reserves that may be released during a fast. One fasting advocate tested his own urine, feces and sweat during an extended fast and found traces of DDT in each. (5)

A second prescribed benefit of fasting is the healing process that begins in the body during a fast. During a fast, energy is diverted away from the digestive system, due to its lack of use, and towards the metabolism and immune system. (6) The healing process during a fast is precipitated by the body's search for energy sources. Abnormal growths within the body, tumors and the like, do not have the full support of the body's supplies and therefore are more susceptible to autolysis. Furthermore, "production of protein for replacement of damaged cells (protein synthesis) occurs more efficiently because fewer 'mistakes' are made by the DNA/RNA genetic controls which govern this process." A higher efficiency in protein synthesis results in healthier cells, tissues and organs. (7) This is one reason that animals stop eating when they are wounded, and why humans lose their appetite during influenza. Hunger has been shown to be absent in illnesses such as gastritis, tonsillitis and colds. (2) Therefore, when one is fasting, the person is consciously diverting energy from the digestive system to the immune system.

In addition, there is a reduction in core body temperature. This is a direct result of the slower metabolic rate and general bodily functions. Following a drop in blood sugar level and using the reserves of glucose found in liver glycogen, the basal metabolic rate (BMR) is reduced in order to conserve as much energy within the body as can be provided. (2) Growth hormones are also released during a fast, due to the greater efficiency in hormone production. (7)

Finally, the most scientifically proven advantage to fasting is the feeling of rejuvenation and extended life expectancy. Part of this phenomenon is caused by a number of the benefits mentioned above. A slower metabolic rate, more efficient protein production, an improved immune system, and the increased production of hormones contributes to this long-term benefit of fasting. In addition to the Human Growth Hormone that is released more frequently during a fast, an anti-aging hormone is also produced more efficiently. (7) "The only reliable way to extend the lifespan of a mammal is under-nutrition without malnutrition." (5) A study was performed on earthworms that demonstrated the extension of life due to fasting. The experiment was performed in the 1930s by isolating one worm and putting it on a cycle of fasting and feeding. The isolated worm outlasted its relatives by 19 generations, while still maintaining its youthful physiological traits. The worm was able to survive on its own tissue for months. Once the size of the worm began to decrease, the scientists would resume feeding it at which point it showed great vigor and energy. "The life-span extension of these worms was the equivalent of keeping a man alive for 600 to 700 years." (8)

In conclusion, it seems that there are many reasons to consider fasting as a benefit to one's health. The body rids itself of the toxins that have built up in our fat stores throughout the years. The body heals itself, repairing damaged organs, during a fast. And finally, there is good evidence to show that regulated fasting contributes to a longer life. However, many doctors warn against fasting for extended periods of time without supervision. There are still many doctors today who deny all of these points, claim that fasting is detrimental to one's health, and have evidence to back their statements. The idea of depriving a body of what society has come to view as so essential to our survival in order to heal continues to be a topic of controversy.


References

1) "Dr. Sniadach – True Health Freedom 3"

2) fastingforbetterhealth

3) "Ketosis by Sue Reith"

4) "Nutriquest, March 11th, 2000 – Ketosis and Low Carbohydrate Diets"

5) "WebMD – Detox Diets: Cleansing the Body"

6) "Fasting"

7) "Fasting – Good Morning Doctor"

8) "The Health Benefits of Fasting"


Water Births
Name: Lawral Wor
Date: 2002-09-30 05:13:32
Link to this Comment: 3018



Biology 103
2002 First Paper
On Serendip

Over the past few years, there has been a resurgence of interest in homebirths and other "alternative" ways of giving birth. There has been a rise, if not in the actual incidence of births involving a midwife instead of an obstetrician, then at least in the coverage of midwives and their art in the news and in parenting magazines. The debate has gone back and forth over whether midwives are reliable sources of support when having a baby. This debate has become so common that it is becoming part of our collective culture, most recently in the Oprah Book Club book Midwives: A Novel by Chris Bohjalian (1). One of the reasons their credibility has been questioned is that midwives are more likely to participate in an alternative birthing style, such as water births. The debate around water births has been almost as lively as that around midwives themselves. Both the supporters and the opponents of the practice are passionate about their arguments, and both sides can be very convincing.

The number of hospitals that offer water births has risen in the last decade, but most are still performed by midwives in birthing centers or in the home. Even with the growing number of hospitals offering this service, "the American College of Obstetricians and Gynecologists has not endorsed the technique. It says there is not enough data to prove safety" (2). Most of the studies on water births have been conducted in the United Kingdom, where the practice is offered in most hospitals. While water births are still not the norm, having the mother rest in a tub of warm water during labor is very common there.

Spending part of the labor process in water has proved to be beneficial even if the mother leaves the tub before actually giving birth. "Warm water helps a laboring woman's muscles relax-which often speeds labor. When a woman is more relaxed and comfortable, the uterus functions optimally, reducing stress to both mother and baby. Water appears to lower a woman's stress and anxiety level, thereby lowering stress-related hormones which cross the placenta. Many women repeatedly report the wonderful pain relieving properties of water" (3).

The benefits of a water birth for a mother go beyond the simple relaxing qualities of water. The tissues of the vagina become much more elastic in water, making the actual birth easier for the mother. "A 1989 nationwide survey published in The Journal of Nurse Midwifery on the use of water for labor and birth reported less incidents of perineal tearing with less severity" (4). This reduced stress on the vaginal tissues and birth canal translates into less stress during birth for the child. Combined with the relaxed uterine muscles, water births are considerably less physically stressful for both the mother and the baby.

Even with all of these benefits, many hospitals in the United States still do not endorse, let alone offer, water births in their maternity wards. Attached to these benefits are some very serious risks. The same warm waters that help to relax muscles during labor can be incredibly harmful to the mother after she has given birth. The warm water can keep the muscles relaxed after the delivery of the placenta and prevent blood clotting. While immersed in water, it is also harder to tell how much blood is being lost, as it is diluted in the bath water. "Also, if the placenta is delivered under water the combination of vasodilatation and increased hydrostatic pressure could theoretically increase the risk of water embolism" (5). These risks are all still in the theoretical stage, but they are risks nonetheless.

There are also risks for the baby in water births. While the British Medical Journal reports that there is a 95% confidence in live water births, they do concede that there is a risk of water aspiration (6). Instinctively, babies should not breathe in until they are confronted with air. They should continue to receive oxygen through the umbilical cord until they start to breathe on their own or the cord is cut. Studies have shown, however, that "babies who do not get enough oxygen during childbirth [due to stress in the birth canal or placement of the umbilical cord] may gasp for air, risking water to enter their lungs" (2). A more preventable, but still serious, problem is the increased chance of snapping the umbilical cord during a water birth. There have been no studies on why the risk of snapping the umbilical cord is higher among water births, but it is speculated that the increased movement involved in bringing the child to the surface of the water after the birth is to blame (6).

The debate concerning the safety of water births will continue as the practice becomes increasingly popular in the United Kingdom and elsewhere. There are no studies that directly link water births to the risks I have outlined; they are deemed theoretical or consequential. One must weigh the benefits against the possible risks before choosing to undergo a water birth. As the practice becomes more accepted and studied, more conclusive studies will be carried out, making the decision easier for expectant mothers.


References

1)Bohjalian, Chris A. Midwives: A Novel. Vintage Books, 1998.

2)"Water birth drowning risk." BBC, August 5, 2002.

3)Birth and Women's Center - Water Births.

4)"Why Water." Global Maternal/Child Health Association and Waterbirth International.

5)LMM Duley MRCOG, Oxford. "Birth in Water: RCOG Statement No 1." Royal College of Obstetricians and Gynaecologists. January 2001.

6)"Perinatal mortality and morbidity among babies delivered in water: surveillance study and postal survey." British Medical Journal. August 21, 1999.



The True Importance of Moisturizers to Healthy Ski
Name: Margot Rhy
Date: 2002-09-30 10:23:18
Link to this Comment: 3024


<mytitle>

Biology 103
2002 First Paper
On Serendip


Facial skin care products are promoted vigorously in the cosmetic industry with claims of tremendous benefit to good, healthy-looking skin. Consumers search for rejuvenation and protection of their largest organ, important on the basic biological level in that it acts as a barrier shielding the body from the environment, as a temperature regulator, as a basic immune defense, and as a sensory organ. However, these consumers generate a huge commercial business for reasons purely aesthetic; the face is simply what others notice first in personal presentation to the world. Driven by a need to find perfection, consumers crave an easy solution for removing blemishes, fine lines, wrinkles, dark spots, and all other types of skin care problems. Therefore, the question of how important these products, especially moisturizers, are to healthy skin, and what separates one moisturizer from another, becomes worth understanding in a market that carries so many different kinds of products, all with different ingredients and all advertising the same positive outcomes.

There exists in the market an elementary understanding of what a proper skin care regimen should consist of, promoted by companies operating at all price levels. Basically, lines carry products based on different skin types (oily, dry, combination, or sensitive) and then divide treatment into the basic steps of exfoliation, treatment, hydration, and protection (1). Skin moisturizers, then, are an essential step in this routine; they are designed for different skin types to soften the skin and to lubricate "without blocking pores and smothering the skin" (2).

A moisturizer's base is some type of emulsion of oil and water with another agent, altogether acting to limit the natural evaporation of water from the skin. When the product is an emulsion of water in oil, the oil is more dominant and serves moderately dry skin effectively. When the product is based on oil in water, products are less moisturizing and are formulated for normal to slightly dry skin. Furthermore, products that are purely oil based are best only for extremely dry skin, and completely oil-free products are best for oily to normal skin types (2).

Typically, the other active ingredient is another kind of oil. Natural and essential oils are chosen depending on what vitamins, antioxidants, essential fatty acids, and fragrances they bring to the moisturizer (2). These ingredients can be seen in various advertisement campaigns and on various labels, chosen for their function and cost-efficiency. Moisturizers may also contain humectants, which prevent water loss by attracting moisture to the skin. These are synthetic forms of phospholipids, which exist naturally as an "evaporative and protective barrier in the outer layer of the epidermis" (2). However, these synthetics, forms of glycols such as propylene glycol or glycerin, only work well in environments with sufficient humidity in the air to draw from. Also, they can cause irritation or inflammation because they serve as a barrier, both keeping moisture in and preventing moisture from entering externally (3). Another ingredient may be liposomes, somewhat of a new development in skincare formulations. These come from phospholipids, have an aqueous core, and can carry vitamins, drugs, and other active ingredients in their phospholipid layers for delivery to the dermis. Since they have a cell membrane-like structure, "they can readily pass through the epidermis and are thought to be accepted into cells of the dermis by membrane fusion" (2). What becomes apparent is the amount of chemistry, biology, and technology that enters into the industry of beauty. This serves as a reflection of consumers' desire for the most effective products science can give them and their need for the best skin care solutions they can buy easily, with little consideration given to more personal factors that affect healthy skin.

As a result of this focus on consumerism and easy solutions, little attention is given to the idea that lifestyle choices manifest themselves in our physical health and appearance. Nutrition and rest are two basic and easy-to-overlook contributors to healthy skin. Exercise helps the skin since it works to "maintain a clear circulation, calming the nerves and promoting a deeper, more revitalizing sleep" (3). Water is obviously necessary for the skin's maintenance of the right amount of moisture as well as a person's general good health (3). These make positive contributions to healthy skin, while the following are choices that prove damaging. Smoking deprives skin tissue of oxygen and nutrients through the effects of carbon monoxide and nicotine in the circulatory system, giving a pale complexion and early wrinkles. Caffeine and alcohol dehydrate the skin, the latter being even more damaging by impeding circulation, removing moisture and nutrients, and even leading to broken or distended capillaries (3).

Those who market skin care products, ranging from medical doctors to make-up consultants to fashion houses, gain definite monetary profit by promoting products alone and not acknowledging other factors in good skin care, thus adding complication to the issue. Do consumers really need their products to improve their skin's health? Or is this need merely generated by the beauty industry and dermatology field? For consumers, clearly it becomes a matter of buying a better appearance. They have made this evident by accepting what they are told and by allowing skin care products to become a billion dollar industry (3); this market exists because people want it to.

Moisturizers can be beneficial to the skin, but those who support this the most strongly are those who profit from them. They can replenish moisture to the skin but may lead to damage as well if a consumer does not understand the role of specific ingredients. They can be effective, but lifestyle choices cannot be overlooked in favor of them. Perhaps what every consumer willing to spend what they consider the right amount on what they consider the right product needs to understand is that these lotions and creams cannot serve alone as the key to healthy skin.

References

1)Dermadoctor Website, archived article on the Dermadoctor website, an online source for skincare

2)Altrius Biomedical Network, supported by dermatology community

3) American Academy of Dermatology website , archived article on the American Academy of Dermatology website


Why Stress Affects Us Physically
Name: Kyla Ellis
Date: 2002-09-30 11:13:45
Link to this Comment: 3026


<mytitle>

Biology 103
2002 First Paper
On Serendip

Kyla Ellis
Biology 103
Web Paper
9-30-02

Why Stress Affects Us Physically

We deal with stress daily. In our everyday vocabulary "stressed" becomes an emotion, as in: "How are you?" "I'm feeling happy, how are you?" "I'm feeling stressed." It is a negative word, and is not an emotion we aspire to. But if "stress" comes from the nervous system, why does it affect our body? Why does stress cause us to lose sleep, break out, and become depressed? In this paper, I will attempt to explain what stress is, how it happens, and then what it does to our bodies and why it does those things.

First, let me clarify: Stressors are internal or external factors that produce stress. Stress is the subjective response to those factors (10). All humans and animals have developed internal mechanisms through evolution that allow our bodies to react to a stressor. The term "stress" has a negative connotation, but it can also be a positive thing: when performers go onstage, for example, they rely on stress to provide the adrenaline rush necessary to help them perform. Most stress is not due to life-threatening situations, but rather to everyday occurrences such as public speaking or meeting new people. I'd like to point out also that the intensity of stress depends on how it is perceived. For example, a deadline can be, for some people, an opportunity to manage their time more efficiently, while for others it can feel like the end of the world.

There are four categories of stress. The first is Survival Stress. The phrase "fight or flight" comes from a response to danger that people and animals have programmed into themselves. When something physically threatens us, our bodies respond automatically with a burst of energy so as to allow us to survive the dangerous situation (fight) or escape it altogether (flight). The second is Internal Stress. Internal stress is when people make themselves stressed. This often happens when people worry about things that can't be controlled or put themselves in already-proven stress-causing situations. The third category is Environmental Stress. The opposite of Internal Stress, it is caused by the things surrounding us that could cause stress, such as pressure from school or family, large crowds, or excessive noise. The fourth is Fatigue and Overwork. This kind of stress builds up over a long time and can take a hard toll on your body. It can be caused by working too much or too hard at a job, school, or home. It can also be caused by not knowing how to manage your time well or how to take time out for rest and relaxation. This can be one of the hardest kinds of stress to avoid because many people feel it is out of their control (3).

One site I looked at compared a person undergoing stress to a country whose stability is threatened. The country reacts quickly and puts out a number of civilian and military measures to protect the country. On the one hand, the readiness to quickly respond in such a way is vital to the long-term survival of the nation; on the other hand, the longer this response has to be maintained, the greater the toll will be on other functions of the society(10).

Stress affects us physically, emotionally, behaviorally and mentally. When there is a threat, the body physically reacts by increasing the adrenaline flow, tensing muscles, and increasing heart rate and respiration. Emotions, such as anxiety, irritability, sadness and depression, or extreme happiness and exhilaration come out. Behaviorally, one might possibly experience reduced physical control, insomnia, and irrational behavior. Mentally, stress may severely limit the ability to concentrate, store information in memory and solve problems ("Test anxiety" happens because the brain has a reduced ability to process information while under the effects of stress) (1).

Has anyone ever told you they were stressed out because of acne? The fact that they are stressing out about it might be making the problem worse. How can what happens on your face be related to what happens to your central nervous system? Acne forms when oily secretions from glands beneath the skin plug up the pores. There is a stress hormone known as corticotrophin-releasing hormone (CRH). An increase in CRH signals the oil glands in the body to produce more oil, which can exacerbate oily skin and thus lead to acne (4)(5).

If people are stressed, they may lose sleep due to the fact that there is something on their mind, making it hard for them to stop worrying about it long enough to fall asleep. However, stress hormones also make it harder to sleep. CRH has a stimulating effect and when it is produced in the body in greater quantities, it makes the person stay awake longer and sleep less deeply. In this way, stress is also linked to depression, because people who do not get enough "slow-wave" sleep may be more prone to depression(6).

The hippocampus is an important part of our brain and its functions. The hippocampus is responsible for consolidating memories into a permanent store (9). It is also the part of the brain that signals when to shut off production of the stress hormone cortisol. However, cortisol can damage the hippocampus, and a damaged hippocampus allows cortisol levels to get out of control, which compromises memory and cognitive function, creating a vicious cycle (7).

In conclusion, stress is not simply a process that overtakes the central nervous system; it affects the body in many different and seemingly unrelated ways. Living a healthy lifestyle means striking a good balance among work, down time, and sleep, which should help reduce the effects of stress.


References

1)Coping with the Stress of College Life
2)Staying Well
3)Understanding and Dealing with Stress
4)Science News
5)CNN.com
6)Stress and Sleep Deprivation
7)The Cortisol Conspiracy and Your Hippocampus
8)Stress Management
9)The Hippocampus of the Human Brain
10)The Neurobiology of Stress and Emotions


One Last Call for Alcohol
Name: Elizabeth
Date: 2002-09-30 13:05:50
Link to this Comment: 3029


<mytitle>

Biology 103
2002 First Paper
On Serendip

For centuries, man has relied on alcohol as a relaxant, often employing its sedative qualities to induce sleep. However, while a stiff drink before bed may initially help one fall asleep, recent research shows that alcohol adversely affects sleep patterns. Not only do recreational drinkers experience disruptions in their nightly sleep, but alcoholics also damage their ability to obtain quality sleep, perhaps irreparably. Also, sleep problems such as insomnia may cause a person to abuse alcohol, leading to a vicious cycle that corrupts their ability to sleep peacefully. In addition, sleep problems may pave the way for an alcoholic's relapse, as they seek out a familiar form of relaxation. The harmful effects of alcohol outweigh the initial sleep-inducing benefits, as an overindulgence in alcohol may result in permanent difficulties with sleep.


Sleep takes place in two distinct stages. The first, called slow wave sleep (SWS), is a deep, restful sleep characterized by slowed brain waves. The second is known as rapid eye movement (REM) sleep, a less restful state associated with dreaming. Alcohol affects sleep patterns by interfering with the monoamine neurotransmitters that control the body's ability to sleep peacefully (1). In those individuals who drink alcohol but do not abuse the substance, a drink or two before bed helps lessen the amount of time needed to fall asleep. However, contrary to popular opinion, alcohol will not promote a good night's sleep. Even a trace presence of alcohol in the bloodstream disrupts the second half of a person's sleep cycle, leading to wakefulness in the middle of the night and an inability to fall back to sleep. Such disturbances lead to daytime fatigue, which can affect a person's ability to undertake such everyday tasks as driving a car. Alcohol consumed up to six hours before bedtime can still disturb one's sleep cycle that evening. Unfortunately, the majority of alcohol consumption takes place from dinner on, leaving many susceptible to a fitful night (4).
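The claim that a drink at dinner can still matter six hours later follows from how slowly alcohol is cleared from the blood. A minimal sketch, assuming a zero-order (constant-rate) elimination model with a commonly cited average rate of about 0.015 g/dL per hour; actual rates vary widely between individuals, and the specific numbers here are illustrative assumptions, not figures from the sources cited:

```python
# Toy zero-order elimination model: blood alcohol concentration (BAC)
# falls at a roughly constant rate. The rate below is an assumed
# average; real values differ from person to person.

ELIMINATION_RATE = 0.015  # g/dL cleared per hour (assumed average)

def bac_after(initial_bac, hours):
    """Estimated BAC after `hours` of elimination, floored at zero."""
    return max(0.0, initial_bac - ELIMINATION_RATE * hours)

# A hypothetical BAC of 0.08 g/dL at dinner, checked five hours later:
print(round(bac_after(0.08, 5), 3))  # -> 0.005, i.e. not yet fully cleared
```

Under this toy model, a dinnertime drink can indeed leave a trace of alcohol in the bloodstream near bedtime, consistent with the six-hour window described above.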


Alcohol also has the tendency to exaggerate existing sleep problems, such as insomnia and sleep apnea. An insomniac, a person who has difficulty sleeping, may seek the aid of alcohol in order to fall asleep, but a reliance on alcohol leads to wakefulness later in the night and a compounded inability to fall back to sleep. Sleep apnea, a breathing disorder in which the pharynx, or upper air passage, constricts during sleep, affects the body's ability to get enough oxygen. Usually, the shock of not being able to breathe wakes the person, but if she or he has been drinking, the body may not react to the situation as quickly as is necessary. As a result, those with sleep apnea run a significant risk when they consume too much alcohol. Even by ingesting as few as two alcoholic drinks a night, those suffering from sleep apnea place themselves at a much higher risk of heart attack, stroke, or death by suffocation (2).


The effects of alcohol on sleep increase among those who abuse the substance on a regular basis, otherwise known as alcoholics. Insomnia affects 18% of the alcoholic population, a higher percentage than found in the population at large (1). As a person consumes an excess amount of alcohol, the sedative properties of the substance diminish significantly, and the alcohol no longer enables one to fall asleep more quickly. In fact, consuming too much alcohol makes it increasingly difficult to fall asleep. Once sleep finally sets in for an alcoholic, the time spent in both SWS and REM modes is reduced, resulting in an overall reduction of sleep time. While studying recovering alcoholics during their periods of withdrawal, researchers observed an increase in the amount of time spent in SWS and REM sleep with a corresponding increase in the amount of time needed to fall asleep (5). However, although SWS and REM times were increased, they were not restored to their optimal levels. Research indicates that the damage an alcoholic does to his or her system while abusing the substance may be irreparable (1). In any case, sleep patterns are significantly affected for at least two years, if not for life.


The reverse side of alcohol and sleep problems is the effect an inability to sleep may have on one's reliance on alcohol. Insomniacs may at first employ a drink before bed as a sleep aid, noticing its relaxing properties. However, as alcohol in fact worsens a person's ability to sleep, this initial benefit will wear off as time progresses, leading the insomniac to drink more and more in order to produce the desired effect. Unfortunately, as discussed earlier, an over-consumption of alcohol actually reduces its sedative qualities and increases the difficulty of falling asleep, a dangerous side effect for a person who already has problems falling asleep. If the insomniac becomes too reliant on alcohol for sleep purposes, he or she may develop a dependence on alcohol which could progress into alcoholism, permanently disrupting their sleep patterns and interfering with their ability to perform simple tasks. Also, an inability to sleep may cause a recovering alcoholic to seek the familiar comforts of alcohol, triggering a relapse (4).


Alcohol-related sleep problems affect more than just adults. Drinking while pregnant or nursing has been shown, besides other damaging effects, to alter the sleep patterns of a newborn baby. The baby absorbs the alcohol into its bloodstream just as an adult would, leading to wakefulness throughout the night, frightening dreams, and a decrease in the restful quality of the baby's sleep. Adequate rest is essential for a developing child. In turn, interruptions to a healthy sleep cycle can cause serious detriments to the baby, including such dangers as fetal alcohol syndrome (3).


Alcohol affects the ways humans sleep. Even a small drink six hours before bed interferes with the resting process. The dangers and side effects of disrupted sleep increase with alcohol abuse, and some sleep problems may lead to alcoholism, or serve as an excuse for a recovering alcoholic's relapse. These negative effects of alcohol on sleep can extend even to small children by way of their mother. It's best to be aware of these risks before overindulging in alcohol, especially if one has a condition such as sleep apnea, which may be aggravated by alcohol, sometimes with fatal results.


References

1)Alcohol's Effects on Sleep in Alcoholics

2)Alcohol Alert

3)No Thanks, I'm Sleeping

4)Drinking the Night Away

5)Kentucky Sleep Society


Manatees and the Human Fault Factor
Name: Katie Camp
Date: 2002-09-30 13:06:12
Link to this Comment: 3030


<mytitle>

Biology 103
2002 First Paper
On Serendip

Manatees appeared on earth during the Eocene period, about 50 to 60 million years ago. In general, adult manatees are about twelve feet long and weigh 1000 to 1500 pounds (1). They require warm water and a supply of "submerged, emergent, and floating plants" (3) such as hydrilla, turtle grass, ribbon grass, and manatee grass (4) for food, shelter and breeding grounds. Manatees are very gentle sea mammals whose curious personality leads humans to perceive them almost like companion pets. Although manatees are playful, "scratch[ing] themselves on poles, boat bottoms, and ropes," they "do not seek interaction" (4) with people. Their humble mannerisms and generally slow reactions do not mesh well with the increasingly fast pace of human life on Florida waterways. "More than 90% of direct manatee mortality" caused by humans is a result of boat accidents (5). Most often manatees are hit in their state of "torpor," or rest, when they float near the surface of the water, directly in the line of contact with boat propellers. Within the past few years the number of manatees killed by watercraft has remained steady, in the high seventies and low eighties. In 1999, a record high of 82 manatees were killed in such situations.

Among some "controversial protection measures" (5), speed limits have been posted on the water and boaters are advised to adopt certain behaviors, such as wearing polarized sunglasses so that manatees close to the surface of the water can be seen more easily. One might assume, then, that boat collision deaths would decrease. This year, 2002, however, has already proved this idea wrong. Reports from the Florida Marine Research Institute state that as of September 27 of this year, 83 manatees have already been killed by human contact (6).

Not only do collisions with watercraft cause many manatee deaths a year, but the phenomenon of algae blooms, otherwise called red tide, accounts for many others. Harmful algal blooms (HABs) are caused by the life cycle of algae in the sea. The germination of algal cysts can only happen in warm temperatures with increased light (7). This obviously matches the pattern of manatees' habitats. When a cyst breaks open and a single cell emerges, it then "blooms": the cells divide exponentially. Their concentration can cause toxicity in the water, accumulating in "dense, visible patches near the surface of the water" (7). The species that commonly poisons manatees, found in the Gulf of Mexico, is Gymnodinium breve.
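The exponential division described above explains why a single germinated cyst can become a dense, toxic patch so quickly. A minimal numerical sketch; the doubling interval and cell counts are hypothetical illustrations, not values from the sources cited:

```python
# Toy model of exponential algal growth: each cell divides once per
# doubling interval, so the population doubles every step.
# Starting count and number of doublings are illustrative only.

def bloom_density(initial_cells, doublings):
    """Cell count after a given number of doubling intervals."""
    return initial_cells * 2 ** doublings

# One cyst germinating into a single cell, after 20 doublings:
print(bloom_density(1, 20))  # -> 1048576 cells
```

The point of the sketch is simply that growth compounds: twenty doublings turn one cell into over a million, which is how a bloom can go from invisible to a "dense, visible patch" in a short time.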

Even though we want to attribute the loss of manatees to algal blooms as a natural phenomenon, it is obvious that certain human actions contribute to worsening HAB situations. HABs have existed for ages, yet the resulting deaths have increased in the past few decades, which poses the question of why manatees are now so affected. Human interaction with the environment has inevitably changed the environment, which has in turn affected the manatees' response to HABs. Due to pollution, algal blooms have become more concentrated. Pollution also lowers manatees' resistance. Construction has destroyed wetlands which used to filter pollution. The same construction leaves fewer and fewer habitats suitable for the manatee population, so more manatees congregate in the same area (8). The most popular area has become Crystal River, which flows about seven miles into the Gulf of Mexico from Florida. The manatees' "need for fresh or low salinity drinking water" pulls them toward Crystal River, and the many major springs it contains also provide warm water, even when the waters of the gulf turn cold for the winter (1). Therefore, when two hundred or more manatees migrate up Crystal River every winter, their presence in close quarters makes infectious disease and toxins, like red tide, spread more quickly than they otherwise would.

These issues of manatee endangerment in Florida spark debate over why manatees are considered endangered and how they should be protected. Currently, the approximately 2,500 manatees of Florida are protected under the Endangered Species Act of 1973 and the Federal Marine Mammal Protection Act of 1972 (3). Many counties in the state of Florida attempt to further protect manatees by developing their own regulations. The population of Florida, however, argues over the development of better protection for manatees. Currently this can be seen with the petition submitted by the Coastal Conservation Association of Florida to re-evaluate the manatees' classification as an endangered species under the Florida Endangered Species Act (2). This argument among certain groups in Florida over the protection of manatees and their classification as endangered is a defensive reaction to humans' direct involvement in destroying the species.

Clearly manatees have existed for millions of years and have undoubtedly changed over time. Their gestation period of thirteen months means that they reproduce only every two to five years, so they change slowly (3). It is therefore doubtful that manatees could keep pace with new developments such as rising red tide concentrations by evolving different, more efficient immune systems. With "20% of the total population d[ying] in 1996 alone," there is "serious doubt as to the manatees' survival into the next century" (4). Obviously, humans have not killed off every manatee with direct collision accidents. In fact, of the eight categories of manatee death, three are human-related occurrences like boat collisions or death by a flood gate, and five are "natural" causes like diseases and toxins (9). While human contact may not account for the majority of manatee deaths, it accounts for 44% of them. And even though the majority of deaths are declared "natural," humans' involvement and interaction with the environment has changed and affected the environment in which the manatees live. Therefore, no matter what the particular stated cause of death, the deaths of manatees, and so their endangerment as well, can be linked to humans.

References

World Wide Web Sources

1)Manatee Introduction and Background, part of The Florida Water Story homepage.

2)The Future of the Florida Manatee: An Ongoing Concern, a recent opinion piece on the petition to reconsider the manatee's listing under the Florida Endangered Species Act.

3)About manatees..., a general description and explanation of manatees and their behavior.

4)Manatees, People---&The Buddy System, part of The Florida Water Story homepage.

5)Manatee Protection Efforts, part of The Florida Water Story homepage.

6)Number of Manatees Killed by Boats Reaches Record High, a recent article (September 27, 2002) on the number of manatee deaths this year.

7)What are Harmful Algal Blooms (HABs)?, general overview of how algal blooms work.

8)Manatee Habitat & Water Quality Issues, part of The Florida Water Story homepage.

9)Descriptions of Manatee Death Categories, general overview of categories used to determine death statistics.


Alcohol: From the Cradle to the Grave
Name: Heidi Adle
Date: 2002-09-30 13:53:11
Link to this Comment: 3032



Biology 103
2002 First Paper
On Serendip

Heidi Adler-Michaelson
2002-09-29
Biology 103 Web Paper 1

Alcohol: from the Cradle to the Grave

"My baby was born drunk. I could smell the alcohol on his breath." (2). Maza Weya is an Assiniboine Indian. She grew up on the Fort Belknap reservation in Montana. She says that her "twin brother excelled at everything so [she] excelled at being an alcoholic" (2). Things worsened over the years, she had a child somewhere in between it all, and later started drinking perfume in the hope that it could help her quit. Her family intervened and took the child away from her and some time later she received the notification that her sister had adopted her son and she wasn't even there to give her consent. It was after this blow that she decided to go to rehab. Her son is 5 feet tall and weighs 95 pounds. The first time she talked to her son on the phone, she told him about her drinking problem during, before, and after the pregnancy. She says: "...he asked me why I didn't love him enough that I wouldn't drink while he was inside me...He asked if I had given him up because he wasn't perfect, because he was damaged" (2).

What could possibly lead a pregnant woman to drink during pregnancy? Of course there are those who have been addicted for years and find it impossible to quit for 9 months. It is true that most developmental problems in the fetus are generally linked to chronic and abusive drinking (1). But recent studies have shown that similar if not greater damage can be done to the unborn child whose mother binge drinks (2), a practice defined as having five or more drinks at one sitting (5). "The highest-risk groups of women in terms of drinking during pregnancy are women with master's degrees and higher and women who dropped out of high school" (4). The Centers for Disease Control found four times as many binge drinkers in 1995 as in 1991 (2).

Science has its own thoughts on this topic. During the first three months of pregnancy, the fetus is most vulnerable. The alcohol passes from the mother's bloodstream to the baby's (9). According to the March of Dimes "When a pregnant woman drinks, alcohol passes swiftly through the placenta to her fetus. In the unborn baby's immature body, alcohol is broken down much more slowly than in an adult's body. As a result, the alcohol level of the fetus's blood can be even higher and can remain elevated longer than in the mother's blood" (5).

Not too long ago, researchers discovered exactly how alcohol affects the development of a fetus's brain. According to this research, getting drunk just once during the final three months of pregnancy may easily be enough to cause brain damage. "This is the first time we've had an understanding of the mechanism by which alcohol can damage the fetal brain. It's a mechanism that involves interfering in the basic transmitter system in the brain, which literally drives the nerve cells to commit suicide" (10). It is during the third trimester of pregnancy that a period called synaptogenesis begins. During this period, which continues into childhood, the brain develops rapidly and is most sensitive to alcohol. The researchers found that prenatal alcohol affects two brain chemicals, glutamate and GABA, which help the brain communicate with itself. The research is ongoing, with a focus on the link between damage to certain parts of the brain and problems in the adult (10).

How exactly are children who were forced to drink in their mother's womb different? There are many different definitions with only minor variations. As proposed by Sokol and Clarren in 1989, the criteria are 1) prenatal and/or postnatal growth retardation (weight and/or length below the 10th percentile); 2) central nervous system involvement, including neurological abnormalities, developmental delays, behavioral dysfunction, intellectual impairment, and skull or brain malformations; and 3) a characteristic face: a thin upper lip and an elongated, flattened midface and philtrum (the groove in the middle of the upper lip) (3). One of the most important things to know about children with FAS (Fetal Alcohol Syndrome) is that they don't understand the concept of "cause and effect" (i.e., if I touch the hot stove, I will burn myself) (8).

Another important facet is the variation in alcohol's toll among different ethnicities. According to the Centers for Disease Control, the incidence of FAS per 10,000 births for different ethnic groups was: Asians 0.3, Hispanics 0.8, whites 0.9, blacks 6.0, and Native Americans 29.9. The former FAS coordinator on the Fort Peck Indian Reservation says that this disparity is mostly due to the fact that Native Americans are more open and comfortable in speaking about alcohol problems (2). But it is widely known that alcoholism has been a problem among Native American tribes for decades.

Melissa Clark is a 22-year-old victim of fetal alcohol syndrome. Recently, when she was home alone in her house in Great Falls, a man rang the doorbell. Even though she did not know the man, she opened the door and let the stranger in. He walked straight to her bedroom and began to take off his clothes. He told her to do the same and she did. After raping her he simply got dressed and walked out. Some hours later her foster mother came home and Melissa told her what had happened. Johnelle Howanach, her foster mother, called the police, who in turn wrote it off as consensual sex. Johnelle, however, argues that Melissa did not know that having sex with a stranger was wrong. She says: "People with fetal alcohol syndrome just don't have those boundaries. They are eager to please, very friendly...They don't know the difference between a friend and a stranger because they can't remember" (6).

In another case, a woman drank herself into a stupor in her ninth month of pregnancy. A Wisconsin appellate court ruled that she could not be charged with attempted murder of her fetus. In fact, the only state that criminalizes such behavior is South Carolina (7). This raises many questions among humanitarians. How much is too much and what should the consequences be, if any at all? Is the unborn baby considered a part of the woman or an individual living organism? Could it live without the mother? Will it ever be asked if it wants a sip? Is this even an issue of choice? After all, the Bible clearly states: "Behold, thou shalt conceive and bear a son: and now drink no wine or strong drinks" (Judges 13:7).

References

1) Westside Pregnancy Resource Center, "Prenatal Risk Assessment, Keeping Your Unborn Baby Healthy Through Prevention."

2) Great Falls Tribune, "My baby was born drunk."

3) National Institute on Alcohol Abuse and Alcoholism, "Fetal Alcohol Syndrome"

4) Tucson Citizen, "Alcohol's toll on unborn worst of any drug."

5) National Institutes of Health, "CERHR: Alcohol (5/15/02)."

6)Great Falls Tribune, "Fetal alcohol syndrome leaves its mark."

7) Family Watch Library, "A Setback For Fetal Rights In Wisconsin Alcohol Case."

8) Alcohol Related Birth Injury Resource Site, "Alcohol Related Birth Injury (FAS/FAE) Resource Site."

9) Evening Post , "Study looks at effects of alcohol on unborn."

10) Alcohol Related Birth Defects Resource Site, "Alcohol Related Birth Injury (FAS/FAE) Resource Site."

11) University of North Carolina, "An Introduction to the Problem of Alcohol-Related Birth Defects (ARBDs)"


Chemical Sunscreens - When Are We Safe?
Name: Virginia C
Date: 2002-09-30 13:57:48
Link to this Comment: 3033

Biology 103
2002 First Paper
On Serendip

     The debate between chemical and physical sunscreens has been a hot one in the scientific community for a number of years. Since we became aware of the link between the sun and skin cancer, rates of melanoma have risen more each year (1), most likely due to environmental factors (a bigger hole in the ozone layer, pollution, etc.), overexposure to the sun, and improper protection precautions. While some scientists argue that there is no relationship between sunscreen use and the development of melanoma skin cancer (2 and 10), most of the scientific community agrees that sunscreen can be a (not THE) form of protection and prevention for all types of skin cancers.
     Given this sunscreen-based philosophy, we come to the question of what types of sunscreens are most effective at protecting us from the sun's harmful rays. To do this we must first analyze the types of UV (ultra violet) rays that the sun produces. Hans R. Larsen, MSc ChE, of International Health News, writes:
UVA rays constitute 90-95% of the ultraviolet light reaching the earth. They have a relatively long wavelength (320-400 nm) and are not absorbed by the ozone layer. UVA light penetrates the furthest into the skin and is involved in the initial stages of suntanning. UVA tends to suppress the immune function and is implicated in premature aging of the skin (2,13,14). UVB rays are partially absorbed by the ozone layer and have a medium wavelength (290-320 nm). They do not penetrate the skin as far as the UVA rays do and are the primary cause of sunburn. They are also responsible for most of the tissue damage which results in wrinkles and aging of the skin and are implicated in cataract formation. UVC rays have the shortest wavelength (below 290 nm) and are almost totally absorbed by the ozone layer. As the ozone layer thins UVC rays may begin to contribute to sunburning and premature aging of the skin. All forms of ultraviolet radiation are believed to contribute to the development of skin cancer. [i]
     In order to be fully effective at preventing skin cancer, then, sunscreens nowadays should offer broad spectrum protection (against both UVA and UVB rays). However, many ingredients in sunscreens are less effective than they claim at preventing skin cancer (5), as well as having other possibly harmful side effects (4). Therefore, we must pay more attention to the specific ingredients in our sun care products to effectively protect ourselves against skin cancer, something the FDA often doesn't tell us (6), at least not explicitly on the product packaging (4).
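The wavelength ranges in the quotation above can be restated as a short classifier. This is just a transcription of the quoted bands (UVA 320-400 nm, UVB 290-320 nm, UVC below 290 nm); the function name is my own:

```python
# Classify ultraviolet light by wavelength, using the band
# boundaries quoted above.
def uv_band(wavelength_nm):
    if 320 <= wavelength_nm <= 400:
        return "UVA"
    if 290 <= wavelength_nm < 320:
        return "UVB"
    if 0 < wavelength_nm < 290:
        return "UVC"
    return "not ultraviolet"

for wl in (380, 300, 250):
    print(wl, "nm ->", uv_band(wl))
```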
     The important distinction I want to look at is that of physical vs. chemical sunscreens. Chemical sunscreens work by absorbing the UV rays that hit them and therefore absorbing the radiation. Physical sunscreens, on the other hand, work by reflecting and/or scattering UV rays and radiation (5). The following chart (5) provides information about how each type of sunscreen performs:


 
 
Ingredient                            UVB Protection   UVA Protection
---------------------------------------------------------------------
Chemical Absorbers
  Avobenzone (Parsol 1789)            No               Yes
  Cinnamates                          Yes              No
  Octocrylene                         Yes              No
  Oxybenzone (Benzophenones)          No               Yes
  PABA (para-aminobenzoic acid)       Yes              No
  Padimate-O (Octyl dimethyl paba)    Yes              No
  Salicylates                         Yes              No
Physical Blockers
  Titanium Dioxide                    Yes              Yes
  Zinc oxide (including transparent)  Yes              Yes

     From this chart alone, one might glean that physical sunscreens are more effective at overall sun protection. We see through the majority of studies that this is true (7,8). Furthermore, there is frightening evidence that these chemical sunscreens, as well as being less effective than their physical counterparts, are in fact somewhat harmful to us, and have even been argued to be the cause of interference with normal sexual development as well as other potential health problems (9 and 4, respectively). However, physical sunscreens have also been demonstrated to be both possibly harmful (11) and less effective in some cases than chemical sunscreens (12, 13). Therefore, it is difficult to analyze these sunscreens in a context of which is better/healthier or more effective, since it seems that both have been shown to have possible health side effects as well as flaws in efficiency. However, in one study it was shown that subjects were more likely to apply "about two-thirds the quantity of physical compared with chemical sunscreen. This reduction in amount applied is likely to lead, in practice, to the physical sunscreen providing a SPF of about one-half of that achieved with the chemical sunscreen" (14). This might indicate that, at least in practice, physical sunscreens are more effective than they are given credit for simply due to common misuse, and furthermore that, when used properly, physical sunscreens are just as if not more effective than chemical ones.
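As a rough sketch, the coverage pattern in the chart can be encoded as a lookup table and queried to see whether a combination of ingredients covers both bands. The Yes/No values are transcribed from the chart; the dictionary and function names are my own:

```python
# UVA/UVB coverage per ingredient, transcribed from the chart above.
COVERAGE = {
    "avobenzone":       {"UVB": False, "UVA": True},
    "cinnamates":       {"UVB": True,  "UVA": False},
    "octocrylene":      {"UVB": True,  "UVA": False},
    "oxybenzone":       {"UVB": False, "UVA": True},
    "PABA":             {"UVB": True,  "UVA": False},
    "padimate-O":       {"UVB": True,  "UVA": False},
    "salicylates":      {"UVB": True,  "UVA": False},
    "titanium dioxide": {"UVB": True,  "UVA": True},
    "zinc oxide":       {"UVB": True,  "UVA": True},
}

def is_broad_spectrum(ingredients):
    """True if the combination covers both UVA and UVB."""
    return all(any(COVERAGE[i][band] for i in ingredients)
               for band in ("UVA", "UVB"))

print(is_broad_spectrum(["cinnamates"]))                 # UVB only
print(is_broad_spectrum(["avobenzone", "octocrylene"]))  # covers both bands
```

This makes the chart's main point concrete: each physical blocker is broad-spectrum on its own, while every chemical absorber must be paired with a complementary one.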
     Another point in favor of physical sunscreens is that countries around the world approve their use, whereas only one chemical sunscreen is legal in Europe and commonly used in Canada and Australia (4) [ii]. Also, I was unable to find many resources stating that physical sunscreens had harmful effects similar to those of chemical sunscreens – only one source claims anything of the sort, and it merely states that titanium dioxide (a physical sunscreen) is a compound "whose toxicity remains unclear" and whose "full effects on human health are still under investigation" (11). The same source also quotes the U.S. government's National Institute for Occupational Safety and Health (NIOSH) as labeling the chemical "a potential occupational carcinogen." However, these findings are less conclusive than any I found against chemical sunscreens, and I am therefore inclined to trust physical sunscreens more, at least until more information is available.



[i] Entire quote taken directly from (3). However, other sources indicate that UVC waves do not easily reach the earth's surface and are therefore not accounted for by most sunscreens.

[ii] However, source (4) also states the following in relation to this chemical: In 1997, Europe, Canada, and Australia changed sunscreens to use three specific active sunscreen ingredients - avobenzone (also known as Parsol 1789), titanium dioxide, and zinc oxide - as the basis of sunscreens. In the USA, the cosmetic companies have held off this policy as they try to sell off their stockpiles of cosmetics containing toxic sunscreens banned in other countries. However, avobenzone is a powerful free radical generator and also should have been banned. Avobenzone is easily absorbed through the epidermis and is still a chemical that absorbs ultraviolet radiation energy. Since it cannot destroy this energy, it has to convert the light energy into chemical energy, which is normally released as free radicals. While it blocks long-wave UVA, it does not effectively block UVB or short-wave UVA radiation, and is usually combined with other sunscreen chemicals to produce a "broad-spectrum" product. In sunlight, avobenzone degrades and becomes ineffective within about 1 hour.

References

1)Does sunscreen increase or decrease melanoma risk? Cancer HealthLINK
2)March 6, 1998, Hour 2: Sunscreen and Skin Cancer
3)Sun Screens and Cancer
4)The Chemical Sunscreen Health Disaster
5)ASPA Sun Safe Products / Sunscreen
6)No Title [FDA Regulations regarding sunscreen](this site takes a long time to load, please be patient)
7) Van Der Molen, RG; Hurks, HMH; Out-Luiting, C; Spies, F; Van't Noordende, JM; Koerten, HK; Mommaas, AM; "Efficacy of micronized titanium dioxide-containing compounds in protection against UVB-induced immunosuppression in humans in vivo," Journal of Photochemistry and Photobiology B: Biology [J. Photochem. Photobiol. B]. Vol. 44, no. 2, pp. 143-150. 10 Jul 1998.
8) Wolf, P; Donawho, CK; Kripke, ML; "Effect of sunscreens on UV radiation-induced enhancement of melanoma growth in mice," Journal of the National Cancer Institute [J. NATL. CANCER INST.], vol. 86, no. 2, pp. 99-105, 1994
9)Sun-Care Chemical Proves Toxic in Lab Tests 10/15/00
10)Sunscreens May Not Prevent Melanoma
11)Absorbing Titanium from Sunscreens
12) Serpone, N; Salinaro, A; Emeline, A, "Deleterious effects of sunscreen titanium dioxide nanoparticles on DNA. Efforts to limit DNA damage by particle surface modification," Dept. of Chemistry and Biochemistry Concordia University, Montreal, Que., H3G 1M8, Canada Nanoparticles and Nanostructured Surfaces: Novel Reporters with Biological Applications, San Jose, CA, United States, 01/24-25/01 PROC SPIE INT SOC OPT ENG. Vol. 4258, pp. 86-98. 2001.
13) de Fine Olivarius F; Wulf HC; Crosby J; Norval M; "Sunscreen protection against cis-urocanic acid production in human skin," Department of Dermatology, Bispebjerg Hospital, University of Copenhagen, Denmark. Acta dermato-venereologica, 1999 Nov, 79(6):426-30
14) Diffey BL; Grice J; "The influence of sunscreen type on photoprotection," Regional Medical Physics Department, Dryburn Hospital, Durham, U.K. The British journal of dermatology, 1997 Jul, 137(1):103-5


How Bark is Protection for Trees
Name: Jodie Ferg
Date: 2002-09-30 14:52:21
Link to this Comment: 3034



Biology 103
2002 First Paper
On Serendip


Bark is defined by the American Heritage Dictionary as "the outer covering of the woody stems, branches, roots, and main trunks of trees and other woody plants, as distinguished from the cambium and inner wood." Most of us can easily identify bark when presented with a question like: where is the bark? To answer such a question we point to a tree, because trees are covered in bark. This seems obvious, and it is. There is much about bark, however, that most people either do not know or have never taken the time to notice. I can name at least ten different trees off the top of my head, and although each has bark, each type of bark is different. Bark can be used aesthetically to distinguish one tree from another; tree A has dark, flaky bark, whereas tree B has pale bark that tightly covers the trunk. But when one looks at bark only aesthetically, one misses the point of bark: it is a protective device for the tree, and its unique characteristics are functional.

The state tree of New Hampshire is the white birch. The bark of this tree is papery and white. As children, we would often peel off pieces and write to each other on them. The color is supremely white—as white as this page. I was talking to my mother earlier this evening and she told me that the bark's color serves to reflect the winter sunlight—if the tree absorbs too much heat it will die. The white color of the bark prevents this from happening. The white birch is found in New Hampshire as well as other northern regions. It loses its leaves in the winter, thereby exposing its bark to the harsh sunlight of winter. The pale color of its bark allows the tree to survive.

One of the most famous types of tree in America is the redwood. These huge trees are found mostly in California and are artifacts of an unsettled American wilderness. To express how large these trees are: redwoods average eight feet in diameter and can be as wide as twenty feet. Some are as tall as 375 feet, taller than the Statue of Liberty. A typical redwood forest contains more biomass per square foot than any other area on earth, including the rain forests of South America. These trees are large. It would seem that they would be unharmed by anything in nature—could you imagine a beaver trying to chew through a twenty-foot-wide trunk? Still, there are things in nature that can harm these trees—namely, fire. A fire can burn any tree. Redwoods are not invincible, but they have evolved to avoid being burnt to the ground by the periodic fires the area experiences. The branches of the redwood do not start until very high off the ground—the branches are thinner than the trunk and therefore more easily devoured by flame. Because the bark is so thick, nearing a foot in some trees, the fire chars the wood instead of burning through it. The charred wood acts as a heat shield and prevents the entire tree from being destroyed. Redwood trees have been around since about the time of the dinosaurs, and as we all know, not much has survived from that time. (1)

The Eucalyptus tree is to Australia as the redwood is to America. This tree is also found in California and other parts of the United States. The bark of the eucalyptus is very oily, so if it is caught in a fire the oil burns rather than the tree itself. The bark that is damaged by the fire sheds, so the tree does not catch on fire. There are also roots below ground that are very wet; their moisture protects them from the fire. There have been several reports of eucalyptus forests being completely burned, regenerating, being completely burned again, and regenerating again. To survive, the plant had to become as resistant as possible to fire, and that is what it has done. By being able to regenerate after a destructive fire, the plant adapts to a harsh climate. Other examples of plants that use fire to their advantage are the Jack pines, which have serotinous cones. This means that in order for the cones to open and go to seed, they must be exposed to direct and intense heat—that is, fire. Without fire, the plant could not actually continue as a species. (2),(3)

Bark serves to protect a tree. Without bark, there would not be trees. Bark has its uses to humans as well as to trees: Native Americans used birch bark to build canoes and wigwams, and the bark was also used to write on. There are oils in many different barks around the world that humans use. These same oils and other chemicals in the bark of trees and other plants can also serve to protect the plant. We are all familiar with poison ivy, one of the most irritating poisonous plants. There are also trees with poisonous bark—trees that we are somewhat familiar with. A few such trees are the black locust, the yew tree, and the elderberry tree. There are many other plants that are completely poisonous, which would include the bark, but they tend to be smaller plants that do not necessarily have bark. A poison in the bark is a way to prevent being eaten by animals. (4)

We sometimes think of trees and plants as living things that are just there, passively accepting human interference and animal destruction. We often forget that trees have ways of being active organisms—they have ways of protecting themselves (beyond the bark as well) that we rarely notice or think about. In discussions in class it has seemed that people have forgotten that trees are even living at all. It is important to recognize that such beings as trees do exist and are very necessary for human life. With all the protective devices trees have, they cannot withstand humans and their chain saws. We are hazardous to these plants. Perhaps if there were something akin to chain saws in nature, however, there would be plants whose bark was so tough and strong it could withstand such a cut. Despite the toughness of wood and bark, however, we have managed to create and build with the hardwood trees. With our tools we can do almost anything with wood. (5) There is nothing stopping humans, and trees will never have the chance to adapt to withstand us. They have developed to withstand so much else that we should step back, stop cutting so many of them down, and admire their ability to continue with life even under the harshest conditions.

WWW Sources


1)Redwood Tree Information
2) Eucalyptus Tree Information
3) Jack Pine Information
4) Poisonous Plants
5) Interesting Use of a Hardwood


Mammograms: A Go or No?
Name: stephanie
Date: 2002-09-30 14:58:15
Link to this Comment: 3035



Biology 103
2002 First Paper
On Serendip

"A mammogram is an x-ray picture of the breast. It can find breast cancer that is too small for you, your doctor, or nurse to feel. Studies show that if you are in your forties or older, having a mammogram every 1 to 2 years could save your life." Though this is currently the official government endorsed idea, the entire controversy over breast cancer preventatives is far much more complex. In what has become perhaps the most highly-debated topic in all of cancer research, the question on the validity of mammograms as a preventative for breast cancer has increasingly caught media attention in the past few years. Media attention notwithstanding, the statistic that suggests that one of every eight women in the U.S. will get breast cancer in their life makes the attention fall closer to home. Whether or not a mammogram can help more than hurt women in preventing cancer is an extremely touchy debate and deserves a considerable amount of research.

Essentially, the idea of a mammogram is to x-ray the breasts in order to find what are called microcalcifications (tiny build-ups of calcium deposits) or tumors that may be too small to identify by feel. The controversy lies not in whether mammograms should be done at all, but in which age groups should have them. The National Cancer Institute, which is endorsed by the government, released a statement in February of 2002:

"Women in their 40s should be screened every one to two years with mammography. Women aged 50 and older should be screened every one to two years. Women who are at higher than average risk of breast cancer should seek expert medical advice about whether they should begin screening before age 40 and the frequency of screening." (1)

Although many people now take this as a good rule of thumb, there are a number of justifiable reasons that those under 50 should not in fact use mammogram testing. The number of "risks" associated with the testing begins with the fact that the mammogram doesn't always detect breast cancer. The breast density, which just refers to the amount of tissue in the breast that is not fatty, can obscure results. Women under the age of 50 most commonly have a denser breast, which leaves greater room for false-positives or any other abnormal test. For women under 50 who do have cancer, a mammogram detects it in about 70 percent of all cases. For those over 50, about 85 percent of breast cancer cases are detected through mammograms. One source explained the risk as, "If a 40 year old woman is screened every year for 10 years, her chance of having an abnormal mammogram result is about 1 in 3". This chance is decreased for those aged 50-60, to about 1 in 4. And of those that have abnormal results, most do not end up being cancer. (2)
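The "1 in 3" figure is what one would expect from compounding a modest per-screen abnormal-result rate over ten independent yearly screens. A minimal sketch, assuming a per-screen rate of about 4% (an illustrative number chosen to reproduce the quoted figure, not one taken from the sources):

```python
# Probability of at least one abnormal result over a series of
# independent annual screens: 1 - P(no abnormal result each year).
def prob_any_abnormal(per_screen_rate, n_screens):
    return 1 - (1 - per_screen_rate) ** n_screens

# Assumed ~4% abnormal-result rate per screen (illustrative only)
p10 = prob_any_abnormal(0.04, 10)
print(f"Chance of at least one abnormal result in 10 yearly screens: {p10:.0%}")
```

With these assumptions the ten-year chance comes out near one in three, matching the quoted estimate.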

However, this leads into the second aspect of the controversy. When an abnormal result occurs, only a diagnostic test can determine whether or not the "cancer" is legitimate. These often painful, time-consuming, worrisome and expensive procedures involve extracting fluids from the breast to be tested in labs. Many women find the wait for further results nerve-racking, especially because most end up being negative. And many studies have shown that these women "have more anxiety and worry about having breast cancer, even after being told they do not have cancer." (2) For those under the age of 50, there is about a .03 % chance that the abnormal result will prove to be cancer. For those over 50, that figure increases to 14 percent. Still, because younger women have a lower chance of having cancer in the first place, there are fewer breast cancer deaths to prevent, though the percentages may be higher.
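A simple application of Bayes' rule shows why, in a low-prevalence group, most abnormal results are not cancer. The 70% detection rate for women under 50 is from the text; the prevalence and false-positive rate below are illustrative assumptions of mine, not figures from the sources:

```python
# Positive predictive value: of the women with an abnormal result,
# what fraction actually have cancer?
def ppv(prevalence, sensitivity, false_positive_rate):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Assume 0.3% of screened women actually have cancer and a 4%
# false-positive rate per screen (both illustrative).
print(f"Share of abnormal results that are cancer: {ppv(0.003, 0.70, 0.04):.1%}")
```

Under these assumptions only about one abnormal result in twenty reflects real cancer, which is the arithmetic behind the anxiety-without-disease pattern the studies describe.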

Another concern about mammograms revolves around the claim that the radiation exposure to the breast tissue during the process may actually increase the chances of cancer. One source refuted the idea, saying that the exposure is comparable to a dental x-ray, with the possibility of causing the death of 1 in 10,000 women, under the condition of one mammogram per year for ten years. (1) In contrast to this finding, other sources claim that the chance is far greater, a 1 in 2,700 chance, accumulating with every exposure. (3) The details of this claim and study were not mentioned, however.

And the most obvious issue associated with mammogram testing is its helpfulness in the first place. The idea arises that the mammogram is only a time-consuming, expensive insurance policy no more effective than breast self-examination. A 1992 Canadian study of 25,000 women, split evenly between routine screeners and non-screeners, found that both groups had the same rate of breast cancer deaths. (3) The source also went on to claim that:

"Seven other randomized studies have also reported no statistically significant reduction in the death rates of women who underwent routine screening mammography." (3)

In contrast to that finding, the Lancet, a highly esteemed medical journal, published results that clashed with that particular study. In a study of 54,000 women over a 14-year medical history (half of whom were regular screeners while the other half relied only on medical check-ups), a 21% lower death rate from breast cancer was found in the group that used screening. (4) Said Dr. Freda Alexander of the University of Edinburgh in Scotland, who conducted the study, "The results for younger women suggest benefit from introduction of screening before 50 years of age." (4) And in a comparable study involving 100,000 women, the death rate was about 27% lower among those who were regular screeners. (4)

Nonetheless, organizations remain strongly divided on the topic. Those that officially recommend routine mammograms for women under 50 include the American Cancer Society, the American College of Obstetrics and Gynecology, the American College of Radiology and the National Cancer Institute. (2) Those that do not include the American College of Physicians, the International Agency for Research on Cancer, the American Academy of Family Practice and the Canadian Task Force on the Periodic Health Exam.

All in all, the data seem generally to support receiving the routine exams. While nearly every organization approves their use, mammograms become controversial only when age is factored into the picture. According to the 1999 UK Trial of Early Detection of Breast Cancer, researchers said, "The analysis of results by age at entry continues to suggest that screening of women aged 45-49 years is at least as effective as is that for women over 50 years." (4) Defining the controversy by age seems, in retrospect, mildly irrelevant. With the support of data from the numerous sources above, it is neither dangerous nor impractical to undergo mammograms before age 50. More importantly, the decision should be made on a personal level. For those at greater risk (due to genetics, history, or other factors), the decision seems an obvious one, in accordance with most of the studies reviewed here. As with any decision about a person's body, it should remain strictly personal, but it is always important to make use of responsible medical technologies and resources.


References


(1)National Cancer Institute

(2)Potential Benefits and Risks of Mammograms

(3)Should you get a mammogram?

(4)Mammograms before age 50


Religion vs. Science
Name: Laura Silv
Date: 2002-09-30 16:20:24
Link to this Comment: 3037


<mytitle>

Biology 103
2002 First Paper
On Serendip

I grew up with the impression that science and religion were incompatible. Maybe it was because I went to Catholic school, and my religion teacher thought I was trying to be sarcastic when I asked things like, "If the pope is infallible, why did he say that Galileo was wrong about the sun being the center of the universe?". When she answered, "Because the pope didn't know any better", I said, "Isn't he supposed to know better if he's the pope?", and the teacher told me to stop asking dumb questions and said we'd get into it later (which of course we never did). So out of fear of flunking fifth grade religion AND science, I adopted the policy that what was taught in Science class applied only to science, and ditto for Religion.

Nine years later, I realize that maybe my questions weren't so dumb. Some people spend their lives trying to bring out the similarities between religion and science, while others spend their lives trying to tear the two apart. For my paper, I wanted to explore possible reasons why these two opposing sides have never been able to find common ground enough to unite upon (fade in War: Why Can't We Be Friends?).


One reason religion is unwilling to familiarize itself with science is that science offers simple, valid, irrefutable and, above all, logical explanations for some of the "miracles" described in holy books. The Nile, for example, is known to turn red when it is overgrown with bacteria. Sorry, Moses. Carbon dating of fossils tells us that there was life on this planet long before the estimated time of the creation of Adam and Eve. Sorry, God. You can see where religious leaders might get a little worried that their congregations would begin to fall away from belief in an invisible man in the sky who makes miracles happen, if too many explanations appealing to their more rational way of thinking were to come up.


There are those, of course, who would argue that the Torah and the Bible are not meant to be taken literally but figuratively; that Adam and Eve are representative of all men and women, and that the Creation in seven "days" is meant as a figurative term for a longer amount of time (substitute the word "eon" for "day" in the Creation story and you'll get what I mean). That's nice and all, but it raises the question: where does the line between figurative and literal translations fall? Take, for example, the story of Esther, which, unlike some other stories in the Bible, is very specific when it comes to times, dates, names and places - not only that, but the story is historically supported as it is written. Should we apply the figurative translation to something which is so obviously meant literally? Of course not. So when does the figurative translation end and the literal begin? This is one question to which scientists and theologians still have not been able to come up with a satisfactory answer.


Another difference which I have found between science and religion is the definition of "truth". To the scientist, who is more skeptical, truth is ever-changing - the more one sees of the world, the more observations one makes, the closer one comes to the truth. In layman's terms, the truth is out there. It is a goal which may never be attained, but that certainly won't stop the scientist from coming as close as she can. The scientist does not define "truth" by what it is, but rather by taking away the attributes which truth is not. In this manner, the definition of truth is always changing and never finalized. The theologian, on the other hand, defines truth as that which is printed in the Holy Texts, that which comes from the mouth of God Himself (although personally I believe that if there IS a god, she would have to be a woman, but that's another paper topic). Truth is absolute, definitive, unchanging and final. You can see the truth, touch it, feel it.


Although there are undeniably many differences between the issues encompassed by science and religion, few people ever take the time to realize how similar in nature the two really are. Think about it - both science and religion have their own set of books from whence all their information is drawn, instructors (if the professor will forgive me for comparing him to a pastor), philosophies of life and death, instructions and jargon. It's actually a little creepy to think of how similar these two spheres really are, for science is a religion in and of itself, and religion is a type of science. Both are learned practices; no one is born with an instinctive knowledge of the divine just as no one is born with an automatic knowledge of biochemistry. Perhaps the reason why these two fields can never seem to quite get along is because they are too similar in their nature while being dissimilar in their specific outlooks.


Science and religion are related to each other in ways both strange and familiar - for example, we can imagine that there are people raised in religious backgrounds who find science to be more practical and logical than the Invisible Man in the Sky, but what most people don't realize is that a majority of scientists are religious, not atheists. My former employer was a chemist, and I remember he said once that he and most of the people he worked with found that their faith in religion is strengthened by their work rather than diminished by it, for the detail and intricate design which is found in science and nature led them to believe that there has to be some divine power which holds the world together in the delicate balance in which it exists (Dr. Don Jones, San Bernardino, California).

Although this paper is only a small portion of the massive study which ensues on the comparison between religion and science, I hope that I have put a new spin on the comparison, for I would hate to have written anything too hackneyed and be considered unoriginal. I hope perhaps to continue the comparison in a later paper.


Tay-Sachs Disease: The Absence of Hope
Name: Lauren Fri
Date: 2002-09-30 17:04:52
Link to this Comment: 3040

<mytitle>

Biology 103
2002 First Paper
On Serendip


Introduction.

When a couple has a baby, they pray that they will have an easy childbirth and a healthy newborn. However, an easy delivery and a healthy-seeming baby does not guarantee a problem-free childhood. Children born with Tay-Sachs Disease (TSD), a fatal genetic disorder, do not show symptoms until they are six months old, but almost never survive past the age of five.

Tay-Sachs Disease was named for Warren Tay and Bernard Sachs, two doctors working independently. In 1881, Dr. Tay, an ophthalmologist, described a patient with a cherry red spot on the back of his eye; the presence of this red spot has become a clear signal for the diagnosis of TSD. Several years later, Dr. Sachs, a New York neurologist, described the cellular changes caused by TSD, observed the hereditary nature of the disease, and noted its predominance among Jews of Eastern European descent (1).

A rarer form of the disease known as Late-Onset Tay-Sachs exists, but this paper will focus on classic infantile TSD and explore its scientific and social implications.


Definition and symptoms.

TSD is defined as a genetic disorder that causes the progressive destruction of the central nervous system (2). TSD occurs in babies who inherit two copies of the mutated Tay-Sachs gene, located on chromosome 15 (1). All affected babies exhibit a red spot in the back of their eyes. TSD is caused by the absence of hexosaminidase A (Hex-A), an enzyme necessary for breaking down acidic fatty materials known as gangliosides. In an unaffected child, gangliosides are made and quickly biodegraded as the brain develops. When a child is afflicted with TSD, ganglioside GM2 accumulates in the brain, distending cerebral nerve cells and forcing physical and mental deterioration (3).

Once the symptoms begin, they grow progressively worse. First, normal development slows, stops, and eventually reverses. Often the baby loses newly-acquired skills such as the ability to crawl, roll over, and interact with its environment. Second, the baby loses peripheral vision and exhibits an "abnormal startle response." Third, general mental function becomes clearly debilitated, and the baby experiences recurrent seizures. Often, children lose coordination, ability to swallow, and respiratory ease. Ultimately, the child becomes blind, deaf, paralyzed, mentally retarded, and completely unable to interact with or respond to his/her environment (1).


Risk factor.

Tay-Sachs Disease is considered extremely rare among the general population. Occurrences of TSD are not limited to, but definitely concentrated in, certain ethnic sub-populations. TSD is often considered a "Jewish disease," but French-Canadians who live near the St. Lawrence River and the Cajun population of Louisiana are also at high-risk. Still, most research on the groups affected most by TSD focuses on American Ashkenazi Jews. The frequency of TSD within the Jewish population is attributed to the "founder effect" in which "genetic disorders and mutations within a closely knit minority group are perpetuated over generations" (4).

The statistics on the frequency of TSD among Jews are startling. TSD potentially affects one in every 2,500 Ashkenazi Jewish newborns (5). Ashkenazi Jews are one hundred times more likely to have an affected child. Only about one in three hundred people in the general population (non-Jews and Sephardic Jews) are carriers of the TSD gene, compared to approximately one in thirty Ashkenazi Jews (6). Most carriers are completely unaware of their status, since they are perfectly healthy. The gene for Tay-Sachs can be passed down through many generations before anyone in the family line gives birth to a TSD-afflicted baby. If both parents are carriers, the baby has a fifty percent chance of being a carrier, and only a twenty-five percent chance of being born with TSD. There is a twenty-five percent chance that the child of two carriers will be completely unaffected. If only one parent is a carrier, their child has a fifty percent chance of being a carrier and a fifty percent chance of being completely unaffected. A baby can only be born with TSD if both parents are carriers (7). Because of this recessive pattern of inheritance, TSD is classified as autosomal recessive (8).
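The carrier and affected-child percentages above follow directly from a Punnett-square enumeration, and it can be reassuring to check them mechanically. The short Python sketch below is purely illustrative (the function name and the N/n genotype notation are my own, following the convention used elsewhere on this forum, where N is the normal allele and n the mutant one):

```python
from itertools import product
from collections import Counter

def offspring_genotypes(parent1, parent2):
    """Enumerate a Punnett square: each parent contributes one allele,
    and every pairing of alleles is equally likely."""
    crosses = [''.join(sorted(pair)) for pair in product(parent1, parent2)]
    counts = Counter(crosses)
    return {g: n / len(crosses) for g, n in counts.items()}

# Two carriers (Nn x Nn): 25% unaffected, 50% carrier, 25% affected.
print(offspring_genotypes("Nn", "Nn"))
# -> {'NN': 0.25, 'Nn': 0.5, 'nn': 0.25}

# One carrier, one non-carrier (Nn x NN): no affected children possible.
print(offspring_genotypes("Nn", "NN"))
# -> {'NN': 0.5, 'Nn': 0.5}
```

The second cross confirms the paper's point that a baby can only be born with TSD (genotype nn) when both parents are carriers.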


Prevention and detection.

While there is no cure or proven treatment for TSD, it is a "preventable tragedy" (9). First, it is now recommended that couples within high-risk populations get tested to see whether or not they are carriers. This involves only a simple blood test, whose results can be interpreted using either an enzyme assay (which checks the amount of Hex-A in the bloodstream) or DNA analysis (which checks for one of fifty known mutations in the Hex-A gene) (9). Once carrier status is determined, a couple may decide to pursue parenthood at their own discretion, bearing in mind that even when both parents are carriers, their child still has a seventy-five percent chance of being perfectly healthy.

Once the mother is pregnant, she has two options for prenatal diagnosis. The first, amniocentesis, involves removing a small amount of amniotic fluid during the sixteenth week of pregnancy (10). If there is an absence of Hex-A in the fluid, the fetus is affected by TSD, and the couple can choose to have a therapeutic abortion. Another, newer option is chorionic villus sampling (CVS), which is performed during the tenth week of pregnancy (11). This procedure involves removing a small amount of placenta, and test results are returned more quickly than with amniocentesis. Also, should the couple choose to have an abortion, CVS allows them more privacy and safer pregnancy termination (9). Genetic counseling is widely recommended to all couples who are members of high-risk populations, especially those who are determined carriers.


Social and sociological implications.

It is important to analyze the effects of Tay-Sachs Disease from a broader cultural perspective. Because TSD occurs mainly in clearly-defined populations, and also because of the profound moral issues raised by genetic screening, screening-based abortions, and alleged eugenics, a purely scientific study of the disease would be lacking.

A group called Dor Yeshorim (Hebrew for "the generation of the righteous") provides an illuminating example of how the effects of TSD raise moral, social, and even theological issues. Dor Yeshorim was founded in 1983 by groups of Orthodox Jews (led by Rabbi Joseph Eckstein, father of four children born with TSD) in New York and Israel. Rabbi Eckstein intended to do everything within his power to eliminate Tay-Sachs disease from the Jewish population. Through programs implemented by Dor Yeshorim, Orthodox Jewish high schoolers are tested to determine whether or not they are carriers. Rather than receiving the results directly (in an effort to curb stigmatization), they receive a six-digit identification number. Then, when two youths are considering marriage or even dating, they are encouraged to call a hotline. They enter their six-digit numbers, and the service deems them "compatible" or "incompatible" (if both are carriers). Eight thousand people were tested in 1993, and eighty-seven couples who were previously considering marriage decided against it based on their genetic incompatibility (12). A few years later, Dor Yeshorim expanded its testing services to Yeshiva University, sparking controversy. Originally, Dor Yeshorim was aimed at the Chasidic population, who still arrange marriages; however, arranged marriages are extremely rare at Yeshiva University, and thus the appropriateness of the testing was questioned there (13). Whether or not Dor Yeshorim is morally sound, its tactics have been effective; in 1995, the group released a statement declaring: "Today, with continual testing, new cases of Tay-Sachs have been virtually eliminated from our community" (14).


Conclusion.

While Dor Yeshorim's position is a radical one, drastic steps must be taken to put an end to the devastation suffered by families who must cope with the hopeless misery of Tay-Sachs. There is still no cure, and no effective method of treatment. Research is being conducted that utilizes gene therapy to try to repair the mutated Tay-Sachs gene, but attempts have been largely unsuccessful (15). For now, carrier screening and prenatal testing are encouraged for all couples who may be at risk. TSD can also occur in babies who are not born to couples in high-risk populations, so testing and education must be expanded to put an end to Tay-Sachs disease for good.



References


1) Tay-Sachs Disease. A question-answer overview of TSD, with clear explanations of basic questions.

2) Tay-Sachs Disease (Classic Infantile Form). A comprehensive guide to TSD including who is at risk and how it is transmitted. Provided by The National Tay-Sachs and Allied Diseases Association.

3) NINDS Tay-Sachs Disease Information Page. A very cursory explanation of TSD (provided by The National Institute of Neurological Disorders and Stroke) along with a selective listing of relevent organizations.

4) Erasing Tay-Sachs Disease. A web paper by a Dartmouth student who discusses the history and controversy surrounding efforts to eliminate TSD.

5) Tay-Sachs Disease Fact Sheet. A fact sheet on infantile TSD as well as late-onset TSD, with a distinctly Jewish-centric perspective. (Provided by The National Foundation for Jewish Genetic Diseases.)

6) Montreal Tay-Sachs Disease Screening Program. A question-answer explanation of the disease, directed at couples thinking of having a baby.

7) Genetic Fact Sheet 20: Tay-Sachs Disease. A focus on the genetics behind TSD.

8) Modes of Inheritance. An explanation of how recessive conditions are inherited.

9) UMMC - Tay-Sachs Disease. An excellent question-answer exploration of TSD that is scientific without being esoteric. (From the University of Michigan Health System.)

10) Amniocentesis. An explanation of the amniocentesis procedure.

11) Chorionic Villus Sampling. An explanation of the procedure referred to as CVS.

12) Eugenics: Discussion Questions. A discussion of the ethics (or lack thereof) in Dor Yeshorim.

13) Genetic Screening Causes Controversy. A student piece from the Yeshiva University campus paper detailing the campus controversy over Dor Yeshorim.

14) Why Eugenics Is Here To Stay. An article about trends in eugenics with mention (and criticism) of Dor Yeshorim.

15) Tay-Sachs Disease: Public Health Information Sheet. A question-answer explanation of TSD, with a focus on detection, prevention, and research efforts. (Sponsored by The March of Dimes.)

16) Tay-Sachs Disease Hub. While this link is not directly cited in the above paper, it provides a comprehensive listing of Tay-Sachs web resources, organized by category.


Chemical Castration: The Benefits and Disadvantage
Name: Katherine
Date: 2002-09-30 17:05:50
Link to this Comment: 3041


<mytitle>

Biology 103
2002 First Paper
On Serendip

Child molestation is a serious problem in the United States. The legal system is lenient with pedophiles, punishing them with insufficient prison sentences that are further abbreviated by the option of parole. Some child molesters are released back into society after serving as little as one fourth of their prison time (1). Recidivism is extremely high among child molesters; 75% are convicted more than once for sexually abusing young people (6). Pedophiles commit sexual assault for a variety of reasons. Some rape children because of similar instances of abuse in their own childhoods (1). Some view the act of molestation as a way to gain power over another individual (1). Some pedophiles act purely on sexual desires. No matter what causes these heinous criminals to molest children, their crimes are inexcusable. Unfortunately, using prison as the punishment for child molestation creates only a Band-Aid solution to the issue of sexual assault, and other resolutions need to be investigated.

Alternative options for the punishment of male pedophiles are currently being explored. Scientists have observed the link between testosterone and aggression and concluded that high levels of testosterone correspond with increased violent and aggressive behavior in men (5). "It is the reason that stallions are high strung and impossible to train, the reason male dogs become vicious and start to bite people. It's why boys take chances and chase girls, why they drive too fast and deliberately start fights. In violent criminals, these tendencies are exaggerated and carried to extremes" (8). In an effort to stop male pedophiles, male child molesters in some states have the option of being chemically castrated. "Chemical castration is a term used to describe treatment with a drug called Depo-Provera that, when given to men, acts on the brain to inhibit hormones that stimulate the testicles to produce testosterone" (2). Depo-Provera is a common birth control drug containing a synthetic version of the female hormone progesterone. Advocates of chemical castration hope that injections of Depo-Provera will prevent men from molesting children.

However, some experts argue that Depo-Provera is ineffective and will not prevent molestation. Forced castration may have the adverse effect of angering a criminal, increasing his violent tendencies and leading to additional sexual abuse (2). Additionally, Depo-Provera is reversible; unless injections are mandatory and monitored, pedophiles will not be "cured" by the drug therapy. The child molester will have renewed sexual fantasies and high levels of testosterone if the injections are discontinued (7). Joseph Frank Smith, a convicted child molester, became an advocate for chemical castration after undergoing the therapy in the 1980s. Smith stopped using the injections in 1989. In 1999, he was convicted of molesting a five-year-old girl and immediately returned to prison (3). Depo-Provera has also caused side effects in some men, "including depression, fatigue, diabetes, [and] blood clots" (2). Chemical castration may thus cause some detrimental effects in child molesters.

Regardless, Depo-Provera has been shown to inhibit the ability of pedophiles to assault children. The progesterone in Depo-Provera counteracts the biological tendencies that lead men to rape children (4). By lowering testosterone, Depo-Provera reduces sex drive (6). Males remain capable of sexual intercourse (7) but lose the desire for it. Depo-Provera also decreases aggressive tendencies by reducing testosterone. "[T]he castrated criminal would be more docile and have a better opportunity to be rehabilitated, educated, and to become a worthwhile citizen" (1). Castration removes the biological and chemical tendencies that are intrinsically linked to the desire to rape in males.

Depo-Provera also reduces recidivism rates. When used as a mandatory condition of parole (6), chemical castration decreases the occurrence of repeat offenses from 75% (6) to 2% (1). Prison is less desirable because it serves no rehabilitative purpose for sexual offenders. Pedophiles who spend time festering in a prison cell are given extensive downtime to concoct new sordid sexual fantasies involving children. These horrific visions are translated into terrifying realities once the criminal comes back into contact with children following his inevitable release from prison (1). Prison simply produces sneakier criminals. Pedophiles do not want to be incarcerated again so they think of new ways to rape children that will avoid detection and future detention (6). Prison increases aggressive tendencies in male pedophiles while chemical castration addresses the root causes of sexual assault and decreases further sexual deviance.

Although chemical castration is not the perfect solution to inhibit child molestation, it discourages sexual assault better than incarceration. Injections of Depo-Provera decrease the aggressive tendencies that lead to rape in males. Castration also discourages sexual fantasies and eradicates sexual obsessions. Pedophiles are reduced to apathetic pacifists. Regulated chemical castration should be encouraged as an alternative to prison for male child molesters in order to stop recidivism and decrease instances of sexual assault.

References

1)Castration Works, an article by Susan Feinstein for 212.net regarding the implications of chemical castration on pedophiles.

2)Chemical Castration Law May Backfire, Experts Warn, an article off the ACLU Newswire from September 18, 1996.

3)Convict Who Had Chemical Castration Gets 40 Years For New Sexual Attack, the Roswell Daily Record Online, February 4, 1999.

4)Is Chemical Castration an Acceptable Punishment For Male Sex Offenders, by LaLaurine Hayes for the online database "Sex Crimes, Punishment and Therapy" constructed by students in a Psychology course at California State University Northridge.

5)High Testosterone Levels Linked to Crimes of Sex, Violence, Volume 1 No. 3, 1995, pg. 2.

6)Repeat Sexual Offenders Must Face Chemical Castration, an article prepared by Crystal Hutchinson, a student at Monroe Community College in New York State.

7)Chemical Castration: A Strange Cure for Rape, from the Kudzu Monthly, an e-zine popular among the Southern States.

8)Dr. Robert Girard, in a scientific study on factors that contribute to criminal conduct, in an article by Susan Feinstein chronicling the effects of chemical castration as posted on 212.net.


Bipolar Disorder and the Connection to Dyslexia
Name: Meredith S
Date: 2002-09-30 17:17:07
Link to this Comment: 3042


<mytitle>

Biology 103
2002 First Paper
On Serendip

Dyslexia affects more and more children every year, and although most educators would agree that dyslexics are "not people who see backwards" (1), there is still no solid theory on why dyslexics cannot differentiate between the sounds "or" and "ro." At the same time, bipolar disorder is becoming better understood by scientists as more and more people, especially children, are diagnosed with it every year. These two seemingly different disorders, both lacking a cure, are often found within the same children, and yet no substantial research has been conducted on the combination, nor have educators been taught how to teach someone who is both dyslexic and bipolar.

The most universal diagnosis of bipolar disorder consists of massive mood swings, ranging from mania to severe depression, sometimes all within a few hours. Manic episodes often consist of long periods in which sufferers may feel elevated, think that they are invincible, have trouble focusing on one topic, and need little to no sleep. Depressive episodes can also be long, but instead of riling sufferers up, they bring a state of hopelessness, increased apathy, decreased appetite, and "a drop in grades, or inability to concentrate (2)."

Bipolar disorder, especially when found in children and adolescents, is not just a phase that they can hope to outgrow. It is a biological phenomenon in which the brain overworks in some areas to compensate for others not working hard enough. Neurons send signals to one another using chemical messengers called neurotransmitters, such as serotonin and dopamine, also known as monoamines (2). In bipolar patients, "40 percent have a loss of the serotonin 1a receptor, which may contribute to the atrophy of neurons, and may set off depression (3)." This loss of neurons affects the other parts of the brain that control understanding of rewards, possible dangers, and emotions. With fewer messenger cells, these areas of the brain are less connected to each other, meaning that sufferers are less able to regulate themselves than a person without bipolar disorder because their brains are "wired differently (4)."

Bipolar disorder used to be thought of as an adults-only disorder. While only 1-2 percent of the adult population suffers from this disorder, it is now thought that up to one third of the 3.4 million American children with symptoms of depression may actually be exhibiting symptoms of bipolar disorder (3). Bipolar disorder by itself would seem bad enough, but "it is suspected that a significant number of children diagnosed in the United States with attention-deficit disorder with hyperactivity (ADHD) have early onset bipolar disorder instead of, or along with, ADHD (2)."

ADHD is, like most disorders, not completely understood; although the symptoms are now recognized, the causes are still in dispute. Sufferers are often hyperactive, unruly, and unable to concentrate. Medications such as Ritalin are often used to combat the hyperactive symptoms and help sufferers return to a normal life. While the symptoms are often treated, the cause is not understood nearly as well. ADHD is thought to be the result of imbalances within and outside of the body. Congenital and biochemical factors are thought to be the main causes, although more and more research suggests that stress can also be a significant contributor to the outbreak of symptoms. Approximately 50 percent of children and adults suffering from ADHD also suffer from other learning disorders, the main one being dyslexia (5).

At one time, a dyslexic was dismissed as being lazy or even dumb. Dyslexia affects one in 20 people (6), and is now known as one of the most common learning disorders. Thought to be congenital, it leaves sufferers with problems "translating language to thought or thought to language (1)," meaning that they often have trouble reading or remembering how to spell a word, no matter how often they see it. One theory as to why dyslexics cannot make the connection between written and spoken language is that they have smaller magnocellular pathways, the routes along which magnocells (nerve cells found between the retina and the point where the right and left images are combined to form one image) carry the image to the brain. Since these pathways are smaller, some of the information may be "lost (7)."

But dyslexia is not purely a visual problem, and the riddle has yet to be solved. One thing researchers do know is that stress, anxiety, and other factors can increase the impairment caused by this disorder (5). Men, women, and children cannot have their dyslexia cured, but they can learn to live with it. Unfortunately, dyslexia does not display many physical symptoms, so diagnosing the disorder can be difficult unless one knows quite a bit about it. As a result, many children are not diagnosed until later, and still have to deal with the stress of being thought an underachiever, or of having a lower IQ, by fellow students and even teachers. When dyslexia is combined with ADHD or bipolar disorder, the levels of stress and anxiety on an individual skyrocket, making all of the conditions worse and forcing the student to have an even harder time with school and everyday life.

The possible link between dyslexia and bipolar disorder has not been nearly as extensively investigated as the two disorders in isolation. But a link may exist. If a majority of cases of bipolar disorder also involve ADHD, and about half of all ADHD cases involve dyslexia and/or another learning disability, then a direct link between bipolar disorder and dyslexia may exist. At the moment, though, there is no ongoing research directly connecting the two.

Teachers are responsible for educating all children, including those with learning disabilities. To date, a multisensory teaching approach known as the Orton-Gillingham approach (8) has been proven to help dyslexics learn and master previously impossible tasks. But the Orton-Gillingham method can only help those with dyslexia alone, and does not consider other disorders that may be at play within the same student. Although the multisensory approach can be easily adapted for each dyslexic, seeing as each case of dyslexia varies in severity, it has yet to be modified to compensate for ADHD or bipolar disorder. Until the day when scientists and teachers formulate a connection and a method of teaching that deals directly with a combination of learning disorders, sufferers are going to continue to struggle, in turn making their disorders even worse and giving them more power over their lives.


References

1) www.childpsychology.com - a database for articles and links for psychological disorders

2) www.bpkids.com

3) Kluger, Jeffrey, "Young and Bipolar," Time Magazine 160 (August 19, 2002): 38-51

4) www.jama.ama-assn.org


5) www.delawareonline.com

6) www.news.bbc.co.uk/1/hi/health/343139

7) www.exn.ca/Stories/1999/04/21/53.asp

8) www.ortonacademy.org


Sum yourself up in a single cell
Name: Sarah Tan
Date: 2002-09-30 17:36:19
Link to this Comment: 3043


Biology 103
2002 First Paper
On Serendip

Sometime during high school, a friend of mine was watching me pack a travel bag, and based on the stream-of-consciousness thoughts I voiced and my overall level of high stress, she likened me to a paramecium. The comparison was based mainly on our hazy recollections from freshman-year biology, which hadn't covered much about paramecia. We imagined paramecia swimming around at random, bumping into things, getting disoriented, and waving their cilia in a frenzy. For some time now, I've wanted to find out how accurate this image was, and therefore how accurate the comparison was, but I never had a pressing reason to look these things up. For this purpose, I was more interested in the behaviors of paramecia than in their physical structure, though it turns out that the latter cannot be fully disregarded in trying to understand the former.

One of the first things I found, which had never occurred to me, is that paramecia are three-dimensional. All the pictures I'd seen of paramecia presented a view that looked two-dimensional, as if, were they to rotate, they'd become a line of cilia. I realized this couldn't reasonably be the case, because it would be impossible for them always to be right side up under the microscope. They travel by rotating lengthwise on an invisible axis and moving forward or backward depending on the way the cilia beat (4). Although it seems like an unproductive use of energy, this method in fact allows paramecia to move in straight lines despite being asymmetrical creatures. Another interesting characteristic of paramecium movement that came up in more than one source is their reaction to encountering a block in their path. When that happens, paramecia back up a bit at an angle, turn slightly, and try again, repeating this trial-and-error process until it succeeds (2). The paramecium's complexity extends beyond this navigation of obstructions to recognizing dangerous situations and apparently learning from past experiences, all done by a single cell with no nervous system, neurons, or synapses (7).

The common misconception that paramecia and other single-celled organisms are simpler or more primitive than multi-cellular organisms is completely untrue. Paramecia, as ciliates, are larger than the average unicellular eukaryote, and they are arguably the most complex unicellular organisms. Because single-celled organisms must perform all functions with a single cell, without the advantage multi-cellular organisms have of specialized cells for specific functions, their structure may be much more complex than the cells of larger organisms (1). Such functions include movement, water balance, food capture, sensitivity to the environment, and possibly self-defense. Defense, however, is one function about which there seems to be some uncertainty. When disturbed, paramecia rapidly shoot out trichocysts: short, thread-like structures (5). While most sources agree that these are used for defense, at least one argues that the trichocysts of paramecia are seldom successful in warding off predators (3). Another source seems to describe trichocysts but calls them extrusomes (1), and the difference in what should be standard terminology is confusing.

Although paramecia are almost always described as cigar or slipper-shaped, they are not stuck in that form. They are able to change shape to squeeze through narrow passages and are therefore more versatile than I had previously realized (10). The reason they maintain their usual oval form, though, is their exterior membrane, called a pellicle, which is elastic enough for small changes but stiff enough to protect them (4). Understanding just how small they are was also enlightening, particularly when a video said that several hundred thousand paramecia could live in a single dewdrop (10).

So what about the comparison this all started with? Up to this point, the paramecium seems rather sedate, which is not what I had hoped for. Nevertheless, I continued searching and turned up some video clips of paramecia in action under the microscope. Here I could watch the speed at which they swam around, and I finally found evidence for our original assumptions. Given the 10,000-14,000 cilia on each cell's surface (3), watching the speed at which they can zoom around is just fun (6), but zooming in on the cilia themselves is fascinating (9). I think that in real time, the paramecium does fit the qualities my friend had in mind when she compared me to one, because it has to do independently, with one cell, what most other organisms do with many more. Additionally, the new information I learned in researching paramecia for this paper also suits my personality surprisingly well: dubious defense mechanisms, insisting on trying to run through brick walls, and using roundabout methods for everyday tasks that work well enough for it, but perhaps would not for many other similar organisms.

Even if this is not a topic of great interest to the general population, or even to anyone besides my friend and me who share this joke, I think that researching paramecia has been useful in helping me get it "less wrong." Neither of us had biology in mind when we started this, but double-checking the biological facts ensures that we are using the metaphor correctly, and we can now explain the similarities better than we could before. One problem this method of research reveals, however, is the ease with which information and data can be adopted and manipulated by someone who has a specific conclusion to prove. There are all kinds of ways to spin the supposed facts so that they show a predetermined result, and if the listener is not aware of the way in which the research was obtained, it is all too easy to be misled.

References

(1) Introduction to the ciliata

(2) Paramecium - www.101science.com

(3) Paramecium - NYU

(4) Paramecium

(5) Protist Images: Paramecium Caudatum

(6) Molecular Expressions Digital Media Gallery: Pond Life - Paramecium (Protozoa)

(7) How does a paramecium move and process information?

(8) Paramecium by phase contrast

(9) Micscape video gallery

(10) National Geographic video


Raising Children Vegan
Name: Chelsea W.
Date: 2002-09-30 17:40:55
Link to this Comment: 3044


Biology 103
2002 First Paper
On Serendip


In recent years, the prevalence of vegetarian and even vegan diets has increased substantially, with a 1997 Roper Poll estimating the number of vegans in the United States at between one-half million and two million (though it is worth noting that accurate statistics on the subject are difficult to gather) (7). And many people are choosing to raise their children on these diets; I'm one of those children, having been raised vegetarian.

"Vegetarian" is a broad term referring to diets without meat. Often, it refers to "lacto-ovo vegetarians": people who do not eat meat, but do eat both eggs and dairy. "Vegan" refers to those who avoid a wider range of animal products, generally considered to include meat, eggs, dairy products, and sometimes honey (5) (though some people, myself included, may adopt the label "vegan" and still eat honey). It is also sometimes used to refer more generally to a lifestyle aimed at reducing animal suffering, including, for example, not purchasing leather products (5).

People make the choice to become vegan for a variety of reasons, commonly involving, though often not limited to, a concern for animal rights. Other reasons may relate to health, spirituality, ecology, or any number of other issues (5). And similar rationales would likely apply for people who wish to raise their children on such a diet. Part of my interest in the subject stems from the likelihood that I will eventually decide to raise my own children vegan.

Even vegan advocacy groups, such as Vegan Outreach, are generally quick to acknowledge that merely removing certain foods from one's diet, without otherwise seeking to balance it, is unlikely to be healthy (1). Similarly, the health value of removing wheat, for example, from one's diet would be questionable if it were not replaced with other grains as a staple (there are, in fact, plenty of other, less popular grains teeming with nutrition). Fortunately, there is a plethora of information available on how best to meet nutritional needs on a vegan diet. Here I'll specifically address some of the issues pertaining to young vegans, who, like all children, have nutritional needs related to, but distinct from, those of adults. For more general information on vegan nutrition or veganism in general, Vegan Outreach (1) is a good place to start.

Nutrition
Veganism is recognized to have certain nutritional advantages, along with some areas of potential deficiency to watch. The American Dietetic Association (ADA) gives as its position on vegetarianism at large "that appropriately planned vegetarian diets are healthful, are nutritionally adequate, and provide health benefits in the prevention and treatment of certain diseases" (8). And child-rearing "expert" Dr. Spock even ultimately endorsed vegan diets for children (3).

Breast-feeding is recommended for vegan infants as for others, but the mother should be careful to maintain sufficient nutrients in her own diet and thus in her breast milk (6). Vitamin B-12 and iron are noted as nutrients to particularly watch in this regard, and moderate exposure to sunlight should be allowed for in order to maintain vitamin D levels (6).

B-12 and iron continue to be nutrients to watch throughout development. It is important to ensure an adequate supply of vitamin B-12 in children's diets (even more so than in adults', especially as adults who were raised eating meat may have stored B-12 in their bodies), and this is often done via supplements (2). Additionally, sufficient sources of protein must be present in the diet. And young children, more notably than adults or teenagers, should have substantial fat intake (calorie intake should not be restricted for children before at least age 2) because of the swift growth normal at that age (6). Calcium is also an important nutrient, especially during the teen years. Although it is often associated with dairy, calcium can be obtained from several other sources, including leafy green vegetables (such as kale) and fortified soy milk (6).

Although I did not encounter any studies about advantages of the vegan diet specific to children, there is a great deal of evidence of the overall health advantages of diets low in animal products. "Vegetarian diets," for example, "are associated with a reduced risk for obesity, coronary artery disease, hypertension, diabetes mellitus, colorectal cancer, lung cancer, and kidney disease" (1).

On a Practical Note
The potential "inconvenience" of a vegan diet is often brought up as a stumbling block. Yet as veganism becomes more commonplace, so does the availability of vegan food. Given the typical contents of my cabinets at home, in fact, I would likely find it extraordinarily inconvenient to attempt to plan a week's worth of meals containing meat (my poor cooking skills left aside), and most restaurants (fast food typically excluded) are willing to alter menu items to suit vegans if there aren't options already on the menu. Certainly, though, being vegan in the context of an outside world that is not does present certain frustrations, especially when traveling to places where vegetarianism and veganism are less popular.

On Paying Attention to Nutrition
The above checklist of nutritional "do's and don'ts" raises the larger question of just how attentive parents might be expected (or might endeavor) to be to the nutrition of their children, whether vegan or not. On this issue, Reed Mangels, Ph.D., R.D., makes an excellent point. "Of course it takes time and thought to feed vegan children," she writes. "Shouldn't feeding of any child require time and thought?" (6)


References

1) Vegan Outreach, a portion of the website of an organization called Vegan Outreach

2) An article on vegan children in the June 2001 issue of the Journal of the American Dietetic Association

3) An article on Dr. Spock's endorsement of a vegan diet for children in the April 22, 2001 edition of the Knight Ridder/Tribune Business News

4) Considerations in planning vegan diets: Children, an article in the June 2001 issue of the Journal of the American Dietetic Association

5) A useful veganism FAQ

6) Wasserman, Debra. Simply Vegan. Baltimore: Vegetarian Resource Group, 1999. (Note: The nutrition section of this book is written by Reed Mangels, Ph.D., R.D. The most pertinent section can be found online at http://www.vrg.org/nutshell/kids.htm)

7) "Why Vegan?" Pittsburgh: Vegan Outreach, 1999. (Note: Substantial portions of this pamphlet are available online at http://www.veganoutreach.org/whyvegan/)

8) The American Dietetic Association (ADA) stating its position on vegetarian diets.


Sustainability in Action
Name: Carrie Gri
Date: 2002-09-30 19:02:50
Link to this Comment: 3047


Biology 103
2002 First Paper
On Serendip

Carrie Griffin
Biology 103
Prof. Paul Grobstein
September 30th, 2002

Sustainability in Action

Humanity could never have come this far had it not been for the bevy of natural resources available on this planet and our own ingenuity regarding their use. We've found shelter from forests, energy from oil, and food from sources that might strike a diner as peculiar with a second glance at the unprepared fish or spiny pineapple. And, as history has rolled on, we have become rather taken with all of our innovations, from our multi-colored vinyl siding to our canned tomatoes, and rather than focusing on the fulfillment of human needs, we're now seeking to satisfy human wants, a Sisyphean task.
Consequently, humans have polluted the air, depleted the soil's nutrients, rendered water unpotable, chopped down tropical rain forests, and even heated the globe up a bit, all in pursuit of economic expediency (1).

Fortunately, during the 1960s, a burgeoning social conscience struck America and began to transform the conventional wisdom regarding a variety of issues, including the environment. As the American public grew increasingly aware of environmental concerns, environmentalism soon developed its own political and social agenda. At the core of the movement's goals rests the concept of sustainable development, an answer to the chronic economy-versus-ecology conflict. In the words of the 1987 Brundtland Report, Our Common Future, "sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs" (2). Sustainable development acknowledges the interconnectedness of ecology, economy, and a third factor: community. The principle asserts that by focusing on the smaller groupings in society, i.e. neighborhoods, cities, and townships, the tensions between economic opportunity and ecological preservation can be treated on a local, and therefore better specified, scale. Or, as Minnesota's Office of Environmental Awareness concisely stated, "Sustainable development means development that maintains or enhances economic opportunity and community well-being while protecting and restoring the natural environment upon which people and economies depend" (3).

As a philosophy, sustainability has been embraced globally, by countless individual countries and by the United Nations itself. Indeed, the UN cited sustainable development as the hallmark of its environmental creed at the 1992 United Nations Conference on Environment and Development held in Rio de Janeiro, Brazil. The conference ultimately produced Agenda 21, a document that affirms the UN's commitment to sustainability and offers a method for implementing the philosophy. Agenda 21 was re-examined recently at the 2002 Earth Summit in Johannesburg, where the ideals of sustainable development and the relationship between "the environment, poverty, and the use of natural resources" were again established as important for the upcoming century (4).

Certainly, the ideals of sustainability are often met with enthusiasm and pledges of commitment. Who can truly argue with a philosophy that, as Stanley Kuston and William Gibson, authors of The Ethic of Sustainability, assert, is "a call to ethical responsibility" (5)? The question now lies not in the ideal itself but in the practicalities and implementation of the philosophy.

And, ironically, the answer is implicit within the theory itself. Sustainable development requires local action and therefore depends upon individual behavior for its existence and sustenance. Furthermore, the movement has already started. The September 2002 issue of the Utne Reader highlights "Thirty Under Thirty," a list of thirty activists in their twenties (and some younger!) who have taken up the banner of a variety of causes, including sustainability. The article describes the work of Malaika Edwards, a twenty-seven-year-old resident of Oakland, California, who founded The People's Grocery, a "community-owned organic grocery store run exclusively by youth." Distressed by the lack of healthy wares offered in her neighborhood, and further urged by the growing population of unemployed youth, she conceived of a small market that could also, as she put it, "tackle issues of racism and globalization on a grassroots level" (6).

Edwards' story is one of many local tales that fulfill sustainability's credo. Ultimately, these regional actions can help establish better norms of consumer behavior. Through such seemingly small acts, it could become customary for humans to weigh environmental viability against economic practicality, so that sustainable development becomes a part of any manufacturing procedure or any plans for construction. Sustainable development does not have to exist merely as an abstract principle; it requires only thoughtful consumption and decision-making on our parts.


Web Sources

1) World Scientists' Warning to Humanity

2) SD Gateway

3) Minnesota Office of Environmental Assessment

4) The United Nations

5) Kuston, Stanley and Gibson, William. "The Ethic of Sustainability"

Non Web Sources

6) Optiz, Maria. "Thirty Under Thirty: Young Movers and Shakers." Utne Reader, Sept-Oct 2002.




Sexual Attraction Among Humans
Name: Diana Fern
Date: 2002-09-30 22:47:53
Link to this Comment: 3050

Diana Fernandez
Biology 103
Professor Grobstein
9/29/02

Sexual Attraction Among Humans

Being a heterosexual female in the twenty-first century, I pride myself on the fact that I take people at more than face value, that I appreciate human beings for their character rather than for their looks. I scoff at women who proclaim that they will not date a guy unless he has substantial material assets, a broad back, and good breeding. Yet why do I find myself making conversation with physically attractive males while blowing off the more unattractive ones? Why does my head whip around when I see a man in a Porsche? Why do my male friends, across race and ethnicity, all have the same prerequisites for the perfect female: perky breasts, slim waist, and full lips? Despite most people's lofty notions of equality, and of beauty being in the eye of the beholder, we are all susceptible to certain physical and material traits that make some humans more desirable than others. Perhaps we should not punish ourselves for our weakness for beautiful and successful people; part of the answer lies in the biology and evolution of humans. Males and females have different standards for a desirable mate, and we share many of these characteristics with other animals in the animal kingdom, yet these instincts are inherent for a reason: reproduction.

"As unromantic and pragmatic as it may seem, nature's programming of our brains to select out and respond to stimuli as sexually compelling or repelling simply makes good reproductive sense" (1). Recent studies have indicated that certain physical characteristics stimulate a part of the brain called the hypothalamus, which is followed by sensations such as elevated heart rate, perspiration, and a general feeling of sexual arousal. So what visual cues instigate these feelings of sexual arousal in men? How do they differ from what women find attractive? "A preference for youth, however, is merely the most obvious of men's preferences linked to a woman's reproductive capacity" (2). The younger the female, the better the capacity for reproduction; hence the attributes that males find attractive are contingent on signs of youthfulness. "Our ancestors had access to two types of observable evidence of a woman's health and youth: features of physical appearance, such as full lips, clear skin, smooth skin, clear eyes, lustrous hair, and good muscle tone, and features of behavior, such as a bouncy, youthful gait, and animated facial expressions" (2). Cross-cultural studies have found that men, despite coming from different countries, find similar traits attractive in females. Men's preferences are biologically and evolutionarily hardwired to find signs of youth and health attractive in women, in order to determine which females are best suited to carry on their genes and legacy. Healthier and more youthful women are more likely to reproduce and to be able to take care of the children after birth, hence ensuring the perpetuation of the male's genes.

Scientists have also been establishing that scent plays an important role in deeming females attractive. At certain points during their menstrual cycle, women produce more or less estrogen accordingly, and at certain times throughout the cycle their scent can be more or less appealing to males. "A research team reports in the Aug. 30 NEURON that the brains of men and women respond differently to two putative pheromones, compounds related to the hormones testosterone and estrogen. When smelled, an estrogen like compound triggers blood flow to the hypothalamus in men's brains but not women's, reports Ivanka Savic of the Karolinska Institute in Stockholm" (3).

Men are not the only ones subject to biological predispositions in attraction. "Women are judicious, prudent, and discerning about the men they consent to mate with because they have so many valuable reproductive resources to offer" (2). Men produce sperm by the thousands, yet women produce about 400 eggs in their lifetimes, and the trials of pregnancy and child rearing are long and arduous; hence women's preferences, and what they find sexually attractive in a male, are based more on security and the longevity of relationships. Athletic prowess is an important attribute to most women, one that hearkens back to the beginning of man. An athletic and well-muscled male is more likely to be a good hunter and hence to provide for a family. A large and athletic male can also provide physical protection from other males.

I was speaking to one of my male friends the other day when he mentioned that when he was in a bar speaking to an attractive girl, he always lied about his profession, telling her he was a lawyer, doctor, or investment banker. What do all of these professions have in common? Money. Women are attracted to a successful male because success is indicative of his ability to provide for a family. This is a desirable trait shared by females throughout the animal kingdom. "When biologist Reuven Yosef arbitrarily removed portions of some males' (gray shrike, a bird that lives in the desert of Israel) caches and added edible objects to others, females shifted to the males with the larger bounties" (2). Yet a man must have more than just the resources to attract a female; he also has to be willing to share them. Women tend to be attracted to more generous men because generosity is indicative of how they will be treated in the future: a man cannot withhold his resources from a female and their offspring.

Sexual attraction does have biological and evolutionary roots. Yet humans do have the ability to transcend the standardization of what is attractive. The topics I touched upon can vary from person to person, yet are all inherently a part of the human species. We are not fully beyond the basic drives of our biological and evolutionary makeup, yet not all of our desires for a sexual mate are purely physical and material; there is always the mysterious capacity to fall in love and maintain a lasting relationship with one other person.


1) The Evolutionary Theory of Sexual Attraction, a site posted by the University of Missouri, Kansas City.

2) Buss, David M. The Evolution of Desire: Strategies of Human Mating. New York: HarperCollins, 1994.

3) Brain Scans Reveal Human Pheromones, a news source found via Encyclopaedia Britannica when searching the keyword "sexual attraction"


Do I Have Insomnia?
Name: Maggie Sco
Date: 2002-09-30 23:38:25
Link to this Comment: 3051


Biology 103
2002 First Paper
On Serendip

Do I Have Insomnia?

For about two and a half weeks now, I haven't been able to sleep properly. I feel tired at a relatively normal hour, around eleven or midnight, but when I go to bed I can't fall asleep. I lie awake for hours, and then when I do fall asleep I only sleep for an hour or so before waking up again. In search of a cure for my sleeplessness, I decided to research sleep disorders.

Sleep disorders are much more common than I had expected. According to the National Institute of Neurological Disorders and Stroke, about 60 million Americans per year suffer from some sort of sleeping problem. There are more than 70 different sleep disorders, generally classified into one of three categories: lack of sleep, disturbed sleep, and too much sleep (1). All three types of disorders are serious problems and can pose a grave risk to the sufferer's health, but because of my own problem I have decided to focus this paper only on lack of sleep, or insomnia.

To understand why not getting enough sleep was affecting me so much, I needed to understand a little more about sleep. Sleep is a period of rest and relaxation during which physiological functions such as body temperature, blood pressure, and rate of breathing and heartbeat decrease (2). Sleep is essential for the normal functioning of the body's immune system and ability to fight disease and sickness, as well as for the normal functioning of the nervous system and a person's ability to function both physically and mentally (1). Sleep also helps our bodies restore and grow, and some tissues develop more rapidly during sleep. There is also a theory that while the deeper stages of sleep are physically restorative, rapid-eye movement (REM) sleep is psychically restorative. REM sleep also might incorporate new information into the brain and reactivate the sleeping brain (2). These are just a few of sleep's less obvious duties, not to mention that it refreshes us and makes us alert for the next day.

I always thought that insomnia was just not getting enough sleep. One interesting definition that I found described insomnia as the "perception of poor-quality sleep" (3). This seems to indicate that it can almost be caused just by a person thinking that they aren't getting enough sleep. Insomnia can refer to difficulty falling asleep, trouble staying asleep, problems with not sleeping late enough, or feeling unrefreshed and tired after a night's sleep. Insomnia can cause such problems as sleepiness, fatigue, difficulty concentrating, and irritability.

Insomnia can be divided into three main categories: transient, intermittent, and chronic. Transient insomnia lasts from one night to four weeks. If transient insomnia returns periodically over months or years, it becomes intermittent. Insomnia is chronic when it continues almost nightly for several months (4). The transient and intermittent types often require no more treatment than an improvement in sleep hygiene.

There are many factors that can contribute to insomnia, and different issues trigger each type of insomnia. Transient and intermittent insomnia can be caused by something as simple as the sleeplessness that occurs just before a big test, and are very common and considered a normal stress reaction that will typically go away (5). Depression, internalized anger, anxiety and behavioral factors are the most common reasons for insomnia. The most frequent behaviors include consuming too much caffeine, alcohol or other substances, excessive napping, or stimulating activities such as smoking, exercising or watching television before bedtime (3). Insomnia can often be linked to mental illnesses or other diseases; for example, chronic insomnia is usually caused by depression (1). When a person is having sleep problems because of something else, it is called secondary insomnia. Environmental factors, such as discomfort or excessive light, and changes in a normal sleeping pattern, such as jet lag or moving to a new time zone, also cause transient insomnia (1). When none of these factors are contributing to a person's sleeplessness, they are considered to have primary insomnia, or insomnia that isn't caused by other obvious causes.

People who have insomnia tend to worry about the fact that they are not getting enough sleep, and sometimes their daytime behaviors contribute to further lack of sleep. Worrying and stress only increase insomnia, and habits developed to make up for lost sleep can delay the return of a normal sleep schedule. These behaviors include napping during the day, giving up regular exercise, and drinking caffeinated beverages to promote wakefulness or concentration (5). In order to regain normal sleeping patterns, insomniacs have to practice good sleep hygiene.

After learning about the causes for insomnia, I decided that I didn't have any of the main underlying causes such as alcoholism or depression, so I decided to research good sleep hygiene. Sleep hygiene consists of basic behaviors that promote sleep and try to change behaviors that might increase chances of insomnia. These habits include going to sleep and waking up at the same time, not taking naps during the day, avoiding caffeine, nicotine, and alcohol late in the day, getting regular exercise but not close to bedtime, not eating a heavy meal late in the day, not using your bed for anything other than sleep or sex, making your sleeping place comfortable, and making a routine to help relax and wind down before sleep, such as reading a book, listening to music, or taking a bath (1). Interestingly, while sleeping pills can be effective for transient or intermittent insomnia, they are not recommended and they may make chronic insomnia worse (1). The best way to cure insomnia is to use good sleep hygiene, and be aware of any underlying causes that might be causing it.

After learning about insomnia, I decided that I don't really have it. The only symptom I have in common with those associated with insomnia is difficulty concentrating. I'm not irritable or sleepy during the day, and as far as I can tell I don't have any of the typical causes of insomnia. My sleep hygiene had been pretty good before I learned about it, but I did try to improve it as much as I could. The last two nights I have gotten six consecutive hours of sleep, and now, oddly, I am feeling more tired during the day than when I was only getting three hours. But I do feel like my sleeplessness is declining, whatever its causes were.


References


1)Neurology Channel,
2) Bartleby.com, using The Colombia Encyclopedia as a reference.
3)Personal Health Zone4)The Chinese High School's iSpark Consortium
4)medbroadcast.com
5)The National Women's Health Information Center


Do I Have Insomnia?
Name: Maggie Sco
Date: 2002-09-30 23:38:33
Link to this Comment: 3052


<mytitle>

Biology 103
2002 First Paper
On Serendip

Do I Have Insomnia?

For about two and a half weeks now, I haven't been able to sleep properly. I feel tired at a relatively normal hour, around eleven or midnight, but when I go to bed I can't fall asleep. I lie awake for hours, and when I finally do fall asleep I only sleep for an hour or so before waking up again. In search of a cure for my sleeplessness, I decided to research sleep disorders.

Sleep disorders are much more common than I had expected. According to the National Institute of Neurological Disorders and Stroke, about 60 million Americans per year suffer from some sort of sleeping problem. There are more than 70 different sleep disorders, generally classified into one of three categories: lack of sleep, disturbed sleep, and too much sleep (1). All three types are serious problems and can pose a grave risk to the sufferer's health, but because of my own problem I have decided to focus my paper only on lack of sleep, or insomnia.

To understand why not getting enough sleep was affecting me so much, I needed to understand a little more about sleep itself. Sleep is a period of rest and relaxation during which physiological functions such as body temperature, blood pressure, and the rates of breathing and heartbeat decrease (2). Sleep is essential for the normal functioning of the body's immune system and its ability to fight disease and sickness, as well as for the normal functioning of the nervous system and a person's ability to function both physically and mentally (1). Sleep also helps our bodies restore themselves and grow, and some tissues develop more rapidly during sleep. There is also a theory that while the deeper stages of sleep are physically restorative, rapid-eye-movement (REM) sleep is psychologically restorative. REM sleep may also incorporate new information into the brain and reactivate the sleeping brain (2). These are just a few of sleep's less obvious duties, not to mention that it refreshes us and makes us alert for the next day.

I always thought that insomnia was just not getting enough sleep. One interesting definition I found described insomnia as the 'perception of poor-quality sleep' (3). This seems to indicate that it can be caused almost entirely by a person's belief that they aren't getting enough sleep. Insomnia can refer to difficulty falling asleep, trouble staying asleep, waking too early, or feeling unrefreshed and tired after a night's sleep. It can cause such problems as sleepiness, fatigue, difficulty concentrating, and irritability.

Insomnia can be divided into three main categories: transient, intermittent, and chronic. Insomnia is transient if it lasts from one night to four weeks. If transient insomnia returns periodically over months or years, it becomes intermittent. It is chronic when it continues almost nightly for several months (4). The transient and intermittent types often require no treatment beyond an improvement in sleep hygiene.

There are many factors that can contribute to insomnia, and different issues trigger each type. Transient and intermittent insomnia can be caused by something as simple as the sleeplessness that occurs just before a big test; they are very common and considered a normal stress reaction that typically goes away (5). Depression, internalized anger, anxiety and behavioral factors are the most common reasons for insomnia. The most frequent behavioral causes include consuming too much caffeine, alcohol or other substances, excessive napping, or stimulating activities such as smoking, exercising or watching television before bedtime (3). Insomnia can often be linked to mental illness or other disease; for example, chronic insomnia is usually caused by depression (1). When a person is having sleep problems because of something else, it is called secondary insomnia. Environmental factors, such as discomfort or excessive light, and changes in a normal sleeping pattern, such as jet lag or moving to a new time zone, can also cause transient insomnia (1). When none of these factors is contributing to a person's sleeplessness, they are considered to have primary insomnia, insomnia without another obvious cause.

People who have insomnia tend to worry about the fact that they are not getting enough sleep, and sometimes their daytime behaviors contribute to further lack of sleep. Worry and stress only increase insomnia, and habits developed to make up for lost sleep can delay the return of a normal sleep schedule. These behaviors include napping during the day, giving up on regular exercise, or drinking caffeinated beverages to stay awake or concentrate (5). In order to regain normal sleeping patterns, insomniacs have to practice good sleep hygiene.

After learning about the causes of insomnia, I decided that I didn't have any of the main underlying causes, such as alcoholism or depression, so I decided to research good sleep hygiene. Sleep hygiene consists of basic habits that promote sleep and of changing behaviors that might increase the chances of insomnia. These habits include going to sleep and waking up at the same time every day, not taking naps during the day, avoiding caffeine, nicotine, and alcohol late in the day, getting regular exercise but not close to bedtime, not eating a heavy meal late in the day, not using your bed for anything other than sleep or sex, making your sleeping place comfortable, and establishing a routine to help you relax and wind down before sleep, such as reading a book, listening to music, or taking a bath (1). Interestingly, while sleeping pills can be effective for transient or intermittent insomnia, they are not recommended and may make chronic insomnia worse (1). The best way to cure insomnia is to practice good sleep hygiene and to address any underlying causes.

After learning about insomnia, I decided that I don't really have it. The only symptom I have in common with insomnia is difficulty concentrating. I'm not irritable or sleepy during the day, and as far as I can tell I don't have any of the typical causes of insomnia. My sleep hygiene had been pretty good before I learned about it, but I did try to improve it as much as I could. The last two nights I have gotten six consecutive hours of sleep, and now I am feeling more tired during the day than when I was only getting three hours. But I do feel like my sleeplessness is declining, whatever its causes were.


References


1) Neurology Channel
2) Bartleby.com, using The Columbia Encyclopedia as a reference
3) Personal Health Zone
4) The Chinese High School's iSpark Consortium
4) medbroadcast.com
5) The National Women's Health Information Center


The Female Praying Mantis: Sexual Predator or Misunderstood?
Name: Michele Do
Date: 2002-10-01 02:15:27
Link to this Comment: 3055


<mytitle>

Biology 103
2002 First Paper
On Serendip

"Placing them in the same jar, the male, in alarm, endeavoured to escape. In a few minutes the female succeeded in grasping him. She first bit off his front tarsus, and consumed the tibia and femur. Next she gnawed out his left eye...it seems to be only by accident that a male ever escapes alive from the embraces of his partner" Leland Ossian Howard, Science, 1886. (7)

The praying mantis has historically been a popular subject of mythology and folklore. In France, people believed a praying mantis would point a lost child home. In Arabic and Turkish cultures, a mantis was thought to point toward Mecca. In Africa, the mantis was thought to bring good luck to whomever it landed on, and even to restore life to the dead. In the U.S. they were thought to blind men and kill horses. Europeans believed they were highly worshipful of God, since they always seemed to be praying. In China, nothing cured bedwetting better than roasted mantis eggs (7). The praying mantis is known for its unique look and for some very interesting behavior. Its body consists of three distinct regions: a movable triangular head, a thorax and an abdomen. It is the only insect capable of moving its head from side to side as humans do. Compound eyes give it good eyesight, but it must move its head to center its vision optimally, also much like a human. Females usually have a heavier abdomen than males. The legs and wings are attached to the thorax, which is elongated to create a distinctive "neck". The front legs are modified as graspers with strong spikes used for grabbing and holding prey (2). To say the least, the mantis is a highly evolved curiosity, with raptorial limbs that can regenerate when young, wings for flight, ears for hunting and evading predators, and mysterious behavior. With such highly evolved bodies for capturing and seizing prey, why are females infamous for their sexual cannibalism of males?

The mantis has an enormous appetite, eating up to sixteen crickets a day, and is not limited to insects. Mantids are carnivorous and cannibalistic, and eat only live prey, in both nymph and adult stages. Although they customarily eat cockroach-type insects, they prefer soft-bodied insects like flies. They have been documented eating 21 species of insects, as well as soft-shelled turtles, mice, frogs, birds, and newts (2). Although the European mantis was introduced to the United States to eat insects that destroy farm crops, mantids are also known informally as "soothsayers," "devil's horses," "mule killers," and "camel crickets," since their saliva was mistakenly thought to poison farm livestock.

Because of the species' interesting sexual cannibalism, there have been many studies of the praying mantid's reproductive process. Breeding season falls in late summer in temperate climates (5). The female secretes a pheromone to attract a mate and to signal that she is receptive. The male then approaches her with caution. In the most common courtship, the male approaches the female frontally, slowing as he nears. This has been described as a beautiful ritual dance in which the female's final pose signals that she is ready. In the second most common courtship, the male approaches the female from behind, speeding up as he nears. He then jumps on her back, they mate, and he flies away quickly. Most seldom is the courtship in which the male remains passive until approached by the female.

The mating response itself has been described as an initial visual fixation on the female, followed by fluctuation of the antennae and a slow, deliberate approach. The male performs abdominal flex displays and executes a flying leap onto the back of the female in order to mount her. The female lashes her antennae, and there is rhythmic S-bending of the abdomen. In one experiment, mantids were observed in copulation for an average of six hours. The male flew away after mating (6).

Although the praying mantis is known for its cannibalistic mating process, in actuality cannibalism occurs only 5-31% of the time. Under laboratory conditions of bright lights and confinement especially, the female is more likely to eat the male as a means of survival. "In nature, mating usually takes place under cover, so rather than leaning over the tank studying their every move, we left them alone and videotaped what happened. We were amazed at what we saw. Out of thirty matings, we didn't record one instance of cannibalism, and instead we saw an elaborate courtship display, with both sexes performing a ritual dance, stroking each other with their antennae before finally mating. It really was a lovely display" (7). There is one species, however, Mantis religiosa, in which it is necessary that the head be removed for mating to proceed properly (5). Sexual cannibalism occurs most often when the female is hungry. But eating the head does cause the male's body to ejaculate faster (3).

There are over 2000 species of praying mantids that display diverse shapes and sizes. They are camouflaged to blend into their environments from tropical flowers to fallen leaves. "And although they work around the same general lines- 'wait, seize, devour', behavior patterns between different species are as diverse as their body shape." (7) Some engage in sexual cannibalism more often than others. Those that do, it seems, are responsible for giving those that don't a bad reputation.

In a society that loves gory tales of sex and violence, it seems that we have focused more on the fatal-attraction aspect of the species than on trying to figure out exactly why they do it. After all, being eaten also benefits the male, since he serves as a kind of vitamin supply for his offspring, helping them grow strong enough to survive. And he gets to pass on his genes. The fact of the matter is that sexual cannibalism isn't that uncommon in nature, especially in the insect world: male redback and orb-web spiders fall prey to their lovers, not to mention the infamous black widow. Have scientists focused too much on the tales and myths of the deadly seductress? Have we misunderstood the praying mantis?

References


1) Praying Mantis, http://www.pansphoto.com/mantidae.htm

2) Praying Mantid Information, http://insected.arizona.edu/mantidinfo.htm

3) Sexual and Mate Selection, http://www.psy.tcu.edu/psy/Chapter%2013.rtf

4) The Wondrous Praying Mantis!, http://www-unix.oit.umass.edu/~abrams/mantis.html

5) The Praying Mantis, http://www.geocities.com/paraskits/index/praying_mantis/praying_mantis.html

6) The Praying Mantis, http://www.colostate.edu/Depts/Entomology/courses/en507/papers_1999/feldman.htm

7) You Give Love a Bad Name, http://www.scicom.hu.ic.ac.uk/students/features/caroline_mantis.html


Exercise and the "Runner's High": can it really make you happy?
Name: Sarah Frayne
Date: 2002-10-01 11:07:07
Link to this Comment: 3060


<mytitle>

Biology 103
2002 First Paper
On Serendip

Exercise and the Runner's high: can it really make you happy?

By: Sarah Frayne

The commonly cited "runner's high" is a euphoric, calm and clear state reportedly reached after a long period of aerobic exercise. There is no single concise definition of the phenomenon, because it cannot be measured directly; the concept is based solely on reports of personal experience. Exercise is also said to produce a general boost in mood and happiness, a theory that is the basis for numerous depression treatment programs incorporating exercise. Many believe this mood change is a result of both mental and physical factors. Psychologically, exercise provides a boost in self-esteem, an improved self-image, confidence and feelings of accomplishment, as well as a break from the other aspects of life (2). While these reasons to be happier during or after exercise are well accepted, the chemical processes behind the immediate "runner's high" and the lasting general mood change during and after exercise are greatly debated.

The first theory about the chemical cause of the "runner's high" was put forth in the 1970s. Jogging was popularized around the same time a new class of brain chemicals was discovered. These chemicals, now called endorphins, were found to be very similar to morphine in chemical structure and pain-killing ability (7). In fact, morphine attaches to the same receptors in the brain as endorphins. The scientists found the similarities so striking that they named the chemicals 'endorphins', a contraction meaning "morphine made by the body" (1). Endorphins became the popular answer to anything that gave pleasure (they are also commonly associated with orgasms). The theory that endorphins caused the high during exercise was supported when early research found heightened levels of endorphins in the bloodstream during and after exercise (1).

Scientists found it hard to investigate the exact relationship between these new chemicals and the euphoric effects of exercising, both because of the variability involved in judging exercise difficulty and because of the intricacy of evaluating whether the endorphins were not only present but also responsible for the high. To this end, Virginia Grant, a psychologist, ran experiments with rats, comparing the behavior of rats addicted to morphine with that of rats that exercised. The experiment allowed rats to eat for one hour a day. Some rats were left in an empty cage the remaining 23 hours, while others were left in cages with running wheels. Those left in the empty cages were able to eat enough in the eating hour to stay healthy, while those with the running wheels showed an inverse relationship between eating and running and eventually ran so much that they died of starvation (1). It was concluded that exercise stimulated the same portion of the brain as addictive drugs. Any addictive drug causes a surge of dopamine in the brain, resulting in the buildup of the small proteins enkephalin, dynorphin and substance P (1). Further, rats in a cage with a running wheel would run until these three chemicals were present in the brain. While research is still being conducted on the subject, this phenomenon in rats could help to explain the addiction to exercise sometimes seen in people with eating disorders. The dangerous combination of over-exercising and anorexia is strikingly similar to the rats' lowered caloric intake the farther they ran.

The experimentation with rats built a strong case that exercise is addictive; however, it failed to address the specific role of endorphins in the process. The fact that endorphins are present during exercise is not surprising: endorphins act as pain reducers and are released when the body is under stress. The mere presence of the chemical does not prove that it is the main factor causing the elation. Further, some scientists point out that the endorphins don't leave the bloodstream and therefore do not stimulate the receptors in the brain (1). Also, there are other chemicals within the brain capable of causing good feelings comparable to those during and after exercise. Serotonin is one such chemical. Similar to endorphins, serotonin is released into a portion of the brain where it activates receptors, causing heightened emotions and senses. The chemical also often causes a suppression of appetite. However, there has not been much research done on the role of serotonin in the exercise process (3).

The most recent findings have led scientists to focus on a chemical called phenylethylamine, also found in chocolate (6). Phenylethylamine (PEA) had previously been found to relieve depression in two-thirds of cases. There are theories relating low levels of phenylethylamine to the presence of depression, making it a natural candidate for explaining the antidepressant effects of exercise. The chemical has also been found to cause heightened activity and attention in animals (5). It is also notable that phenylethylamine can boost moods as quickly as amphetamines, but without side effects and without creating a tolerance to the chemical (5).

Ellen Billett of Nottingham Trent University studied the levels of phenylethylamine in 20 young men before and after exercising on a treadmill at 70 percent of maximum heart-rate capacity. The men were asked to rate the level of exertion they felt, and then were tested for phenylethylamine. The average rise in the level of the chemical was around 77 percent, with huge variance between individuals (7).

It has become accepted in the scientific community that there is some physical basis for the "runner's high" and the general mood elevation associated with exercise. Research into the processes behind this phenomenon has so far measured only the immediate chemical levels of people after exercise. However, the durability of these effects matters greatly for depression treatment and for an overall healthy, happy lifestyle. Donna Kritz-Silverstein of UCSD found that exercise must be done on a regular basis to maintain the positive effects. She found that those who exercised had a lower Beck Depression Inventory (BDI) score, meaning they were generally in a better mood. Ten years later, those who had stopped exercising had BDIs similar to those who had never exercised, while those who continued to exercise were able to maintain a low BDI (4).

Research on the chemical processes behind exercise's effects on mood, and on their lasting nature, is very new and still under way. There are significant implications for the treatment of depression and for people's general ability to maintain happy lifestyles through exercise. The study of these antidepressant chemicals also helps to reveal the chemical properties of depression and mood. Further, the drug-like qualities of exercise open an avenue for investigating exercise addiction and the eating disorders with which it is often associated.


References

Websites

1) JS Online, a collection of articles by different people

2) International Association of Mind Body Professionals, a collection of articles about the mind and body and the interactions between the two

3) About.com, an informational page about psychology with new discoveries, articles and general information

4) UniSci Home Page, a science page with articles on numerous topics

5) Chocolate Information Page, a site which includes information about drugs, chocolate, and the chemicals behind these substances

6) BBC News Page, a page with news from the UK

7) Cosmiverse Home Page, a page with science news and articles

8) Advanced Chemistry Development Home Page, a site with the latest on chemistry


Living Dead, Walking Life
Name: Lydia Parn
Date: 2002-10-01 15:43:41
Link to this Comment: 3076

<mytitle>

Biology 103
2002 First Paper
On Serendip

"Man's final frontier is the soul" - Arrested Development

It seems that everything around us is coming to an end. Walk down the aisle of a grocery store and you'll see cans of oranges with expiration dates stamped on the side to remind us of the transient nature of grocery goods. A CD doesn't play forever, and a candle always burns out. Even fun has to end, for wild nights of hedonism are bound to turn once again into blue Mondays. So, if everything around us is reaching its grand finale, where does biology, the study of life, end?

Before advances in modern science made it possible to restore broken hearts and weary lungs to their original operative states, death was easy to recognize: when the beat of the heart stopped, one was considered dead. Now, with technology developed to resurrect the dying, the once clean-cut line between life and death has been blurred, inciting a fury of discussion (3). The issue is debated on a variety of levels, for it is a mix of diverse "philosophical, theological and scientific ideas about what is essential to embodied human existence" (2). However, before delving into a discussion of death, it is important first to think about what constitutes life.

Simply put, our bodies are made up of a collection of cells. If one of these cells were extracted from a multicellular organism and placed in a solution with the appropriate nutrients, it would endure with no great trouble. It would keep performing the basic metabolic processes considered necessary to life: taking in nutrients, breaking them down to create energy, then using that energy to divide, expel wastes and further develop. This sort of life could be considered metabolic. The next step up is the level of tissues, which are essentially collections of cells grouped to carry out the same functions described above. Extracting muscle tissue, which is composed of cells whose purpose is to contract upon correct stimulation, placing it in a supporting solution and artificially stimulating it will cause it to contract. This sort of life can be considered organic life. Grouping individual organs together, as in our own bodies, adds yet another plane of life to the framework. These examples illustrate the view that life is narrowly biological in nature, which would further suggest that the cause of death is the malfunction of particular organizational structures. But to state that the cessation of human life is a clear-cut biomedical process would be to refute the idea of consciousness: the soul, the spirit, the mind (3).

"I think, therefore I am" - Rene Descartes

Released in 1981, the Uniform Determination of Death Act (UDDA) was a landmark statement that specified two alternative criteria for determining death.

An individual who has sustained either (1) irreversible cessation of circulatory and respiratory functions, or (2) irreversible cessation of all functions of the entire brain, including the brain stem, is dead. A determination of death must be made in accordance with accepted medical standards (5).

The UDDA thus recognizes that death can be determined by the traditional cardiopulmonary criteria, yet also authorizes brain death to be declared for patients who fail to meet the traditional criteria because their cardiopulmonary functions are artificially maintained. Because the UDDA adopts either cardiopulmonary or neurological criteria for the determination of death, the act has been criticized by many as confused. Both neurological and cardiopulmonary criteria can serve as signals that an organism's capacity for life has been permanently lost. Since it is respiration and cardiac activity, not brain activity, that can be artificially maintained, many claim that neurological criteria provide direct evidence of death, while cardiopulmonary criteria provide only indirect evidence. In the event that respiration and cardiac activity are artificially sustained, neurological criteria must be used to certify someone as dead (1).

Medical advances have made it possible to transplant organs and tissues, and the expansion of technological methods to artificially sustain respiratory and circulatory functions has made it crucial to reconsider our understanding of death and has encouraged the adoption of brain-related criteria for death. When somebody passes away, it is not the loss of physiological function that is mourned, but the person who was sustained by those functions. The brain contains the physiological centers responsible for integrating the functions of the body's various other organ systems and tissues, so it is the death of the brain that results in the loss of integrated functioning (5). Consciousness and cognition reside in the cerebral portion of the brain, and by focusing on this, advocates of brain-based criteria do not dismiss the traditional view of death based on cardiopulmonary criteria; rather, they see a profound difference between conceiving of human life "as a heart-centered reality and as a brain-centered reality" (2).

Advocates of brain-based criteria, however, tend to split into two schools of thought: "whole-brain" versus "higher-brain" criteria for a standard of death (4). According to advocates of whole-brain criteria, a person is brain dead only when the entire brain, including the stem, is dead. In application a few problems do arise, since patients who meet the standard clinical tests for brain death may still maintain some brain function, such as the secretion of neurohormones or coordinated activity within isolated nets of neurons. This has driven some to further refine neurological criteria for brain death based on functional differences between the parts of the brain. The brain stem is the elemental constituent that supports most vegetative functions essential for life: regulation of wake-sleep cycles, respiration, swallowing. "When the brain stem ceases to function, the person loses capacity for spontaneous circulatory and respiratory function as well as the capacity for consciousness" (2). The issue dividing advocates of whole-brain and brain-stem criteria is essentially which brain structures and functions must be lost in order to certify that the body no longer retains the capacity for spontaneous regulation of vital processes. What the two measures have in common, though, is that both reflect the concept of death as a loss of integrated functioning of the organism as a whole, body and soul.

The higher-brain formulation proves tricky in practice, however. This is most easily illustrated by considering higher-brain death in the context of patients in a condition referred to as a "persistent vegetative state" (PVS). In such patients, all higher brain functions are lost, but brain stem functions remain largely intact. With medical care, such as respirators and artificial nutrition, people in a PVS can live for many years (1). If a higher-brain criterion for death were employed, such patients would be considered dead. In situations such as this, care must be taken to distinguish between the question of when it is morally permissible to withhold treatment and allow a patient to die, and the question of when it is right to declare a patient dead. In the end, one's response to brain-death standards depends both on ethical judgments and on one's degree of trust in the medical profession itself.

References

1) Brain Death and Technological Change: Personal Identity, Neural Prostheses and Uploading, James J. Hughes, 1995.

2) The Determination of Death, May 1997.

3) Definition of and Criterion for Determining Death, Igor Jadrovski.

4) Report from the National Institute of Philosophy & Public Policy: Consciousness, and the Definition of Death, 1998.

5) Neurology: Brain Death Criteria, Carlos Eduardo Reis


Poor Man's Heroin
Name: Brie Farley
Date: 2002-10-02 13:49:02
Link to this Comment: 3084

"Poor Man's Heroin"

Biology 103
2001 First Web Report
On Serendip

Poor Man's Heroin

Brie Farley

A plaintiffs' group in Washington, D.C. has filed a $5.2 billion lawsuit against Purdue Pharma LP and Abbott Laboratories Inc., charging the drug companies with failing to warn patients that the painkiller OxyContin is dangerously addictive. Do you think they'll win?

"Oxy, oxies, oxycotton, OCs, killers, oceans, O's, oxycoffins, Hillbilly Heroin." Each of these is another name for the drug OxyContin, marketed by Purdue Pharma LP. Addiction, abuse, crime and fatal overdoses have all been reported as results of OxyContin use (1).

The drug was approved by the FDA in 1995 and is a 12-hour time-released form of oxycodone, an opium derivative and the same active ingredient found in Percodan and Percocet. OxyContin is the longest-lasting oxycodone on the market. Opiates provide pain relief by acting on opioid receptors in the spinal cord, brain, and possibly the tissues directly. Opioids, natural or synthetic classes of drugs that act like morphine, are the most effective pain relievers available (2).

Oxycodone has been around for decades, taken for post-surgical pain, broken bones, arthritis, migraines and back pain. Oxycodone is a central nervous system depressant. It appears to work by stimulating opioid receptors in the central nervous system, which activate responses ranging from analgesia to respiratory depression to euphoria. While Percocet and Percodan contain about five milligrams of oxycodone, OxyContin tablets contain oxycodone in amounts of 10, 20, 40, and 80 milligrams (4). A 160-milligram tablet became available in July 2000. OxyContin is thus a high-potency painkiller, intended only for use by terminal cancer patients and chronic pain sufferers. People who take the drug repeatedly can develop a tolerance or resistance to its effects; a cancer patient can regularly take a dose of oxycodone that would be fatal in a person never exposed to it. Most individuals who abuse oxycodone seek to gain its euphoric effects, mitigate pain, or avoid the withdrawal symptoms associated with oxycodone or heroin abstinence. The strength, duration, and known dosage of OxyContin are the primary reasons the drug is attractive both to abusers and to legitimate prescribers.

Although designed to be swallowed whole, abusers have found other ways to ingest OxyContin. Abusers often chew the tablets, or crush them and snort the powder. Because oxycodone is water soluble, crushed tablets can be dissolved in water and the solution injected. Both of these methods lead to the rapid release and absorption of oxycodone. Combining any use of OxyContin with alcohol is deadly. OxyContin and heroin have similar effects, so both appeal to the same abuser population. The powerful prescription pain reliever has become a hot new street drug. "It's the so-called poor man's heroin," says Capt. Michael Holsapple of the Kokomo Police Department. (5) A 40 mg tablet of OxyContin by prescription costs approximately $4, or $400 for a 100-tablet bottle in a retail pharmacy. Generally, OxyContin sells for between 50 cents and $1 per mg on the street. Therefore, the same 100-tablet bottle purchased for $400 at a pharmacy can sell for $2,000 to $4,000 illegally. How does this compare to the street price of heroin? One bag of heroin sells for about $40, according to 1998 findings in Ireland. (6,7) A bottle of one hundred OxyContin tablets is clearly more for the money. (4)
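The street markup described above follows from simple arithmetic. A minimal sketch, using only the approximate 2002 figures quoted in the text (the prices are illustrative, not authoritative data):

```python
# Street markup on a 100-tablet bottle of 40 mg OxyContin,
# using the approximate figures cited in the text above.
PHARMACY_PRICE_PER_TABLET = 4.00    # dollars, 40 mg tablet at retail
STREET_PRICE_PER_MG = (0.50, 1.00)  # reported street range, dollars per mg
TABLET_MG = 40
BOTTLE_SIZE = 100

pharmacy_bottle = PHARMACY_PRICE_PER_TABLET * BOTTLE_SIZE
street_low = STREET_PRICE_PER_MG[0] * TABLET_MG * BOTTLE_SIZE
street_high = STREET_PRICE_PER_MG[1] * TABLET_MG * BOTTLE_SIZE

print(f"Pharmacy bottle: ${pharmacy_bottle:,.0f}")                  # $400
print(f"Street value:    ${street_low:,.0f} to ${street_high:,.0f}")  # $2,000 to $4,000
```

The five- to ten-fold markup is the economic incentive behind the diversion the essay describes.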

Sometimes OxyContin can be obtained easily in clinics. After a brief visit with the appropriate presenting complaint, patients may leave with a prescription for OxyContin. Many physicians are not formally trained to identify drug-seeking behavior. (4) In April 2002, the US Drug Enforcement Administration reported that OxyContin had been implicated as the direct cause or main contributing factor in 146 deaths, and as a likely contributor in an additional 318 deaths. Based on their findings, only nine of the reported deaths involved injecting the drug, and only one death was related to snorting. This indicates that even non-abusers may be adversely affected. It has been alleged that Purdue Pharma LP has marketed the drug excessively while underplaying how addictive it is. Reported warnings about the drug found on the Internet include:

1. This medicine can be habit-forming. You should not use more than the prescribed amount.

2. Whole oxycodone tablets may appear in your stool. This is no cause for worry because the medicine is absorbed when the tablet is still in your body.

3. If you are pregnant or breastfeeding, talk to your doctor before taking this medicine.

4. This medicine can cause dizziness or drowsiness. Be careful if driving a car or using machinery.

5. If you have taken this medicine for several weeks, ask your doctor before stopping, as you may need to take smaller and smaller doses before you stop the medicine completely. (5).

These precautions are not uncommon for any prescription pain reliever. However, Purdue Pharma LP has not included information regarding the drug's similarity to heroin, and has not stressed the severity of the complications. A recent newspaper article reported that OxyContin's sales, which exceeded $1 billion in the United States in the year 2000, are said to be the result of an aggressive marketing strategy to physicians, pharmacists, and patients that "misrepresented the appropriate uses of OxyContin and failed to adequately disclose and discuss the safety issues and possible adverse effects of OxyContin use" (4).

Seven people who are former addicts or relatives of addicts filed the Washington D.C. lawsuit. In May, Purdue said it had met with officials from the DEA because of the agency's concerns about the drug's illegal diversion and abuse. Around the same time, Purdue Pharma said it tried to reduce abuse of the drug by halting distribution of the 160 mg tablets. According to the lawsuit, the defendants "made misrepresentations or failed to adequately and sufficiently warn individuals regarding the appropriate uses, risks, and safety of OxyContin." Specifically, the suit quotes a May 2000 U.S. Food and Drug Administration warning letter to Purdue Pharma ordering the company to cease use of an advertisement for the drug that appeared in a medical journal. A section from the warning letter is quoted that suggests the advertisement inaccurately represents the drug as a first-line treatment for osteoarthritis. The suit alleges inappropriate marketing of OxyContin, and that the drug has been inappropriately prescribed and used, unnecessarily putting people at risk of addiction (4).

Should it be assumed that the general public is aware of the effects of opiates? Is it the responsibility of the physician to be suspicious of warning labels on every newly marketed drug? Does the word "addiction" always prevent chronic pain sufferers from taking a "miracle drug"? And finally, will anyone, especially teens, ever stop experimenting with drugs? Your answers to the above questions were probably doubtful, but this does not mean that the D.C. lawsuit isn't worth fighting for. We should be personally careful, but we also need to emphasize our right to be thoroughly and accurately informed about what we put in our bodies.

WWW Sources

1) Oxy Abuse Kills, informative and realistic site

2) Government Information about OxyContin, facts

3) About OxyContin, facts and information

4) OxyContin Addiction Help, facts and resources on where to get help for addiction

5) Yahoo Health, basic information

6) MapInc, article about the increase in heroin prices in Ireland

7) Oanda, monetary conversion site






The Letter B
Name: Catherine
Date: 2002-10-02 19:36:00
Link to this Comment: 3095

Biology 103
2002 First Paper
On Serendip

When a person is asked about hepatitis B, how much does he or she know about the disease? "I knew absolutely nothing about hepatitis at this point. I believe that most people know nothing about hepatitis — I know I didn't," says one woman who tested positive for the infection (5). When it comes to hepatitis, there is simply not enough awareness and outreach, unlike for other sexually transmitted, life-threatening diseases such as Acquired Immunodeficiency Syndrome.
In the United States alone, an estimated 1.25 million people currently carry the hepatitis B virus, while another 200,000 to 300,000 people are infected each year; that is, one out of every twenty people will contract hepatitis B at some time in their lives. Of the newly diagnosed, 11,000 are hospitalized and 20,000 remain chronically infected. 4,000 to 5,000 sufferers die each year from hepatitis B-related chronic liver disease or liver cancer. Hepatitis B is one hundred times more infectious than Human Immunodeficiency Virus. The sudden thrust of facts and figures makes one feel more susceptible, no?
The newly acquired information makes one wonder: what exactly is the hepatitis B virus? HBV is a forty-two-nanometer, double-stranded DNA hepadnavirus (its genome has four genes, pol, env, pre-core, and X, that respectively encode viral DNA polymerase, the envelope protein, the pre-core protein, and protein X), which can survive on almost any surface for up to one month. Its key components are the hepatitis B surface antigen (HBsAg), the hepatitis B core antigen, and the hepatitis B e antigen (HBeAg). It causes acute and chronic hepatitis and can damage liver cells, leading to inflammation and impaired liver function. The virus is found in high concentrations in the blood, serum, semen, and vaginal secretions of infected people, and in lower concentrations in saliva.
The hepatitis B virus is transmitted in many different ways, falling into two general categories: horizontal and vertical transmission. It spreads horizontally through blood and blood products and through sexual contact (body fluids), and vertically from mother to infant in the perinatal period. Contact with even small amounts of infected blood can cause infection. It is not possible to get HBV from sneezing, coughing, or holding hands, nor is HBV found in sweat, tears, urine, or respiratory secretions.
Although exposure to the virus can occur in all age, social, and ethnic groups, some are more at risk than others. In the United States, the majority of infections occur in adults with behaviors or occupations that put them at risk. Those in the higher risk category include people who: a) live with someone who has hepatitis B, b) have hemodialysis, c) practice "unsafe" sex, d) use injection drugs, e) have body piercings or tattoos, f) have contact with open sores, g) share toothbrushes, razors, nail clippers, or washcloths, h) receive human bites, i) are in healthcare, dental, emergency care professions, j) are sexually active adults and teens, k) are in adoptive families, l) are children born to mothers who are carriers, m) travel to high-risk countries, n) are immigrants or refugees from areas of high HBV endemicity, or children of such o) are recipients of certain blood products, p) are clients or staff of institutions for the developmentally disabled, q) are inmates in long-term correctional facilities, r) are homosexual or bisexual men, which makes them ten to fifteen times more likely to acquire HBV than the general population, especially if they are promiscuous (up to seventy percent of gay and bisexual men have already been infected with the virus). In addition, children sometimes transmit the disease to one another, but it is unknown as to how that is precisely achieved. High risk also pertains to newborns of infected women, who can give the virus to their babies during the delivery process. For pregnant women who are infected with HBV, it depends on when the illness occurs. If it is early in the pregnancy, chances are less than ten percent that the baby will receive her virus; later, the odds soar to eighty or ninety percent.
So, if the chances of contracting hepatitis B are so high, why is the majority of the world's population not dying of this virus? The answer is simple at the shell, yet complex at the core. As mentioned above, hepatitis B infection takes two forms: acute and chronic. Acute hepatitis B is the short, early infection, which manifests itself about one to six months from the time of infection (incubation period of forty-five to one hundred and sixty days, average of ninety days). Initial symptoms include nausea, vomiting, fever, abdominal pain, loss of appetite, fatigue, and muscle and joint aches, followed by jaundice, dark urine, and light stools. Most people are able to subdue the virus after two to three weeks, return the liver to normal within sixteen weeks, and clear the infection within six months. Still, acute hepatitis B ranges from subclinical disease detectable only by liver function tests to fulminant acute hepatic necrosis in about one to two percent of cases. Patients may suddenly collapse with fatigue and develop symptoms. Acute fulminant hepatitis can be life-threatening due to liver damage, particularly if not treated immediately, because it can lead to liver failure; sometimes this requires a liver transplant, if one is available. Roughly ninety to ninety-five percent of acutely infected people will develop antibodies and totally clear the virus from their bodies. While they may experience some symptoms, they will recover without complications. The remaining five to ten percent will become chronically infected.
Chronic hepatitis B symptoms appear within two to six weeks after contact, and the virus stays in the blood beyond six months, usually lifelong. It induces many symptoms, which only about half of the infected population experiences. Some include fatigue, malaise, joint aches (arthralgias), low-grade fever, nausea, vomiting, loss of appetite (anorexia), abdominal pain and discomfort, and a bloated, tender belly, often progressing to jaundice and dark urine due to increased bilirubin. Chronic HBV's broad range of effects includes clinically insignificant or minimal liver disease in people who never develop complications, and clinically apparent chronic hepatitis, some cases of which go on to develop cirrhosis. Many patients in the first category never develop symptoms or abnormalities, but the evidence of hepatitis will be apparent on liver biopsy, and they are still potentially infectious to others. This condition is commonly referred to as the chronic carrier state, and such patients are referred to simply as "carriers." "These people may switch from 'non-replicative' to 'replicative' infection states and vice versa" (5), moving from carrier states to dangerous states and back. Sometimes HBV carriers will spontaneously clear the infection from their bodies, but this is rare.
Individuals who have had hepatitis B virus infection have a higher incidence of hepatocellular carcinoma (primary liver cancer) compared to the general population. But chronic carriers, especially those with cirrhosis (chronic hepatitis B heightens chances of this scarring of the liver, a permanent liver damage), are at an even greater risk of developing the cancer because the virus steadily attacks the liver; it is reasonable for such individuals to undergo periodic screening. Death from chronic liver disease occurs in fifteen to twenty-five percent of chronic hepatitis B patients.
The risk of chronic infection is inversely related to a person's age at initial HBV infection. More than ninety percent of newborns, about fifty percent of children, and five to ten percent of adults infected with hepatitis B develop the chronic type. As many as twenty-five percent of infected babies will develop liver failure or liver cancer later. The Centers for Disease Control and Prevention (CDC) estimates that one third of all chronic infections in the United States come from infected infants and young children. This is why routine infant hepatitis B vaccination became law in 1991, followed by routine adolescent vaccination in 1995 (the infant vaccine plan is ninety-five percent effective in protecting babies from becoming chronic carriers).
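The inverse relationship between age at infection and chronic-infection risk can be summarized in a small lookup. A sketch using only the approximate percentages quoted above (the midpoint is taken for the adult range; illustrative, not epidemiological data):

```python
# Approximate risk of chronic HBV infection by age at initial infection,
# using the figures given in the text (illustrative only).
CHRONIC_RISK = {
    "newborn": 0.90,   # "more than ninety percent"
    "child":   0.50,   # "about fifty percent"
    "adult":   0.075,  # "five to ten percent" (midpoint)
}

def expected_chronic_cases(age_group: str, infections: int) -> float:
    """Expected number of chronic infections among `infections` new cases."""
    return CHRONIC_RISK[age_group] * infections

print(expected_chronic_cases("newborn", 1000))  # 900.0
print(expected_chronic_cases("adult", 1000))    # 75.0
```

The steep gradient is exactly why the strategy below front-loads infant vaccination.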
The best prevention method for hepatitis B infection is the hepatitis B vaccine, which has been available since 1982 and protects ninety to ninety-five percent of all healthy recipients. Many people do not know when or how they acquired hepatitis B; studies demonstrate that "30 to 40 percent of people who have it are unable to recognize risk factors for the disease" (5). More than half of acute hepatitis B cases alone might have been prevented through routine immunization and correctional health programs. Alpha-interferon and lamivudine, the two approved treatments in the United States, are effective in up to forty percent of hepatitis B patients, although they cannot cure the disease. In adults with chronic infection and liver disease, interferon alpha has been demonstrated to induce a long-term remission in twenty-five to forty percent of treated patients, although it is less effective for chronic infections acquired during early childhood.
Many Americans do not realize the gravity of contracting hepatitis B. We do not want another incident such as the "1942 outbreak of hepatitis B in military personnel" (5), in which 28,585 people contracted the virus. More than twenty million adults and adolescents and sixteen million infants and children have received the vaccine in the United States, and this country has one of the lowest rates of hepatitis B infection. The United States' strategy to eliminate hepatitis B virus transmission comprises these components: 1) preventing perinatal transmission, 2) routine infant vaccination, 3) catch-up vaccination of children in high-risk groups at any age, 4) catch-up vaccination of all children at eleven to twelve years of age, and 5) vaccination of adolescents and adults in high-risk groups. If we stick to this plan and spread awareness, perhaps there will one day be no concern at all for viral hepatitis, type B.

References

1) Health Library at MerckSource
2) The Official Patient's Sourcebook on Hepatitis B
3) Hepatitis B Vaccine Lawsuit News
4) Hepatitis B
5) Immunization Action Coalition
6) Hepatitis B Foundation
7) Hepatitis B: The Facts
8) The New England Journal of Medicine
9) The Journal of Infectious Diseases
10) Childhood Hepatitis B Virus Infections in the United States Before Hepatitis B Immunization
11) Progress Toward Elimination of Hepatitis B Virus Transmission in the United States
12) Impact of Hepatitis B Virus Infection On Women and Children
13) Centers for Disease Control and Prevention
14) Hepatitis B and the Vaccine
15) Medical Library: Hepatitis B
16) Medical Library: Red Book
17) Medical Library: Hepatitis B Virus
18) Medical Library: Viral Hepatitis, Type B


Kawasaki Disease - No not the motorcycle
Name: Yarimee Gu
Date: 2002-10-10 22:15:43
Link to this Comment: 3253



Biology 103
2002 First Paper
On Serendip

When hearing the word Kawasaki the first thing to come to my mind was always the motorcycle. This was until the day I came into contact with the disease itself. Although I was not directly affected, my younger brother was. He was diagnosed at the age of nine, when I myself was ten. Because of my age at the time I did not really understand the disease. All I knew was that my brother had a heart condition serious enough to send him to the hospital for a while and that he had to return for follow-up visits for up to three years after this. It was not until recently that I asked myself, what is Kawasaki Disease?

Kawasaki disease was identified fairly recently and is characterized by inflammation of the arteries; the coronary arteries (those that supply blood to the heart muscle) are most at risk. Tomisaku Kawasaki released the first report concerning the disease in 1967, and it was only in the 1970s that it gained recognition as a distinct disease. Since then, "Kawasaki disease (also known as KD) has become the leading cause of acquired heart disease among children in North America and Japan." (3)

The symptoms of KD include a very high or spiking fever (104 degrees or higher) that lasts a few days to about a week and does not respond to treatment; red lips or mouth; red eyes (similar to conjunctivitis) without mucus discharge; peeling of the top layer of the tongue (called "strawberry tongue" for its bright red, glossy look); swollen hands and feet that may also become red; and swollen lymph nodes. The following table shows the criteria used to diagnose KD. (2)

Table 1. CDC CRITERIA FOR DIAGNOSIS OF KAWASAKI DISEASE

Fever >5 days unresponsive to antibiotics, and at least four of the five following physical findings with no other more reasonable explanation for the observed clinical findings:
1. Bilateral conjunctival injection
2. Oral mucosal changes (erythema of lips or oropharynx, strawberry tongue, or drying or fissuring of the lips)
3. Peripheral extremity changes (edema, erythema, or generalized or periungual desquamation)
4. Rash
5. Cervical lymphadenopathy >1.5 cm in diameter
Centers for Disease Control (1980). Kawasaki disease - New York. Morbidity and Mortality Weekly Report, 29:61-63.
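The CDC rule in Table 1 — fever for more than five days plus at least four of the five physical findings — can be expressed as a simple check. This is a hypothetical sketch of the decision rule only, not a clinical tool:

```python
# CDC case definition for Kawasaki disease, per Table 1 above.
# Purely illustrative; not for clinical use.
FINDINGS = (
    "bilateral conjunctival injection",
    "oral mucosal changes",
    "peripheral extremity changes",
    "rash",
    "cervical lymphadenopathy",
)

def meets_cdc_criteria(fever_days: int, findings_present: set) -> bool:
    """Fever >5 days plus at least 4 of the 5 physical findings."""
    matched = sum(1 for f in FINDINGS if f in findings_present)
    return fever_days > 5 and matched >= 4

# Six days of fever with four of the five findings satisfies the definition.
print(meets_cdc_criteria(6, {"rash", "oral mucosal changes",
                             "bilateral conjunctival injection",
                             "cervical lymphadenopathy"}))  # True
```

Note that the definition also requires that no other explanation account for the findings, a judgment no lookup can encode.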

Other symptoms that may or may not develop, but often help in diagnosing the disease, are swelling of the joints and extremities, irritability, diarrhea, nausea, vomiting, a rash, abdominal pain, and swelling of the gall bladder. (2)

In most cases these symptoms disappear over a period of a couple of months, even if untreated. However, there can be lasting and extremely serious effects on the coronary arteries. Because these arteries become inflamed, they can be significantly damaged. This in turn can cause small sacs, called aneurysms, to form in the artery walls. These allow blood to pool, and platelets in the blood begin to gather. After a while they form a blood clot that slows or stops the blood from getting to the heart. If the flow of blood is stopped, the child can have a heart attack. Another complication is the scarring of the arterial walls that results from the healing of an aneurysm (also known as the regression of an aneurysm). This causes them to thicken, making the arteries narrower, which can lead to the same result as an aneurysm. Even after these aneurysms heal, the arterial wall will never be the same. However, long-term research has not been done to determine the effects of this later in life. (4)

Despite the fact that this is a new disease, extremely effective treatments have been developed over the years. Aspirin is used to thin the blood and lessen the chance of platelets forming blood clots. It is also used to help reduce the extreme fevers at the beginning of the disease and to prevent inflammation of the arteries. A product called gamma globulin is also used to treat KD. This is essentially antibodies from donated blood, which help to lower the inflammation of the coronary arteries and protect them from the damage it can cause.

Unfortunately, modern science has been unable to find a cause for KD, either microbial or infectious. As such, there is no way of preventing the disease or even of knowing who is more susceptible to it. What is known today is that it is a non-communicable disease, meaning that it is not contagious: you cannot catch KD by being near someone who has it. Also, Kawasaki is a children's illness. "About 80 percent of the people with Kawasaki Disease are under age 5." Most of those affected are boys, who develop the disease about 1.5 times as often as girls, and children of Asian descent. In the United States there have been reports of over 1,800 cases being diagnosed annually. (1)

Because of this it is extremely important that research be conducted and information distributed about this disease. It is necessary to raise awareness and to gather more information in hopes of one day deciphering this disease and being able to do away with it.

References

1)AMERICAN HEART ORGANIZATION
2)KAWASAKI DISEASE FOUNDATION
3)THE AMERICAN ACADEMY OF PEDIATRICS
4)THE HOSPITAL FOR SICK CHILDREN





I Have PMS and a Handgun, Any Questions?: Demysti
Name: Adrienne W
Date: 2002-11-08 12:54:33
Link to this Comment: 3613



Biology 103
2002 Second Paper
On Serendip

Most of us are familiar with PMS, the acronym that stands for premenstrual syndrome, perhaps largely through the jokes told about it. However, many women who suffer from PMS or PMDD will insist that these disorders are no laughing matter. PMDD, or premenstrual dysphoric disorder, is perhaps less well known, but its impact on a woman's life and health is equally significant, if not more so. Both PMS and PMDD refer to physical and mood-related changes that occur during the last one or two weeks of the menstrual cycle. What is the difference between the two disorders? For the most part, PMDD is simply a more acute manifestation of PMS, as its symptoms are more severe. Women with PMDD experience more severe mental symptoms than women with PMS, which is looked on as a more physical condition. Both are serious medical conditions that require treatment, even though their origins are still somewhat of a mystery.
The symptoms of PMS include bloating, weight gain, poor concentration, sleep disturbance, appetite change, and psychological discomfort (1). As part of its definition, the main symptoms of PMDD are actually the core symptoms of depression: irritability, anxiety, and mood swings. Other symptoms of PMDD include decreased interest in daily activity, difficulty concentrating, decreased energy, and sleep disturbances. Thus, although PMS also affects mental health, it does not interfere with daily functioning as much as PMDD. Premenstrual dysphoric disorder, the severest form of PMS, interferes with a woman's quality of life, much like depression. For this reason, many doctors believe that it, like depression, should be looked at as a serious medical condition that requires treatment.
The impact PMDD has on a woman's life and the life of those around her is not trivial, and should be taken more seriously by our society. According to a leading researcher in this field of study, Dr. Jean Endicott: "Many women report that their PMDD symptoms have caused seriously impaired relationships with relatives, friends, or co-workers, as well as with spouses or partners. Often, relationships have been lost because others say they can no longer 'put up with' some of the recurrent behaviors" (4). Of course, any disorder that interferes with the quality of one's life should be taken seriously. Unfortunately, in our society these disorders are looked upon as a joke more than anything. PMS is not covered in the medical curriculum; doctors that wish to seek more information and research on the subject must do so independently. Perhaps it is for this reason that the few researchers in the field remain unclear on the causes of PMS and PMDD.
There are, however, quite a few hypotheses on the causes of these disorders. Hormones must play a role, because women report the disappearance of symptoms after their ovaries are removed. There is also evidence that the brain chemical serotonin is a factor in the more severe PMDD. Studies attempting to link PMS and PMDD to genetics have also been conducted. In one study, daughters of mothers with PMDD were more likely to have it themselves. Also, 93% of identical twins in the study shared PMDD, a higher percentage than among the fraternal twins in the study (44%) (4). Another leading researcher in the field, Dr. Susan Thys-Jacobs, hypothesized that calcium deficiencies are the cause of PMS and PMDD, and her study supported this hypothesis. However, there is still much to be uncovered in the mystery of these disorders. Dr. Thys-Jacobs is currently testing her theory in an NIH-funded study.
Given the similarities of PMDD to depression, some doctors prescribe antidepressants to patients that suffer from PMDD. Many researchers believe that one of the causes of PMDD is a low level of serotonin, which is also a cause of depression. For this reason, SSRIs, antidepressants such as Prozac, Paxil, and Zoloft that increase serotonin levels in the brain, are considered to be effective treatments for PMDD by some doctors (2). However, there is some controversy in the medical community as to whether medication is a necessary and/or appropriate treatment. According to the PMS Project, an organization committed to the advancement of PMS and PMDD research, studies show that the most successful treatment of these disorders is a change of lifestyle and nutrition. The organization also argues that PMS and PMDD are too complex and have too many diverse symptoms to be treated with a single drug effectively (3).
What, then, would be a more natural treatment that fits within the parameters of lifestyle and diet change? Dietary change includes the elimination of all caffeine and a low-carbohydrate diet, which especially avoids simple, refined sugars (1). Vitamin supplements are also recommended for sufferers of PMS and PMDD: calcium, vitamin B6, vitamin E, and tryptophan, a precursor of serotonin, have been shown to ease symptoms in some women. Lifestyle changes include regular exercise and therapy. It has also been found that hormonal therapy, such as oral contraceptives with estrogen and progesterone, may be used to decrease the symptoms of PMS and PMDD.
Although the causes of these disorders are still unknown, women do have treatment options that have been proven to help ease symptoms. The problem, however, is that our society does not treat PMS and PMDD as serious disorders; and when they are treated seriously, the assumption is that women should simply be medicated and silenced. Hopefully, more women will take the initiative to demystify these disorders to help themselves, because there is help available. Perhaps there will also be more interested members of the medical community who will conduct more extensive research to advance the treatment. Either way, it is important for women to understand more about these disorders so they can help themselves.

References

1)Explains the differences between PMS and PMDD
2)Gives examples of the causes and treatments of PMDD and PMS
3)Official site of the PMS Project
4)A comprehensive explanation of PMDD


Are you SAD: The reality of Seasonal Affective Di
Name: Kathryn Ba
Date: 2002-11-08 13:47:49
Link to this Comment: 3617


<mytitle>

Biology 103
2002 Second Paper
On Serendip

The winter blues. Cabin fever. These terms bring to mind that glum feeling that overcomes many people during the winter months. Does this seasonal depression have any validity, or do we just get antsy when the temperature turns from scorching to frigid? About twenty years ago, Herbert E. Kern noted in himself regular seasonal emotional cycles, which he hypothesized might be related to seasonal variations in environmental light. He then learned from the findings of Alfred J. Lewy et al. that bright environmental light could suppress the nocturnal secretion of melatonin by the pineal gland in humans. In 1980-1981, Dr. Norman Rosenthal admitted Kern to his psychological unit and treated his symptoms of depression with bright light. Amazingly, the treatment worked. The follow-up study, in which the original results were replicated, led to the description of Seasonal Affective Disorder (SAD) in 1984 (1). Further research over the past two decades has led to a better understanding of SAD, including possible causes, the symptoms associated with SAD, and treatment options.

In North America, SAD affects four women for every one man, with an overall incidence ranging from 2% to 10%, and more people are affected at higher latitudes. The frequency of SAD in North America was double that in Europe, "suggesting that climate, culture, and genetics may be more important factors" (2). The differences in the epidemiology of SAD may cause one to pause. How can gender and cultural differences be accounted for in relation to SAD, and do gender differences as a result of culture explain both the epidemiology and etiology of the disease? Have populations in which women hold traditional gender roles, as opposed to many American and European women who are rapidly blurring gender boundaries, been studied?

General cultural differences between the United States and Europe are also an important point to consider. Do the two cultures place different emphasis on certain events, such as the stresses associated with the winter holiday season, which in turn lead to a larger prevalence of SAD in America? Are post-holiday blues the result of our culture, and do they lead to SAD? Do periods of economic decline contribute to the overall stress of one region, which then leads to more incidents of SAD? These questions, among many, may cause one to wonder what causes this disease: the environment, biology, or culture?

The etiology of SAD remains a topic of great debate. Rosenthal notes that "winter changes often involve energy conservation...many SAD symptoms can be seen as conserving-overeating, oversleeping and low sexual ability...Seasonal adaptations are adaptive in some circumstances, but not in humans, who must function at the same level all year" (3). One might wonder if his explanation is a bit shortsighted. Humans are no exception among mammals, birds, fish, reptiles, and other animals when it comes to conserving energy in the winter. Perhaps the necessity to conserve energy is not as obvious as it once was before modern technology provided Gore-Tex jackets, Polar-fleece gloves, or SmartWool socks. Before these luxuries, humans probably considered the importance, and indeed necessity, of keeping warm and conserving energy during the winter months. This may have taken the form of eating more food to add more fat to one's body, remaining in bed or next to a fire, and participating in as little physical exertion as possible. Not only is Rosenthal's explanation insufficient for these reasons, but he also does not take into account other factors, such as additional biological influences or a genetic component.

Another theory states that excessive or inadequate levels of neurotransmitters, such as serotonin, may cause depression. Interestingly, serotonin is known to decline in the autumn and throughout winter, a fact that might allow for correction by appropriate medications. Disturbed circadian rhythms have also been pointed to as the cause of SAD. At night, circadian rhythms lower body temperature and trigger the production of melatonin, a hormone that enhances sleep. If these are not functioning properly, it is theorized, one might experience symptoms associated with SAD (2).

A genetic factor might also provide an explanation. In a study of monozygotic and dizygotic twins, seasonality was shown to be a heritable trait (4). This area of research leads to interesting questions. For example, further research might attempt to determine what specific genetic factors are responsible for SAD. Also, one might ask if SAD is the combined result of genetic differences and environmental influences. A study that examines monozygotic and dizygotic twins separated at birth and raised in different environments, with data regarding the frequency of both twins having SAD, would be useful to determine what effects culture and genetics may have on this disease.

It is plausible that environmental, biological, and cultural factors combine to determine the occurrence of SAD among certain populations. For example, do people who live in northern latitudes have a greater chance of having SAD than people who live in southern latitudes, merely as a result of geographic and environmental differences? Do people inherit genes from their ancestors who made physical adaptations, in order to survive in a northern climate, that carry a SAD-related gene? If physical adaptations did occur, did those adaptations lead to cultural differences, which in turn increased the likelihood of SAD? As of now, these questions remain open, and despite the general success of treating SAD, its etiology remains elusive.

The standard US manual of psychiatric diagnoses, the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV), lists SAD as a subtype of major clinical depression. It is called a "specifier" because it refers to the seasonal depressive episodes that can occur within major depression and bipolar disorders. Specifically, the criteria for SAD are as follows:
A. Regular temporal relationship between the onset of major depressive episodes and a particular time of the year (unrelated to obvious season-related psychological stressors).
B. Full remission (or a change from depression to mania or hypomania) also occurs at a characteristic time of the year.
C. Two major depressive episodes meeting criteria A and B in the last two years and no non-seasonal episodes in the same period.
D. Seasonal major depressive episodes substantially outnumber the non-seasonal episodes over the individual's lifetime (5).
SAD is classified into two distinct types: fall-onset SAD, also called winter depression, and spring-onset SAD. Winter depression usually begins in late autumn and lasts through the spring or summer. Symptoms of this type of SAD include increased sleep, increased appetite, craving for carbohydrates, weight gain, irritability, and interpersonal conflict. The symptoms of spring-onset depression include insomnia, weight loss, and poor appetite, and typically begin in late spring or early summer and end in early fall (2). Patients with SAD report that their symptoms improve in lower latitudes, and worsen if they travel to an area with great winter cloud cover (4).

What can be done to help SAD patients? The most widely prescribed treatment is the use of a light box, a device that emits fluorescent light of approximately 2,500 to 10,000 lux. Lux is defined as "a unit of illumination intensity that corrects for the photopic spectral sensitivity of the human eye." Bright sunshine can be over 100,000 lux, a brightly-lit office is less than 500 lux, and an indoor light at night is only 100 lux (4). This treatment, which is 60% to 90% effective, rarely produces side effects. If they do occur, they may include: photophobia, headache, fatigue, irritability, hypomania, insomnia, and possible retinal damage. A typical treatment includes shining the light at a downward slope, while aiming at the eyes, for a period of 10 to 90 minutes daily, depending on the severity of SAD (5). The use of a light box is not effective with all SAD patients, in whose cases selective serotonin reuptake inhibitors (SSRIs) are prescribed. These medications generally are most effective when used in combination with light therapy. For practical reasons, some patients choose not to use light therapy because of the large time commitment. For this reason, the treatment of SAD must be considered on an individual basis (4).

The questions raised in this essay point to the necessity of considering differences in the epidemiology of SAD in terms of culture, biology, and environmental influences. It may be that none of these factors is the cause of SAD, but it is clear that these factors are thoroughly related and sometimes difficult to distinguish. For this reason, the etiology of this disease may be found by looking at these factors as a unit, instead of their individual parts.


References

1)Two decades of research into SAD and its treatments: A Retrospective, an article written by Dr. Rosenthal

2)Seasonal Affective Disorder: Autumn Onset, Winter Gloom, a clinical review of SAD

3)Modern Solutions to Ancient Seasonal Swings, from the November 2000 edition of Psychology Today

4)SAD: Diagnosis and Management, an article written by Raymond W. Lam, with general information about SAD

5)Seasonal Affective Disorders, an article


Yoga: Stress Reduction and General Well-Being
Name: Mahjabeen
Date: 2002-11-10 12:53:10
Link to this Comment: 3633


<mytitle>

Biology 103
2002 Second Paper
On Serendip

As the last few weeks of the semester approach, Bryn Mawr finds itself submerged in the stress of finishing syllabuses, writing papers, meeting deadlines, begging for extensions, and taking exams. The keyword here is stress. Stress is perhaps the most utilized word at Bryn Mawr, and as a junior with more than my share of work, I have also managed to accumulate more than my regular share of stress.

Nevertheless, with every problem comes its solution. There are many ways to manage and reduce stress, and one such technique is practicing Yoga.

Yoga is an ancient, Indian art and science that seeks to promote individual health and well-being through physical and mental exercise and deep relaxation. Although known to be at least 5,000 years old, Yoga is not a religion and fits well with any individual's religious or spiritual practice. Anyone of any age, religion, health or life condition can practice Yoga and derive its benefits.

Unique and multifaceted, yoga has been passed on to us by the ancient sages of India; early references to yoga are found in the spiritual texts of the Vedas, Upanishads and the Bhagavad-Gita. Patanjali's Yoga Sutras (the Eightfold Path) are still widely studied and practiced today. The Sutras form the basis of much of the modern yoga movement. (1).

The three major cultural branches of Yoga are Hindu Yoga, Buddhist Yoga, and Jaina Yoga. Within each of these great spiritual cultures, Yoga has assumed various forms.

Yoga is the practice of putting the body into different postures while maintaining controlled breathing. It is considered to be a discipline that challenges and calms the body, the mind, and the spirit. Preliminary studies suggest that yoga may be beneficial in the treatment of some chronic conditions such as asthma, anxiety, and stress, among others. (2).

By focusing on the breath entering and leaving your body, you are performing an exercise in concentration. If your mind wanders to other things, your focus on the breath will bring your concentration back. Research confirms that consciously directed breathing can have the following benefits: reduced stress, sound sleep, clear sinuses, smoking cessation, improved sports performances, relief from constipation and headaches, reduced allergy and asthma symptoms, relief from menstrual cramps, lower blood pressure, and emotional calmness. (3).

According to Dean Ornish, in his book, Reversing Heart Disease, "almost all of these (stress reduction) techniques ultimately derive from yoga." Yoga integrates the concepts of stretching, controlled breathing, imagery, meditation, and physical movement.

Yoga is thought of by many as a way of life. It is practiced not only for stress management but also for good physical and mental health and to live in a more meaningful way. Yoga is a system of healing and self-transformation based in wholeness and unity. The word yoga itself means to "yoke" -- to bring together. It aims to integrate the diverse processes with which we understand the world and ourselves. It touches the physical, psychological, spiritual, and mental realms that we inhabit. Yoga recognizes that without integration of these, spiritual freedom and awareness, or what the yogis call "liberation," cannot occur. (4).

Yoga's numerous health benefits, its potential for personal and spiritual transformation, and its accessibility make it a practical choice for anyone seeking physical, psychological, and spiritual integration. Interest in Yoga is surging throughout the world. Among the many different Yoga styles, Hatha yoga is the most familiar to Westerners. It is the path of health using breathing techniques and exercises concerning different postures to better mental and physical harmony.

During an experiment in biology lab concerning the measurement of heart rate, my partners and I experimented with yoga breathing as a technique to decrease heart rate and bring about relaxation. Our results did show a decrease in heart rate from the norm, and it was concluded that if yoga were practiced in a calm setting without a time constraint (neither of which was available to us in a noisy laboratory), there would have been a significant decrease in the practicing individual's heart rate. Moreover, from personal experience I can vouch that Yoga is indeed effective not only for stress reduction but also for an individual's general well-being.

All forms of Yoga teach methods of concentration and contemplation to control the mind, subdue the primitive consciousness, and bring the physical body under control of the will. In Hatha yoga, slow stretching of the muscles in exercise is taught, along with breathing in certain rhythmical patterns. The body positions or asanas for exercises and meditation can be learned, with some practice, by most. These positions are thought to clear the mind and create energy and a state of relaxation for the individual. Hatha Yoga is basically the style of Yoga practiced by most Westerners, not only for relaxation and stress reduction but also for the mitigation of pain during certain illnesses. Yoga is also widely recommended for pregnant and nursing women as well as those reaching menopause. (5).

In Britain, there is widespread practice of Yoga in the workplace. Employers who fund exercise programs for their employees are beginning to rule in favor of Yoga instead of the regular gym membership. Research shows an individual who is relaxed and at the peak of his mental and physical health will also perform better in the workplace. (6).

Yoga is so popular in today's world that it is increasingly characterized as a religion. Is Yoga a religion? Your guess is as good as mine. Since Yoga comes from Hindu, Buddhist and Jain scriptures, certain aspects of these religions are supposedly integrated in Yoga, such as the ideas of karma and reincarnation and the notion of there being many deities in addition to the one ultimate Reality. However, most Yoga gurus deny the existence of Yoga as a religion and go on to say that Yoga does not teach the idea of reincarnation or even impose karmic beliefs.

Yoga is one of the orthodox philosophies of India. While it is not a religion, it is theistic, that is, it teaches the existence of a Supreme Intelligence or Being. However, to practice the techniques of yoga successfully you do not need to believe in such a being. Because yoga is a spiritual rather than a religious practice, it does not interfere with any religion. In fact, many people find that it enhances their own personal religious beliefs. (7).

How can Yoga enrich the religious or spiritual life of a practicing Christian or Jew? The answer is the same as for any practicing Hindu, Buddhist, or Jaina. Yoga aids all those who seek to practice the art of self-transcendence and self-transformation, regardless of their persuasion, by balancing the nervous system and stilling the mind through its various exercises (from posture to breath control to meditation). Yoga's heritage is comprehensive enough so that anyone can find just the right techniques that will not conflict with his or her personal beliefs. (8).

More than that, yogic postures calm the nervous system and create sufficient space in the psyche to explore breathing control. They put the individual in touch with his or her body's life force and open up spiritual aspects of his or her being.

Yoga should not be looked at as a religion or an exercise; it is more of a system of well-integrated techniques and mind frames designed to alleviate stress and bring about universal harmony throughout one's body, thus infusing feel-good vibes in mind and body. In a world where most good things come with side effects, Yoga brings a refreshingly different perspective.

References


(1)Self Discovery: Mind and Spirit

(2)Stress Reduction Techniques and Therapies

(3)Kripalu Yoga, A Way to Better Health

(4)Self Discovery: Mind and Spirit

(5)How to do Meditation and Yoga to Reduce Stress

(6)Yoga for Stress Management and Yoga in the Workplace

(7)Yoga: FAQ

(8)Yoga Research and Education Center: Is Yoga a Religion?


"Follow Your Heart": Emotions and "Rational" Thoug
Name: Laura Bang
Date: 2002-11-10 13:24:08
Link to this Comment: 3634

<mytitle>

Biology 103
2002 Second Paper
On Serendip

"Follow Your Heart":

Emotions and "Rational" Thought

     Even with many definitions, from Aristotle's 4th century BC definition -- "the emotions are all those feelings that so change men as to affect their judgments, and that are also attended by pain or pleasure" (7) -- to Merriam-Webster's 20th century AD definition -- "the affective aspect of consciousness; feeling"(2) -- emotion, and what causes emotion, can be rather difficult to define, especially in non-scientific terms. Defining the difference between a "true" smile and a "false" smile is almost impossible to put into words, yet most people readily admit that they can distinguish between the two. (4) So what is it that defines emotion?
     Scientists are still trying to understand just what causes us to have emotions, but recent researchers have discovered the center of "emotions" in the brain. "A region at the front of the brain's right hemisphere, the prefrontal cortex, plays a critical role in how the human brain processes emotions," says a 2001 University of Iowa report. (6) Scientists monitored single brain cells—neurons—in the right prefrontal cortex and found

"that these cells responded remarkably rapidly to unpleasant images, which included pictures
of mutilations and scenes of war. Happy or neutral pictures did not cause the same rapid
response from the neurons." (6)

The scientists speculated that the rapid reaction of neurons to "unpleasant images" might be related to the results of other studies, which have shown that the brain is capable of responding very quickly to "potentially dangerous or threatening kinds of stimuli." (6) The study is not conclusive, however, as the experiment was performed on only one patient who had epilepsy, but the experimenters stated that "the tissue being studied was essentially normal, healthy prefrontal cortex." (6)
     Another interesting aspect of studies of emotion is the differences between the right and left hemispheres of the brain. Left-handed people, who are right-brain dominant, tend to be more emotionally and artistically oriented, but left-handed people are a minority of the population. Studies have shown that the left hemisphere of the brain is responsible for "logical thinking, analysis, and accuracy," whereas the right hemisphere is responsible for "aesthetics, feeling, and creativity." (8) The right brain also dominates in producing and recognizing facial and vocal expressions. (3) Unfortunately, most schools emphasize "left-brain modes of thinking, while downplaying the right-brain ones," (8) and society in general emphasizes rational thought over emotional thought. (1) "The classic assumption is that emotion wreaks havoc on human rationality..." (1) It has been argued, however, that emotions actually contribute to and aid rational thought, rather than being purely irrational thought. (5)
     In one study, a businessman, Elliot, suffered from a brain tumor that partially damaged his brain, specifically his prefrontal cortex—the emotional center of the brain. As a result, Elliot "lost the ability to experience emotion; and without emotion, rationality was lost and decision-making was a dangerous game of chance." (1) Without emotions, he could no longer analyze the experiences he had lived through, which left him with nothing to tell him whether a decision would be good or bad. Elliot's lack of emotional response to anything that he experienced led to a lack of understanding what is good and what is bad. This case seems to emphasize the importance of emotions in "rational" decision-making. Emotions "are fundamental building blocks out of which an intelligent and fulfilling life can be constructed." (1)
     Since emotions have been observed to be such an important part of who we are, it is worthwhile to wonder where emotions come from. Why do we have emotions, and how are we able to tell the difference between so many subtly different facial expressions that convey different emotions?
     Language is a very important part of what defines humanity and how we interact with and understand each other, and facial expressions play an important role in interpreting what another person is feeling—someone might say that they are okay, but their facial expressions might indicate that they are lying. The importance of facial expressions is easily seen "when we converse on an important subject with any person whose face is concealed." (4)
     How do we recognize emotions? When you see someone who is happy, do you pause to thoroughly analyze the person's features before determining that the person is indeed happy? Most people are not aware -- at least not consciously aware -- of performing any sort of in-depth analysis to determine what emotion someone else is feeling, so does that mean that emotions are innate? The discovery of an emotional center in the brain would seem to support this idea.
     When Charles Darwin studied emotion in humans and animals in the latter half of the nineteenth century, he hypothesized that emotions are innate, but that humans learned them before they became imbedded in our nature -- that is, after years of practicing and having to learn emotions as part of communication skills, emotions became innate through the process of evolution. (4) Further support of the idea that emotions are innate comes from observing infants and young children, who are definitely capable of conveying emotions, but have not had enough time to actually learn the emotions for themselves.

"I attended to this point in my first-born infant, who could not have learnt anything by associating
with other children, and I was convinced that he understood a smile and received pleasure from
seeing one, answering it by another, at much too early an age to have learnt anything by experience. ...
When five months old, he seemed to understand a compassionate expression and tone of voice. When a
few days over six months old, his nurse pretended to cry, and I saw that his face instantly assumed a
melancholy expression ... [T]his child never [saw] a grown-up person crying, and I should doubt whether
at so early an age he could have reasoned on the subject. Therefore it seems to me that an innate feeling
must have told him that the pretended crying of his nurse expressed grief; and this through the instinct
of sympathy excited grief in him." (4)

This demonstrates the importance of emotional facial expressions in communication—they are a child's first language, the first way a child may communicate with the others around him. (4)
     Scientists still have a lot more research to do before we can truly understand our emotions, but it is clear that emotions are an important part of who we are. Emotions are more than just whims or "following your heart;" emotions are a part of how we think "rationally," as seen in the case of Elliot, the man who lost his emotions. Therefore, it is ridiculous that society frowns on those who think too "emotionally" rather than "rationally" -- they are not two separate ways of thinking, but rather they are interconnected, so that we need both in order to make decisions about ourselves and the world around us. Emotions, and the facial expressions that go with them, are the most truthful aspects of humans—"They reveal the thoughts and intentions of others more truly than do words, which may be falsified." (4) Emotions are the intangible and indefinable elements that make us who we are.

"The joy, and gratitude, and ecstasy! They are all indescribable alike."
~ Charles Dickens (9)


References:

1) "Emotion, Rationality, and Human Potential," John T. Cacioppo (University of Chicago); from Fathom: the source for online learning

2) Merriam-Webster OnLine Dictionary: "emotion"

3) "Emotion and the Human Brain" by Leslie Brothers, MIT Encyclopedia of Cognitive Science

4) "The Expression of the Emotions in Man and Animals" by Charles Darwin (1872); Courtesy of "The Human Nature Review" edited by Ian Pitchford and Robert M. Young

5) "Emotions" by Keith Oatley, MIT Encyclopedia of Cognitive Science

6) "UI study investigates human emotion processing at the level of individual brain cells" (Week of January 8, 2001), University of Iowa Health Care News

7) Aristotle's Rhetoric, Book II, Chapter 1, Translated in 1954 by W. Rhys Roberts; written by Aristotle in 350 B.C.

8) "Right Brain vs. Left Brain"

9) Dickens, Charles. "A Christmas Carol", from The Christmas Books. First published in 1843.


How Do We Know What We Know? Tacit Knowledge Defin
Name: Diana La F
Date: 2002-11-10 15:07:00
Link to this Comment: 3635


<mytitle>

Biology 103
2002 Second Paper
On Serendip

When I asked a certain professor for help in defining tacit knowledge, he stated that it is "the knowledge that we have without knowing we know it" and that "once we know we know it, it becomes harder to know how we know what we know." WHAT?!?! Needless to say, this confused me to no end and only created more questions. The more I researched, the more fuzzy the idea of tacit knowledge became to me.

Tacit knowledge is the knowledge that people have that can not be readily or easily written down, usually because it is based in skills (1). It is silent knowledge that emerges only when a person is doing something that requires such knowledge or when they are reminded of it (2). Whatever governs this knowledge is not conscious. This covers a surprising amount of knowledge that most people have, such as attention, recognition, retrieval of information, perception, and motor control. These are known skills, but they are not easily explained (3). This is not a knowledge that can be explained through a system or an outline in a book (4).

The person credited with the theory of tacit knowledge is Michael Polanyi (1891-1976). Polanyi was a chemist, born in Budapest, who became a philosopher later in his life (5). Polanyi thought that humans are always "knowing" and are constantly shifting between tacit knowledge and focal knowledge, and that this shifting is itself a tacit skill, used to blend new information with old so that we can better understand it. More simply put, people categorize the world in order to make sense of it. This is something that everyone does, whether they realize it or not, and it cannot be replaced by another method. Taken in this context, it may be better to define knowledge itself as a method of knowing. Each person will have the reality of their world shaped by their experience. In this context, all knowledge is rooted in the tacit (6).

This is all well and good, but what does it mean? The problem with understanding tacit knowledge is that it is nearly impossible to grasp and think on directly (7). In order to help someone understand tacit knowledge, all one can do is give them opportunities to teach it to themselves (8). This is most easily done through examples. This paper is made up of small characters based in the Phoenician alphabet. While reading this, were you even aware of the characters? Probably not. You probably skimmed over the words, heedless of how they were composed, understanding only what the grouping of letters meant. But how do you know the meaning of the words? You didn't have to think about them; you just saw the words and somehow understood what they meant. This is a tacit knowledge (6). In the same way, language can be considered a tacit knowledge. How do you know whatever language you speak most often? Do you put any conscious thought into how you say something, or do you just know how to say what you want to convey (9)? Here's another example. What's wrong with the following sentence:

The girl throws ball the.

There is a grammatical rule involved with the exact reason why the above sentence is incorrect. Were you even slightly conscious that you had learned this rule (8)? Tacit knowledge goes beyond rules and meanings that we have learned somewhere along the way but that we have pushed back beyond our consciousness. When you see your friend you can recognize their face. Yet, how do you do this? How do you recognize and differentiate between two people? Many people have brown eyes, dark hair, short hair, etc. How can you tell the difference, and in a split second also(6)?

Tacit knowledge is not easily understood. The more I researched, the harder it became for me to explain, even to myself, what tacit knowledge was. I later figured out the reason for this: I understood tacit knowledge tacitly. Without the aid of examples of what tacit knowledge was, I would still be utterly confused as to the meaning of the phrase. Once it is understood, the explaining seems to come easily. Explaining it in a manner that help others understand it better is almost impossible, though. They must learn it through experiences they themselves have had if they are to understand it at all.

References

1)Management-resources.org

2)Tacit Knowledge

3)Models of Tacit Knowledge

4)Tacit Knowledge

5)Michael Polanyi 1891-1976

6)Polanyi-Tacit Knowledge

7)Tacit knowledge - riding a bike - John Seely Brown

8)Tacit knowledge and implicit learning

9)Dictionary of Philosophy of Mind - tacit knowledge


How Do We Know What We Know? Tacit Knowledge Defin
Name: Diana La F
Date: 2002-11-10 15:07:11
Link to this Comment: 3636



Biology 103
2002 Second Paper
On Serendip

When I asked a certain professor for help in defining tacit knowledge, he stated that it is "the knowledge that we have without knowing we know it" and that "once we know we know it, it becomes harder to know how we know what we know." WHAT?!?! Needless to say, this confused me to no end and only created more questions. The more I researched, the fuzzier the idea of tacit knowledge became to me.

Tacit knowledge is knowledge that people have but cannot readily or easily write down, usually because it is based in skills (1). It is silent knowledge that emerges only when a person is doing something that requires it or when they are reminded of it (2). Whatever governs this knowledge is not conscious. This covers a surprising amount of what most people know, including attention, recognition, retrieval of information, perception, and motor control. These are familiar skills, but they are not easily explained (3). This is not knowledge that can be explained through a system or an outline in a book (4).

The person credited with the theory of tacit knowledge is Michael Polanyi (1891-1976). Polanyi was a chemist, born in Budapest, who became a philosopher later in his life (5). Polanyi thought that humans are always "knowing," constantly shifting between tacit knowledge and focal knowledge; this shifting is itself a tacit skill, used to blend new information with old so that we can better understand it. Put more simply, people categorize the world in order to make sense of it. This is something that everyone does, whether they realize it or not, and it cannot be replaced by another method. Taken in this context, it may be better to define knowledge itself as a method of knowing. Each person's reality is shaped by his or her experience. In this context, all knowledge is rooted in the tacit (6).

This is all well and good, but what does it mean? The problem with understanding tacit knowledge is that it is nearly impossible to grasp it and think about it directly (7). To help someone understand tacit knowledge, all one can do is give them opportunities to teach it to themselves (8). This is most easily done through examples. This paper is made up of small characters based on the Phoenician alphabet. While reading this, were you even aware of the characters? Probably not. You probably skimmed over the words, heedless of how they were composed, understanding only what the groupings of letters meant. But how do you know the meaning of the words? You didn't have to think about them; you just saw the words and somehow understood what they meant. This is tacit knowledge (6). In the same way, language can be considered a tacit knowledge. How do you know whatever language you speak most often? Do you put any conscious thought into how you say something, or do you just know how to say what you want to convey (9)? Here's another example. What's wrong with the following sentence:

The girl throws ball the.

There is a grammatical rule behind the exact reason why the above sentence is incorrect. Were you even slightly conscious that you had learned this rule (8)? Tacit knowledge goes beyond rules and meanings that we learned somewhere along the way but have pushed back beyond our consciousness. When you see your friend, you can recognize their face. Yet how do you do this? How do you recognize and differentiate between two people? Many people have brown eyes, dark hair, short hair, etc. How can you tell the difference, and in a split second at that (6)?

Tacit knowledge is not easily understood. The more I researched, the harder it became for me to explain, even to myself, what tacit knowledge was. I later figured out the reason for this: I understood tacit knowledge tacitly. Without the aid of examples of what tacit knowledge was, I would still be utterly confused as to the meaning of the phrase. Once it is understood, the explaining seems to come easily. Explaining it in a manner that helps others understand it better is almost impossible, though. They must learn it through experiences they themselves have had if they are to understand it at all.

References

1) Management-resources.org

2) Tacit Knowledge

3) Models of Tacit Knowledge

4) Tacit Knowledge

5) Michael Polanyi 1891-1976

6) Polanyi - Tacit Knowledge

7) Tacit knowledge - riding a bike - John Seely Brown

8) Tacit knowledge and implicit learning

9) Dictionary of Philosophy of Mind - tacit knowledge


The Essential Nutrients: Are You Getting Them?
Name: Anastasia
Date: 2002-11-10 21:26:42
Link to this Comment: 3639



Biology 103
2002 Second Paper
On Serendip

After returning home from the hospital, he wondered how it happened. A heart attack at thirty-six was a surprise to him as well as to his entire family. Luckily the signs and symptoms were caught early enough to save his life, but it was discovered that he had a disorder known as hypertensive heart disease. Unfortunately, doctors were unable to explain why he had such high blood pressure. At any given moment a second myocardial infarction could occur, and it could in fact be fatal if not caught early enough. His heart condition led to depression and thoughts of suicide. He had lived a regular life, eating healthy foods, exercising regularly, and working in a profession that he enjoyed. This disease seemed to just creep up on him and changed his life forever. As he researched the disease more and more and sought medical assistance, he found that it all could have been prevented if only he had added a few small capsules to his diet. Doctors told him that vitamins and dietary supplements could have saved him from becoming another statistic.

Vitamins are organic molecules required in the diet in amounts that are small compared with the relatively large quantities of essential amino acids and fatty acids animals need. Tiny amounts of vitamins, ranging from .01 to 100 mg per day, may be enough, depending on the vitamin. Although the requirements for vitamins seem modest, these molecules are essential in a nutritionally adequate diet. Deficiencies can cause severe problems. In fact, the first vitamin to be identified, thiamine, was discovered as a result of the search for the cause of a disease called beriberi. Its symptoms include loss of appetite, fatigue, and nervous disorders. Beriberi came to attention when it struck soldiers and prisoners in the Dutch East Indies during the nineteenth century. The dietary staple for these men was polished rice, which had the hulls removed in order to increase storage life. The men, and even the chickens, that ate this diet developed the disease. It was found that supplementing their diets with unpolished rice could prevent beriberi altogether. Scientists later isolated the active ingredient of rice hulls. Since it belongs to the chemical family known as amines, the compound was named "vitamine," or vital amine.

Along with the thirteen vitamins that are essential to human beings, many scientists now believe there are dietary supplements that are vital for the success of life, such as Co-enzyme Q10. Co-enzyme Q10 is believed by many to relieve ailments and promote good health as well as a feeling of well-being (1). It is found in every cell in the body and acts as a catalyst in the production of ATP, which is used as energy for cellular function. If levels of Co-enzyme Q10 drop within the body, then that person's energy levels will drop as well. Taken as a dietary supplement, Co-enzyme Q10 helps guard against possible deficiencies. It helps fight aging by increasing the supply of the enzyme as the liver's ability to synthesize it decreases.

Co-enzyme Q10 improves cardiac function by providing energy to the heart. It contains properties that are beneficial in preventing cellular damage during a heart attack (9). The enzyme has also been used to treat other cardiac disorders such as angina, hypertension, and congestive heart failure. Incidentally, it is also helpful during chemotherapy because it provides additional enzymes while the body's supplies are being destroyed by the chemotherapeutic agents (1). In addition to these benefits, it has also been noted that Co-enzyme Q10 is effective in causing a regression of gum disease, boosts the immune system, and can greatly benefit the obese (2).

Lastly, it has been noted that Co-enzyme Q10 is the most vital fuel for mitochondria. There are roughly 60,000,000,000 cells in the body. In those cells there are 100,000,000,000,000 microscopic bacteria-like organelles called mitochondria. All premature diseases and sickness are attributed to poor mitochondrial health or a low Q10 supply to the mitochondria (3). Mitochondria can potentially live one hundred years if they are supplied with proper nutrients such as Q10, hydrogen, phosphates, oxygen, vitamins, and minerals (3). Inefficient energy production within cells can cause approximately ninety percent of all mutative damage to cell infrastructure. Q10, if taken daily, will not stop this natural destruction completely, but will help to slow the process.
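Taking the essay's own counts at face value, a quick back-of-the-envelope calculation shows what they imply per cell. This is only a sketch of the arithmetic; the two totals come from reference (3) above, not from independent measurement:

```python
# Rough arithmetic behind the cell/mitochondria figures quoted above.
# Both counts are the essay's own numbers, reproduced here only to show
# the average number of mitochondria per cell that they imply.
cells = 60_000_000_000               # 6 x 10^10 cells, as stated
mitochondria = 100_000_000_000_000   # 1 x 10^14 mitochondria, as stated

per_cell = mitochondria / cells
print(f"Implied average: about {per_cell:,.0f} mitochondria per cell")
# -> Implied average: about 1,667 mitochondria per cell
```

An average on the order of a thousand or two mitochondria per cell is in line with the idea that these organelles dominate the cell's energy budget.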

Along with this revolutionary supplement, researchers have said that introducing fish oils into the daily diet can also be beneficial. There are good fats and there are bad fats in the foods that humans consume. Artificially produced trans-fatty acids are bad in any amount, and saturated fats from animal products should be kept to a minimum (4). The beneficial fats, or rather oils, since they are liquid at room temperature, are those that contain the essential fatty acids, which are polyunsaturated. They are grouped into two families, the omega-6 EFAs and the omega-3 EFAs. Minor differences in their molecular structure make the two families act very differently in the body. While the metabolic products of omega-6 acids promote inflammation, blood clotting, and tumor growth, the omega-3 acids act in entirely the opposite way. Although both the omega-6 acids and omega-3 acids are needed, it is becoming increasingly clear that an excess of omega-6 fatty acids can have dire consequences (4). Many scientists believe that a major reason for the high incidence of heart disease, hypertension, diabetes, obesity, premature aging, and some forms of cancer is the imbalance between the intake of omega-6 and omega-3 fatty acids. In the past, diets included a ratio of omega-6 to omega-3 of about 1:1. An enormous change in dietary habits over the last few centuries has shifted this ratio to something closer to 20:1, which is a huge problem (7).

Several studies have associated low levels of omega-3 fatty acids with depression. Other studies have shown that countries with high levels of fish consumption have fewer cases of depression. Researchers at Harvard Medical School have even gone as far as to use fish oil supplementation to treat bipolar disorder, and British researchers report encouraging results in the treatment of schizophrenia (4). It has even been noted that fish oils prevent and may help to ameliorate or reverse atherosclerosis, angina, heart attack, congestive heart failure, arrhythmias, stroke, and peripheral vascular disease. Fish oils help maintain the elasticity of artery walls, prevent blood clotting, reduce blood pressure, and stabilize heart rhythm (5). Supplementing with fish oils has been found to be entirely safe even for periods as long as seven years, and no significant adverse effects have been reported in hundreds of clinical trials using as much as 18 grams a day of fish oils (6).

There is also considerable evidence that the consumption of fish oils can delay or reduce tumor development in breast cancer. Studies have shown that a high blood level of omega-3 fatty acids combined with a low level of omega-6 acids reduces the risk of developing breast cancer (8). Daily supplementation with as little as 2.5 grams of fish oils has been found effective in preventing progression of benign polyps and even colon cancer. Greek researchers report that fish oil supplementation improves survival and quality of life in terminally ill cancer patients.

Heart disease and cancers are killers that affect many human beings. They could possibly be prevented by taking supplements such as Co-enzyme Q10 or adding more omega-3 fatty acids to a daily diet. If started early enough, supplementation could change a person's life. It seems like a small price to pay to save a life.


References

1) Co-enzyme Q10, Information provided by Alaron Products Ltd

2) Co-enzyme Q10, Information provided by Symmetry

3) Q10 Stable Co Enzyme Australia, Co-Enzyme.com

4) Fish Oils: The Essential Nutrients, Hans R. Larsen

5) Simopoulos, Artemis. Omega-3 fatty acids in health and disease and in growth and development. American Journal of Clinical Nutrition, Vol. 54, 1991, pp. 438-63

6) Pepping, Joseph. Omega-3 essential fatty acids. American Journal of Health-System Pharmacy, Vol. 56, April 15, 1999, pp. 719-24

7) Connor, William E. Importance of n-3 fatty acids in health and disease. American Journal of Clinical Nutrition, Vol. 71 (suppl), January 2000, pp. 171S-75S

8) Cave, W.T. Jr. Dietary omega-3 polyunsaturated fats and breast cancer. Nutrition, Vol. 12, January 1996, pp. S39-42

9) Alternatives For Cardiovascular Health, Co-Q-10


The Mystery of Morality: Can Biology Help?
Name: Anne Sulli
Date: 2002-11-10 22:14:55
Link to this Comment: 3640



Biology 103
2002 Second Paper
On Serendip

What is morality? This ambiguous yet powerful concept has puzzled mankind for centuries, never lending itself to a concise and solitary definition. The concept of morality assumes different meaning and value for various individuals—at times becoming synonymous with religion, sympathy, virtue, or other equally ambiguous terms. In recent years, scientists have acquired a unique voice in the ongoing debate of human morality. Biologists often turn to the past, reaching for the origins of morality, to elucidate its mystery. Their arguments incite heated debate, particularly with religious thinkers who link morality with God and salvation. Indeed, a scientific explanation for morality not only threatens the authority of religion; it also forces humans to reevaluate their self-image as a species. Yet it must be asked how, and to what extent, biology helps us understand what morality is and how it has evolved. Can one fully explain man's inclination to moral sentiment with science? The sheer duration and intensity of the debate regarding morality proves that there are no clear answers. The following exploration will show how a biological vantage point may be useful in understanding morality. Such a lens, however, is limited and unable to fully expose this mystery.

Central to the morality debate is the disagreement regarding ethical behavior as man's invention or as an intrinsic human quality. The latter belief is consistent with the idea of a law-giving God and the notion of natural rights (1). A biological exploration may not allay this dispute, but it can—by accounting for its origins—encourage a greater understanding of what morality is. From a biological perspective, moral aptitude is like any other mental trait: the product of competitive natural selection (1). Is this to say that moral beings were more likely to survive and therefore "chosen" by nature to thrive? Such a statement is difficult to prove. Rather, it is more likely that moral behavior is merely a part of a larger system 'tested' by time and nature's selection process. Morality, therefore, may not be an adaptive feature itself, but one associated with another trait or traits preferred by natural selection (3). Such a quality is said to be a pleiotropic trait (3). The necessary trait for the development of morality is a higher intelligence (3). Moral aptitude is a product of intelligence just like any other intellectual ability—literature, art, and technology, for example—faculties which may not be adaptive themselves (3).

Indeed, an increased human intelligence provides the necessary conditions for moral conduct. An essential and primary ingredient for moral judgment, for instance, is the ability to predict the consequences of one's actions (3). When isolated, certain actions cannot be deemed moral or immoral behavior. The act of pulling a trigger (not an inherently unethical action) is the classic example (3). Only when one is able to anticipate the outcomes of his or her actions can such behavior be declared moral or not (3). This ability is perhaps born alongside the evolution of the erect position in human beings. As man's posture evolved, his limbs changed from simple appendages used for movement to organs of operation (3). Man, for example, can now create tools to aid his existence. The ability to perceive tools as a future aid, however, must precede the act of tool-making. Along with the physical ability to create tools sprouts the intellectual power to anticipate the future—to relate means to an end (1). An increased intelligence, therefore, indirectly augmented man's capacity for moral judgment.

If morality is to be understood as a product of higher intelligence, the question now turns to the motives which inspire ethical behavior. For if moral judgment is an intellectual process rather than an innate tendency, our motives for behaving ethically cannot be purely altruistic. Intelligence, for example, provides the ability to maneuver the conflict between cooperation and defection (1). The most classic example of this situation is the noted Prisoner's Dilemma, which seems to prove that even criminals act under honor and moral principle (4). When two criminals are arrested together, neither will "rat" on the other; they will, instead, accept punishment together (4). This seemingly altruistic behavior is actually the consequence of an intellectual process weighing the benefits and drawbacks of both possibilities—cooperation and defection. The prisoners decide that because both members are capable of "ratting" on the other with hopes of securing immunity, it is safer and mutually beneficial to preserve their alliance (4).
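The weighing of benefits and drawbacks described above can be sketched with the conventional textbook payoff matrix for the Prisoner's Dilemma. The sentence lengths below are the standard illustrative values, not figures from the sources cited, and the sketch shows the tension the paragraph describes: each prisoner does better individually by defecting, yet both do better together by staying silent.

```python
# A minimal sketch of the Prisoner's Dilemma payoffs (textbook values).
# Lower numbers (fewer years in prison) are better for the prisoner.
SENTENCES = {
    # (my_choice, other_choice): my sentence in years
    ("silent", "silent"): 1,   # both cooperate with each other
    ("silent", "rat"):    10,  # I stay silent, the other defects
    ("rat",    "silent"): 0,   # I defect and go free
    ("rat",    "rat"):    5,   # both defect
}

def best_response(other_choice):
    """Return the choice that minimizes my sentence, given the other's choice."""
    return min(("silent", "rat"), key=lambda me: SENTENCES[(me, other_choice)])

# Defecting is individually better no matter what the other prisoner does...
assert best_response("silent") == "rat"
assert best_response("rat") == "rat"
# ...yet mutual silence leaves both better off than mutual defection.
assert SENTENCES[("silent", "silent")] < SENTENCES[("rat", "rat")]
```

This is why sustained cooperation between the prisoners reads as a calculated alliance rather than pure altruism: it pays off only when each expects the other to honor it.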

Such a scenario can be imagined in other situations—for intellectual activity is always at work. From this perspective, morality is a mental process that measures the potential benefits of one's actions. Individual profit, then, is the primary consideration, although it may often be disguised as an ethical code (1). This tendency is present even in animals. Vampire bats, for example, drink blood at night for sustenance and often feed those bats that could not acquire food for themselves (4). The obvious payoff is that the "altruistic" bat may in turn be assisted in the future. Cooperation, therefore, is mutually beneficial and ensures the survival of the species (4). Personal interest as the driving force for ethical behavior may be found even in religious morality (5). For why might a person live under the moral guidelines of a particular religion? For many, it may be to secure a personal reward in the afterlife (5).

When an intellectual or mental process is intertwined with moral judgment, motives for ethical behavior can clearly be viewed as calculated and selfish. Morality, however, cannot be reduced to the simple measuring of gains and losses. It is undeniable that ethical behavior is often enacted even when a foreseeable advantage to such conduct is absent. The compassionate treatment of others, particularly when there is no perceivable reward, is indeed a puzzling issue—one which religious thinkers often attribute to the existence of a loving God (4). Yet putting oneself at risk for the sake of another is not unique to human beings; animals also exhibit such behavior (3). When a flock of zebras is attacked, for example, they will each scramble to protect the young within the group, endangering their own lives (3). Humans also react with such instincts, proving that an intellectual process is not always involved in moral judgment. The scientific argument, therefore, that morality is the indirect result of a higher intelligence, may not provide a complete explanation. Only some instances of moral behavior can be attributed to this theory.

A biological explanation of morality is insufficient in other areas as well. As previously noted, evolution provides a heightened intelligence which sets the foundation for morality (4). An important distinction, however, must be identified—that between a human's capacity for moral judgment and the ethical norms accepted by society (3). While the former is indeed influenced by biology, the latter is most likely a product of social and cultural elements (3). Although it appears that natural selection may favor certain moral codes (the ban on incest and the restriction of divorce, for instance, are moral codes that contribute to successful reproduction), it does not, in fact, favor all ethical norms (3). The models discussed earlier, which involve risking one's own life for the sake of another, are clearly not in keeping with natural selection. Moreover, biology cannot justify such codes because our moral standards are both constantly changing and widely varied amongst different cultures (3). Finally, the same heightened intelligence which makes ethical behavior possible would also grant humans the power to accept or reject moral norms (4). Biology clearly accounts only for the development of man's capacity for moral behavior, not the moral codes he has come to accept. Francisco J. Ayala of the University of California likens the distinction to a human's biological capacity for language versus his use of a particular language (3). While biology provides humans with the capability to use language, natural selection does not prefer any specific language over another (3).

What, then, can be concluded about morality? Each voice in this debate (What is morality? Is it an innate quality or social construction? From where does it originate?) provides unique and interesting insight. Yet each argument is limited, providing only a fragment of understanding to the larger puzzle. The biological perspective is one such voice—it demystifies some of the enigma yet will never suffice as a solitary explanation. Biology shows one's capacity for ethical behavior is a product of evolution, but its explanation cannot extend much further. To truly gain a heightened understanding of this ambiguous and highly charged concept, it is most useful to consider not only biological factors, but social, cultural, and psychological influences as well. An interdisciplinary exploration—a union, rather than a separation, of various fields—is indispensable.


References

1) The Biological Basis of Morality

2) Biology Intersects Religion and Morality

3) The Difference of Being Human

4) Morality Without God

5) The Basis of Morality: Scientific Vs. Religious Explanations


Cook Your Meat, Please!
Name: Joanna Fer
Date: 2002-11-10 22:14:57
Link to this Comment: 3641

Biology 103
2002 Second Paper
On Serendip

Bacteria are found everywhere. They are in our mouths, on our hands, floating in the ocean, sitting on branches of trees, hanging out underneath the bed. Bacteria such as cyanobacteria were and still are essential for the survival of life on earth. Cyanobacteria produce oxygen; so much that without their existence, there might be no oxygen for animals to breathe. (1) Bacteria were first discovered by Antony van Leeuwenhoek in 1683. Van Leeuwenhoek was a Dutch scientist who made some of the first observations of bacteria, and recorded them, using microscopes he made himself. (2) Currently, we make a big deal about bacteria, creating anti-bacterial soap to combat our evil mini friends. However, as many people do know, not all bacteria are bad for humans. In fact, most are not bad at all. There are many kinds that do cause disease, though, and these should not be overlooked in any backlash against society telling us to rid ourselves of bacteria. There are very dangerous sorts of bacteria, some of which we have a certain amount of protection against, especially if we take the proper precautions when engaging in potentially harmful situations. Some diseases that are caused by different sorts of harmful bacteria are: pneumonia, tuberculosis, typhoid fever, whooping cough, diphtheria, and tetanus. There are many other, lesser-known diseases also caused by bacteria. One such disease is hemolytic uremic syndrome (HUS), which is caused by the ingestion of the bacterium Escherichia coli O157:H7. This is a bacterium everyone should watch out for.

"Nancy Donley['s] six-year-old son, Alex, was infected with the bug [E. coli 0157:H7] in July of 1993 after eating a tainted hamburger. His illness began with abdominal cramps that seemed as severe as labor pains. It progressed to diarrhea that filled a hospital toilet with blood. Doctors frantically tried to save Alex's life, drilling holes in his skull to relieve pressure, inserting tubes in his chest to keep him breathing, as the Shiga toxins destroyed internal organs. He became ill on a Tuesday night, the night after his mother's birthday, and was dead by Sunday afternoon. Toward the end, Alex suffered hallucinations and dementia, no longer recognizing his mother or father. Portions of his brain had been liquefied." - Eric Schlosser, Fast Food Nation (3)

Hemolytic uremic syndrome (HUS) is a disease caused mostly by E. coli O157:H7. It is a disease of the intestinal system, sometimes causing severe kidney disease. The disease basically shreds the inside of the body, causing bloody diarrhea and severe abdominal cramping. There is no known cure for the disease; all that can be done is to let it run its course. Blood transfusions are possible and sometimes necessary. After surviving the initial onslaught of the disease, the possibility of kidney disease leading to kidney failure is high, with ten to thirty percent of children who get past the initial stages being further victimized by kidney problems. Children are most affected by this disease when they are younger than five; it is one of the leading causes of kidney disease and failure among children. HUS kills five to ten percent of the children it severely affects. (4) Alex Donley, above, was a victim of this rare disease.

This horrible death could have been prevented. E. coli O157:H7 is carried in the intestines of cattle. Cattle – and humans – can carry this particular strain of bacteria without being harmed by it. If it is ingested, a human can still survive, with little to no symptoms of disease. However, especially in young children, the bacteria can "release a powerful toxin – called a 'verotoxin' or 'Shiga toxin' – that attacks the lining of the intestine" (Schlosser, 199). The hamburger Alex ate was probably not properly cooked. The meat had been contaminated with E. coli O157:H7, and Alex contracted HUS from it. Escherichia coli bacteria are naturally found in human digestive systems; they help us digest our food. However, certain strains, such as O157:H7, can be horrendously harmful to humans.

Escherichia coli O157:H7 was first identified as a cause of disease in 1982. Most cases of the bacteria in humans come from the consumption of tainted meat. It is most often found in ground beef. A single ground beef patty can contain parts from up to 100 different cattle; if one of those cattle carries a deadly bacterium, then that patty, as well as the others that animal is now part of, will carry the same bacterium. One infected animal's meat has the potential to reach hundreds of people. However, just because the bacteria are there does not mean someone will die or become sick from them. The potential for disease exists as long as the bacteria exist. Precautions should be taken to avoid it, even if the chance of it actually being in the meat is very small.
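The 100-cattle-per-patty figure above explains why pooling meat amplifies risk. A back-of-the-envelope sketch makes this concrete; the 1% carrier rate below is an assumed, purely illustrative number, not a figure from the sources cited:

```python
# Why mixing meat from many cattle amplifies contamination risk.
# cattle_per_patty comes from the essay; prevalence is an assumed,
# illustrative carrier rate chosen only to show the effect of pooling.
cattle_per_patty = 100
prevalence = 0.01  # assumed fraction of cattle carrying a harmful strain

# Probability that at least one of the animals in a patty is a carrier:
# 1 minus the chance that every single one of them is clean.
p_contaminated = 1 - (1 - prevalence) ** cattle_per_patty
print(f"Chance a patty includes meat from a carrier: {p_contaminated:.0%}")
# -> Chance a patty includes meat from a carrier: 63%
```

Even a small per-animal carrier rate turns into a large per-patty risk once a hundred animals are mixed, which is exactly why a single infected animal "has the potential to reach hundreds."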

There are several microbiological ways to isolate E. coli O157:H7 from food. According to the FDA, "isolates of 0157:H7 do not ferment sorbitol and are negative with the MUG assay; therefore, these criteria are commonly used for selective isolation." Most bacteria do ferment sorbitol in tests in which meat (or another item being tested) is placed with sorbitol to determine whether the sorbitol ferments. If the sorbitol ferments, the meat is said to be good. If not, the result is then analyzed further. MUG is a test that checks for coliform and E. coli bacteria. The FDA is working on new, faster ways to determine whether meat is contaminated. (5)

It is very, very simple to avoid this disease and this bacterium. The easiest way to avoid getting sick from foods we like is to cook hamburgers to well done. It is not enough to merely brown the outside of a burger; the meat has to be heated to 160 degrees Fahrenheit in order for the bacteria to be killed. We live in a society with the capability to prevent this harmful strain from ever entering the human body. With our ability to cook meat, we should be able to stop it. With our ability to sterilize meat, we should be able to stop it. With our ability to check meat for harmful bacteria, we should be able to stop it. However, because humans are fallible – too fallible – this bacterium is allowed to continue killing children and harming adults. We need to cook our meat, we need to check our meat, we need to sterilize (to the best of our ability) our meat. The bacteria are found not only in meat but in any food product that might have been near cattle feces, which on large farms is quite a lot. A simple check or two for harmful bacteria solves the problem. We just need to be a little more careful. Death as a result of E. coli O157:H7 is ridiculous and unnecessary.

WWW Resources


1. http://www.microscopy-uk.org.uk/mag/wimsmal/bacdr.html
2. http://www.ucmp.berkeley.edu/history/leewenhoek.html
3. Schlosser, Eric. Fast Food Nation. New York: Houghton Mifflin, 2001
4. http://www.niddk.nih.gov/health/kidney/summary/hus/
5. http://vm.cfsan.fda.gov/~mow/chap15.html
6. http://www.aphis.usda.gov/vs/ceah/cahm/Dairy Cattle/ndhep/decoli2.pdf
7. http://www.wispolitics.com/freeser/pr/pr0209/sept27/pr02092728.html


My Two-Year Old is a Punk Rocker??
Name: Margaret H
Date: 2002-11-10 23:32:10
Link to this Comment: 3643



Biology 103
2002 Second Paper
On Serendip


Although somewhat uncommon, it is still possible for toddlers to exhibit behavior one might expect at a rock concert: head banging. For a multitude of reasons, head banging can develop into a habit for young children and can even last for a few years. This striking behavior usually does not result in any permanent injury or damage to the child. Rarely does a child banging her head against the wall, crib, pillow, or another object signify a serious condition or disease. There are, however, a few explanations for head banging other than the child's future career in the music business.

Up to 20% of healthy children can be found head banging (1). The behavior begins sometime within the first year and can last for a few years afterwards. Most sources, however, recommend seeing a doctor if the behavior continues past the age of 4. Head banging can occur at several different times: at sleepy times, during tantrums, and even throughout the night (1). Children can also start randomly, without any apparent provocation or reason. Because children do not have to be in a certain location for the head banging to start, heads can be hit against any type of material; walls, cribs, pillows, and floors are the most common surfaces. At times, children can wake up with a headache, develop nasal problems, or get an ear infection as a result of the repeated banging (4). Other consequences include a temporary bald spot at the site of the banging (2). Toddlers' heads are adapted to the normal bumps and bruises associated with learning to walk and climb, which prevents more serious head trauma (1).

Although head banging usually is not considered serious or worthy of medical attention, it does have a clinical classification as a movement disorder, known as rhythmic movement disorder. Movements classified under this disorder seem to occur especially during the transition between wakefulness and sleep, as well as between the different stages of sleep (4). The disorder encompasses other behaviors such as head rocking, body rocking, folding, and shuttling (4). Experts speculate that the behavior stems from a need for rhythmic stimulation: to help fall asleep, during a tantrum, or when the child is under- or over-stimulated. Because children are constantly rocked in utero, once outside the womb they still look for similar rhythmic movements (1). Children's fondness for jumping rope, swinging, bumper cars, and dancing can be attributed to this theory (1). Other explanations for head banging are the rhythmic sensation it produces, the visual movement it can provide, the release of inner tensions, or boredom and frustration when the child cannot sleep (2). Ear infections or teething can be additional causes of the excessive movement (1). Most experts encourage parents to ignore the behavior, as it will subside in a few years (2).

In a few rare cases, ignoring rhythmic movement disorder can be a mistake. Excessive head banging and body rocking can be an early sign of autism, a neurological disorder (5). Children who are thought to have autism, however, exhibit other symptoms that typical head bangers do not. Behaviors such as rocking, nail biting, self-biting, hitting one's own body, hand shaking, or waving, in addition to the head banging, can be signs of autism (5). Autism inhibits a person's "ability to communicate, form relationships with others, and respond appropriately to the environment" (6). Because symptoms of autism are relatively easy to recognize and usually include more than one behavioral sign, most doctors and parents are able to determine quickly whether a child has a serious condition.

Medical experts also rarely link rhythmic movement disorder with psychological disorders (4). Since there is little threat of the behavior signifying something serious, ignoring it really is the best thing to do. Because ignoring head banging can sometimes be difficult, a plethora of suggestions exist on the web. One site indicates that music therapy, hypnotism, motion-sickness medications, tranquilizers, or stimulants could help both the child and the parent (4). More conservative approaches include placing a metronome by the child's bed (3); the hope is that the child will recognize a strong beat and will not feel the need to duplicate it. Parents can also move the bed away from the wall, add cushioning to the crib, or carpet the floor to decrease the noise. One drug, naltrexone, has had some success in treating children with rhythmic movement disorder, although only preliminary research has been completed at this time (5).

Little is known about the causes of rhythmic movement disorder. A few studies have indicated that head banging stimulates the vestibular system in the inner ear, which controls balance (3). Another, unrelated study found that children who exhibit this kind of behavior were more advanced than their peers (1). The few published reports and studies of the disorder often illustrate how little is known about this behavior. As with most areas of health, it requires further study. The available information, however, shows that the disorder does not indicate a serious problem. In the case of autism, other symptoms persist and doctors are able to diagnose the condition with relative ease. So your head-banging child may grow up to worship Nine Inch Nails, but the likelihood of her continuing a healthy maturation process is even greater.

References

1) Dr. Greene.com : Caring for the Next Future , Featured Article, "Head Banging."

2) PlanetPsych.com – A World of Information , "Head Banging by Children" by James Windell.

3) American Academy of Pediatrics website , "Guide to Your Child's Symptoms: Rocking/Head Banging."

4) Kid's Help for Parents website , Sleep Problems

5) MEDLINE Plus Health Information , Stereotypic Movement Disorder.

6) National Institute of Mental Health website , "What is Autism?".


Magic Seeds
Name: Erin Myers
Date: 2002-11-11 00:02:36
Link to this Comment: 3644

INTRODUCTION

Just before its end, the Clinton Administration implemented rules for federally funded human stem cell research, allowing embryonic cell research on otherwise discarded sources.  Upon inauguration, the Bush Administration immediately put a hold on federally funded human stem cell research until a compromise was reached in August 2001.  The controversy over human stem cell research springs from the origin of embryonic stem cells.

Within one day after fertilization an embryo, which until this point is simply a fertilized egg, begins to cleave, or divide: from one cell to two, from two cells to four, and so on.  When the embryo reaches 34-64 cells it is considered a blastocyst.  It is four- or five-day-old embryos, blastocysts of about 150 cells, that are implanted into a uterus during in vitro fertilization.4  Many embryos are made and kept frozen as backups in case implantation is unsuccessful.  When a family decides they no longer need the embryos, they can opt to dispose of them, put them up for adoption, or donate them for research.
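The doubling described above can be sketched numerically. This is only an idealized illustration: real embryonic cells divide asynchronously, which is why the blastocyst stage spans a range of cell counts (34-64, then roughly 150) rather than an exact power of two.

```python
import math

# Idealized cleavage: each round of division doubles the cell count (1 -> 2 -> 4 -> 8 ...).
def cells_after(divisions):
    return 2 ** divisions

# Rounds of division needed to first reach the ~150-cell blastocyst implanted at day 4-5
divisions_needed = math.ceil(math.log2(150))
print(divisions_needed)               # 8
print(cells_after(divisions_needed))  # 256
```

Eight synchronized doublings would overshoot 150 cells, which is consistent with real cleavage being staggered rather than perfectly synchronized.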

Embryonic stem cells come from blastocysts made in a laboratory for in vitro fertilization that are donated for research with the informed consent of the donor.  Blastocysts have three components: the trophectoderm, an outer layer of cells that form a sphere; the blastocoel, a fluid filled cavity; and the inner cell mass, a cluster of cells on the interior that may ultimately grow into a fetus.1  It is from the inner cell mass that stem cells are harvested.  The inner cell mass is extracted from the blastocyst and cultured on a Petri dish.  Here the controversy arises.  The embryo is no longer viable without the inner cell mass.  For those who consider a blastocyst to be a living human being, this extraction is tantamount to the death of a human.  This issue has given rise to a new platform for the anti-abortion vehicle.

The controversy and its resulting restrictions are hindering the exploration of what may be the future of medicine.  There is evidence to suspect that stem cells may be used to treat -- even cure -- AIDS, Parkinson's disease, Multiple Sclerosis, heart disease, cancers, diabetes, Alzheimer's, genetic diseases, and a host of other diseases.

 

MAGIC SEEDS

All stem cells have three promising characteristics: they are capable of proliferation, they are unspecialized, and they can be differentiated.  Proliferation is the ability of certain cells to replicate themselves repeatedly, indefinitely in some cases.  Within six months, thirty stem cells can divide into millions of cells.1  Stem cells are unspecialized; they are not committed to becoming a certain type of cell.  Under certain protocols, tissue recipes scientists have identified,1 they can be differentiated to become any of the 220 kinds of cells in the human body.  Herein lie the great possibilities: scientists may be able to produce cells that can replace damaged or sick cells in a patient with an injury or degenerative disease.6
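The proliferation figure above can be sanity-checked with a little arithmetic. This is a hedged sketch assuming simple doubling, a six-month window of roughly 182 days, and "millions" read as at least one million cells; none of these parameters are given in the source.

```python
import math

initial_cells = 30
target = 1_000_000  # "millions of cells" taken as at least one million (assumption)

# Smallest number of doublings n with 30 * 2**n >= 1,000,000
doublings = math.ceil(math.log2(target / initial_cells))
print(doublings)                       # 16
print(initial_cells * 2 ** doublings)  # 1966080

# Spread over ~182 days, that is one doubling roughly every 11 days
print(182 / doublings)                 # 11.375
```

So the claim requires only about sixteen doublings in six months, a modest rate for cultured cells, which makes the "thirty cells to millions" figure plausible on its face.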

A large portion of the political debate is devoted to alternatives to embryonic stem cells.  Somatic stem cells, also referred to as "adult stem cells," are morally acceptable to the opponents of embryonic stem cell research.  Somatic stem cells come from select sources in fetal or adult human bodies.  They exist in very small quantities in the umbilical cord of a fetus, bone marrow, the brain, peripheral blood, blood vessels, skeletal muscle, skin, and the liver.  The number of somatic stem cells decreases with maturity.  They exist to help repair their source tissue, should injury occur.  The majority of somatic stem cells are less transdifferentiable than embryonic stem cells; for the most part, they can only be coaxed into cells associated with their source.  For instance, hematopoietic stem cells, harvested from blood vessels and peripheral blood, give rise to all the types of blood cells, and bone marrow stromal cells give rise to bone cells, cartilage cells, fat cells, and other kinds of connective tissue cells.1  Another disadvantage of somatic stem cell research is the difficulty of producing large quantities of somatic stem cells.  Scientists also fear that somatic stem cells may lose their potency over time.12

A new study identifies a somatic stem cell that can "differentiate into pretty much everything that an embryonic stem cell can differentiate into."  Catherine Verfaillie of the University of Minnesota found these cells in the bone marrow of adults and dubbed them multipotent adult progenitor cells (MAPCs).  The study claims that MAPCs have the same potential as embryonic stem cells.  These cells seem to grow indefinitely in culture, as do embryonic stem cells.  Encouragingly, and unlike embryonic stem cells, MAPCs do not seem to form cancerous masses when injected into adults.  Skeptics think that the scientists' selection process creates MAPCs rather than isolating cells that exist on their own; in their view, the scientists have simply found a way to produce cells that can behave this way.8

Stem cell therapy testing in rodents is yielding exciting results.  Mouse adult stem cells injected into the muscle of a damaged mouse heart have helped regenerate the heart muscle.  In another experiment, human adult bone marrow stem cells injected into the bloodstream of a rat similarly induced new blood vessel formation in the damaged heart muscle and proliferation of existing cells.  Petri dish experiments also have promising applications.  Parkinson's disease, a neurodegenerative disorder that affects more than 2% of the population over 65 years of age, is caused by the progressive degeneration and loss of dopamine-producing neurons.  Scientists in several laboratories have succeeded in inducing embryonic stem cells to differentiate into cells with many of the functions of the dopamine neurons needed to relieve the symptoms of Parkinson's disease.1

There are many more implications for stem cell research.  In addition to cell therapy, human stem cells may also be used to test drugs.  Animal cancer cell lines are already used to screen potential anti-tumor drugs.1  The possibilities are endless.

 

CONCLUSION

There are over 200,000 embryos left over from in vitro fertilization attempts3 but only about six existing embryonic stem cell lines2 available for federally funded research under Bush Administration regulations.  United States scientists pioneered this field of research; federal funding could speed the development of therapies and keep the United States at the forefront of science.2  Alta Charo, a law and medical ethics professor at the University of Wisconsin and member of the National Bioethics Advisory Committee, is against any limitation on the number of cells available.11  Great numbers of blastocysts are needed to harvest a diverse selection of stem cells.  Diversity is needed to expand the range and reliability of research and for immunological "matching reasons."9  While opposition comes mainly from the conservative, anti-abortion side of the political spectrum, some people who are unremittingly anti-abortion, including Senators Orrin Hatch and Trent Lott, support stem cell research.9

The anti-abortion opposition believes that life begins at fertilization and that life should not be compromised even to save the lives of many.  The conservative Family Research Council goes so far as to say that every frozen embryo deserves "an opportunity to be born."12  About 15% of pregnancies end in miscarriage, most of them at the embryo stage, before the woman even knows she is pregnant.7  If every frozen embryo were given the opportunity to be born, and the 85% that statistically survive to become fetuses were born, there would be 170,000 more babies in the world, more than 15 times the number of babies born in the United States each day.
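The arithmetic in the paragraph above can be checked directly. A quick sketch; the births-per-day figure is my own assumption (roughly 4 million U.S. births per year, a number not given in the source):

```python
frozen_embryos = 200_000
survival_rate = 0.85  # 85% statistically survive past the embryo stage

babies = int(frozen_embryos * survival_rate)
print(babies)  # 170000

# Assumption: ~4,000,000 US births per year -> roughly 11,000 per day
us_births_per_day = 4_000_000 // 365
print(babies / us_births_per_day)  # a bit over 15
```

Under that assumed birth rate, the 170,000 figure does work out to a little more than fifteen days' worth of U.S. births, consistent with the essay's comparison.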

The potential of stem cell therapy is too great to deny federal funding to new embryonic stem cell research.  The current strict regulation, restricting federally funded research to six embryonic stem cell sources, slows progress and inhibits vital research.  There should be supervision to ensure the research is not abused, but the number of embryos used should not be limited.  Scientists would not need to harvest stem cells indefinitely; at some point, they would have a wide enough variety and would be able to stop.9  Extracting "magic seeds" from a cluster of cells could end disease, but we will never know if we are not allowed to try.

 

WWW Sources

1) Stem Cells: A Primer from National Institutes of Health
2) Research avenue adds fuel to stem cell controversy
3) Adoption of Frozen Embryos
4) Click on "Flash: Embryonic stem cell explainer »"
5) Cancer, AIDS hope from stem cell study
6) The Great Debate Over Stem Cell Research
7) If You Believe Embryos Are Humans...then curbing stem cell research is an odd place to start protecting them
8) Ultimate stem cell discovered
9) Elizabeth Cohen: Ethics of stem cell research
10) Bush's Stem Cell Decision Displeases Scientists
11) Awaiting Bush's Stem Cell Choice
12) Click on "Common Questions" Andrew Goldstein explains the key issues

 

For more information visit

International Journal of Cell Differentiation and Proliferation
Great illustration of stem cell harvesting
Stem Cell Research News.Com
Click on "A Moral-Compass Guide" The science and ethics of human cloning


America's Secret Disease
Name: Brenda Zer
Date: 2002-11-11 00:14:10
Link to this Comment: 3647



Biology 103
2002 Second Paper
On Serendip

America has a serious health problem, one that usually escapes the notice of most people. Over 15 million Americans are affected by asthma (10 million adults and 5 million children). (3) This chronic respiratory disease can even be life threatening if not kept in careful check. So, what exactly is asthma, and why do so few people seem to know about it?
While asthma is chronic (meaning that it is always with you), its symptoms are not always detectable. Sometimes the symptoms lie dormant, waiting for the right irritant to trigger them. Symptoms of asthma include shortness of breath, wheezing, coughing, and a "tight" feeling in the chest. (5) The cough can be either dry or wet (phlegm is brought up during rough coughing). The three main components of asthma are inflammation, muscular contraction, and increased mucus production. While these symptoms are usually under control, if they are aggravated the person may be in danger of suffocating. (1)
During a severe bout of asthma (commonly known as an asthma attack, or episode), the lining of a person's bronchioles becomes inflamed. The inflammation causes a buildup of fluid and cell clots, which swells the tissue and contributes to the blockage. The muscles around the bronchioles involuntarily constrict, causing a further decrease in bronchiole diameter. In addition to all this, an increase in the production of mucus floods the lungs. (1) If the person cannot clear their airways, they can die either of asphyxiation or of carbon dioxide poisoning (sometimes fresh O2 is allowed into the system but CO2 cannot escape, causing a massive buildup of carbon dioxide). Persons with asthma have a decreased lung capacity, which makes everyday breathing hard (my father, for instance, has only 50% of his lung capacity; before my maternal grandfather died, he had only 27% capacity [he also had emphysema]).
In 1999, more than 4,500 people died of asthma or asthma-related conditions. Between 400,000 and 500,000 people are hospitalized each year because of asthma. (3) This makes asthma the third-ranked cause of hospitalization for children under the age of fifteen. (3) Approximately one in every thirteen school-age children has asthma. (4) Children are most susceptible to asthma because they breathe more air, eat more food, and drink more liquid in proportion to their body size than adults do. As their bodies are still developing, this leaves them more vulnerable to environmental exposures and diseases than adults. (1)
Although children are more likely to develop asthma, there are many different types that adults can have. The form of asthma least well recognized for what it is, is exercise-induced asthma. (1) While most people think it is normal to breathe heavily after exercising, it is not normal to wheeze or cough either during or immediately after exercise. So, while some people may not exhibit signs of asthma every day, it may still be present. Jogging in cool or cold weather sometimes causes bronchospasms. In general, cool or cold air is bad for persons with asthma, while warm, moist air is beneficial and can help relieve symptoms. People with asthma who still wish to exercise are often recommended swimming, as the slow, rhythmic breathing is good for the respiratory system. (1)
Other than air temperature, there are many things that can trigger asthma. To start with, many asthmatics are also allergic to many substances, and many asthma attacks are caused by allergic reactions. That is why the two most common triggers for asthma are allergies and irritants. Outdoor environmental irritants include cold air, cigarette smoke, commercial chemicals, perfume, and paint or gasoline fumes. Indoor environmental irritants can include secondhand smoke, cockroaches, dust mites, molds, and pets with fur or feathers. As Americans spend 90% of their time indoors, it is important to keep residences and workplaces as clean as possible. (4)
Outdoors, studies have shown that air pollution is causing the number of people worldwide with asthma to rise significantly. The groups affected most are inner-city residents and persons living in highly industrial areas. Air pollution is a prime factor in asthma-related illnesses and deaths.
While taking medications (like albuterol sulfate solution, cromolyn sodium solution, albuterol inhalers, or various other bronchodilators or pills) can help reduce asthmatic symptoms, asthma has no known cure (although gene therapy is being considered as a possible treatment). (5) People living in low-income areas often cannot afford the prescription drugs needed to fight asthma, so their rate of respiratory problems is the highest of any demographic group.
More than half of all asthma patients spend 18% or more of their total family income on asthma-related expenses, and over $4 billion per year is spent on the hospitalization of asthma patients. (1) Many of my asthmatic friends have been hospitalized more than once, and my older brother was hospitalized for his asthma when he was a child.
Although it is not possible to cure asthma at this time, with habitual use of medication it is possible to send the asthma into remission. This is what happens with many children: they appear to "grow out" of their asthma, but in reality it is only lying dormant inside them.
Many people are affected by asthma and do not even know it. Although it is not a prolific killer, untreated asthma can kill. Most people who live in cities should pay attention to the air quality reports for their neighborhood; persons living in a high-pollution area are more susceptible than others. (4) Certified doctors or allergists can run a simple test to determine your lung capacity.
As asthma affects a large portion of our population, the reaction asthmatic people receive is surprising. They are often ridiculed for not being able to "keep up" with others (this happens especially to children during recess or playtime). While asthma is a common respiratory disease, some people still find it hard not to look down on those who use inhalers or nebulizers as "weak." This creates an entire series of social groupings. Who would have thought that such a common yet unknown disease could cause so much physical and mental harm?

References

1)Sniffles & Sneezes: Allergy & Asthma care and prevention, a site dedicated to treating allergies and asthma.

2)Allergy Asthma Technology, Ltd. , a pharmaceutical website.

3)Center for Disease Control, their website about asthma – great links.

4)Environmental Protection Agency, the EPA's website about the causes of asthma.

5)American Lung Association, ALA's comprehensive webpage of asthma and a series of links.


Colloidal Silver: Miracle Elixir or Plague of the
Name: Christine
Date: 2002-11-11 00:16:06
Link to this Comment: 3648



Biology 103
2002 Second Paper
On Serendip

If someone told you that they were in possession of something that could cure any illness almost instantaneously, would you believe it? If you were told that there was a small risk of permanently changing the color of your skin, would you be willing to run the risk and take it anyway? The name of this elixir is colloidal silver, and it is concocted by suspending microscopic silver particles in liquid. Colloidal silver has been claimed to be effective against hundreds of conditions and diseases, including cancer, AIDS, parasites, acne, enlarged prostate, pneumonia, and a myriad of others. However, long-term use of this silver can lead to a condition called argyria, in which a buildup of silver salt deposits on the eyes, skin, and internal organs permanently turns the skin a metallic ashen-gray, giving the individual a permanent deathly pallor. Does colloidal silver really carry the medicinal cure-all properties it is claimed to have, or is it just risk without benefit?

Silver has been used for hundreds of years as both a medicine and a preservative by many cultures around the world. The Greeks used silver vessels to help keep water and other liquids fresh. Pioneers put silver coins in their wooden water casks to keep the water free from the growth of bacteria, algae, and other organisms, and placed silver dollars in milk to keep it fresh. In 1901, a Prussian chemist named Hille, together with Albert Barnes, discovered a method of preparing a true colloid by combining a vegetable product with a silver compound, and patented it as Argyrol, the only non-toxic antibiotic available at the time. Another scientist, Crede, advocated the use of colloidal silver to fight bacterial infections because it is non-toxic and carries germicidal properties, and through his work introduced colloidal silver into medicine (1). The colloidal state proved to be the most effective means to fight infections because it demonstrated a high level of activity at very low concentrations, and also because it lacked the caustic properties of silver salts. By the mid-1930s there were more than four dozen silver compounds on the market, although there was wide variation in their effectiveness and safety. The first reason for the vast differences is that the compounds were available in three forms: oral, topical, or injectable. Second, some were true colloids and some were not, with some containing 30% silver by weight and others hardly any. Third, the freshness of the colloid, the time elapsed since manufacture, had a lot to do with the effectiveness of the compound (2).

In the 1940s, the use of colloidal silver in the medical field began to taper off, mainly due to the advent of modern antibiotics, but also for three other reasons. The first was the high cost: even in the depression era of the late 1930s, colloidal silver was reported to have sold for as much as $200 per ounce (in present-day dollars). The second reason was that many of the silver products available at the time contained toxic forms of silver salts or very large particles of silver, a limitation of the technology of the time. The third reason is that in 1938 the federal Food and Drug Administration established that, from that point forward, only those "drugs" which met FDA standards could be marketed for medicinal purposes (1). In 1999, the FDA banned the use of colloidal silver or silver salts in over-the-counter products. Silver products can be and are sold as "dietary supplements" in health stores only if they make no health claims, but many advertisers ignore this restriction and still promote the benefits of colloidal silver.

Prolonged contact with or ingestion of too much colloidal silver can result in argyria, which produces a "gray to gray-black staining of skin and mucous membranes produced by silver deposition" (3). The normal human body contains about 1 milligram of silver, and the smallest amount of ingested silver reported to cause argyria ranges from 4-5 grams to 20-40 grams. The silver is deposited on the face and diffused all over the skin, and when the individual is in the sun the silver darkens as a result of oxidation by strong sunlight, producing the silver/blue/gray complexion (4). There are a few physical signs that suggest the onset of this condition. The first is a gray-brown staining of the gums, later progressing to involve the skin; the color is usually slate-gray, slightly metallic, or blue-gray and may appear after a few months of silver treatments. The second sign is that the hyperpigmentation is most apparent in sun-exposed areas of skin, especially the face and hands. There are different theories to explain the blue-gray pigmentation at sun-exposed sites, but no definite explanation. The third sign is hyperpigmentation of the nail beds. The fourth sign is a blue discoloration of the viscera, which is apparent during abdominal surgery (3). While the majority of individuals using colloidal silver will never develop argyria, some are at higher risk than others. The Environmental Protection Agency suggests that people with low vitamin E and selenium levels are more susceptible to argyria, as are individuals with slower metabolisms, whose natural eliminative systems work more slowly and can be more easily overwhelmed (4). Cases of argyria were most prevalent when silver medications were commonly used, in the 1930s and 1940s, and have since become rare.
The famous "Blue Man," who was exploited in the Barnum and Bailey Circus sideshow, had a classic case of argyria. The most recent case is Stan Jones, Montana's Libertarian candidate for Senate, who started taking colloidal silver in 1999 for fear that there would be a shortage of antibiotics due to Y2K disruptions. People ask him two questions: whether his blue-gray skin is permanent, and whether he is dead. His usual response is that he is practicing for Halloween (6).

Advocates of colloidal silver believe there is an urgent need for natural alternatives to antibiotics, given the increasing difficulty of treating infections. Colloidal silver is argued to be the best alternative: safe for pets, children, plants, and all multi-celled organisms. From his own bacteriological experiments, Dr. Henry Crooks supported the use of colloidal silver, claiming that all known disease-causing organisms die within six minutes of the ingestion of silver. Medical promoters of colloidal silver allege that its presence near a virus, fungus, bacterium, or any single-celled pathogen disables the pathogen's oxygen-metabolism enzyme, its chemical lung, so to speak. Within a few minutes the pathogen suffocates and dies and is cleared out of the body by the immune, lymphatic, and elimination systems (5). People in the medical field who oppose the use of colloidal silver argue that just because a product effectively kills bacteria in a laboratory culture does not mean it is as effective in the human body. Products that kill bacteria are actually more likely to cause argyria because they contain more free silver ions that can deposit in the user's skin.

There are compelling arguments both for and against the use of colloidal silver as an alternative to antibiotics. However, very little research has been done to test the effectiveness of silver in the human body to fight infections, and the risk of argyria increases as the number of people using silver grows while the amount known about it remains constant. If I had an infection and had the option of taking an antibiotic or drinking colloidal silver, I would choose the antibiotic. For starters, there is more information about the drug: many scientists have performed experiments with it and know a lot about its effects in the human body. Not much is known about colloidal silver, and the risk of taking too much and permanently changing the color of my skin outweighs any benefits the silver may hold. As more research and testing is done on colloidal silver, it may be discovered to be a wonderful alternative to antibiotics, but until it is proven safe and effective with low risks, I believe people should stay on the safe side and take something they know will be effective and will not make them look permanently dead. Furthermore, I find it a little disconcerting that colloidal silver is claimed to kill all disease-causing organisms within six minutes; I would worry that it was doing other damage along the way, and about possible long-term effects. Too many people are jumping on the bandwagon concerning colloidal silver; an extra measure of caution is necessary given the evident health risks involved. In conclusion, I see a need for further research on colloidal silver, its usage protocol, and its clinical issues.


References

1) IPS site on colloidal silver,

2) A Brief History of Silver and Silver Colloids in Medicine ,

3) Argyria ,

4) Argyria ,

5) Colloidal Silver: Risk Without Benefit ,

6) Blue is the Color of My Candidate's Skin ,


Sex and Advertising: An "Organic" Experience
Name: Heather D
Date: 2002-11-11 00:24:06
Link to this Comment: 3650



Biology 103
2002 Second Paper
On Serendip

Whenever you turn on the television, it is there. When you are in the doctor's office staring into a magazine, it is staring right back at you. In fact, in today's society, it is assaulting you in sight and sound, no matter where you are or what you are doing. Yes, I am talking about advertising. It is what drives our consumer culture onward. Ads are everywhere, pitching an extensive array of useless products in an equally extensive variety of ways. Advertisers play on several different tactics to get people interested in their products; they use humor, self-esteem, peer pressure and many other things, but the one tactic that is most popular and most effective is using sex in advertising. Why is this ploy so effective? Simply because it plays upon the biological needs of every single human being.

No matter what the product, from shampoo to beer, the tactics are all the same. Get a beautiful person in there and maybe, just maybe, the audience will be tricked into thinking that they can be/have that beautiful person. Seems simple, right? Wrong. The use of these models and how they are posed is actually a very precise art, starting with human biology and the nature of sex. For the sake of brevity in this paper, let us just look at print advertising. On the cover of almost every magazine in the grocery store there is a gorgeous model or actor staring at you with that ever so seductive "come hither" stare. What most people do not realize is that, for the most part, the appeal of that look is generated on a computer.

Graphic artists have become the Picassos of this technological age; splicing and stretching, they can turn any ordinary woman into a goddess. How do they do this? They simply play upon our biological instincts for procreation. By showing women in a false state of arousal, advertisers are able to associate their products with pleasure and instinctual survival. When a woman is in the early stages of arousal, blood flows to key erogenous areas of the body, namely her breasts and lips. To achieve this effect in print, artists add extra curves and shadows to a woman's breasts and make her lips darker and fuller. Then they enlarge her pupils (another sign of sexual arousal), and lighten the whites of her eyes (because why would her blood be there if it had more important places to be?). Also, in about 65% of all print ads, women are shown with open mouths, a gesture men read as very sensual, sexual and submissive. (1). Advertisers also do these things to make the women look "healthy."

Why is it that this works so well in advertising? According to Richard F. Taflinger, PhD, "Sex is the second strongest of the psychological appeals, right behind self-preservation. Its strength is biological and instinctive, the genetic imperative of reproduction," (2). He also points out, though, that gender plays a huge role in the effectiveness of the advertising.

The biological prerogative of the male is to impregnate as many women as possible in order to carry on the species. Richard F. Taflinger accounts for this by saying, "Genetically, it is the most practical course of action. The more females with which a male mates, the greater number of offspring containing his genes are possible. In addition, the cost of sex in terms of time and energy is considerably lower for the male than the female," (3). Showing a woman in a state of arousal gives a man the "good to go" signal, so advertising is ultimately easier and more effective on men. They are receptive to the immediacy of image, and to the immediacy of the advertising campaign itself.

Women, on the other hand, have a different biological prerogative, which makes it more difficult to sell to them. Women instinctively think in the long run and look for that in a sexual partner. Other factors besides health and accessibility come into play. They naturally look for someone who can provide for their offspring, so factors of wealth, power and intelligence quickly come into play and spoil the immediacy of the ad. So women are far more prone to be attracted to images of romantic attachment than sexual imagery. (4) Also, showing a man in the early stages of arousal (as advertisers do with women) is actually counter-productive, because women see that as an aggressive, threatening gesture. In today's society women do not want to be threatened, and are more prone to wait and then make their choice of mate.

All these sexual signals displayed by advertisers play upon our most basic, primitive instincts. Though we may laugh at the idea of associating toothpaste with sex, it often is associated, and it sells. I guess the question that follows is not really "why is this so effective on us?" but rather, "what does this do to us?" Does this type of advertising have any sort of psychological or physiological effect on us? Advertisers are playing with instincts which have been formed over a span of millions of years, so it does not seem likely that they can change our most primal ways of thinking about the opposite sex and about sex in general. However, there may lie a danger in the fact that people are becoming more used to advertising and more adept at deciphering the codes it sends them: will that change the ways in which they react to these instincts? When we begin to associate the act of sex with gum, there is something intrinsically wrong with that. Unfortunately, with our capitalistic, commercialistic society being what it is, we will continually have to come to terms with the fact that washing your hair is an orgasmic – I mean organic experience.
">(YOUR REFERENCE NUMBER).

References

1)"Sexual Images of Women to Sell Products – 'Facism' and 'Bodyism'", an article with some statistics about the use of women in advertising
2) "You and Me, Babe: Sex and Advertising", An article by Richard F. Taflinger, PhD on the use of sex in advertising
3) "Biological Basis of Sex Appeal", an article by Richard F. Taflinger, PhD
4)"The Evolutionary Theory of Sexual Attraction", an article by Jan Norman on "The Human Sexuality Web"


The Science Behind Raw Food
Name: Virginia C
Date: 2002-11-11 00:34:38
Link to this Comment: 3651



Biology 103
2002 Second Paper
On Serendip

In the past few years, a new dietary trend has become popular. Raw foodism has hit the US, with a strong base and an ever-growing popularity. Raw foodists claim that a raw-food diet (which some define as only 70% uncooked foods, while others insist on exclusively [100%] raw) can boost overall health, increase energy, improve disposition and physical appearance, and even cure many (sometimes terminal) diseases and ailments. However, the scientific community outside of the raw food community doesn't seem to see this diet in the same light as its followers. What is the science behind the raw food diet, and how much of what its advocates believe is true?


Raw foodists base their practices on the theory that cooking food kills it, destroying its nutritional value (one source claims that cooking destroys between 30 and 85% of food's nutritional value (9)) and making it unhealthy and harder to metabolize. Some raw foodists claim that all raw foods have large counts of enzymes, which are fundamental to human health, digestion and metabolism, and which are destroyed when food is heated above 116 degrees Fahrenheit (8). One article even claims that cancer, heart disease and diabetes are all directly linked to the consumption of cooked foods (6). Another more specifically targets a chemical called acrylamide, which is found in plastics and is known to be carcinogenic, and was recently discovered to be present in high levels in many baked and fried foods (7), while raw (and boiled) foods showed no traces of the chemical. Yet another article goes further and points out that, aside from the dangers of acrylamide in many cooked starchy foods, meat cooked at high temperatures has been shown to be contaminated by heterocyclic amines, or HCAs, which are also known to be carcinogenic. All in all, the raw food community online has provided many links to scientific articles backing up their theories and practices.


Given all of these interesting scientific pro-raw foodism stances, I am still somewhat skeptical in my research. This is in part because, when I was not navigating specifically from pro-raw foodism sites, I was unable to find many articles in favor of raw foodism, and none in specifically scientific publications. This makes me question the credibility of these sources, simply because the scientific community at large seemed more skeptical and disapproving of the raw food movement than anything else. However, the reasoning behind anti-raw foodism was not always any more convincing than the pro case.


The majority of the scientific articles stating that raw foods are dangerous are referring to animal-borne diseases, such as E. coli and salmonella. Since most raw foodists are also vegetarians (or even vegans), this tends not to apply. However, vegetarian foods such as unpasteurized milk and juice can harbor harmful bacteria (3), (4). Furthermore, studies have shown that even raw salad greens such as lettuce and spinach can harbor harmful bacteria due to irrigation and fertilization (2). Therefore, there is clearly a safety issue surrounding these raw foods, in that they must be free of harmful bacteria that typical sterilizing processes such as cooking would normally kill. One site in favor of raw foodism includes the caveat that "The only concern here is if you are eating traditionally raised meat which is frequently contaminated with bacteria. You will want to make sure you cook that food." Therefore, we can see that despite the pro-raw stance, there are exceptions made in order to facilitate overall dietary healthiness.


Overall, I simply did not find any current articles praising the raw-foodism diet, outside of that community itself. This selective pro-raw foodism made me believe that, despite the diet's possible benefits, there couldn't be such a strong difference if no one in the scientific community at large has noticed the effects. It may be that there are serious scientific articles out there by non-members of the raw food community, and that I was simply unsuccessful in finding them. However, I searched through every seemingly relevant biology database of journal articles, magazines and studies that I could get my hands on, and the results were consistently 0 articles found for the search "raw food." The only remotely "hard science" type article I found was, while good, only linked to by one particular raw food website (5).


This makes me think that, since the larger scientific community has not yet gotten wind of this trend, it can't possibly be as big of a deal as its advocates claim. Until the raw food movement undergoes serious critical and objective analysis, I am reluctant to believe that the many claims that the body metabolizes raw foods faster/better, or that raw foods can cure diseases, are more than mere speculations and ideals of the pro-raw foodism movement. One site even claims that "a raw food diet creates major improvements [sic] in health. The reasons are not known, but the experience is unmistakable" (10). This very claim, that 'the reasons are not known,' is what I suspect to be the case behind most of the raw foodism claims. However, this is not to say that said claims are definitely false, only that they should undergo more rigorous scientific investigation.


References

1)NY Times Online

2)Bugs Dress Salad, an article from the online journal nature.com

3)Eating Well: Food Safety, an article from the AARP's online index of articles

4)Labeling Raw and Undercooked Foods, an article on public health from King County, WA

5)Raw Foods vs. Cooked Foods – Looking at the Science, a good scientific article that I found on beyondveg.com

6)Raw Food Q & A, from the rawfoodlife.com website

7)Could these foods be giving us cancer?, from The Guardian

8)The Living and Raw Foods FAQ, from Living and Raw Foods website

9)Healing Powers of Raw Food and Juice part 1, from Shirley's Wellness Café website

10)A Raw Food Diet, from Nov55 website, a "science and science criticism" site


VeriChip
Name: Diana DiMu
Date: 2002-11-11 00:40:10
Link to this Comment: 3653

Helpful Tracking Device or too "Big Brother"?

You've heard about it: the possibility of implanting a microchip into a human body as a tracking device. But is this really just limited to science fiction? Sound too much like George Orwell? Not anymore. Using a Global Positioning System through products like VeriChip may help save missing children or the elderly, but is it a violation of privacy? Do the positives of such products justify the negatives of their use? By examining the uses of products such as VeriChip, I hope to gain a better understanding of its intended use and the benefits it will provide, while taking into consideration the possible negative outcomes of its widespread use. Will such products provide safety and security at too great a cost? Are such products against one's constitutional rights, no matter how good the intentions of their creators?

 

What is it?

Applied Digital Solutions, a Florida-based company, has been in the testing and production stages of microchip products called VeriChip and Digital Angel. VeriChip is a miniaturized, implantable identification device, with the potential to be used for security, financial, health, identification or other reasons. It is an encapsulated microchip the size of a grain of rice that contains a unique verification number. The microchip is energized and activated when passed by a specific VeriChip scanner. Previously, the chip used radio frequency to energize and transmit a signal of the verification number. (1) More recent tests have developed a chip that will use satellites to transmit signals globally. The newer product, Digital Angel, proposes to integrate wireless Internet technology with global positioning to transmit information directly to the Internet. The microchip is inserted under the fleshy part of the skin, typically under the upper arm. The chip and inserter are pre-assembled and sterilized for safety, and the implant reportedly causes little discomfort to administer. Once implanted, the microchip is virtually undetectable and indestructible. It has a special polyethylene sheath that helps skin bond to it to keep it in place. The chip has no battery and thus no chemicals, and its expected life is up to twenty years. Contact with the body enables the device to read body temperature, pulse, and even blood sugar content. (2) Research is being done to produce a micro battery that will generate energy through heat or movement. Currently, a Global VeriChip Subscription costs $9.95 a month as a form of universal identification. The information can be kept up-to-date by using the Applied Digital Solutions website or calling a secure support center. (2) Some products currently manufactured by Applied Digital Solutions and other companies are not implants but are worn in the form of wristwatches or badges. (3)

 

Uses and Benefits?

Some people have already begun using VeriChip as a means of providing identification and personal medical information. Through the use of such a microchip, medical records could be saved and carried with at-risk patients for emergency response. Such products would help track down abducted children or lost adults with Alzheimer's disease. Microchips would also help find lost pets or keep track of endangered wildlife, as well as find lost or stolen property. (3) VeriChip could also be used for security: heightened airport security, and authorization for access to government buildings, laboratories, correctional facilities and the like. After September 11th, many feel a personal identification record would be beneficial in the event of another terrorist attack. (4) Using VeriChip could also help track convicted criminals or deter possible terrorists from future attacks. Not limited to health and security issues, the future of VeriChip and Digital Angel could lead to implantable mobile phones, and access to information found on personal computers and the Internet, such as email. (8)

 

Problems and Risk?

VeriChip and the future Digital Angel still need approval from federal health regulatory agencies to make sure there are no adverse effects on wearers; however, there is already much controversy about their use. The biggest concern about the use of VeriChip and other similar microchip tracking devices is an invasion of the user's privacy. People fear the risk of third parties who would gain information on the Internet through resale or hacking. Groups like telemarketing companies could use such information for advertising. (3) Many have posed the possibility that if you were able to track down your own child through the use of a microchip, what would prevent other people from doing the same? How many false alarms would the police have to deal with from over-protective parents who thought their children were missing? (7) Parents may deem VeriChip's use in the best interest of their children, but it may eventually lead to even more intense invasions of privacy, creating a society of parents who constantly survey their children. Despite the possibility of more easily tracking down abducted children, kidnappers and molesters alike could potentially disable or remove such microchips. (6) Whether VeriChip would increase security and prevent terrorist attacks such as September 11th is a difficult ethical question to pose. Would all prisoners on parole be forced to use VeriChip? How would you implant criminals and terrorists? If the government began implanting United States citizens with microchips that held their social security numbers, what would happen to tourists, students, or even foreign dignitaries? (7) Currently, the microchips used and in production are passive chips, dormant until activated by a scanner. Future chips like Digital Angel will be active chips, beaming out information all the time.
This leads to the problem of creating a continuous power source, as well as developing a chip that is small enough, yet still sensitive enough to receive signals from satellites thousands of miles out in space. (4) With the possibility of implantable mobile phones and personal computers comes the possibility of contracting viruses. (8) The risks of such possibilities are currently unknown; therefore possible solutions do not even exist. Most people fear an invasion of privacy as the greatest fault of implanting microchips. A recent CNN poll said that 76% of Americans would not want a device like VeriChip implanted in their children, while 24% suggested they would. (3)

 

While companies like Applied Digital Solutions have good intentions, I feel that at this stage in development there are still many ethical questions that will prevent the widespread use of products like VeriChip and Digital Angel. Although saving children and the elderly from kidnapping and sickness are admirable causes, the encroachment on privacy by such devices makes me feel that the negatives greatly outweigh the positive intentions. I feel the devices, which may start out favorably, have a lot of potential to be corrupted by outside parties, criminals, or yes, even over-protective parents. I think the widespread use of implantable microchips for security would be extremely beneficial but could also lead to higher and newer forms of prejudice against people who do or do not use them. I feel many Americans would be strongly opposed to the possibility of the government having full knowledge of their whereabouts at all times. The use of VeriChip would be extremely useful in hospitals, but how long would it take before hospitals would invest money in specific scanners for widespread use? Currently VeriChip costs $9.95 a month for standard identification purposes, but with increased technology, would its price go up or down? Would people who decide they do not want VeriChip, or better yet, cannot afford it, be discriminated against by people or companies that do use such technology? What about the possible viruses or effects that might be caused by microchip use? Would such products affect your body? Your thinking? While I feel there are certainly many Americans who would condone the use of VeriChip and similar products, I feel for the time being that I'd rather take my chances with safety for a little more freedom.

 

Additional Information:

1) http://www.adsx.com/prodservpart/verichip.html, VeriChip Corporation website, part of Applied Digital Solutions.

2) http://www.adsx.com/faq/verichipfaq.html, VeriChip Frequently Asked Questions.

3) http://www.space.com/businesstechnology/technology/human_tracker_000814.html, States News Service article by Alex Canizares on Space.com website.

4) http://abcnews.go.com/sections/scitech/DailyNews/chipimplant020225.html, article by Paul Eng on ABC News website. 

5) http://news.bbc.co.uk/1/hi/sci/tech/1869457.stm, article by Jane Wakefield on BBC News website.

6) http://www.futurecompany.co.za/2000/09/15/gillmor.htm, Can Parents Love too Much? by Dan Gillmor on Future Company website.

7) http://www.thehawkeye.com/columns/Saar/Saar_0728.html, opinion piece by Bob Saar on The Hawk Eye Newspaper website.

8) http://www.nytimes.com/2002/11/10/technology/10SLAS.html, Voices in your Head? Check that chip in your Arm by Matt Richtel New York Times Online.

9) http://home.wanadoo.nl/henryv/biochipnieuws_eng.html, Bio-Chip Technology in the News (Links to other articles).


Why Does Pizza Taste So Good?
Name: Amanda Mac
Date: 2002-11-11 00:43:53
Link to this Comment: 3655



Biology 103
2002 Second Paper
On Serendip

Throughout most of life, humans are taught to disobey their taste buds by eating foods such as Brussels sprouts, celery, and liver. How is it that our sense of taste works? Why is it that those things that are so unhealthy for our bodies taste so good? Shouldn't we expect that the foods most useful to our bodies are those that taste the best? And why is it that those unhealthy foods are enjoyed by practically everyone? Is this sense a biological trait of mammals, and if so, is it hereditary?
First, in order to understand the scientific reasons for tasty foods, it is necessary to understand the workings of our taste buds. The sense of taste is mediated by both gustatory receptors and olfactory receptors (1); when food or beverages enter our mouth they contact the tongue and palate, and volatiles rise into our nasal cavities, so that our sense of taste is made up of both smell and taste. Taste is based upon groups of cells (taste buds) which detect oral concentrations of many small molecules through receptors within the cells, and relay taste information to centers in the brainstem (See Image 1). (2) Taste buds are microscopic onion-shaped bunches of cells buried in the epidermal cell layer of the papillae. Little pores in the cells, called gustatory pores, allow the receptors to contact the tastes in our mouths. The average adult has about 10,000 taste buds. These taste buds are most predominant on little knobs of epithelium on the tongue called papillae. Papillae are little bumps on the top of the tongue that increase the surface area for the taste buds. The papillae also aid in the mechanical handling of food in the mouth. (3) There are four types of papillae (See Image 2). The most abundant of the papillae are the filiform, but these contain no taste buds. Fungiform papillae are those that are located on the front of the tongue and appear most noticeably to the human eye. Foliate papillae are the series of folds on the rear edges of the tongue. Lastly, circumvallate papillae are the large bumps on the back of the tongue. (4)
Humans discern four types of taste: saltiness, sourness, sweetness and bitterness. (5) Notably, though, scientists have suggested that there is another category, umami, the sensation induced by glutamate, an amino acid that composes proteins in meat, fish and legumes and is also included in MSG. Before this, it was thought that fat did not have a specific taste, but rather provided texture in food. (6) Richard Mattes, professor of foods and nutrition at Purdue, showed otherwise. He demonstrated that humans can taste fat, which explains why fatty foods taste so much better than fat-free foods. So, when someone says "this fat-free cookie tastes like cardboard," it is due to the lack of tasty fat in it. Here, though, is where the slogan "Eat everything in moderation" comes to mind. Unhealthiness comes not so much from eating fatty foods as from failing to eat them in moderation, for they are certainly somewhat healthy.
Furthermore, it is true that those foods that are most tasteful are useful to our bodies' systems. Glutamate is the major fast excitatory neurotransmitter in the brain. In fact, it is believed that 70% of the fast excitatory CNS synapses employ glutamate as a transmitter. Therefore, glutamate is an essential nutrient for our bodies, particularly our brains. (7)
Also, humans and most mammals share the composite structure of taste buds; practically all mammals have a sense of taste. However, people have been categorized into three different types of tasters: super-tasters, medium tasters and non-tasters. Scientists have found that the distinction lies in the number of taste buds on the tongue; the fewer the taste buds, the less the sensitivity to taste. (4) The difference is due to age, smoking, and heredity. Children born of non-taster parents will most likely also be non-tasters.
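The inheritance claim above (non-taster parents having non-taster children) can be sketched with the same Punnett-square reasoning used for the CFTR gene earlier in this forum. The sketch below assumes a simple one-gene model in which the taster allele is dominant and the non-taster allele is recessive, as is commonly reported for traits like PTC tasting; the allele symbols (T, t) and the helper functions are illustrative, not from the paper.

```python
# Hypothetical one-gene model: T = dominant taster allele,
# t = recessive non-taster allele (assumed, not from the paper).
from itertools import product

def punnett(parent1, parent2):
    """Enumerate the four possible child genotypes from two parents."""
    return ["".join(sorted(a + b)) for a, b in product(parent1, parent2)]

def phenotype(genotype):
    # A single dominant T allele is enough to make a taster.
    return "taster" if "T" in genotype else "non-taster"

# Two non-taster (tt) parents can only produce non-taster (tt) children,
# which is why children of non-taster parents are expected to be non-tasters.
print(punnett("tt", "tt"))          # ['tt', 'tt', 'tt', 'tt']

# Two carrier tasters (Tt) produce a non-taster a quarter of the time.
print(sorted(punnett("Tt", "Tt")))  # ['TT', 'Tt', 'Tt', 'tt']
```

Under this assumed model the 25/50/25 genotype split mirrors the carrier diagram for cystic fibrosis; in reality, as the paper notes, taste sensitivity also varies with age and smoking.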
So, indeed there are biological reasons for the tastiness of pizza and also for differences in particular tastes. I love spicy food, so I am most likely a non-taster, because the spiciness does not affect me the way it would affect a super-taster. And lastly, it is important to listen to your body's cravings, because there is a high chance that your body lacks a particular nutrient that you are craving. Eat what tastes good. Eat pizza.


References

1) Campbell, Neil and Jane Reece. Biology. 6th edition. Pearson Education Inc; San Francisco, 2002. p.1074.
2)Physiology of Taste, abundant resource on taste buds
3)Mythos Anatomy homepage, on Mythos Anatomy website
4)"A Taste Illusion: Taste Sensation Localized by Touch", article written by Lina M. Bartoshuk
5)Scientific American homepage, on the Scientific American website
6)Cosmiverse homepage, Purdue University's journal
7)Glutamate as a Neurotransmitter, Glutamate information


Lasers: the most effective option for tattoo removal
Name: Emily Sene
Date: 2002-11-11 01:00:34
Link to this Comment: 3656



Biology 103
2002 Second Paper
On Serendip

Everyone makes decisions that they regret later in life. Some people make bad financial decisions, others bad relationship decisions, and most make bad fashion decisions. If you're lucky, the mistakes you make have only temporary repercussions. If you're not lucky, one of your mistakes was getting a tattoo. For a long time, not much could be done for those who outgrew their body art: the only procedures available caused so much trauma to the skin that they left scars as large and offensive as the original design. That is, until laser therapy.

In order to understand how tattoo removal works, it is necessary to know something about the tattooing process. A needle attached to a hand-held "gun" is used to inject the desired pigment. It vibrates several hundred times per minute and reaches a depth of about a millimeter. The ink must penetrate past the top layer of skin, the epidermis, because its cells divide and die very rapidly. The dermis, or the second layer, is much more stable so the design will last with only minor fading and dispersion. The ink is insoluble and will not absorb into the body. Typically, a scab forms over the design and the wound has healed within 3 weeks. (1)

The most popular procedure for removal is laser surgery, but several other methods are still available. Salabrasion has been practiced for centuries and is somewhat antiquated. It involves numbing the area with a local anesthetic then applying a solution of tap water and table salt. An abrading surface, which can be as basic as a wooden block wrapped in gauze, is used to scrape the area until it turns a deep red color. Then a dressing is applied. Essentially, this is a primitive dermabrasion. (6) It works by shaving off the epidermis and then the areas of the dermis containing pigment. Dermabrasion is a more modern version of salabrasion. A rotary abrasive instrument is used to peel off the pigmented skin. It is called cryosurgery when the area is frozen with a special solution prior to abrasion. (2) Since these are rather traumatic procedures, bleeding is likely to occur. Scarring is significant and a virtual certainty. Until the development of laser surgery, dermabrasion was the most effective and convenient means of tattoo removal.

Tissue expansion is a less common procedure. A balloon is placed under the dermis of the patient and slowly inflated. This stretches the skin and forces cells to divide more rapidly. Then, the tattoo is cut out and the new skin is used to cover the excised flesh. If it is performed properly, tissue expansion only leaves a linear scar. (3)

Perhaps the most invasive option is staged excision. First, the area is numbed with local anesthetic. Then, a dermatologic surgeon uses a scalpel to cut into the skin and actually remove the pigmented sections. The area is closed with stitches and leaves scarring wherever an incision was made. This technique works best on small tattoos. (3) For a larger area, a skin graft is necessary. (2)

Since the scarring and pain associated with each of these procedures is often more offensive than the tattoo itself, the option of laser surgery is extremely desirable. As early as the 1960's, scientists began exploring the medical uses of lasers to correct birthmarks such as port-wine stain. Eventually, researchers determined lasers are effective in tattoo removal because heat generated from the beam breaks pigments in the cells of the dermis into small particles which can be absorbed by the body's immune system. The epidermis is "transparent," meaning that the laser travels through it and focuses on the exact level of the pigment. This chars the ink and it breaks down. The tattoo subsequently fades as immune cells attack the foreign particles. Patients liken the sensation of the laser to having hot grease splattered onto the skin, or being snapped with a rubber band. (2) One man reported that his skin smelled like pork chops following the procedure. (7)

The first lasers used for tattoo removal were the Argon and the CO2. They broke down the ink, but at the cost of the other layers of skin. Just as with abrasion and excision therapies, scarring was left in place of the design. (8) Only three lasers have been proven effective in breaking down ink without damaging the surrounding skin: the Q-switched Ruby, Q-switched Alexandrite, and most recently the Q-switched Nd:YAG. They are referred to as "Q-switched" because of the short, high-energy pulses of light used in the procedure. (2) The Q-switched Ruby is the most commonly used laser for tattoo removal. However, the Q-switched Nd:YAG has recently been found to be more effective on colored tattoos and darker pigmented skin. Its beam of light penetrates deeper, which increases the amount of damage done to the epidermis. As a result, the surface layers of skin sometimes retain a permanent "frosted" appearance. (6) Research is still being conducted on which lasers are most effective, although it is generally acknowledged that a combination of all three Q-switched beams is necessary in most cases. (10)

The color of the ink and the quality of the tattoo also play a role in laser removal. Black ink absorbs all laser wavelengths, which makes it the easiest to treat. Blue is also fairly easy, while green and yellow are the hardest. (5) If a tattoo is done by an amateur artist, the ink particles are larger and may be spread across several layers of the dermis. This increases the amount of exposure to the beam needed to produce the charring effect. A professional artist will typically have better control over the tattoo "gun" and distribute the ink more evenly and with more precision. (2) No matter what condition the tattoo is in, laser removal is a bloodless, low-risk alternative. It is usually performed in several sessions on an outpatient basis.

Redness and swelling are common immediately following the procedure and the site may scab. (2) Side-effects are generally mild. There is a possibility of hyperpigmentation, an abundance of color in the skin at the treatment site, hypopigmentation, a lack of color at the site, or lack of pigment removal. The chance of permanent scarring is only 5 percent. (5)

Typically, having a tattoo removed is more expensive than getting one put on. The cost will range from several hundred dollars to several thousand based on the size, location, pigment color, and number of visits required. Medical insurance will not usually cover the expense because it is considered a cosmetic procedure. (2) However, there are a number of programs available to those who want to get rid of gang tattoos for free. (9)

The advent of laser removal in recent years has made more invasive techniques virtually obsolete. It is more effective, less painful, and results in less long-term skin damage. There is still some controversy surrounding which beams and wavelengths are most effective in different situations, but laser removal is widely regarded as the most expedient procedure.

References

1)www.howstuffworks.com, How Tattoo Removal Works

2)www.howstuffworks.com, How Tattoos Work

3)no-tattoo.com, Tattoo Removal, The Things You Did as a Kid, Cleaning up the Mistakes

4)www.topdocs.com, FAQ on Tattoo Removal

5)American Academy of Dermatology, Tattoo Removal Made Easier With New Laser Therapies

6)www.patient-info.com, Article on Tattoo Removal

7)www.thesite.org, Health and Fitness article on Tattoo Removal

8)Skin Laser Center, Article on Tattoo Removal

9)Free tattoo removal programs offered to gang members

10)MedScape from WebMD, Abstracts on Tattoo Removal (note: I had to set up an account on MedScape to view these articles so the link might not work on all computers)


PMS- the Premenstrual Syndrome
Name: Roseanne M
Date: 2002-11-11 03:19:24
Link to this Comment: 3657



Biology 103
2002 Second Paper
On Serendip

"What is wrong with you? Why are you acting this way?"
"Are you ok? Why are you crying all of a sudden?"
"What? Rosie, I think you ate enough already. You're still hungry?"

Have you ever had comments like these directed at you that you couldn't really answer? This actually happens to me once a month: these sudden outbursts of anger, depression, and of course, the munchies. Some cases are more severe than others, but the same symptoms definitely appear at a certain time every month, and this is what society now calls 'PMS.' I always wanted to know exactly how PMS, or premenstrual syndrome, was defined. I was curious because I was the one so affected by it- or so the magazines 'Cosmo' and 'Glamour' taught me. I often hurt and offended the people I care for most, although my actions felt uncontrollable, and felt extremely guilty about what I'd done. After all that was said and done and the emotional distress I caused myself, I felt that something had to change. Therefore I was curious to know whether there was any way I could lessen the degree of my PMS through the research and study for this web paper.

Premenstrual syndrome is defined as 'a series of physical and emotional symptoms that occur in the luteal phase of the menstrual cycle, which is the two week time frame between ovulation and menstruation.'(1) It is a disorder characterized by hormonal changes that trigger symptoms in women; an estimated 40 million women suffer from PMS, and over 150 symptoms have been attributed to it. The symptoms vary for each individual, lasting about 10 days. They are characteristically both physical and emotional, including 'physical symptoms as headache, migraine, fluid retention, fatigue, constipation, painful joints, backache, abdominal cramping, heart palpitations and weight gain. Emotional and behavioral changes may include anxiety, depression, irritability, panic attacks, tension, lack of co-ordination, decreased work or social performance and altered libido.' (2)
PMS was first described in 1931 by an American neurologist, and that description has changed little since. However, the cause of PMS is still unknown. The general consensus is that the migraine and depression stem from neurochemical changes within the brain. Female hormones also play an important role: a hormonal imbalance (that is, a deficiency in progesterone and an excess of estrogen) contributes to fluid retention, since estrogen holds fluid, causing women to gain up to 5 pounds premenstrually. (3)

According to theorists and doctors, to manage PMS it is recommended to a) eat 6 small meals a day at 3-hour intervals, high in complex carbohydrates and low in simple sugars, which helps balance the sugar-energy highs and lows; b) consume less or no caffeine, alcohol, salt, fat, and simple sugars to reduce bloating, fatigue, depression and tension; c) take daily supplemental vitamins and minerals to reduce irritability, fluid retention, joint aches, breast tenderness, anxiety, depression and fatigue; and d) exercise 3 times a week for 20-30 minutes to reduce stress and tension. These are daily recommendations by doctors to reduce the degree of PMS without the help of medication.

However, in certain cases women need medication for severe PMS (5 out of the 40 million), and they have 3 options: 'a) taking tricyclics (Elavil, Triavil, Sinequan) b) taking tranquilizers (Valium, Ativan, Xanax) and c) taking serotonin.' (2) However, after a few cycles of the above medications, patients became forgetful, sleepy, and less communicative. Another form of treatment was a dose of 100mg of danazol twice daily. Danazol prevents the rise and fall of estrogen levels. Although improvement occurred with danazol treatment (an 80% success rate), menstrual change and nausea were frequent side effects. After several cycles, some patients' hormones were so well controlled that they were able to discontinue the medication.

Although there is still much to learn about PMS, after this research I can say that nutritional and lifestyle changes are the best way to lessen the degree of PMS. I think it is best to avoid medical treatment- but only if your PMS is not that severe. We live in a society where we demand 'quick fixes' and expect a pill to cure our every dissatisfaction, but there is no instant cure for PMS. Its symptoms and underlying factors are far too complex and diverse to be treated with one single medication. Again, in conclusion to this research paper, I would reiterate the importance of daily nutritional and lifestyle changes to lessen the degree of PMS.

WWW Sources:
1) Understanding PMS, a comprehensive PMS website

2) Medical Treatment of PMS- Premenstrual Syndrome, many experiments and results of drugs for PMS

3) What is Premenstrual Syndrome?, a concise description of what PMS is exactly


Cocaine: Scaring the Crap out of America for Decades
Name: Diana Fern
Date: 2002-11-11 03:23:20
Link to this Comment: 3658



Biology 103
2002 Second Paper
On Serendip

Cocaine has been present in American drug culture for the past three decades and has seen a rise in popularity in the new millennium. From Grateful Dead songs to investment bankers, as well as in recent movies such as "Traffic" and "Blow," cocaine has permeated the fabric of American society. So what is it that has made cocaine so popular and sought after? And are the biological measures that the American government is taking toward eliminating cocaine ethically sound?
The indigenous peoples of South America have used cocaine for hundreds of years. Indians chewed coca leaves to alleviate feelings of fatigue and hunger. Cocaine derives from the coca plant found in Colombia, Peru, and Bolivia. In the modern world cocaine predominantly takes the form of a white powder and is snorted through the nostril, where it is absorbed into the mucous membrane. Cocaine is extracted from coca leaves using sulfuric acid and HCl, then purified with water, ammonium hydroxide, and ether. Its chemical formula is C17H21NO4 (1). Cocaine can also be smoked or injected.
This chemical creates a multitude of pleasurable effects, which is why it is so habit-forming. These pleasurable effects include euphoria, garrulousness, increased motor activity, lack of fatigue and hunger, and heightened sexual interest. Cocaine stimulates the nucleus accumbens, wherein a large amount of dopamine is released by neurons. Cocaine blocks the reuptake of dopamine, causing dopamine to accumulate in the brain. This accumulation of dopamine is what causes the euphoric feeling that cocaine users describe.
Cocaine also has many adverse effects because of its highly addictive nature. It has been associated with cardiovascular problems; respiratory effects, including chest pain and respiratory failure; strokes; seizures; and gastrointestinal complications (2). Memories of cocaine use can be a powerful draw for former cocaine users to fall into relapse. This memory association is attributed to the hippocampus, an area of the brain that assists in recalling memories (3). Yet although cocaine causes many health consequences, the harm done by the government's campaign to keep cocaine from surfacing in the United States far outweighs these effects.
The United States government has focused in recent years on eliminating cocaine at the source: the coca plant. Spraying herbicides over areas of presumed coca growth has had severe consequences for residents of Colombia and other countries bordering the Amazon. In June of 2002, 7,000 hectares of food crops were damaged by aerial herbicide sprayed by the Colombian government in a U.S.-sponsored sweep of the area. 4,000 people and 178,000 animals were found with major skin, respiratory and digestive problems due to the herbicide (4).
The medical problems that have arisen from the use of chemicals in the war on cocaine have led to new research into means of eliminating cocaine that could be just as detrimental as herbicides. The United States and Colombian governments have decided to test a fungus called Fusarium that could kill coca plants. Yet introducing a new species into an area that has already been ravaged ecologically could have unfavorable effects on the environment. Biodiversity could plummet as species of plants and animals unaccustomed to the new fungus die off as a result of a non-native organism being thrust into the ecosystem. Some strains of Fusarium oxysporum are known to cause disease and could endanger the well-being of the humans residing in these areas (5). The rainforests of this area are already at risk, and government meddling will only worsen the problem.
Although cocaine is potentially lethal and has affected the lives of thousands of addicts, those who use cocaine do so of their own free will. The people who reside in the South American countryside do not choose to have new species of lethal fungi introduced, or to have their food crops destroyed. By trying to create a moral infrastructure in the United States, government-funded eradication projects have affected the lives of campesinos in Colombia and Bolivia in a direct and harmful way. The war on drugs has gone on for decades, and what does America have to show for it? Cocaine is a reality, yet we have a right to choose. The life of someone in Colombia is not worth less than a statement America chooses to make.

References

1)"cocaine" Encyclopedia Britannica,

2) "Research Report Series- Cocaine Abuse and Addiction"


3) Netting, J. "Memory May Draw Addicts Back to Cocaine." Science News. Vol. 159, Issue 19. Science News Inc.: New York, 2001. p. 292.

4) "Drugs war's true cost" in Encyclopedia Britannica. Vol. 32. Ecologist. June2000. P10.
5) Vargas, "Biowarfare in Colombia?" NACLA report on the Americas. Vol. 34. Issue 2. Oct 2000. p20.


Emotions and How They Inhibit Me From Living
Name: Laura Silv
Date: 2002-11-11 10:19:34
Link to this Comment: 3660

Emotions and How They Inhibit Me From Living
by Laura Silvius

One issue that has been raised a lot in class and in the online forum in the last couple of weeks is the question of emotions; why they happen, what affects them, and what exactly controls them. No one in the class was able to answer these questions to the complete satisfaction of every one else, so I thought I would try to pursue this topic a little further. Specifically, I would like to explore the effect that premenstrual symptoms have on women's emotions during their twenty-eight day cycle.
Premenstrual symptoms, or PMS for short, cause all sorts of problems, as any woman can tell you. The British National Association of Premenstrual Syndrome's website (http://www.pms.org.uk) briefly describes the symptoms that one may experience over the four-week cycle. Some of the more common physical characteristics of PMS include bloating and cramping in the stomach, backaches, weight gain due to fluid retention, skin problems and headaches. The more emotional symptoms include aggression, fatigue, anxiousness, a feeling of being misunderstood, intense sensitivity, mood swings, depression and, most common (for me, anyway), a feeling that one simply doesn't want to get out of bed. In fact, more than 150 symptoms have been attributed to PMS.
While nearly all women experience at least one of the many symptoms of PMS, about 20% of women experience these symptoms in a much more severe manner that affects their ability to carry out simple everyday tasks. Women who experience this extreme form of PMS may suffer from Premenstrual Dysphoric Disorder (PMDD), which was recently added to the American Psychiatric Association's list of mental illnesses. This illness can be described as "PMS intensified by about a thousand," according to a friend who suffers from it. A more complete explanation of the differences between the two can be found at www.conquerpms.com.
Although it is not entirely clear exactly what causes premenstrual symptoms, the reigning theory is that they are caused by the rise and fall of estrogen levels within a woman's body over the course of a month. Estrogen levels begin to rise slowly just after menstruation ends, reaching their peak two weeks later in the middle of the cycle. Estrogen then falls sharply, only to rise slowly and fall again just before menstruation begins. Because estrogen holds fluid, higher amounts of estrogen bring fluid retention. It also increases brain chemicals and activity, both of which fall again as estrogen lessens. This flux can affect mood, causing the emotional symptoms described above. Estrogen also carries with it a sense of vulnerability that is lost when estrogen falls, leaving women feeling more alert and aggressive.
Endorphins, which are released in the body through exercise, are also commonly believed to affect PMS by relieving some of the physical pain. One can also relieve the intensity of symptoms by changing one's diet to include less sugar, caffeine or alcohol and more fruits and vegetables. Starch especially is thought to lessen the intensity of cramps. A longer list of causes can be found at the website for the Women's Health Channel (http://www.womenshealthchannel.com).
Many women opt for medical treatment of PMS, and with a slew of drugs available, from over-the-counter to prescription medicines, this is typically the most common response. The most common over-the-counter drugs are Pamprin and Midol, which are available in any drug store. Treatments prescribed by doctors usually depend on the age and maturity of the patient's body. For women in their early 20's or younger, a strong pain reliever is the more common prescription unless the woman in question is sexually active, in which case birth control is prescribed in order to kill two birds with one stone, as it were. Birth control is also prescribed by doctors for women whose ages range from the early 20's until the mid-30's. As women begin to exhibit symptoms of menopause, hormone therapy is usually prescribed to make the transition easier and estrogen levels more even over the menstrual cycle.
The most difficult thing to deal with about PMS is the emotional distress that it puts one through. Besides the physical pain that makes one wish that they distributed hysterectomies at birth, the emotional pain can cause relationships to suffer and have long-lasting effects such as depression. On a more short-term basis, irritability and acute sensitivity can blow any harmless comment out of proportion. When left untreated, even by over-the-counter medicines, this emotional roller coaster can affect personal relationships with friends, family and co-workers. These symptoms are especially noticeable during menopause. Hormone therapy can help with these emotional trials and make the cycle of one's menstrual period or the menopause phase a little easier, not only for the sufferer but also for all those in close contact with her.


Laughing Matters
Name: Maggie Sco
Date: 2002-11-11 10:20:02
Link to this Comment: 3661



Biology 103
2002 Second Paper
On Serendip


We all like to laugh, and generally it makes us feel better. Laughter is a common physiological phenomenon that researchers are just beginning to study. What exactly happens when we laugh? What makes us laugh? Is it true that laughter is contagious? Is laughter healthy?

When we laugh, the brain pressures us to simultaneously make gestures and sounds. Fifteen facial muscles contract, the larynx becomes half-closed so that we breathe irregularly, which can make us gasp for air, and sometimes the tear ducts become activated (1). Nerve signals sent to the brain trigger electrical impulses that set off chemical reactions. These reactions release natural tranquilizers, pain relievers and endorphins (2).

There are three different theories of what people find humorous. The incongruity theory holds that we laugh when the end of a situation or joke doesn't match up with our logical expectations. The relief theory holds that we laugh when built-up tension needs an emotional release; this is commonly seen in movies in what we refer to as 'comic relief' (1). The relief theory also takes into account laughing at forbidden thoughts (6). The third is called the superiority theory: we laugh at someone else's mistakes because we feel superior to them (1). While what people find humorous can be divided into these three generic categories, many factors affect a person's sense of humor, which is why we don't all laugh at the same things. The main factor seems to be a person's age (1). We have all seen young children laugh at jokes that they don't "get" just because they understand the format of riddles (4). There is always a certain amount of intelligence involved in understanding a joke, no matter how basic or stupid the joke may seem (1). So the older a person gets, the more she learns, and her sense of humor will usually become more mature.

However, laughter also occurs in situations not necessarily considered to be typically humorous. Psychologist and neuroscientist Robert Provine, from the University of Maryland, studied over 1,200 "laughter episodes" and determined that 80% of laughter isn't based around humor (3). We laugh from being nervous, excited, tense, happy or because someone else is laughing (4). The listener isn't just laughing in response to the speaker, either. Provine found that in most conversations, speakers laugh 46% more than listeners do (3). I think the fact that speakers laugh more than listeners implies a kind of nervousness and need for acceptance on the speaker's part. They subconsciously think that if they laugh, the people listening to them will also laugh, and the listeners laughing makes the speaker feel more comfortable.

Conversationalists who think that laughing will make their audience laugh may not be too far off. It is widely accepted that laughter makes people laugh, even if they do not know the original context that caused it. The ability of laughter to cause laughter indicates that humans might have auditory "feature detectors"- neural circuits that respond exclusively to this species-typical vocalization (3). These detectors trigger the neural circuits that generate laughter. A laugh generator that is initiated by a laugh detector may be why laughter is contagious (3). So people who are laughing with someone else may not be able to control themselves, even if they do not know what caused the original laugh.

What we consider normal, healthy laughter doesn't come in many different forms. Laughter is rigidly structured in the same way as any animal call. All types of laughter are a series of short vowel-like syllables such as 'ha-ha-ha' or 'tee-hee-hee' spaced about 210 milliseconds apart (3). When it doesn't follow that structure, laughter usually sounds unnatural or disturbing. Laughs that sound like 'haa-haaa-haaaaa', that get louder instead of quieter, or that interrupt the structure of a sentence are all examples of odd laugh forms (5). I realized that many of the examples of 'unhealthy' laughter are what we use in our society to depict villains. Since laughter is structured like animal calls, it is almost as though when we hear something that doesn't follow those patterns, we instinctively know that it is menacing or unnatural.

We often laugh because we're happy, but laughing can also make us happy - and healthy. Laughter releases endorphins, neurotransmitters that have pain-relieving properties similar to morphine and are probably connected to euphoric feelings, appetite modulation, and the release of sex hormones (7). Studies have shown that laughter boosts the immune system in a variety of ways. Laughter increases the amount of T cells, which attack viruses, foreign cells and cancer cells, and of gamma interferon, a protein that fights disease (8). It increases B cells, which make disease-destroying antibodies (1). Levels of immunoglobulin A, an antibody that fights upper respiratory tract infections, and of immunoglobulins G and M, which help fight other infections, all rise due to laughing (8). Laughing also reduces the amounts of stress hormones, some of which suppress the immune system (1). So when you feel better after laughing, you really are happier and healthier.

Laughing is also a full-body workout. Some researchers estimate that laughing 100 times is as much of a workout as 15 minutes on an exercise bike (1). This raises the question of exactly what type of laughing they mean: the kind where your stomach hurts by the time you are finished, or any type of laughing? Also, the average adult only laughs seventeen times a day, so it would take a little more than five days to get the equivalent of 15 minutes on an exercise bike through laughing. Laughing exercises the cardiovascular system by lowering blood pressure and increasing heart rate, which any aerobic exercise will do (6). It probably improves coordination of brain functions, which increases alertness and memory, and helps clear the respiratory tract through coughing (8). Laughter increases blood oxygen and strengthens internal muscles by tightening and releasing them (6). One doctor says that 20 seconds of laughing works the heart as hard as three minutes of hard rowing (8). My friends who are rowers say that this is practically impossible, but the fact that research indicates that laughing gives you that much of a workout means it must be good for you, even if not to such an extent.

Laughter is a very complex physical process. There are theories on how to classify what we find humorous, which in turn makes us laugh. But even if these categories are correct, there are other things that cause laughter. Any extreme emotion can make people laugh, which is sometimes why we laugh in what are considered socially inappropriate moments (like funerals or car accidents). Someone else laughing also triggers laughter, so it really is contagious. There is a great deal of research that indicates that laughter is healthy for you in a variety of ways, such as boosting the immune system and reducing stress. So if you feel like you're getting sick or you don't have much energy, stop worrying about going to the gym or the health center. You just need to find funnier friends.


References

1)How Stuff Works, "How Laughter Works".
2)Body Manifestations, by Dr. Sarfaraz K Niazi, 2/9/94.
3)American Scientist, Jan-Feb 1996. "Laughter", by Robert Provine.
4)"The Best Medicine", by Raj Kaushik, from The Halifax Herald Limited, 1/20/02.
5)Nature Science Update, "A Serious Article about Laughter", by Sara Abdulla.
6)Laughing Out Loud to Good Health
7)Bartleby.com, using the Colombia Encyclopedia as a reference.
8)MDA Publications, Quest, Volume 3, Number 4, Fall 1996. "Is Laughter the Best Medicine?" by Carol Sowell.


Illegal Drugs
Name: Jennifer R
Date: 2002-11-11 11:04:24
Link to this Comment: 3663

Biology 103
2002 Second Paper
On Serendip

There is a terrible problem that plagues the nation in the year 2002. It is one word that does considerable amounts of damage- drugs. Over the years, the level of teenage drug use has fluctuated. "The prevalence of illicit drug use among America's teenagers dropped slightly in 1998. The decrease follows a leveling off in 1997, and suggests that the increasing use of drugs by teenagers that marked most of the 1990s may have begun to turn around." (1) However, recent reports conclude that drug use is on the rise. In October, the Boston Herald reported that cocaine use was the leading cause of overdose in the Boston area. Illegal drug use interests me because I see it first hand, and teens don't know what the long-term effects are, or even that there are any. Many teens are going to jail or even killing themselves because they are so addicted. What is most surprising about drug users is that they don't know the long-term effects of drugs- they are just in it for the ride; however, many rides end up deadly. In this paper, I will focus on the long-term effects of three specific drugs: cocaine, heroin and OxyContin. Each of these drugs is moving into a more mainstream category and is being used as casually as drinking a beer. Heroin and cocaine, which were very popular in the 80's, are back and taking the lives of many teens. OxyContin is a newer prescription drug used in cancer treatment, but when snorted or injected it can be even deadlier than heroin or cocaine. Heroin (diacetylmorphine, or diamorphine, usually as the hydrochloride salt) releases endorphins and blocks pain. Cocaine (also used as a hydrochloride salt) triggers the release of dopamine, adrenaline (causing rapid heart rate and increased blood pressure), acetylcholine (causing muscle tremors), and serotonin (causing feelings of calm and pleasure). (2) OxyContin is a prescription drug used as a painkiller for extremely sick cancer patients.
It contains an opium derivative, the same active ingredient as in Percodan and Percocet (also pain relievers). Unfortunately, heroin is the most addictive and most commonly used of the three drugs. The reason is that it is so cheap: it costs less to get high than it does to order a beer at a bar. Heroin is the most popular drug in the slums and ghettos. Authorities never thought that heroin use would become a problem because it had to be injected with a needle; however, they were wrong. Just this past February, Attorney General Janet Reno admitted heroin is more plentiful, purer, and less expensive than it was just a few years ago. "If we do not counteract the heroin threat now," she said, "we risk repeating the terrible consequences of the 1980s' cocaine and crack epidemic." Authorities estimate that heroin addiction has increased 20 percent and worldwide production has grown sharply, even as other illegal substance abuse is declining. (3) Like heroin, cocaine is addictive. Fortunately, cocaine is more expensive, so it is not used as much as heroin. Cocaine gives the same type of high as heroin, just not as strong or as long-lasting. Throughout the 1980s and 1990s cocaine and heroin were the leading drugs of choice, with increasing numbers of users. In 1996, it was reported that heroin was the primary drug of abuse related to drug-treatment admissions in Newark, San Francisco, Los Angeles, and Boston, and it ranked a close second to cocaine in New York and Seattle. (3) It was not until recently that teens discovered OxyContin. In 1995, the FDA approved the drug, and by the end of 1996 OxyContin had been linked to at least 120 deaths. (4) What teens don't know when they get into drugs is that the drugs are addictive, and it is too late to stop once they realize what the drugs are doing to them. Drugs have numerous effects on teenagers.
Heroin, cocaine and OxyContin are all narcotics and therefore have similar effects on the body and the brain, including euphoria, drowsiness and respiratory depression. Used over a long period of time, these drugs cause serious damage. Not to mention that with cocaine and heroin, which can both be injected for an immediate high, there is a high possibility of contracting AIDS through the sharing of needles. OxyContin is so addictive that withdrawal causes depression from dependency on the drug. All of these drugs, when injected, cause an immediate high, but coming down from the high is very bad because of withdrawal. Withdrawal from these drugs is why addiction is so high; instead of coming down, users just shoot or snort up again and have the euphoric feeling again. Doctors agree that you can always tell a previous drug user, even after years of sobriety. Track marks on the arms from injection, delayed reactions, paranoia, high blood pressure and even heart attack are all long-term effects of these drugs. The most common long-term effect of heroin, cocaine and OxyContin is addiction. Addiction is a chronic problem, characterized by compulsive drug seeking and use, and by neurochemical and molecular changes in the brain. (3) The long-term effects of these three drugs are still being studied today, and researchers are still making new discoveries about the problems drugs cause. Dr. Michelle Ehrlich, a neurology professor at Thomas Jefferson University in Philadelphia, insists there is a link between adolescent drug use and brain damage: "The adolescent brain appears to be more sensitive to certain effects of these psycho-stimulant drugs. We need to see whether this sensitivity leads to permanent brain changes and behavior changes." (5) Either way, not everything is known about drugs and what they will do to you. People think that since drugs have been around for so long, everything about them is known- they are wrong.
In conclusion, cocaine, heroin and OxyContin are all extremely addictive and cause dependency over both short and long periods of use. Teens need to ask themselves, before taking drugs, whether a "thirty minute high" is worth (in some cases) the rest of their life. It is proven that drugs kill, and when they don't, they cause permanent damage to the body and brain. The brain is affected by drugs immediately, and in most cases the damage is permanent. Long-term use may result in changes in brain function that last long after the person stops using drugs. (6) Unfortunately, illegal drugs such as cocaine, heroin and OxyContin carry no "warning" label. The only way drug use will go down is with increased education and prevention programs.

References

WWW Sources

1) National Institute on Drug Abuse

2) HIV Plus, September 1998

3) Drug Rehab Center

4) Drug Rehab Center

5) Health on the Net Foundation, 10/31/02

6) Discovery Health Channel, November 2002, Diseases and Conditions


To Botox or Not to Botox
Name: Brie Farle
Date: 2002-11-11 12:27:34
Link to this Comment: 3664

Biology 103
2002 Second Paper
On Serendip

Brie Farley
Biology 103
November 11, 2002

TO BOTOX OR NOT TO BOTOX

Where does our quest for perfection end? Self-improvement, especially in the area of personal appearance, is applauded in American culture. In April 2002, the FDA announced the approval of Botulinum Toxin Type A to temporarily improve the appearance of moderate to severe frown lines between the eyebrows (glabellar lines), a condition that is not medically serious. Botulinum Toxin Type A, better known as Botox, was first approved in December 1989 to treat eye muscle disorders (blepharospasm and strabismus) and in December 2000 to treat cervical dystonia, a neurological movement disorder causing severe neck and shoulder contractions. (1).

Botox is a protein produced by the bacterium Clostridium botulinum. In medical settings, it is used as an injectable form of sterile, purified botulinum toxin. In 1895, Emile P. Van Ermengem first isolated the botulinum microbe. He discovered that this bacterium produced a toxin, and understood that this was what caused disease. However, it wasn't until 1946 that the toxin was isolated in crystal form, by Edward J. Schantz. In the early 70's, Dr. Alan Scott began investigating the use of botulinum toxin injections to treat crossed eyes (also called strabismus). Clinical studies for this purpose were initiated in 1977. (6).

Shortly after these studies, Dr. Jean Carruthers, a Canadian ophthalmologist, noted a marked decrease in the appearance of frown lines in a patient who was receiving botulinum toxin injections to relieve twitching of the eye (blepharospasm). Soon after, Dr. Carruthers teamed up with her husband, Dr. Alastair Carruthers, a Canadian dermatologist, to use the botulinum toxin to treat frown lines and crow's feet. The results of these treatments were published in 1989, laying the foundation for a revolution in cosmetic surgery. (6).

Small doses of toxin are injected into the affected muscles and block the release of the chemical acetylcholine that would otherwise signal the muscle to contract. Thus, the toxin paralyzes the injected muscle. Botox worked so well to help medical conditions, it was tested as a cosmetic procedure. (1).

"In placebo-controlled, multi-center, randomized clinical trials involving a total of 405 patients with moderate to severe glabellar lines who were injected with Botox Cosmetic, data from both the investigators' and the patients' ratings of the improvement of the frown lines were evaluated. After 30 days, the great majority of investigators and patients rated frown lines as improved or nonexistent. Very few patients in the placebo group saw similar improvement." (1).

Within a few hours to a couple of days after the botulinum toxin is injected into the affected muscle(s), the spasms or contractions are reduced or eliminated altogether. The effects of the treatment are not permanent, reportedly lasting anywhere from three to eight months. By injecting the toxin directly into a certain muscle or muscle group, the risk of it spreading to other areas of the body is greatly diminished. (2).

When Botox is injected into the muscles surrounding the eyes, those muscles cannot "scrunch up" for a period of time. The wrinkles in that area, often referred to as "crow's-feet," temporarily go away. (2).
(For before and after pictures, see (7))

Most of the patients in the study were women, under the age of 50. The most common side effects were headache, respiratory infection, flu syndrome, blepharoptosis (droopy eyelids) and nausea. Less frequent adverse reactions (less than 3% of patients) included pain in the face, redness at the injection site and muscle weakness. (1).

In June 2002, the American Headache Society released findings of 13 studies that indicate Botox rid a number of patients of severe headaches. (3).
One particular project suggests that people plagued with headaches who also had Botox injections for cosmetic reasons suffered from fewer migraines, experienced a reduction in the disabling effects of migraines and used less pain medication. (3).
The headache and Botox connection began emerging in 1992 when a California physician noted his patients who got Botox injections said they were having fewer headaches. (3).

"The biggest advantage to Botox is its lack of side effects, especially compared to other medications," Dr. William Ondo of the Baylor College of Medicine said in an AHS press release. "It really is extremely safe and appears to be very effective for some people." (3).

Researchers think Botox blocks sensory nerves that relay pain messages to the brain in order to relax muscles, making them less sensitive to pain. (3).
"Scrunching" your eyebrows in a concerned or angered expression relays these messages and may cause headaches.

"More than half of the 48 patients in a study at a Mayo Clinic in Scottsdale, Arizona, said their migraine occurrences dropped by 50 percent or more. Of the ones who had a positive response, 61 percent said they had headaches less frequently and almost 30 percent said the headaches were less severe. At the Baylor College of Medicine Headache Clinic, 58 patients participated in a controlled trial. Some received Botox and others had placebos. After three months, 55 percent of the patients who received Botox reported at least moderate improvement in their headaches. Two of the 29 who got the placebo water injections reported the same results." (3).

What's the worst that can happen, you might ask, from having a toxic substance injected into your face? According to results from a study conducted at Wake Forest, Botox side effects are minimal. Doctors found a small risk that the skin around the injection site would droop temporarily. (3). This is known as blepharoptosis and occurs in about 5% of patients. It usually appears 7 to 14 days after the injection and can last 4 to 6 weeks. A speedier method of treating it is the application of prescription eye drops (iopidine). In many cases, these drops will help resolve the droop within a few days. To reduce the risk of blepharoptosis, it is recommended that patients obtain Botox from a physician who is experienced in its use. It is also important for a patient to remain vertical for 4-6 hours after the injection. This allows the Botox to be taken up in the treated area and reduces the chance of displacement to other muscles. The injected site should not be touched for two to three hours following injection. (4).

Botox sounds like a miracle drug for those desiring a wrinkle-free face. It has minimal side effects and is relatively inexpensive compared to other forms of cosmetic alteration, such as surgery. Different patients will require different amounts of Botox, which can vary the cost. According to recent information, Botox can cost anywhere from $300 to $700 per treatment. (6). However, what should we anticipate as the future of Botox?

Botulinum Toxin Type A, Botox, is related to botulism. Botulism is a form of food poisoning that occurs when someone eats something containing a neurotoxin produced by the bacterium Clostridium botulinum. Botulinum toxin A is one of the neurotoxins produced by Clostridium botulinum. (2).

Thus, muscle paralysis is the most serious symptom of botulism, which in some cases has proven to be fatal. The botulinum toxins attach themselves to nerve endings. Once this happens, acetylcholine, the neurotransmitter responsible for triggering muscle contractions, cannot be released. Essentially, the botulinum toxins block the signals that would normally tell your muscles to contract. If, for example, it attacks the muscles in your chest, this could have a profound impact on your breathing. When people die from botulism, this is often the cause; the respiratory muscles are paralyzed so it is impossible to breathe. (2).
Described this way, Botox may not sound so harmless. It is a serious toxin, and although the side effects from facial injections do not sound lethal, its users should realize the lethal properties of the substance.

Furthermore, what will happen to all of the Botox enthusiasts when they decide to discontinue Botox injections? The current recommended 'dosage' is to return once every three months for new injections between the eyebrows. These muscles are paralyzed and weakened. Over time, will these muscles be able to function properly without Botox injections or will the toxin have weakened their natural capabilities to the point of destruction? When all of the women from the study are eighty years old, will their eyes be visible under permanent blepharoptosis?

We are living in an era of self-manipulation and self-perfection. Is it not ironic that many of the same individuals who do their grocery shopping exclusively at the organic market are off having poison injected into their face to look healthy? The newest rage of self-improvement enthusiasts is Botox Parties. Botox parties are one of the newer, more controversial ways to administer Botox. Typically, Botox party guests will get a quick lecture on the risks before receiving Botox treatments in a private area. Alcohol is sometimes served, although it should never be served prior to the treatments (and many doctors will say that alcohol should never be involved in any medical procedure, before or after). (6).

"Some people enjoy Botox parties because of the support they receive from other guests. In addition, a Botox party can be a more economical way to have treatment, since the prices for the actual toxin are usually lower in large groups. At any rate, these occasions have been growing ever more popular, and many highly qualified physicians look upon them with disfavor." (6).

There are many physicians who refuse to do Botox parties. They believe that no medical procedure should be administered in a social setting. They also argue that it is impossible to meet the specialized needs of each individual Botox patient in a party setting where the doctor is administering up to ten Botox treatments in an hour. (6). It almost sounds as if Botox is comparable to an addictive drug: a qualified dealer, a group of high-paying clients, and a social setting complete with food and alcohol to make it more acceptable and more fun.

So far, Botox has been approved to help with crossed eyes, uncontrollable blinking, cervical dystonia, and now moderate to severe frown lines between the eyebrows. It is being studied as a treatment for excessive sweating, spasticity after a stroke, back spasms, and headaches. (2) Is Botox a problem, or is it the newest and best cure for a myriad of medical and cosmetic concerns? Apparently, the risks of Botox injections, and the unknown future effects of Botox, are not enough to discourage Botox enthusiasts.

Ours is a generation that can no longer find the distinction between potentially inflicting self-harm and striving to look one's best. In Hollywood, the treatments are so popular that some directors complain that their leading actors can no longer convincingly perform a full range of facial expressions. (5)

Doctors knowledgeable in Botox are in high demand. This increases the possibility that not all doctors know all they should. This includes knowing when Botox won't be useful at all. "Muscles cause some wrinkles, but many result simply from the loss of elasticity that goes naturally with aging (or, less naturally, with smoking and sun exposure), causing the skin to sag and crumple". (5)

There are treatments for this sort of wrinkle, but Botox isn't one of them, says Dr. David L. Feldman, director of plastic surgery at Maimonides Medical Center in Brooklyn, New York. "I had a patient recently who came in asking for Botox," he says. "It would have done no good at all. In fact, she might have ended up looking worse." (5)

Botox isn't a cure-all, and it is accompanied by some strange side effects. In September 2002, the company that distributes Botox, Allergan Inc., was asked to revise its advertising. All advertisements for Botox will disappear until Botox is advertised seriously and realistically as a medical procedure, and not as a simple method to destroy "those tough lines between your eyebrows." (8)

"If you don't mind getting shot up with poison and you don't mind paralyzing parts of your face-well, you've got plenty of company." (5)

References


1) FDA, an article posted by the FDA upon approving Botox

2) How Stuff Works, a great site with explanations on how everything works!

3) CNN, an article regarding the helpful effects of Botox on headaches.

4) www.ebody.com, details about the side effects of different medical procedures

5) Vreflect.com, talks about Botox as a cultural phenomenon

6) Botox Injections Information, a professional site with information and links to doctors

7) Botox Injections Information, go here to see pictures!

8) Sunwellness Magazine, an article from September 2002 announcing the FDA's notice to Allergan Inc. to halt advertising


Dyslexia
Name: Lawral Wor
Date: 2002-11-11 12:41:39
Link to this Comment: 3665



Biology 103
2002 Second Paper
On Serendip

Dyslexia is a learning disorder that affects a large portion of the population. Once it is diagnosed, it can be overcome, but undiagnosed it can prove a great hardship to people who have it, especially children in grade school who are trying to learn to read or do basic math. Dyslexia has recently been linked to genetics and to brain abnormalities. A great deal of research and positive action is underway to help people, especially children, work around dyslexia so that they can function normally at school. However, not everyone sees dyslexia as a harmful thing. Many groups are dedicated to the creativity and artistic talent that usually go hand in hand with dyslexia, and many groups formed by dyslexics for dyslexics provide support and an outlet for artistic endeavors. With research and new teaching techniques the effects of dyslexia can be overcome, but for some, that is not the goal.

Researchers have been working to find the root of dyslexia for years. While it is still unknown why and how dyslexia occurs, many advances have been made. Dyslexia is now sometimes classified as a genetic brain anomaly. The anomalies in a dyslexic's brain impair how they perceive, and therefore learn, language skills. (3) It is still unclear where in the brain these anomalies occur and to what extent. One theory, however, is that dyslexia is caused by anomalies in the brain's lipid metabolism. (3) The research is still very preliminary.

Dyslexia is first and foremost a language-based learning disorder. It is characterized by problems with single-word decoding, and it often goes undiagnosed, depending on the age and school level of the person suffering from it. (1) If caught when a child is young, in kindergarten or first grade, the child can learn to overcome the major pitfalls of dyslexia with special learning techniques such as phonological training, and become a strong reader and a strong student. Using multi-sensory techniques to teach children with dyslexia seems to be the most effective approach. By using all of their senses to learn and then to practice, children are "overlearning" in an effort to make up for their poor memory and initial confusion. (2) If a child with dyslexia is not diagnosed until after they have formed most of their reading habits, around third grade, the special learning techniques are not as effective. (1)

As dyslexia becomes more mainstream and loses some of the stigma attached to it as a learning disorder, support groups made for dyslexics by dyslexics have become more and more common. These groups share methods for working around dyslexia and emotional support for those who suffer from it. Most of them also have special programs for parents or teachers of dyslexic children. These groups all stress the fact that it is highly possible to be dyslexic and still be successful. Some even have lists of famous people who have been reported to have learning disorders such as dyslexia. These lists usually include the professions of those listed, further emphasizing the variety of ways in which dyslexic people can achieve success. The lists are usually very eclectic, including such luminaries as Walt Disney, Winston Churchill, M.C. Escher and Whoopi Goldberg. (4)

Many of the support groups for people who suffer from dyslexia and their families also celebrate the positive benefits of dyslexia. Dyslexia makes language development difficult because it causes people to think in pictures. (5) For this reason, many dyslexics are very talented and creative artists. There are many websites for these groups that display the work of their members. They share paintings, drawings, poems, stories, and other artwork from members whose ages range from elementary school to college to adulthood. For many of the people who post their work with these support groups, dyslexia is the reason that they are artists. It is either their inspiration or the source of their talent. Either way, they comment on their need for it alongside the conflicting hardships it causes. Karin Peri, a teenager, wrote a poem entitled "Dear Dyslexia" that perfectly exemplifies this inner conflict. (6) She writes:

Because of you
I see a different angle.
you make me who I am,
But what would life be without you?
A life free of constant frustration,
A chance to see things "correctly"
To say exactly
What I have to say
And write exactly
What I have to write.
But without you
Would I have anything to write?
Anything to say?
Would I have a poem?

Her words are echoed in the work of many of the other artists who post their work on these sites.

Though most dyslexics will admit that dyslexia has been hard and things would have been easier without it, especially school, there are some who embrace it and what it has to offer to them. With new teaching techniques that encompass more of the senses rather than trying to force dyslexics to learn traditionally, it will be easier for future children with dyslexia to hold on to the benefits that it can bring and still be successful in school. The proliferation of support and informational groups for sufferers, their families, and their educators continue to provide services and help to spread these practices.

References

1) Dyslexia - What is It?

2) The Dyslexia Institute

3) Brooks, Liz. "Dyslexia: 100 Years on Brain Research and Understanding." Dyslexia Review Magazine. Spring 1997.

4) Great Minds Think Alike

5) Dyslexia the Gift

6) In memory of Karin Peri


Dreams
Name: Elizabeth
Date: 2002-11-11 13:04:27
Link to this Comment: 3666

Dreams

While we sleep, our bodies rest from the events of the day and recharge in order to face the next round of challenges of waking life.  During sleep, our brains produce a fractured, often nonsensical amalgamation of random events and people, otherwise known as dreams.  These dreams often provoke powerful reactions of fear or pleasure, as, for all their improbability, they follow reality in such a way as to trick the dreamer into reacting to this fantasy world as if it actually existed.  However, although dreaming is undeniably a large and memorable part of one's nightly sleep cycle, scientists have yet to define for certain the biological function of dreams.  Some believe dreams are a remnant of our Neanderthal past, when our ancestors used dreams as a sort of training ground for developing appropriate reactions in the life or death struggles they faced every day.  Others think dreams simply stem from random impulses which produce images from one's daily life with no particular significance. Still others believe that dreams serve to "clean out" the emotional stress accumulated during the day.  Whatever the hypothesis, it is difficult to prove for certain the purpose of dreams.

            Researchers have identified four distinct stages to the sleep cycle (1).  Of these, the phase known as Rapid Eye Movement (REM) is most closely associated with dreaming.  The REM phase, characterized by rapid heart rate, distinct brain waves, and an increased amount of electrical activity in the brain, produces the most vivid and memorable dreams (2).  Initially, scientists believed that dreams only occurred during REM sleep.  While studies have identified dreams during other phases of the sleep cycle, the most powerful dreams are still associated with REM sleep (1). Previously, scientists and psychologists believed that dreams were simply a byproduct of the functions of REM sleep, but the discovery of the possibility of dreams occurring during non-REM stages of the sleep cycle undermines the validity of this theory.  This has led many in the scientific community to develop new and often farfetched theories of the biological function of dreams.

            The formal study of dreams first began with psychoanalysts like Sigmund Freud, whose dream theory of 1900 served as an influential and widely accepted conception of why humans dream.  Freud believed that dreams reflected the baser impulses of the human subconscious, impulses which could not be acted upon in society.  His observations were based on subjects whose disturbing dreams haunted them even while they were awake (2).  However, while dreams certainly can reflect the subject undertaking actions which they would never have the opportunity to do in real life, Freud's theory seems to imply that only the seriously troubled dream.  Research has shown that all humans dream, whether or not they remember their dreams the next day.

            In the 1960s and 70s, researchers at the Harvard Laboratory of Neurophysiology focused on observing the biological causes of REM sleep in order to better understand why humans dream.  They discovered that REM sleep is induced by the release of the brain chemical acetylcholine.  The release of this chemical stimulates nerve impulses which recreate random bits of one's internal information in a sequence which may not conform to logic.  J. Allan Hobson and Robert McCarley, the primary researchers at the Harvard laboratory, named this new theory the activation-synthesis hypothesis.  From this hypothesis, Hobson developed an idea of dreaming not as an arena in which to explore hidden urges, but as an opportunity for mental "housekeeping".  He also believed that dreams could serve to solidify emotional ties to memories (2).

            Since scientists like Hobson established a biological basis for why humans dream, other researchers have developed their own theories regarding the purpose of dreams, while undermining others' hypotheses.  Hobson's concept of dreams as an opportunity for mental reorganization has been criticized as research has shown that very little of the day's events recurs in that night's dreams (4).  Rather, dreams tend to deal with larger issues of conflict and emotions, which has led others to develop a concept of dreams as stimulated by a threat simulation mechanism, a remnant of the days when humans faced life or death struggles on a daily basis.   This theory also takes into consideration the recurring dreams of war veterans and trauma victims, as in these cases the brain attempts to present the dreamer with the former conflict again and again in order to prepare them to deal more effectively with such a catastrophe in case they are ever in such a position again (3).  Many agree with this theory in part, as they recognize the problem solving aspect of dreams, but may not believe in the existence of a threat simulation mechanism (5).  Still others believe dreams have no biological function at all, only a cultural significance assigned by human attempts to make sense of dreams (4).

            Humans may never discover the actual biological function of dreams.  While many theories of dreams as an opportunity to reorganize one's thoughts and to solve problems sound feasible, it is difficult to prove anything conclusively, due to the relative youth of neurobiology and the shadowy nature of dreams themselves.  As our general understanding of the brain develops, scientists may be better able to understand why we dream.

Works Cited

1. "The Biology of Dreams"

2. "Dream-Catchers"

3. "The Biological Function of Dreaming"

4. "The 'Purpose' of Dreams"

5. "Biological Dream Theory"

 


DEHYDROEPIANDROSTERONE: By Any Other Name would be
Name: Chelsea Ph
Date: 2002-11-11 15:00:53
Link to this Comment: 3667



Biology 103
2002 Second Paper
On Serendip


What exactly is dehydroepiandrosterone? Dehydroepiandrosterone (DHEA) is one of the hormones secreted by the adrenal glands, located on top of the kidneys in human beings. DHEA has been touted as everything from "chemical trash" to "the fountain of youth drug". Thousands of studies on DHEA have been conducted, but few have been long-term, and even fewer have been done on humans. Despite this, however, many people continue to use DHEA as an over-the-counter remedy for heart disease, aging, cancer, obesity and many other ailments. What is the biological role of DHEA? Is it a viable possibility that DHEA really is a miracle drug?

DHEA is the most abundant steroid hormone found in the human body, and is used in the synthesis of other hormones, such as testosterone and estrogen. Levels of DHEA in the human body peak around ages 20-25 and steadily decline with age. One of DHEA's most important functions is counteracting the presence of high levels of cortisol, a chemical that "accelerates the breakdown of proteins to provide the fuel to maintain body functions"(1) while the body is under significant stress. Cortisol is designed to allow the body to react quickly when threatened, but can be damaging when produced for a long period of time. DHEA works as a buffer between the body and cortisol, and is triggered by the same stress that stimulates production of cortisol. As age increases and DHEA levels decrease, the body has fewer defenses against the effects of cortisol, hence the idea of supplementing the body's reserves. However, there is wide debate within the medical community about the consequences of taking DHEA supplements because of the lack of long-term testing on humans.

While the information on humans is not forthcoming for the moment, there are many interesting theories about the effects of DHEA based on studies done with laboratory animals, mainly mice and rats. DHEA was found to inhibit the growth of cancer cells and to help genetically obese mice lose weight, as well as aiding strength, agility and memory in older mice. While these conclusions are very exciting for the rodent community, the question remains whether the results can be duplicated in humans. "For 50 years we've studied estrogen replacement therapy in women, and look at how much anxiety the latest studies on estrogen are causing. We have no equivalent ... studies for these other substances. We just don't know." (6)

Based on the information gathered from these studies on rodents, DHEA is thought by many to be an essential chemical for the body's tissues. It is theorized by one man (no indication that he is a doctor or scientist could be found!) that bringing the body's levels of DHEA up to those of a 25-year-old, the course of Alzheimer's can be slowed, and the immune system can be stimulated to fight cancer, degenerative diseases and AIDS(5). While this information is tempting to believe, and very convincing in theory, there simply is no proof to back it up.

While levels of DHEA do appear, on the surface, to correlate negatively with aging, disease, and weakened immunity, the blasé attitude with which it has been marketed is highly inappropriate. Rats and mice are not human; they do not have levels of DHEA even approaching ours, and just because you feel good now does not mean that you will in a year, or five, or ten. Interestingly enough, some of the side effects thought to come from long-term use are breast and prostate cancer: because DHEA is used in the synthesis of testosterone and estrogen, too much can actually cause tumors. The presence of tumors in mice was significantly reduced because they have very little DHEA naturally, but the physiology of humans (because of the already high levels of the hormone) may produce the reverse effect. Our country's culture is obsessed with being youthful, healthy and thin, and marketers tout DHEA as the cure-all, despite the lack of conclusive evidence to support their claims.

The time may come when appropriate, long-term trials indicate that there are benefits that outweigh the risks of taking DHEA. It is important to bear in mind, however, that while reduction in DHEA levels occurs with age, it does not necessarily follow that supplements will prevent disease or inhibit the aging process. Our culture, perhaps from genetic predisposition, craves youth and the health that goes with it; the idea of something that will keep you young is too tempting, and far too eagerly accepted. Many other factors play into the aging process, including diet, exercise, genetics, and environment, and the effects of these factors cannot just be erased by taking a pill.

References:

1) University of California, Berkeley Homepage, a resource from the medical courses offered at UC Berkeley

2) The University of Montana Research and Scholarship Page, a resource from the University of Montana

3) Cognitive Enhancement Research Home Page, CERI Homepage

4) DiagnosisTech International, Inc., DiagnosisTech Information Page

5) The DHEA Homepage, interesting site linking DHEA to human maladies

6) AARP Home Page, an article from the AARP

7) Quackwatch Home Page, an article from HealthNews

8) Anti-Aging Revolution, a chapter from "DHEA and Pregnenolone: The Anti-Aging Superhormones"


The Hip Questions
Name: Katie Camp
Date: 2002-11-11 15:20:55
Link to this Comment: 3671



Biology 103
2002 Second Paper
On Serendip

Controversy surrounds the abnormal development of the hip, the joint between the acetabulum and the femur, in infants. Congenital Hip Dysplasia, or Developmental Dislocation of the Hip (CHD/DDH), encompasses a variety of types and degrees of severity, identified by the many tests used in its diagnosis. Treatments vary as well, and by far the most confusing question about CHD/DDH is that of its root cause. Congenital Hip Dysplasia is thought to "run in families" (1), and many statistics support the assumption that it is a matter of genetics. However, other research has observed that an infant can develop problems with the hip in the womb and after birth, unassociated with any genetic factor, identifying the disorder as Developmental Dislocation of the Hip. Whether one accepts the evidence for its heredity or the idea that any child can suffer from a deformed femur and acetabulum through some fault of development, it is important to be aware that CHD/DDH affects one to five children per 1,000 births (3), how it is diagnosed, the different degrees of severity to which it occurs and the multiple choices in treatment.

CHD/DDH is the abnormal formation of the hip joint, allowing easy subluxation or dislocation of the hip. The hip, a "ball and socket" joint, is composed of the acetabulum (the socket) and the head of the femur (the ball). There are three main classifications of CHD/DDH. Dysplasia describes just the "abnormal development" or malformation of the femur and/or the acetabulum. Subluxation classifies a partially dislocated hip, and a further classification is a completely dislocated hip. A four-tiered scale developed by Dr. Crowe in 1979 is used in diagnosing newborns to identify the severity of malformation and degree of dislocation. Crowe I is the least severe; the femur and acetabulum are almost normally developed and there is less than fifty percent dislocation. Crowe II hips result from abnormal development of the acetabulum and a fifty to seventy-five percent dislocation. In the Crowe III stage, the acetabulum lacks a roof, so the femoral head creates a "false acetabulum" against the pelvis, resulting in complete dislocation. "High hip dislocation" describes Crowe IV, in which the acetabulum is completely underdeveloped and the femur sits high on the pelvis in an attempt to form some sort of joint articulation. (1)

To diagnose CHD/DDH and indicate the degree of severity, a variety of tests are routinely performed on newborns. The Barlow test is positive when the "hip is flexed...thigh adducted [and] pushing posteriorly in line of the shaft of [the] femur causing [the] femoral head to dislocate posteriorly from [the] acetabulum" (7). This, however, is not entirely conclusive of CHD/DDH and is confirmed by performing Ortolani's test. The Ortolani test involves bringing the "femoral head from its dislocated posterior position to opposite the acetabulum," reducing the dislocated hip, otherwise described as bringing it back into proper position (6). If positive, this produces an audible "clunk" as the hip is reduced. The Barlow test shows that a hip has the potential to dislocate, whereas the Ortolani test confirms its dislocation. Because both of these physical tests require experience and specific skills to identify the feel and sound of positive results, controversy surrounds their use. Examination by x-ray and ultrasound has become an additional diagnostic tool. X-ray, however, is less common because it shows the hip in only one fixed position, whereas ultrasound allows the hip to be seen in many positions and during movement. Ultrasound was first developed as a CHD/DDH diagnostic tool in 1978 to confirm positive physical tests and identify the degree of abnormality of the hip. A scale developed by Graf, similar to the Crowe scale described earlier, is based on the depth and shape of the acetabulum as viewed in ultrasound. A type one hip is normal and no treatment is necessary, while a type two hip has a shallow acetabular cup and is just "developmentally immature" in infants less than three months old but should be treated in those older. In type three the hip is partially dislocated and in type four completely dislocated, both requiring treatment (5). Other general symptoms that prompt further examination are a discrepancy in leg lengths, asymmetrical skin folds around the pelvic area, and a limp.
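The Graf grading just described amounts to a simple decision rule, which can be sketched in code. This is an illustrative paraphrase of the description above only, not clinical guidance; the function name, labels, and the exact age cutoff handling are invented for this sketch.

```python
# Sketch of the Graf ultrasound scale as described above (illustrative
# only; not a clinical reference). Type II hips are "developmentally
# immature" in infants under three months but treated in older infants.

def graf_recommendation(graf_type, age_months):
    """Map a Graf ultrasound type (1-4) and infant age to the
    recommendation paraphrased from the text."""
    if graf_type == 1:
        return "normal; no treatment necessary"
    if graf_type == 2:
        # shallow acetabular cup: immature if under three months, else treat
        return ("developmentally immature; monitor" if age_months < 3
                else "treat")
    if graf_type in (3, 4):
        # partial (type 3) or complete (type 4) dislocation
        return "treat"
    raise ValueError("Graf type must be 1-4")

print(graf_recommendation(2, 2))   # a young infant: monitor
print(graf_recommendation(2, 5))   # an older infant: treat
```

The interesting branch is type two, where the recommendation depends on age rather than on the ultrasound finding alone.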

Just as there are many different degrees of dislocation and dysplasia (malformation) associated with this disorder, there are many different treatments. The purpose of treatment is to force correct development of the acetabulum so that the femoral head can sit properly in the joint and further subluxation or permanent dislocation does not ensue. The most common and perhaps simplest way of achieving this is the use of the Pavlik harness, von Rosen splint, or a stiff shell cast. Each is used on infants in the first six months of life to spread the infant's legs apart and force the femoral head into the acetabulum, applying pressure to "enlarge and deepen the socket" (2) and develop the hip normally. Closed manipulation repositions the joint by moving the leg around to bring the femoral head into the proper location. In children older than six months and in other severe cases, treatment can involve surgery, which manually repositions the joint. Before the age of four, femoral osteotomies are performed to reconstruct the hip; pelvic osteotomies are performed after the age of four to limit instability and reduce the dislocation of the hip (1). A malformed joint and dislocated hip is obviously not fatal, but treatment is important because the resulting pain is often unbearable. If undetected, CHD/DDH can cause severe, painful arthritis as the forces of weight bearing wear down the cartilage of the femoral head that usually allows for comfortable and easy motion of the joint. CHD/DDH left undetected requires later treatments involving anti-inflammatories, walking devices (such as a cane), physical therapy, and most often total hip replacement to correct hip alignment (1).

Despite the variety of CHD/DDH types and treatments, CHD/DDH is fairly uncommon, occurring in only about 1.5 per thousand births, although with more refined examination techniques this number may be as high as 5 per thousand. It is hard to connect individual cases with one another and determine the specific cause of CHD/DDH. There are, however, significant similarities among cases of hip malformation that suggest genetic causes. In many cases, CHD/DDH has "run in families" (1) and in particular ethnic groups. The prevalence in North American Indian communities has often been as high as 35 cases per thousand births (3), and a high frequency has also been found in the Lapp community, natives of Norway (4). Also, since the majority of cases occur in the left hip, in female infants, and in first-born children, genetics could connect these cases. It has been observed that mothers who carry a high level of a particular collagen, a bone- and cartilage-building material (1), more often have CHD/DDH newborns. Also, strong hormonal changes and a strong estrogen presence in the mother during pregnancy result in "increased ligament laxity...thought to cross over the placenta and cause the baby to have lax ligaments" (2), affecting the development of the hip. Causes of CHD/DDH are thus tied together by common hereditary features.

CHD/DDH is also argued to be purely developmental, meaning it could arise in any child given particular physical circumstances. For example, CHD/DDH is common in breech and caesarean births, probably because of the positioning and pressure on the acetabulum and femur. Another argument holds that the increased incidence in the native communities of North America and Norway reflects not genetics but the practice of swaddling and the use of cradleboards, which results in "extreme adduction" (2), bringing the hips together and displacing the femoral head from its proper position in the acetabulum. Developmentally, the femoral head reaches maturity in the womb, while the acetabulum completes its development in the first few months of life (3). With unusual positioning of the hips, the acetabulum could fail to develop its superior position properly, thus allowing dislocation. Finally, the fact that some cases of diagnosed CHD/DDH self-correct once the acetabulum completes its development in the first few months of life suggests that, just as a hip can develop toward normalcy, the dislocation itself may be a developed disorder.

While CHD/DDH is simply defined as the subluxation or complete dislocation of the hip joint, the variety in which it presents itself, the confusion in diagnosis, and the controversy surrounding its cause make CHD/DDH truly complicated. Whatever the root cause, it is fortunate that answers are being sought to the questions raised. Increasing awareness allows more and more infants to be diagnosed and treated, usually with about 90% success (4). The complication in understanding CHD/DDH is beneficial, as it allows for more observation of cases and further discovery and understanding of the phenomena behind and within it.

References

World Wide Web Sources

1)Total Hip Replacement in Congenital Hip Dysplasia, useful general CHD information resource, also delving into the subject of the treatment of hip replacement.

2)Congenital Hip Dysplasia, general information source, including good introduction of common treatments.

3)Developmental Dislocation of the Hip, comprehensive notes on occurrence statistics, risk factors, treatments, and links to sites that further explain tests and treatments.

4)What is Hip Dysplasia?, general information and brief history of the common Pavlik harness treatment.

5)Screening for Developmental Dysplasia of the Hip, comprehensive overview of CHD/DDH complete with good visual resources.

6)Ortolani's Test: for Congenital Hip Dislocation, simple and understandable description of the Ortolani test.

7)Barlow's Test, simple and understandable description of the Barlow test.


The Biology of Dreams
Name: Heidi Adle
Date: 2002-11-11 18:03:31
Link to this Comment: 3675



Biology 103
2002 Second Paper
On Serendip

"Just as dreams are unreal in
comparison with the things seen in waking life, even
so the things seen in waking life in this world are unreal in
comparison with the thought-world, which alone in truly real."- Hermes


Since the beginning of their existence, heterotrophic organisms have been defined by the need for sleep. Humans accept it (more or less willingly) when they are infants and embrace every opportunity for it as college students and adults. It does not take a lot of psychological or biological background to tell that it is critical to human life. Our bodies simply stop functioning after a long period of time without it and the more we get the better we feel. But what if sleep is not only necessary for the body but the mind as well?
This is the origin of the dream. If one studies the fundamentals of biology she is sure to learn that nothing persists if it is unnecessary for survival, because it would have regressed over the course of billions of years. What then is the importance of sleep to the human mind? One might think that sleep is the same as being unconscious, but people take sleeping pills to knock themselves out and wonder why they still feel horrible or even worse the next morning. In fact, sleep is full of mental activity. During sleep muscles tense; blood pressure, pulse, and temperature rise; and various senses are alert (4). Random thoughts occur throughout the night, sometimes even taking on a coherent scheme. This phenomenon is called a dream.
What is a dream? It would be pretentious of anyone to assume that modern psychology or biology have grasped all the complexities of dreams. Yet, especially in the past two centuries, many theories stand and observations have been made. There are at least three indicators that someone is dreaming.
The first indicator is called rapid eye movement (REM) sleep. As the name indicates, the eyes of the sleeper move back and forth at rapid speed during her sleep. If one wakes a sleeper during this rapid eye movement, she is sure to tell of the vivid dream(s) she has just experienced (4).
The second indicator of dreaming has to do with the EEG (electroencephalogram). A closer look at a sleeper's brain wave pattern in REM sleep shows striking similarities to the pattern of the waking state (7): in both cases it consists of desynchronized, minimal waves (3).
The third indicator that someone is dreaming is paralysis. In fact, paralysis is thought to protect the dreamer from acting out her dreams. This paralysis is due to certain neurons in the frontal lobes of the brain. The activity of the brain during this stage of sleep begins in a structure called the pons, which is located in the brain-stem. The pons sends messages to shut off the neurons in the spinal cord, which results in an almost full-body paralysis (2).
The first REM session occurs c. 90 minutes after falling asleep, with further sessions at roughly 90-minute intervals thereafter. Depending on how long one sleeps, she can have between four and six REM sessions each night (2). The first session is very short, no longer than five minutes. Each succeeding REM session gets longer, and the average person's longest dream can be up to thirty minutes long (1).
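The timing figures above (a first session about 90 minutes after sleep onset, 90-minute cycles, durations growing from roughly five toward thirty minutes) can be turned into a small back-of-the-envelope calculation. This is an illustrative sketch only: the linear growth in session duration is an assumption made for the sketch, and the function name is invented.

```python
# Illustrative sketch of the REM timing figures quoted above: one session
# per 90-minute cycle, durations interpolated linearly from ~5 minutes
# (first session) to ~30 minutes (longest). The linear growth is an
# assumption for illustration, not a claim from the essay's sources.

def rem_sessions(hours_asleep, cycle_min=90, first_dur=5, max_dur=30):
    """Return (start_minute, duration_minutes) pairs for each REM
    session that begins within the given sleep period."""
    total_min = hours_asleep * 60
    sessions = []
    n_cycles = int(total_min // cycle_min)
    for i in range(n_cycles):
        start = (i + 1) * cycle_min
        if start > total_min:
            break
        # interpolate duration between the first and the longest session
        frac = i / max(n_cycles - 1, 1)
        dur = first_dur + frac * (max_dur - first_dur)
        sessions.append((start, round(dur)))
    return sessions

# An 8-hour night yields five sessions, consistent with the
# "four to six" range quoted above:
print(rem_sessions(8))
```

Under these assumptions an eight-hour sleeper gets her first short REM session at minute 90 and her longest, thirty-minute session starting at minute 450, shortly before waking.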
Psychology made some early advances in the subject of dreams, dating back to the Austrian neurologist who developed psychoanalysis: Sigmund Freud. His theories have oftentimes been taught as the truth, which is always a problem. His successors today believe that dreams serve as mental relief and problem solving. In the past decades, however, biologists have made considerable advances in the field of dreams. They state that the most important function of dream sleep is the growth of the brain, a conclusion drawn from the observation that infants dream four times as much as adults. Neurobiologists have discovered that "neurons (brain cells) sprout new axons and dendrites (nerve fibers) during dream sleep. This brain growth gives us a stronger network of brain circuits which allow us to have greater intellect...Although many brain chemicals are involved in sleep and dreaming, two very important ones are the neurotransmitter serotonin and a brain hormone called melatonin. Both are produced by the pineal gland of the brain" (1). Melatonin is meant to calm the brain and induce sleep; serotonin, on the other hand, triggers the brain to dream.
Since I wrote about the effect of alcohol on the fetus in my last paper, I thought it might be interesting to consider the effect it has on sleep and dreams. The neurotransmitter serotonin, as I hope I have made quite clear, is crucial to the dreaming process. Alcohol causes the level of serotonin in the brain to drop considerably, which results in what appears to be dreamless sleep: sleep without REM activity. On the other hand, when alcoholics try to withdraw, many experience delirium tremens (DTs) (2). These nights are characterized by shaking, sweating, and hallucinations. Many biologists believe that the mind takes the opportunity of the absence of alcohol and overproduces serotonin, which results in the hallucinations.
It is important to understand that not sleeping can be harmful on at least two levels and can lead to hallucinations while one is awake. Generally one's body will compensate for a lack of dream sleep one night by dreaming more the following night until the normal quota is reached. Unless, that is, you are an alcoholic who does not sleep, in which case you will quite literally "lose your mind" (2).
As with any field of science, there is a fair amount of controversy surrounding dreams, some of which has been presented already. Furthermore, as with any field of scientific research, it is safe to assume that the controversy will never end. There are many theories, but one in particular I would like to concentrate on. David Maurice, Ph.D., is a professor of ocular physiology in the Department of Ophthalmology at Columbia-Presbyterian Medical Center. He is one of the many who question the wide-spread belief that REM sleep exists mainly to process memories of the previous day. Maurice hypothesizes that "while asleep humans experience REM to supply much-needed oxygen to the cornea of the eye...[He] suggests that the aqueous humor—the clear watery liquid in the anterior chamber just behind the cornea—needs to be 'stirred' to bring oxygen to the cornea." In addition he states that "[w]ithout REM our corneas would starve and suffocate while we are asleep with our eyes closed" (5).
The reason for Maurice's engagement in this field of study began some years back when he started observing animals. He says: "I wondered why animals born with sealed eyelids needed REM or why fetuses in the womb experience a great amount of REM" (5).
David Maurice then developed his hypothesis after learning about a young man whose eyes had been immobilized in an accident, and whose corneas had become laced with blood vessels to supply them with oxygen. We know that when the eyes are shut, oxygen can reach the cornea from the iris solely by way of the stagnant aqueous humor. Maurice did the calculations and found that the oxygen supplied under these conditions would be insufficient. This ultimately formed his hypothesis that REM must somehow bring oxygen to the cornea.
As I indicated in the beginning, the functions of dreams are still unclear and heavily under debate. Dreaming may play a role in restoring the brain's ability to cope with tasks such as focused attention, memory, and learning. Dreaming may "just" be a window to hidden feelings. Almost everything is possible, and we may never know. We do know that "You have, within yourself, an ability to make yourself experiences no one else has ever had. And hence to see things no one has ever seen and learn things no one has ever learned" (6). Maybe it is as important to individualize dreams as it is to analyze the general population's dreams. We might just be able to learn about ourselves and, in the process, learn about others as well, which is, after all, the beauty of science. Whether the thought is soothing or uncomfortable, as you continue to sleep and dream you must know that the controversy over the biology of dreams is one that won't ever go to sleep.

References

1)Geocities Biology Page, a rich resource from Geocities
2)21st Century Biology, a rich resource from Lauren Brownlee
3)General Psychology I- Introductory Psychology, a rich resource from the University of Connecticut
4) Ask A Scientist, a rich resource from United States Department of Energy
5)Columbia University Biology Page, a rich resource from Maury M. Breecher
6)Serendip page, a rich resource from Bryn Mawr College
7)Sleep Stages from upenn, a rich resource from the University of Pennsylvania


How Does Homeopathy Work?
Name: Chelsea W.
Date: 2002-11-11 18:59:55
Link to this Comment: 3677



Biology 103
2002 Second Paper
On Serendip

The litany of side-effects warned against for even the most mundane of mainstream medications often seems enough to drive one to explore alternatives. Homeopathy is one such alternative. First systemized in the late 1700's by Samuel Hahnemann, M.D.(1), homeopathy is a form of medicine based on stimulating the body's own immune responses, while minimizing the risk of exacting any harm in the process (2).

Although homeopathy is now often regarded as something of a "fringe" form of medicine in the United States, this was not always so (3). In fact, in 1900, homeopathic physicians made up 15% of all physicians (3). However, homeopathy has since been subjected to attempts by the American Medical Association and other practitioners of conventional medicine to marginalize its practice - due largely to concerns over the criticism of mainstream pharmaceuticals inherent in homeopathy and the economic threat which homeopathy was seen to pose to conventional medicine (4). Its popularity is nonetheless growing domestically (3). And, abroad, homeopathic medicines are quite popular and widely accepted: 39% of French physicians have prescribed them (and 40% of the French public has used them), 42% of British physicians refer patients to homeopaths, and 45% of Dutch physicians consider them to be effective (3).

Specifics on the Workings of Homeopathy
The Law of Similars
Also known as "like cures like," the Law of Similars is a central tenet of homeopathic medicine (5). This law refers essentially to the premise that a substance which in overdose will cause certain symptoms can, in small and appropriate doses, stimulate the body's immune system to help cure the disease marked by those symptoms (2). It is also worth noting that this particular aspect of homeopathic theory (though not some other aspects of it) is made use of in conventional medicine as well (2). Vaccinations, allergy medications containing small doses of allergens, and radiation as cancer treatment (given that radiation in large doses can cause cancer) are all examples of such instances (2). It remains somewhat unclear why this type of "like cures like" is effective (although there is much evidence suggesting that it is) (6). One study found specifically that a homeopathic remedy known as Silicea stimulated parts of the immune system known as macrophages (which fulfill the role of swallowing up foreign substances) (6).

Symptoms as Manifestations of the Body's Attempts to Heal Itself
Another important idea in homeopathy is the recognition that, biologically speaking, the symptoms of a disease are not the disease itself but rather manifestations of the body's attempt to heal itself (2). Thus, suppressing symptoms may not be the most effective means of treating an illness (the recent realization that suppressing fever may not always be the best course of action is one example of this premise) (2). Homeopathy, instead, attempts to work with the body's natural immune system rather than to suppress it (6).

Individualization
Homeopathic medicine also places a high value on the individualization of treatment (2). Although some ailments may have similar general symptoms, the specifics of these conditions often differ and may result from different causes (2). So, it is important to recognize this and treat illnesses in as individualized a fashion as possible. In fact, homeopaths may often inquire into personality traits or seemingly less related common complaints of patients in order to get an overall sense of the workings of the patient's body, all of which is, after all, inter-connected and inter-dependent (2).

Homeopathic Medicine and Its Relationship to Conventional Medicine
A number of clinical studies have provided evidence to support claims of the effectiveness of homeopathy (several such studies are discussed in an excerpt from Consumer's Guide to Homeopathy; see note (7)). Homeopathic remedies also offer the substantial benefit over conventional medicine of being extremely safe (8). However, although most homeopaths object to certain facets of conventional medicine (such as its tendency to work simply to eradicate disease symptoms rather than to address the underlying disease), most will acknowledge that there are instances in which other methods should be used (8). For example, some ailments may be best treated through changes in lifestyle choices; others may require surgery (something which homeopathic remedies may help to prevent in some cases, but not all) (8).

Homeopathy Today
With its rich history, homeopathy remains immensely relevant and useful today, even providing medicines effective in treating post-traumatic stress disorder in this time of terrorist threats (9). And, over the past ten years, there has been a 25-50% annual increase in the domestic sale of homeopathic medicine (3).

The relationship between homeopathic medicine and the conventional medical community also raises interesting questions about science. If science involves endeavoring to be "less wrong," so to speak, might there be an added responsibility, with specific respect to the field of medicine, to minimize the risk of adverse effects when one is wrong - to, as homeopathy does, attempt first to do no harm?

References

1)Homeopathy Timeline, from the Whole Health Now website

2)A Modern Understanding of Homeopathic Medicine, from the Homeopathic Educational Services website

3)Ten Most Frequently Asked Questions on Homeopathic Medicine, an article by Dana Ullman, M.P.H., from the Homeopathic Educational Services website

4)A Condensed History of Homeopathy, from the Homeopathic Educational Services website

5)What is Homeopathy?, from the National Center for Homeopathy website

6)Homeopathic Medicine and the Immune System, from the Homeopathic Educational Services website

7)Scientific Evidence for Homeopathic Medicine, an excerpt from Consumer's Guide to Homeopathy, on the GaryNull.com website

8)The Limitations and Risks of Homeopathic Medicine, from the Homeopathic Educational Services website

9)Homeopathy Responding to Crisis, from the website of the National Center for Homeopathy


Iraq's Biological Weapons
Name: Kate Amlin
Date: 2002-11-11 19:42:25
Link to this Comment: 3679



Biology 103
2002 Second Paper
On Serendip

As the government's desire to attack Iraq becomes more of a frightening reality each day, many questions remain unanswered. Is Iraq really a "threat"? Does Iraq really have weapons of mass destruction (WMDs)? More specifically, does Iraq have biological weapons (BWs)? Should the United States be worried about an attack by Iraqi biological weapons? The answers in the status quo are rather murky, but there is concrete evidence that Iraq used to have a WMD arsenal. After Iraq invaded Kuwait and the Gulf War ensued, the UN Security Council passed Resolution 687, ensuring Iraq's full co-operation with UN weapons inspectors to guarantee that all of Iraq's WMDs would be destroyed (1). This resolution never included military enforcement (2); instead it was contingent on economic sanctions. From 1991-1998, UN inspectors scoured the country for WMDs to destroy. Although UNSCOM (the UN weapons inspection team) maintains that it demolished almost all of Iraq's WMDs, even the inspectors have admitted that Iraq was covertly hiding a large supply of BWs (1). The UN found that Iraq had horrendously large amounts of ricin, a biological weapon derived from castor beans that is deadly and has no antidote (3). Iraq was also found to be in possession of a multitude of ballistic missiles fitted with carrying devices to disperse chemical and biological weapons (CBWs) (4). U.S. officials thwarted the Iraqis in their attempt to smuggle 34 U.S. military helicopters transformed to include weapons systems that would deliver CBWs (3). Even the Iraqis themselves admitted that their country was fostering an active biological weapons program after Saddam Hussein's son-in-law, Hussein Kamil, defected to Jordan in 1995 (3). Kamil had been in charge of Iraq's WMD program and acknowledged that Iraq had been hiding many of its biological agents from UNSCOM, including a whopping 2,265 gallons of anthrax (3). The UN weapons inspectors were kicked out of Iraq in 1998 (5).
At that time, the Western world believed that UNSCOM had successfully destroyed the vast majority of Iraq's supply of weapons of mass destruction. But many things could have happened in the course of the following four years.

Many political scientists assert that Saddam does not have WMDs, and in particular biological weapons, and that George W. Bush is simply looking for a reason to invade the country. Stephen Zunes, chair of the Peace and Justice program at the University of San Francisco, eloquently illustrated this point in an article for the think tank Foreign Policy in Focus: "Despite speculation-particularly by those who seek an excuse to invade Iraq-of possible ongoing Iraqi efforts to procure weapons of mass destruction, no one has been able to put forward clear evidence that the Iraqis are actually doing so, though they have certainly done so in the past. The dilemma the international community has faced since inspectors withdrew from Iraq in late 1998 is that no one knows what, if anything, the Iraqis are currently doing" (1). The strength of the Iraqi military has been severely diminished since the early 1990s due to casualties during the Gulf War and the effects of years of economic sanctions (14). In the status quo, the military is probably too weak to produce any WMDs. Even if Iraq has retained stockpiles of BWs, they would most likely be useless. If the Iraqis tried to disperse biological weapons with the SCUD missile technology that they had during the Gulf War, 90 percent of the biological agents would be destroyed when the bomb detonated (4), and with such a feeble military new technology would be difficult to manufacture. Iraq would have an extremely difficult time dispersing anything from its residual BW arsenal, as Stephen Zunes explains: "[T]here are serious questions as to whether the alleged biological agents could be dispersed successfully in a manner that could harm troops or a civilian population, given the rather complicated technology required. For example, a vial of biological weapons on the tip of a missile would almost certainly either be destroyed on impact or dispersed harmlessly. To become lethal, highly concentrated amounts of anthrax spores must be inhaled and then left untreated by antibiotics until the infection is too far advanced. Similarly, the prevailing winds would have to be calculated, no rain could fall, the spray nozzles could not clog, the population would need to be unvaccinated, and everyone would need to stay around the area targeted for attack" (1). To be effective, biological weapons must be scattered under perfect conditions, conditions that would be extremely hard for the Iraqis to replicate (4). Western nations also fear that Iraq will give BWs to terrorists, although this scenario is highly unlikely even if Iraq does have a stockpile of biological weapons. Iraq has no incentive to give WMDs to terrorists, since the international community would severely punish it for such an action, and it probably has not done so for over ten years (6). Although some think otherwise, Iraq has never claimed to target the United States when it does sponsor acts of terrorism (7). The allegations that Iraq is harboring members of Al Qaeda are false, since all such members have been found in Kurdish areas, spheres that are beyond Iraqi control (7). One of the most convincing arguments that Iraq does not have BW capabilities is that Iraq has recently agreed to give new UN weapons inspectors unobstructed access to all weapons facilities and some presidential palaces in order to prove that it does not have weapons of mass destruction (8). Since no Western powers have been allowed in Iraq to collect evidence in the last four years, there is simply no credible proof (2) that Iraq either has biological weapons or intends to use them for nefarious purposes.

Conversely, empirical evidence leads some to believe that Iraq has maintained a supply of weapons of mass destruction (4), especially since "[t]he inspectors withdrew entirely from Iraq in 1998, and Hussein has refused to let them back in, giving his regime four years to find a better hiding place for his weapons" (5). Since UNSCOM never accounted for 100 percent of Iraq's biological weapons and Iraq covertly added to its stockpile while the inspectors were in the country, it is intuitive to assume that Iraq has added to its arsenal over the last few years (5). Since 1998, Iraq has purchased dual-use substances under the guise of purported civilian purposes, substances which could be used to produce biological weapons (9). BWs are easy to hide and do not take much space to make (1). Additionally, "one of the most frightening things about BWs production is the mobility of operations" (1). Therefore, Saddam could have easily hidden and increased a biological weapons arsenal over the last four years. Also, Saddam could use profits obtained from smuggling oil during the last few years to increase his production of WMDs, in order to compensate for his weakened military program (10). The possibility that Iraq has retained BWs is particularly terrifying due to the horrendous amount of destruction that biological weapons cause. Laurie Mylroie, research associate of the Foreign Policy Research Institute, Philadelphia, PA, gives a particularly grim assessment: "Since Kamil's defection, Iraq has acknowledged producing 2,265 gallons of anthrax. Anthrax is extraordinarily lethal. Inhalation of just one-ninth of a millionth of a gram is fatal in most instances. Iraq's stockpile could kill 'billions' of people if properly disseminated and dispersed.[5] Anthrax, unlike some other biological agents, has an extremely long shelf life. Although Baghdad claims to have destroyed its anthrax stockpile, it can produce no documents to support that assertion, while UNSCOM interviews of Iraqi personnel allegedly involved in the purported destruction produced contradictory accounts. Thus, no reasonable person credits the claim" (3). President Bush asserts that Iraq has developed weapons systems capable of carrying and dispersing CBWs (7). Although his claim is unsubstantiated, multiple foreign policy think tanks have found evidence that supports Bush's fear. Iraq definitely has a fleet of SCUD ballistic missiles that could be fitted with chemical and biological agents. These SCUDs can carry a 500-kg load of chemical or biological agents that can be dispersed over an area of 650 km (4). Even more frightening is the possibility that Iraq has turned some or all of its 78 Czech L-29 trainer airplanes into unmanned weapons carriers. These planes, which are controlled remotely, could be used to deliver extremely large quantities of biological agents over extremely long distances (4). The UN found that Iraq had indeed re-wired the L-29s for this purpose, Great Britain discovered a large number of L-29s that had been turned into carrier systems for BWs during Operation Desert Fox, and the CIA reported that Iraq tested the L-29s for their effectiveness during the year 2000 (4). President Bush also worries that Iraq is actively selling weapons of mass destruction to terrorists (9). Iraq has sponsored both the Palestine Liberation Front and Mujahedin-E Khalq, two terrorist groups that are anti-Israel (9). Bush also worries that Iraq is in cahoots with Al Qaeda, and that the two will combine to increase the potency of an attack against the United States (7). All in all, Iraq probably did retain some of its biological weapons capability. According to President George W. Bush: "UN inspectors believe Iraq has produced two to four times the amount of biological agents it declared and has failed to account for more than three metric tons of material that could be used to produce biological weapons. Right now, Iraq is expanding and improving facilities that were used for the production of biological weapons" (11).

Although neither conclusion can be fully substantiated, the empirical evidence and the feasibility of a BW program indicate that Iraq probably does have a biological weapons program. However, invading Iraq is definitely not the most desirable way to destroy Saddam's WMD capabilities. No country wants to blindly trust Saddam's claim that Iraq does not have any weapons of mass destruction. But weapons inspections and the continuation of economic sanctions would be a feasible way to control Iraq's WMD program (7). Specifically, Saddam would have great difficulty hiding weapons if he does indeed give weapons inspectors unhindered access to his country (12), as he has promised over the last month.

Without absolute, concrete evidence to document Iraq's WMD possession (1), attacking Iraq cannot be justified. Even if Iraq does have weapons, they are almost impossible to disperse (4) and, most importantly, Saddam will not use them. The Gulf War proves that Saddam is rational and will not use weapons of mass destruction against the West (13). Saddam did not use WMDs during the Gulf War because he was deterred by the threat of U.S. nuclear weapons (6). There is no reason to assume that Saddam would act differently a decade later. Saddam knows that if he uses WMDs it will be the end of his regime, and ultimately his life, because the Western world (particularly the United States) will annihilate him (1). Therefore, although Iraq probably does have BWs, and weapons of mass destruction in general, war with Iraq cannot be justified at this time since Iraq would not, and probably could not, use any weapons of mass destruction.


References


WWW SOURCES


1) "The Case Against a War with Iraq", Stephen Zunes. Foreign Policy in Focus, a U.S. foreign policy think tank. October 2002

2) "Bush's United Nations Speech Unconvincing", Stephen Zunes. Foreign Policy in Focus, a U.S. foreign policy think tank. September 13, 2002

3) "Iraq's Weapons of Mass Destruction and the 1997 Gulf Crisis", Laura Mylroie. Meria, The Middle East Review of International Affairs. V.1, #4, December 1997

4) "Defending Against Iraqi Missiles", Staff Writer. A Strategic Comment from the International Institute for Strategic Studies. V.8, #8, October 2002

5) "Iraq's Had Time to Really Hide its Weapons Sites", John Parachini. RAND, a U.S. think tank; originally appeared in Newsday, September 19, 2002

6) "President Bush's Case For Attack On Iraq is Weak", Ivan Eland. The CATO Institute, a libertarian think tank. October 7, 2002

7) "President Bush Fails to Make His Case", Stephen Zunes. Foreign Policy in Focus, a U.S. foreign policy think tank. October 8, 2002

8) "Iraq: 'No Blocks to Inspections'", Staff Writer. CNN, October 12, 2002

9) "Axis of Evil: Threat Or Chimera?", Charles Pena. The CATO Institute, a libertarian think tank. Summer 2002

10) "Iraq: The Case for Invasion", Interview of Kenneth Pollack. The Washington Post. October 22, 2002

11) "President Bush's Address to the United Nations", George W. Bush. CNN, September 12, 2002

12) "Get Ready for a Nasty War in Iraq", Daniel Byman. RAND, originally published in The International Herald Tribune, March 11, 2002.

13) "Why Attack Iraq?", Ivan Eland. The CATO Institute, a libertarian think tank. September 10, 2002

14) "Top Ten Reasons Why Not to 'Do' Iraq", Ivan Eland. The CATO Institute, a libertarian think tank. August 19, 2002


Body Odor-An Unpleasant Encounter
Name: Melissa A.
Date: 2002-11-11 21:10:09
Link to this Comment: 3681



Biology 103
2002 Second Paper
On Serendip

"What is that smell?" you ask.
"It is absolutely disgusting," you reply to yourself as you continue walking along. Then you realize that this smell just keeps on following you, from classroom to lunchroom to dorm room, even in the courtyard. Then it hits you that garbage has not been following you everywhere; you are the cause of that disgusting smell: you have body odor. What is this phenomenon of body odor? According to John Riddle, body odor is the term used for any unpleasant smell associated with the body (1).

Most of us are concerned about how we look and, especially, how we smell, so body odor, which can be a potentially fatal blow to a person's social life, is not a welcome addition to one's ingredients for success. It is this human fear of being excluded that led to the invention of the term body odor. In the 1910s and 1920s, advertisers highlighted people's discontent with the things around them and with themselves in order to encourage them to buy their products. A group of advertising men used the term B.O. to mean body odor in a women's deodorant advertisement for their product, Odo-Ro-No. It played upon women's sentiments that beauty was important to achieving their main goal in life: a husband.

Listerine, which today has mundane advertisements with bottles of Listerine and a voice talking about the results of laboratory research that show Listerine's ability to combat gingivitis, cavities and so on, also used this approach. Listerine had an advertisement that showed "pathetic Edna," who was approaching her "tragic" thirtieth birthday and was always "a bridesmaid and never a bride," apparently because she suffered from halitosis, or bad breath (of course, it could not have been personality problems that were hindering Edna's romantic progress!). As a result of this ad, Listerine's sales went from $100,000 a year in 1921 to more than $4 million in 1927. Body odor is a major concern for most people, and even those who do not suffer from it are concerned about preventing body odor because of its negative social consequences (2).

People are very sensitive about how they smell. Humans do not appreciate the power of a bad smell as much as skunks appreciate theirs. The striped skunk, known in the scientific community as Mephitis mephitis, accurately shoots a narrow stream of yellow fluid, butyl mercaptan, up to 10 feet at a threat. If the fluid hits the eyes of the threat, it may cause temporary blindness. Even if the skunk misses, which is rare, the musk will cause nausea, gagging and general discomfort. (Perhaps next time you are at a party and you receive some unwelcome attention, you should raise those armpits or let out a breath of air at your unsuspecting predator.) Most people, however, try to do quite the opposite and purchase expensive perfumes to mask odors and create a sensual smell that will attract the opposite sex. They spend a lot of money buying perfumes like Object of Desire by Bvlgari, not realizing that the fluid skunks emit is commercially used as a base for perfumes because of its clinging nature (3). This makes me wonder whether it is only by having a bad smell that we can get a good smell.

So what exactly causes us to have that bad smell? Most of the causes are related to lifestyle choices. If you use drugs, toxins or herbs, such as alcohol and cigarettes, your body will smell. Also, if you eat certain foods such as garlic and raw onions, you will have unpleasant breath. You can also develop body odor simply by sweating excessively or practicing poor hygiene. A couple of other causes of body odor include tooth or oral conditions such as periodontal disease and gingivitis. There can also be inborn errors of metabolism, for example aminoaciduria. Most of these problems are the result of a decision that the sufferer has made about how he chooses to lead his life. If he chooses to be a chain smoker, he will smell like a cigarette. If he chooses not to shower or wash his clothes, then he will have a pungent smell. If he chooses not to practice proper dental hygiene, then he will have halitosis. However, if he sweats excessively because he has a fear of social situations, then his cause of body odor cannot be as easily rectified. In addition, people who suffer from aminoaciduria also cannot easily change their lifestyle to rectify their body odor, because aminoaciduria results from an enzyme deficiency. We should not judge or marginalize people because of their smell, because its cause may go beyond poor hygiene (1).

Almost everyone has received a compliment about the perfume that he or she is wearing. This shows that people react to scents and that a pleasant scent encourages a favorable perception of that person. New research has shown that some individuals are highly sensitive to smelling a component of body odor called androstenone. Furthermore, if a person can easily smell androstenone, then he will decide whether or not he likes someone based on the smell. What is androstenone? It is a human pheromone, a chemical attractant found in body secretions like perspiration. Men release large quantities of androstenone while women emit small amounts, so men are more likely to be judged by their smell than women. According to the study, fifty percent of people cannot smell androstenone at all, and half of the rest can only catch a whiff, a scent they enjoy. Those who can smell androstenone fully, on the other hand, do not like the smell and compare it to urine or perspiration. The study went on to show that there was a correlation between the ability to smell androstenone and the androstenone-smeller's judgment of the person. In other words, if someone can smell androstenone on someone else and finds the smell unpleasant, then he will dislike the person (4).

Clearly, there is a lot at risk if one has body odor; since man is a gregarious animal, body odor can make him unable to maintain or even start relationships. We have realized this, so there are a number of ways to eliminate body odor. One of the most rudimentary is to wash with soap and water, especially in the groin area and armpits, which are more likely to smell. The best soap to use is a deodorant soap, which impedes the return of bacteria. Showering, as well as washing your clothing regularly, will help to prevent body odor. In addition, we should wear natural fabrics like cotton, which absorb perspiration better than synthetic materials like polyester. Athletic apparel makers like Nike and Adidas are adopting this idea in their clothing design by creating materials that cause sweat to evaporate faster.

You should also use commercial deodorants, which mask underarm odor, or antiperspirants, which reduce the amount of perspiration. If these fail, then you should turn to France, the land of fine perfumes and "Le Crystal Nature," a chunk of mineral salts that helps to keep bacteria under control without irritating the skin. A more serious approach to fighting body odor is Drionic, an electronic device that plugs up overactive sweat ducts and keeps them plugged for up to six weeks (5). There are many ways to avoid having body odor. The easiest way to find out what is available to you is to take a stroll in your local Eckerd, CVS or Rite Aid.

Body odor is a major concern for human beings, and one that affects men and women to almost the same extent. We are concerned about our smell because people judge how we take care of ourselves by how we smell, and use this information to decide whether we are worthy of friendship. We need to pay attention to how we smell not only because of social interactions but because odors from our body may alert us to a medical problem like a urinary tract infection or periodontal disease. It must be noted, however, that much of the hype about body odor comes from marketing consultants who need to sell their companies' products and play on our insecurity. Try to avoid being caught up in this web of commercialism while at the same time taking good care of your body.

References

1) Body Odor. It gives basic information about body odor.

2) It provides information on the less renowned tidbits of history.

3) It provides information about the striped skunk.

4) It provides information that otherwise would not get a lot of attention.

5) It provides online information about problems that affect teenagers.


Think before you flush or brush
Name: Sarah Tan
Date: 2002-11-11 21:40:11
Link to this Comment: 3683



Biology 103
2002 Second Paper
On Serendip

One of my friends from high school has made a habit of putting toilet seat lids down before she flushes. She started doing this about four years ago when she heard that when toilets are flushed, water droplets are expelled from the toilet bowl into the air, and when they land, other areas of the bathroom get "contaminated" by toilet water. That always amused me, but when I went over to her house, I humored her and followed this personal rule of hers. However, I didn't know—and chances are, she didn't know—just how justified she was in worrying about what is known as the "aerosol effect" in toilets. My discovery that there is actually a technical term for this phenomenon was the first indication that there might be something scientifically legitimate to it. It seems to have first been brought to light by University of Arizona environmental microbiologist Charles Gerba when he published a scientific article in 1975 describing bacterial and viral aerosols due to toilet flushing (2). He conducted tests by placing pieces of gauze in different locations around the bathroom and measuring the bacterial and viral levels on them after a toilet flush, and his results are more than just a little disturbing.

First is the confirmation of the existence of the aerosol effect, even though it is largely unrecognized. "Droplets are going all over the place—it's like the Fourth of July," said Gerba. "One way to see this is to put a dye in the toilet, flush it, and then hold a piece of paper over it" (8). Indeed, Gerba's studies have shown that the water droplets in an invisible cloud travel six to eight feet out and up, so even the areas of the bathroom not directly adjacent to the toilet are contaminated. Walls are obviously affected, and in public or communal bathrooms, the partitions between stalls are definitely coated in the spray mist from the toilet (1). Also, toilet paper will be cleanest when it is enclosed in a plastic or metal casing; after all, it is subject to the same droplets splattering on it, and its proximity to the toilet bowl makes the potential for contamination obvious. The ceiling is also contaminated and is in fact a potential problem site because it is often overlooked in the cleaning process. Bacteria cling to ceilings and thrive in the humid environment there; if the situation is left untreated for months or years (as is often the case), odors remain in restrooms that seem otherwise to have been thoroughly cleaned (1). The bacterial mist has also been shown to stay in the air for at least two hours after each flush, thus maximizing its chance to float around and spread (2). "The greatest aerosol dispersal occurs not during the initial moments of the flush, but rather once most of the water has already left the bowl," according to Philip Tierno, MD, director of clinical microbiology and diagnostic immunology at New York University Medical Center and Mt. Sinai Medical Center. He therefore advises leaving immediately after flushing so that the microscopic, airborne mist does not land on you (4). Worse still is the possibility of inhaling these airborne particles into the lungs, from which one could easily contract a cough or cold (6).

Obviously, the idea of toilet water being unknowingly distributed around the bathroom is less than appealing, but a study of this sort calls for looking in detail at precisely what microscopic organisms we're dealing with here, even if we don't really want to know. Put rather graphically, it can be summed up as the F3 force: Fecal Fountain Factor, compounded by the favorable temperatures for bacterial propagation in room temperature toilet water (3). From a more scientific viewpoint, streptococcus, staphylococcus, E. coli and shigella bacteria, hepatitis A virus and the common cold virus are all common inhabitants of public bathrooms, but just because they're all over the place doesn't mean we necessarily get sick. After all, humans carry disease-causing organisms on our bodies all the time, but with healthy immune systems, the quantities in which these organisms exist are not enough to affect us, particularly with a good hand-washing after every restroom visit (4). This raises the question, however, of how many people actually wash their hands after going to the toilet, and more importantly, how many wash their hands effectively. Simply rinsing one's hands under running water for a few seconds without soap, as some people do, is not effective at all. The way to ensure maximum standards of hygiene is to lather your palms, the back of your hands, in between fingers, and under fingernails for 20-30 seconds with soap and hot water; the friction will help wash away the bathroom bacteria (6).

Toilet seats have actually been determined to be the least infected place in the bathroom because the environment is too dry to support a large bacterial population (7). In accordance with that theory, the underside of the seat has a higher than average microbial population. The place with the highest concentration of microbial colonies in restrooms is, surprisingly, the sink, due in part to accumulations of water where these organisms breed freely after completing their aerial journey. While toilets are obviously not sterile environments, they tend not to be as bad as people think because they receive more attention and are cleaned more often. "If an alien came from space and studied the bacterial counts, he probably would conclude he should wash his hands in your toilet and crap in your sink," Gerba said (2). The alien would almost certainly not put your toothbrush in his mouth because, with its traditional, uncovered spot in the bathroom, it is one of the hotspots for fecal bacteria and germs spewed into the air by the aerosol effect (5). Understandably, the toothbrush with toilet water droplets on it is one of the most retold horror stories to emerge from Gerba's report.

There are also greater implications from the study of the aerosol effect than the simple grossness factor. Most obviously, bathrooms should be cleaned even more meticulously than before, with emphasis not just on and around the toilet but on all areas of the bathroom, because all areas are affected by the spray. Using the right cleaners is important because all-purpose cleaning solutions are not necessarily antibacterial, whereas most cleaners made specifically for restrooms are referred to as disinfectants or germicidal cleaners (1). Given that the sink area teems with bacteria, one must now be more careful about washing hands properly after walking into the bathroom for any non-toilet-related purposes, like washing your face or brushing your teeth. Using a hair dryer can potentially be problematic in regard to bacteria counts because the effect would be largely the same as hot-air hand dryers, which actually increase the bacteria on hands by 162 percent, as opposed to paper towels, which decrease them by 29 percent (7). If you're still not convinced that bacteria exist in any significant quantities on your hands, consider that the kitchen sink actually harbors the most fecal matter in the average home, carried there by unwashed hands after using the bathroom (5). A tablespoon of bleach in a cup of warm water on the offending sink will fix the situation... for the day.

To limit the scope of the aerosol effect, the simplest method is to close the lid on the toilet every time before flushing (5). This would also provide the peace of mind that while you are washing your hands for 30 seconds, microscopic, bacteria-laden water droplets will not be descending upon your person. Unfortunately, most public toilets, including the ones in Bryn Mawr's dorms, don't even have lids for that option. Besides, given the large number of people who have used the toilet before you, it probably wouldn't make much difference. After washing your hands, use a paper towel to turn off the faucet and to open the door to leave, in order to avoid being recontaminated (4). And today, get a new toothbrush and always, always keep it in the medicine cabinet or some other enclosed place after use (2).

References

(1) Janitorial Resource Center - Dr Klean.

(2) A Straight Dope Classic - Cecil's been asked.

(3) Car Talk's mailbag - People are talking back.

(4) WebMD - What can you catch from restrooms?

(5) Harvard Gazette book review - Overkill, by Kimberly Thompson

(6) When in doubt, Ask Men - What can you catch from (men's) restrooms?

(7) Sean Blair: Writer. Researcher. Editor. - Killer offices.

(8) The Atlantic Monthly - Something in the water.


Chocolate: Aphrodisiac or Euphemism?
Name: Michele Do
Date: 2002-11-11 22:11:07
Link to this Comment: 3685



Biology 103
2002 Second Paper
On Serendip


"In most parts of the world chocolate is associated with romance, and not without good reason. It was viewed as an aphrodisiac by the Aztecs, who thought it invigorated men and made women less inhibited. So when it was first introduced to Europe, it was only natural that chocolate quickly became the ideal gift for a woman to receive from an admirer or a loved one, and of course, vice versa" (3).

What does chocolate have in common with lobster, crab legs, pine nuts, walnuts, alcohol, and Viagra? It has a reputation as an aphrodisiac. Throughout history, there has been a pursuit of sexual success and fertility by various means, including foods and pharmaceuticals. The American Heritage College Dictionary defines aphrodisiac as "arousing or intensifying sexual desire...Something such as a drug or food, having such an effect" (5). According to the Food and Drug Administration, "an aphrodisiac is a food, drink, drug, scent, or device that, promoters claim, can arouse or increase sexual desire, or libido" (2). Myths and folklore have existed since the beginning of time asserting that specific goods, or aphrodisiacs, increase sexual capacity and stimulate desire. Aphrodisiacs are named after Aphrodite, the Greek goddess of sexual love and beauty; because she was said to be born from the sea, many types of seafood have acquired this reputation. Similarly, chocolate's reputation as an aphrodisiac originated in both Mayan and Aztec cultures over 1500 years ago. Is chocolate really an aphrodisiac? How does it work? Does it produce different effects for men and women?

Made from the cocoa bean found in pods growing from the trunk and lower branches of the cacao tree, chocolate was first recorded in the South American rainforests around the Amazon and Essequibo rivers. The Mayan civilization worshipped the cacao tree, believing it was divine in origin; thus its Latin name, Theobroma cacao, means "food of the gods", and "cacao is a Mayan word meaning 'God Food.' Cacao was later corrupted into the more familiar 'Cocoa' by Europeans" (3). Since emperors were considered divine, the Aztec emperor Montezuma drank fifty golden goblets of chocolate a day in order to enhance his sexual ability. Consequently, when the Spanish Conquistadors discovered chocolate and introduced it to Europe and the rest of the world, it continued to be associated with love (3).

Chocolate is a very complex food, and scientists have investigated it in order to unlock its secrets. When consumed, it has been observed to affect human behavior (3). Chocolate contains two particular substances, phenylethylamine and serotonin, both of which serve as mood lifters. "Both occur naturally in the human brain and are released by the brain into the nervous system when we are happy and also when we are experiencing feelings of love, passion and/or (dare I say it?) lust. This causes a rapid mood change, a rise in blood pressure, increasing the heart rate and inducing those feelings of well being, bordering on euphoria usually associated with being in love" (3).

When chocolate is consumed, it releases phenylethylamine and serotonin into the human system, producing the same arousing effects. Since eating chocolate gives an instant energy boost, increasing stamina, it is no wonder its effects have earned it a reputation as an aphrodisiac. Both phenylethylamine and serotonin can be mildly addictive, hence the chocoholic. But women are more susceptible to the effects of phenylethylamine and serotonin than men (3), which helps explain why women tend to be chocoholics more often than men.

Three other chemicals and theories are used to explain why chocolate makes people feel "good." "Researchers at the Neuroscience Institute in San Diego, California believe that 'chocolate contains pharmacologically active substances that have the same effect on the brain as marijuana, and that these chemicals may be responsible for certain drug-induced psychoses associated with chocolate craving'" (4). Although marijuana's active ingredient, the one that allows a person to feel "high," is THC (tetrahydrocannabinol), a different chemical neurotransmitter produced naturally in the brain, called anandamide, has been isolated in chocolate. "Because the amounts of anandamide found in chocolate is so minuscule, eating chocolate will not get a person high, but rather that there are compounds in chocolate that may be associated with the good feeling that chocolate consumption provides" (4).

In the body, anandamide is rapidly broken down into two inactive fragments by the enzyme hydrolase. In chocolate, however, there are other chemicals that may inhibit this natural breakdown, so anandamide may linger, making people feel good longer when they eat chocolate (4).

Although chocolate contains chemicals associated with feelings of happiness, love, passion, lust, endurance, stamina, and mood lifting, scientists continue to debate whether it should be classified as an aphrodisiac. "'The mind is the most potent aphrodisiac there is,' says John Renner, founder of the Consumer Health Information Research Institute (CHIRI). 'It's very difficult to evaluate something someone is taking because if you tell them it's an aphrodisiac, the hope of a certain response might actually lead to an additional sexual reaction'" (2). Despite scientific difficulty in proving chocolate an aphrodisiac, it does contain substances that increase energy, stamina, and feelings of well being. The reality is that chocolate makes you feel good and induces feelings of being in love. Everyone appreciates receiving a gift of chocolate from a loved one because it makes you feel loved. Perhaps the historic euphemism associated with chocolate is what really provokes people to feel it is an aphrodisiac.

References


WWW Sources:

1)Johan's Guide to Aphrodisiacs

2)Looking for a libido lift? The facts about aphrodisiacs, Food and Drug Administration

3)Is chocolate an aphrodisiac?, By Janet Vine

4)Chocolate, aphrodisiac or prevention against heart attacks

Other Sources:

5)The American Heritage College Dictionary. 3rd Edition. USA: Houghton Mifflin Company. 1993.


DNA: Fingerprinting in the Court of Law
Name: Kyla Ellis
Date: 2002-11-11 23:52:00
Link to this Comment: 3687



Biology 103
2002 Second Paper
On Serendip

As we dive head first into the new millennium, we are eager to embrace new "modern" technologies and ideas. One such idea is that of identification through the analysis of DNA, or genetic "fingerprinting." But, is this an idea we should rush into accepting? Should a shard of bone or fingernail be enough evidence to convict a person, to send them to the electric chair? And how does it work, anyway? Forensics and DNA have always been the areas of biology that interested me the most, so I decided that for this paper, I would explore the controversy as well as learn more about the process.

Deoxyribonucleic acid, or DNA, is made up of two strands of genetic material spiraled around each other. Each strand contains a sequence of bases (also called nucleotides). In DNA, there are four possible bases: chemicals called adenine, guanine, cytosine and thymine. The two strands of DNA are connected through chemical bonds at each base. Each base bonds with its complementary base, as follows: adenine will only bond with thymine, and guanine will only bond with cytosine. Tightly coiled DNA forms thin, thread-like structures called chromosomes, which can be found in the cell nuclei of plants and animals. Chromosomes are normally found in pairs; human beings typically have 23 pairs of chromosomes in every cell. Pieces of chromosomes (or genes) dictate particular traits in human beings (1). There are millions of possible patterns, which gives rise to different physical appearances in humans. Every person's DNA also has repeating patterns, which allows scientists to determine whether two samples of DNA are from the same person.
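The complementary pairing rule described above (adenine with thymine, guanine with cytosine) is simple enough to model in a few lines of Python. This is only an illustrative sketch; the `complement` helper is hypothetical and not drawn from the paper's sources:

```python
# Watson-Crick base pairing: adenine-thymine, guanine-cytosine.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand for a sequence of bases."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATTGC"))  # -> TAACG
```

Taking the complement twice returns the original sequence, mirroring how the two strands of the double helix carry the same information in mirror form.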

To analyze the genetic patterns in one's DNA, scientists must go through extensive, meticulous steps. The first of these steps is called a Southern Blot. This is a brief outline:

The DNA must first be isolated, either by chemically "washing" it or by applying pressure to "squeeze" the DNA from the cell. Next, restriction enzymes cut the DNA into several pieces. The DNA pieces must then be sorted by size using electrophoresis: the DNA is poured into wells of a gel, and an electrical charge is applied to the gel. The positive charge is opposite the wells, and since DNA is slightly negatively charged, the pieces of DNA are attracted toward the positive electric charge. The smaller pieces move more quickly than the larger pieces, and thus travel farther. The DNA is then heated so that it denatures (the bases break apart), rendering single strands. Finally, the gel is baked onto a sheet of nitrocellulose paper to permanently attach the DNA to the sheet. This completes the Southern Blot, which is now ready to be analyzed.

To do this, an X-ray is taken of the Southern Blot after a radioactive probe has been allowed to bond with the denatured DNA on the paper. Only the areas where the radioactive probe binds will show up on the film. This allows researchers to identify, in a particular person's DNA, the occurrence and frequency of the particular genetic pattern contained in the probe.
(For more details visit (8). )

Every strand of DNA has pieces that contain repeated sequences of base pairs, called Variable Number Tandem Repeats (VNTRs), which can contain anywhere from twenty to one hundred base pairs. Our bodies all contain some VNTRs. To determine whether a person has a particular VNTR, a Southern Blot is carried out. The pattern that results from this process is known as a DNA fingerprint. VNTRs come from the genetic information donated by our parents; we can have VNTRs inherited from either our mother or our father, or a combination of the two, but never a VNTR that neither of our parents has. Because these combinations are inherited, each person's DNA fingerprint is unique.
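The logic of comparing two DNA fingerprints can be sketched as a toy model (the locus names and repeat counts below are invented for illustration, and real forensic matching reports a match probability rather than a simple yes/no): each profile records the number of repeats observed at several VNTR loci, and two samples are declared a match only when every locus agrees.

```python
def profiles_match(profile_a: dict, profile_b: dict) -> bool:
    """Two toy DNA 'fingerprints' match when every locus shows the same repeat count."""
    return profile_a == profile_b

# Hypothetical repeat counts at three made-up loci.
crime_scene = {"locus1": 34, "locus2": 21, "locus3": 58}
suspect     = {"locus1": 34, "locus2": 21, "locus3": 58}
bystander   = {"locus1": 34, "locus2": 19, "locus3": 58}

print(profiles_match(crime_scene, suspect))    # -> True
print(profiles_match(crime_scene, bystander))  # -> False
```

Even one differing locus rules out a match, which is why comparing several loci at once makes a coincidental match between unrelated people so unlikely.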

I notice that this is a very involved process, and having performed it myself, I can assure you it is difficult and time-consuming. The possibilities for human error are definitely there. If we were to rule out human error, however, the accuracy of these tests far surpasses all such tests that we have so far. The closest thing we have to this is analyzing fingerprints, which can be smudged or otherwise distorted. Fingerprint experts never give evidence unless they are 100% sure, meaning they had the whole fingerprint and found an exact match. One expert claims that if fingerprinting were introduced today, there would be a terrible time convincing people of its validity (2). However, since it has been going on for so long, it is widely accepted, and therefore more "valid" than DNA identification. People are comfortable with it.

DNA is also easily contaminated. Since the results are derived from microscopic elements, the slightest disturbance can be a factor. It is even possible that some of the expert's own genetic material could be mixed in with the sample, and no one would know. The relative "newness" of DNA fingerprinting is another factor; people don't understand it like they would fingerprints. In a court of law, lawyers can hold up the pictures of the two matching fingerprints, and the evidence is right in front of the jurors' faces. With DNA, the evidence is harder for people with no experience in forensic science to grasp, and they essentially have to take the scientist's word for what they are seeing(3). The last problem is that of DNA sample size and age. The smaller the sample, the more likely it is to have room for error in testing. Age of the specimen also matters, if it is old and small it is less likely that there will be an error free test.

Putting the doubts aside, DNA can be a valuable tool in criminal justice. So far, at least 10 people on death row have been pardoned due to DNA evidence examined after their initial trials(4). There was a case in 1999 of a man by the name of Clyde Charles who was convicted of aggravated rape and sentenced to life imprisonment. He served nineteen years he was finally proclaimed innocent due to DNA tests(5). This is a chilling reality that we have to face: have innocent people been convicted of horrendous crimes and put into jail, or even executed, while the guilty go free?

In my research, I also got the impression that part of the controversy surrounding DNA fingerprinting is the fact that courts of law do not want to admit they are wrong. As I mentioned above, convictions of innocent people and acquittals of guilty people, do not reflect well on our legal system. No one wants to admit making mistakes and therefore being possibly inept at doing their job, especially if their job is determines who is sent to death row. It is a bit of an embarrassment to admit our legal system could have such a huge glitch. One case that I got this impression from was that of Joseph Roger O'Dell, arrested for and convicted of murder, rape, and sodomy of a young woman. From death row, he made repeated pleas for a DNA test, but he was refused each time. After his death the last of the DNA evidence in his case was burned without any further testing(5).

In May of 2001, to date, more than 85 people in the United States had been set free through post-conviction DNA testing, and, as I said above, 10 of them had been on death row. The FBI has been analyzing DNA in rape and rape-homicide cases since 1989. When arrests were made on the basis of other evidence in such cases, biological specimens were sent to the FBI for DNA analysis. In 26 percent of the cases, the primary suspect was excluded by DNA evidence(6). The question is, how many of these would have been found not guilty without DNA evidence?

This country is committed to the idea of justice. If we are sending people that are not guilty to jail, that messes with our entire conception of our legal system. I believe that forensic science is a huge step in the right direction toward justice.

References

1),
2),
3),
4),
5),
6),
7),
8),


DNA Fingerprinting in a Court of Law
Name: Kyla Ellis
Date: 2002-11-12 00:00:34
Link to this Comment: 3688



Biology 103
2002 Second Paper
On Serendip

As we dive head first into the new millennium, we are eager to embrace new "modern" technologies and ideas. One such idea is that of identification through the analysis of DNA, or genetic "fingerprinting." But is this an idea we should rush into accepting? Should a shard of bone or fingernail be enough evidence to convict a person, to send them to the electric chair? And how does it work, anyway? Forensics and DNA have always been the areas of biology that interested me the most, so I decided that for this paper, I would explore the controversy as well as learn more about the process.

Deoxyribonucleic acid, or DNA, is made up of two strands of genetic material spiraled around each other. Each strand contains a sequence of bases (the variable components of its nucleotides). In DNA there are four possible bases: chemicals called adenine, guanine, cytosine, and thymine. The two strands of DNA are connected through chemical bonds at each base, and each base bonds only with its complementary base: adenine with thymine, and guanine with cytosine. Tightly coiled DNA forms thin structures called chromosomes, which can be found in the cell nucleus of plants and animals. Chromosomes are normally found in pairs; human beings typically have 23 pairs of chromosomes in every cell. Pieces of chromosomes (genes) dictate particular traits in human beings (1). There are millions of possible patterns, which gives rise to different physical appearances in humans. Every person's DNA also has repeating patterns, which allows scientists to determine whether two samples of DNA come from the same person.
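The complementary-pairing rule described above means that one strand fully determines the other. A few lines of Python can sketch this; the sequence is made up purely for illustration:

```python
# Complementary base pairing: adenine (A) bonds with thymine (T),
# and guanine (G) with cytosine (C), so each strand determines
# its partner strand.
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the complementary strand (read in the same direction)."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGC"))  # -> TACG
```

Note that taking the complement twice returns the original strand, just as the two strands of a double helix each encode the other.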

To analyze the genetic patterns in one's DNA, scientists must go through extensive, meticulous steps. The first of these is called a Southern blot. This is a brief outline:

The DNA must first be isolated, either by chemically "washing" it or by applying pressure to "squeeze" the DNA from the cell. Next, restriction enzymes cut the DNA into several pieces. The pieces must then be sorted by size using electrophoresis: the DNA is loaded into wells in a gel, and an electrical charge is applied across the gel. The positive electrode sits opposite the wells, and since DNA is negatively charged, the fragments are drawn toward the positive charge. The smaller pieces move more quickly than the larger pieces, and thus travel farther. The DNA is then heated so that it denatures (the paired bases separate), leaving single strands. The DNA is transferred from the gel onto a sheet of nitrocellulose paper and baked to attach it permanently. This completes the Southern blot, which is now ready to be analyzed.

To do this, an X-ray is taken of the Southern Blot after a radioactive probe has been allowed to bond with the denatured DNA on the paper. Only the areas where the radioactive probe binds will show up on the film. This allows researchers to identify, in a particular person's DNA, the occurrence and frequency of the particular genetic pattern contained in the probe.
(For more details, see (8).)
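The size-sorting step in electrophoresis can be illustrated with a toy model. The constants and fragment sizes below are invented for the sketch, though the qualitative behavior (migration distance falling off roughly with the logarithm of fragment size, so smaller fragments end up farther from the wells) matches how gels are read:

```python
import math

# Toy model of gel electrophoresis: migration distance decreases
# roughly linearly with log10(fragment size), so smaller fragments
# travel farther from the wells. The constants are made up.
def migration_distance_cm(size_bp: int) -> float:
    return 12.0 - 2.5 * math.log10(size_bp)

# Hypothetical restriction fragments, in base pairs.
for size in (200, 1500, 5000):
    print(f"{size:>5} bp -> {migration_distance_cm(size):.2f} cm")
```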

Every strand of DNA has pieces that contain repeated sequences of base pairs, called Variable Number Tandem Repeats (VNTRs), which can contain anywhere from twenty to one hundred base pairs. Everyone's DNA contains some VNTRs. To determine whether a person has a particular VNTR, a Southern blot is carried out. The pattern that results from this process is known as a DNA fingerprint. VNTRs come from the genetic information donated by our parents; we can have VNTRs inherited from our mother or our father, or a combination of the two, but never a VNTR that neither of our parents has. Because these combinations are inherited, each person's DNA fingerprint is effectively unique.
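The inheritance rule just stated, that every VNTR allele must come from one parent or the other, can be expressed as a small consistency check. The repeat counts below are hypothetical:

```python
# At a given VNTR locus each person carries two alleles (repeat
# counts), one inherited from each parent. A child's pair is
# consistent only if it can be split between the two parents.
def consistent(child, mother, father):
    a, b = child
    return (a in mother and b in father) or (b in mother and a in father)

print(consistent((8, 12), (8, 10), (12, 14)))  # True: 8 from mother, 12 from father
print(consistent((8, 9), (8, 10), (12, 14)))   # False: 9 matches neither parent
```

Paternity and forensic comparisons apply this kind of check across many loci at once, which is what makes the combined pattern so discriminating.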

I notice that this is a very involved process, and having performed it myself, I can assure you it is difficult and time-consuming. The possibilities for human error are definitely there. If we were to rule out human error, however, the accuracy of these tests far surpasses that of any other identification method we have. The closest thing we have is fingerprint analysis, and fingerprints can be smudged or otherwise distorted. Fingerprint experts never give evidence unless they are 100% sure, meaning they had the whole fingerprint and found an exact match. One expert claims that if fingerprinting were introduced today, there would be a terrible time convincing people of its validity (2). However, since it has been in use for so long, it is widely accepted, and therefore seen as more "valid" than DNA identification. People are comfortable with it.

DNA is also easily contaminated. Since the results are derived from microscopic material, the slightest disturbance can be a factor; it is even possible for some of the examiner's own genetic material to be mixed in with the sample without anyone knowing. The relative newness of DNA fingerprinting is another factor; people don't understand it the way they understand fingerprints. In a court of law, lawyers can hold up pictures of two matching fingerprints, and the evidence is right in front of the jurors' faces. With DNA, the evidence is harder for people with no experience in forensic science to grasp, and they essentially have to take the scientist's word for what they are seeing (3). The last problem is that of sample size and age: the smaller and older the specimen, the less likely an error-free test becomes.

Putting these doubts aside, DNA can be a valuable tool in criminal justice. So far, at least 10 people on death row have been pardoned due to DNA evidence examined after their initial trials (4). In 1999, a man named Clyde Charles, who had been convicted of aggravated rape and sentenced to life imprisonment, was finally proclaimed innocent by DNA tests after serving nineteen years (5). This is a chilling reality we have to face: have innocent people been convicted of horrendous crimes and put into jail, or even executed, while the guilty go free?

In my research, I also got the impression that part of the controversy surrounding DNA fingerprinting is that courts of law do not want to admit they are wrong. As I mentioned above, convictions of innocent people and acquittals of guilty people do not reflect well on our legal system. No one wants to admit to making mistakes and thus being possibly inept at their job, especially if that job determines who is sent to death row. It is an embarrassment to admit our legal system could have such a huge glitch. One case that gave me this impression was that of Joseph Roger O'Dell, arrested for and convicted of the murder, rape, and sodomy of a young woman. From death row, he made repeated pleas for a DNA test, but he was refused each time. After his death, the last of the DNA evidence in his case was burned without any further testing (5).

As of May 2001, more than 85 people in the United States had been set free through post-conviction DNA testing, and, as I said above, 10 of them had been on death row. The FBI has been analyzing DNA in rape and rape-homicide cases since 1989. When arrests were made on the basis of other evidence in such cases, biological specimens were sent to the FBI for DNA analysis. In 26 percent of the cases, the primary suspect was excluded by DNA evidence (6). The question is, how many of these people would have been found not guilty without DNA evidence?

This country is committed to the idea of justice. If we are sending people who are not guilty to jail, that undermines our entire conception of our legal system. I believe that forensic science is a huge step in the right direction toward justice.


References

1) How Is DNA Fingerprinting Done
2) Fingerprint Identification: Craft or Science?
3) Your DNA ID Card
4) How DNA Evidence Works
5) The Case for Innocence
6) How DNA Technology Is Reshaping Judicial Process and Outcome
7) DNA Files
8) Southern Blot


Nicotine: How Does It Work?
Name: Sarah Frayne
Date: 2002-11-12 01:16:51
Link to this Comment: 3691



Biology 103
2002 Second Paper
On Serendip


Nicotine: How Does it Work?
Sarah Frayne
The Basics

Nicotine is a colorless liquid that smells like tobacco and turns brown when it is burned (2). It is the chemical in tobacco products that interacts with the brain and causes addiction. The use of tobacco products such as cigarettes, chew, or cigars allows nicotine to move quickly throughout the body and the brain. Nicotine can be absorbed through the mucosal linings and skin of the nose and mouth, or through inhalation. When inhaled, nicotine is absorbed by the lungs and moved into the bloodstream, from which it reaches the brain in less than eight seconds (4).

The effects of nicotine on the human body are diverse. In high concentrations, through the ingestion of some pesticides or the consumption of tobacco products by children, nicotine can cause convulsions, vomiting, and death within minutes due to paralysis. In smaller doses, however, nicotine has much milder effects. It has desirable properties such as heightened awareness and increased short-term memory; it also quickens breathing and heart rate, constricts arteries, and stimulates pleasure centers in the brain.

Nicotine and the Brain

The brain consists of millions of nerve cells that communicate through chemicals called neurotransmitters. Each neurotransmitter has a particular three-dimensional shape that allows it to fit into receptors located on the surface of nerve cells (4). Nicotine has a chemical structure that closely resembles that of the neurotransmitter acetylcholine. This similarity allows nicotine to activate the cholinergic receptors naturally stimulated by acetylcholine. These receptors are located not only in the brain, but also in muscles, adrenal glands, the heart, and elsewhere in the peripheral nervous system (1). They are involved in numerous bodily functions such as muscle movement, breathing, heart rate, learning, and memory.

Nicotine, although very similar to acetylcholine, does not act exactly like the neurotransmitter, and consequently causes the systems it affects to function abnormally. Nicotine causes a spontaneous release within the brain of other neurotransmitters that affect mood, appetite, and memory. Additionally, many systems, such as the respiratory and cardiovascular systems, are sped up (4). Nicotine also triggers a release of glucose, leaving smokers marginally hyperglycemic.

Another significant interaction between nicotine and the brain is the release of the neurotransmitter dopamine in the nucleus accumbens (1). Dopamine is a neurotransmitter produced in the pleasure center of the brain. Normally this area serves to reinforce healthy habits, for instance producing dopamine when the body is hungry and then receives food. The production of dopamine causes feelings of reward and pleasure (4).

Recent studies have shown that nicotine selectively damages the brain. Amphetamines, cocaine, ecstasy, and most other addictive drugs damage one particular half of the fasciculus retroflexus, a bundle of nerve fibers located above the thalamus. It has been discovered that nicotine affects the other half of these fibers, which are involved in emotional control, sexual arousal, REM sleep, and seizures (3).


The Addiction

Nicotine is known to be an addictive drug. Fewer than seven percent of all smokers who attempt to quit are successful (2). While some of the addiction may be attributed to the social and psychological patterns created by using products containing nicotine, there is also vast evidence that the addiction is chemical as well.

Nicotine causes a strengthening of the connections responsible for the production of dopamine in the ventral tegmental area (VTA), part of the brain's pleasure or reward center (5). This strengthening results in a release of dopamine, the process the brain uses to reinforce positive behavior. Nicotine artificially stimulates this process, thus encouraging repetition of the nicotine intake (5).

Nicotine is quickly metabolized and altogether absent from the body within a few hours, so its acute effects are short-lived. This quick dissipation creates the need for multiple doses of nicotine throughout the day in order to prolong the effects and fend off withdrawal (4). Repeated dosing creates a tolerance within the body: in order to obtain nicotine's desirable effects, the body must consistently take in more of the chemical.
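The rapid clearance described above can be modeled as simple exponential decay. The two-hour half-life is a commonly cited figure, and the starting concentration is illustrative, not taken from the paper's sources:

```python
# Exponential-decay sketch of nicotine clearance. With a half-life
# of roughly two hours, most of a dose is gone within several hours,
# which is why smokers re-dose throughout the day.
HALF_LIFE_HOURS = 2.0

def remaining_ng_ml(initial: float, hours: float) -> float:
    return initial * 0.5 ** (hours / HALF_LIFE_HOURS)

for t in (0, 2, 4, 8):
    print(f"{t} h: {remaining_ng_ml(30.0, t):.2f} ng/mL")
```

After eight hours, under these assumptions, less than a sixteenth of the original dose remains, consistent with the withdrawal-driven re-dosing pattern the paragraph describes.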

Ending a nicotine habit induces both a withdrawal syndrome that lasts about a month and intense cravings that may last over six months. The withdrawal syndrome includes such symptoms as irritability, attention deficits, sleep disturbances, and increased appetite.

The scientific community is seeking more specific data in order to trace the exact portion of the brain responsible for nicotine's effects. Many studies point to a particular portion of the receptors with which nicotine interacts as a key component in the process of nicotine addiction. The cholinergic receptors that nicotine stimulates are made up of multiple subunits. In one particular study, the beta subunit was isolated and removed from a number of mice. In subsequent experiments, the mice missing the beta subunit did not self-administer nicotine, while the mice with the beta subunit intact did (1).

Why Does Any of This Matter?

Nicotine addiction is estimated to account for 70 times as many deaths in the United States as all other drug dependencies combined (5). Approximately one of every six deaths in the United States is attributed directly or indirectly to smoking (4). The activities associated with nicotine use can cause respiratory problems, lung cancer, emphysema, heart problems, and cancers of the oral cavity, pharynx, larynx, and esophagus.

Surveys show that around 90 percent of smokers would like to quit. Unfortunately, because of the addictive qualities of nicotine, very few, less than ten percent, are successful (3). Nicotine replacement therapies allow for a lower intake of nicotine without the harsh effects of tobacco-based nicotine use. There are also non-nicotine therapies that use pills such as bupropion, an antidepressant, to help quiet the withdrawal effects. Lastly, there are behavioral treatments, such as clinics and formal session-based counseling, that are often used in cooperation with one of the chemical supplements (2).

The continual research on the specific interactions between the brain and nicotine has the potential to create a more effective strategy for those who are seeking to stop using nicotine. It may also be possible to discover how nicotine causes the positive effects such as heightened awareness and strengthened short term memory. This could lead to a method to obtain such effects without receiving the undesirable aspects of nicotine use.


References

1) Connecticut Clearinghouse, Connecticut state resource center for information on, and help with, alcohol and drugs

2) http://ericcass.uncg.edu , Educational Information Resources Center at the University of North Carolina

3) www.cnn.com

4) www.nida.nih.gov , National Institute on Drug Abuse informational site

5) www.howstuffworks.com


Method for Madness: The Body's Impact on a Person'
Name: Tegan Goer
Date: 2002-11-12 14:19:51
Link to this Comment: 3694



Biology 103
2002 Second Paper
On Serendip

I am-among other things-an actor. As such, people occasionally ask me how I get myself to "feel" frustrated or desperate or happy or surprised or any of the other emotions I have been asked to attempt to portray over the years that I've been doing theatre. And when asked, I have had to carefully explain that what an actor does on a stage is not exactly feeling, but rather expressing feeling. That is, being able to act is not having the ability to feel emotions; it is not some kind of empathy for the feelings of others divinely distributed to some people with artistic temperaments (as much as some people-those who consider themselves actors and those who do not-might think). Acting is the craft of expressing emotions, creating with your physical self an image for an audience.

And here's the interesting part, the part that relates to myself as a student of biology in addition to myself as an actor: a lot of actors will report, the one writing this paper included among them, that the more closely an actor is able to duplicate the physical embodiments of an emotion, the more that actor can "feel" whatever emotion he or she is trying to reproduce. As best I can see it, emotions largely are the physical manifestations we register and associate with them. Fear is a quickened heart rate, a trembling voice, short, quick breaths, and some other physical reactions that everyone knows but cannot express easily in words; this is how our brains identify and catalog the sensation "fear." I wondered, though, if this contention was just a load of pseudo-scientific nonsense I had somehow managed to concoct from studying acting theory.

In antiquity, physicians and philosophers were convinced that physical humors controlled emotion, that emotional imbalances were a direct result of improperly balanced internal fluids. As years passed, this connection of the emotional to the physical began to fade from medical theory. It did not really reappear until the mid-nineteenth century, when physicians like William Cullen and Robert Whytt began to once again seriously research the "physiological connection between emotions and disease" (1). For quite a while, the body of research into the matter seemed satisfied with the notion that there was some connection, that emotional and physical states affected one another in some intangible and unspecific way. Human bodies and minds are mysterious and individually unique: the best most researchers could come up with were some interesting case studies, but little in the way of general, applicable theories.

In a separate field of intellectual endeavor, Stanislavsky, a Russian actor-turned-director at the end of the nineteenth century, was advising his actors to study not literature or poetry or philosophy but rather biomechanics. Emotions could be recalled by recreating the actions linked to them. "It's possible to repeat this feeling through familiar action, and, on another hand, emotion getting linked with different actions, force actor into familiar psycho-physiological states." (2) (Grammatical error his, not mine.) Stanislavsky's Method is pretty standard acting technique: you can find it in any acting textbook written since the time of his death. Only recently has there been any scientific investigation into evidence to confirm his theories (I have no idea if the scientists doing the inquiries were influenced by Stanislavsky).

Recent research into the connections between facial expressions and the emotions associated with them, for example, shows that while changing moods affect a person's facial expression, changing the expression also changes a person's mood. It is now thought that "involuntary facial movements provide sufficient peripheral information to drive emotional experience" (3), a theory known as the facial feedback theory. In a study where two groups were asked to rate the funniness of various cartoons while either holding a pencil with their teeth without touching their lips (creating a smile-like expression) or holding the pencil with their lips only, without touching their teeth (frowning, as it were), the "smiling" group rated the cartoons as substantially funnier than either the "frowning" group or the control group, which did neither (4). Autonomic changes similar to those seen with certain emotions were experienced by participants who were instructed to make certain faces; that is, changes in the circulatory and nervous systems were observed when facial expressions were altered. One suggested explanation has to do with how the brain receives oxygen: "Blood enters the brain by way of the carotid artery. Just before the carotid enters the brain, it passes through the cavernous sinus. The cavernous sinus contains a number of veins that come from the face and nasal areas and are cooled in the course of normal breathing. Thus, there is a heat exchange from the warm carotid blood to the cooler veins in the cavernous sinus." (4) While frowning, for example, the constriction of some facial muscles alters the flow of air and blood to the brain, resulting in the brain warming up. Smiling, on the other hand, widens the face and nasal passages, resulting in a more cooling effect on the brain. 
So Stanislavsky's suggestion to his actors that in order to better express their character's emotions they must first replicate their physical state is based in some real, if in his case intuited, science.

So why, if an actor can alter his emotions by altering his physical state, can't a person rid herself of depression, say, by forcing herself to smile all the time? Well, not all of the physical aspects of our emotional states can be duplicated easily or voluntarily. Just as a musician masters an instrument and can then perform pieces he has not written, an actor uses his body and voice the same way. Some actors have better control over their "instrument" than others, just as musicians have varying degrees of skill. And just as the sound of an instrument can be drastically altered by outside influences (an electric guitar with an amplifier has a completely different sound than an acoustic guitar without one), our bodies, and therefore our emotions, can be altered through external stimuli, such as taking an antidepressant or losing a fistfight.

In order to be convincing to an audience, an actor need only reproduce the visible and audible manifestations of any emotional state he is trying to convey: that is, look happy, sound angry, and so on. In duplicating just the outward embodiments, small bits of the emotions can creep into an actor's mind, but ultimately, an actor is not out to feel a certain way; he or she is out to make an audience feel a certain way. A final thought on the science behind an actor's believability and facial expressions: certain ones of the forty-some-odd muscles in the face are much more difficult to voluntarily control than others. The ones that move when a person is actually smiling, not faking a smile, for example, create subtle differences in the contours of the face, differences the average person may notice subconsciously but not quite be able to pinpoint, the way some people can tell when they are being lied to but not be able to say just why. Experts trained in reading faces can note the differences. And yet, some people are better at controlling these less-voluntary muscles than others. By some estimates, about ten percent of the population can control some or most of these muscles: natural actors or liars whose facial expressions are extra-believable because the average person can't fake them. Woody Allen, for example, is able to control one of the less voluntary muscles in the face used to express sadness, according to one researcher, one that moves his eyebrows up and down for emphasis as he speaks (5). This actor/student of biology is aware that she is able to voluntarily move a few of the muscles in her face usually used to express legitimate anger, involving a slight raising of the eyebrows, a tightening of the jaw, and a pulling of the ears closer to the head. Nearly anyone can learn to move their less voluntary muscles: while it is easier for some than for others, all that is required is diligence and careful, creative observation. 
I guess that makes stagecraft and science fairly similar after all.

1)National Library of Medicine-Emotions and Disease, the Balance of Passions

2) Method Acting For Directors , A sort of lousy translation, but a good overview.

3)About.com, Bi-Polar Disorder: Smiling is Good For You

4) Facial Feedback Theory

5)Emotions and Smiling, An interview with Paul Ekman about his research on facial expression and other fun stuff. You might try this link if the other one doesn't take you straight to the article.


Aromatherapy: Why it makes 'scents'
Name: Stephanie
Date: 2002-11-12 21:22:43
Link to this Comment: 3708



Biology 103
2002 Second Paper
On Serendip

In countries around the globe, scented oils have been used as medicines for thousands of years, varying in their therapeutic value and uses. The ancient Egyptians often used scented oils for their therapeutic effects, as medicines for different ailments or diseases (1). In more recent times, scented candles have bombarded the market, claiming remedial benefits for mood and cognition. Instruction in clinical aromatherapy can be found in medical institutions around the world. But while interest in aromatherapy has heightened over the years, so has the skepticism surrounding the practice. Product claims to alter health or provide cures have only contributed to the cynicism. Though any individual product's capability to enhance a person's state or mood is debatable, the fundamental theory linking mood to distinct scents is in fact a viable speculation. Underneath the commercialized hype lies scientific data supporting a correlation between scents and mood. A number of recent studies imply the presence of a link and further investigate the olfactory sense and its specific stimulation of the brain. At its most basic level, aromatherapy can effectively be used to alter moods or states.
The process of scent stimulation begins when the molecular chemicals that make up a scent are inhaled through a person's nose. After traveling through the nasal passage they reach cilia, hair-like fibers connected to the olfactory epithelium, a highly concentrated area of neurons that can send messages to the brain. When a molecule binds to the cilia, the neurons are prompted and send signals along their axons to the brain, which processes the perception of smell in what is known as the olfactory bulb, located in a region behind the nose (2).

"Humans can distinguish more than 10,000 different smells (odorants), which are detected by specialized olfactory receptor neurons lining the nose.... It is thought that there are hundreds of different olfactory receptors, each encoded by a different gene and each recognizing different odorants" (3).

The process that occurs after the scent "reaches" the brain has yet to be fully understood, but it seems that the olfactory system is not the only region of the brain that receives the message transmitted when a scent is smelled. In one study, the anxiety of patients undergoing an MRI was observed: when patients were immersed in a vanilla-like scent, 63% of them showed a reduction in anxiety (4). In another study, it was found that spiced-apple and powder-fresh scents "improved performance on a high-stress task" (4). An Austrian study examined the effects of a citrus scent in the waiting room of a dental office. While patients were waiting for dental treatment, they were immersed in an orange smell. The odor was found to have a relaxant effect, mostly on women: a lower level of anxiety, a more positive mood, and a greater sense of calmness were direct effects of the orange odor, in comparison to the control group (5).
Many businesses have even subscribed to the idea of altering mood or state through specific scents and used it to increase production. A Japanese company began using what was dubbed "environmental fragrancing," in which air-conditioning ducts released various therapeutic scents every six minutes to improve alertness or relieve stress. It was found that the introduction of a lemon scent reduced keyboard errors by 50% (4).
While these studies show the effects of specific types of scents and their link to mood, a link has also been determined between a scent's degree of pleasantness, rated on an individual basis, and its capability to alter mood. In one such study, habitual smokers were given a variety of different scents to rate on a scale of pleasantness. After being nicotine-deprived for a significant amount of time, the smokers were given the scents and the effect on their craving was observed. It was concluded that the cravings diminished when a non-neutral odor was smelled; in particular, those odors the smoker had rated as "unpleasant" decreased cravings the most (6).
In a related study, observations were made of the heart rate of patients who inhaled unpleasant scents. Heart rate was found to increase when the patients inhaled unpleasant scents or were asked to rate them (7). This study supports the theory that the olfactory system first rates a perceived scent on a scale of pleasantness. In conjunction with the other studies, it suggests that ratings of "pleasantness" vary from person to person and may even be culturally derived. The effects of vanilla, for instance, seem to vary among cultures:

"When Americans smell a strong odor, it seems to remind them of their animality or mortality. On the other hand, vanilla is known to be comforting to Americans, but has no particular effect on Japanese. This may be because it is an unfamiliar smell and therefore has no link to the granny's kitchen of their childhood" (4).

Consequently, memory or past experience may also influence the degree of personal pleasantness of certain smells.
The various studies concerning the correlation between scents and responses in other regions of the brain show that there is in fact a link. The finding of "universal" (perhaps culturally limited) mood-triggering scents gives further evidence of the olfactory system's connection with other parts of the human brain. The idea of using scents to alter mood or state of mind is valid. Evidence suggests that aromatherapy is a well-founded science and one in need of even deeper investigation. Perhaps scents and their specific associations could be used on a greater scale in the future, to increase productivity, improve mood, or simply enhance well-being. It seems that this age-old practice, often waved off as "phony," has a very scientific base. When a greater understanding of the brain and its related regions is attained, perhaps the science will, once again, become more widely accepted.

References

1) Perfumes in Ancient Egypt

2) The Sense of Smell

3) How does smell work?

4) The Role of Smell in Language Learning

5) Ambient Odor of Orange in Dental Office Reduces Anxiety and Improves Mood in Female Patients

6) Effects of Olfactory Stimuli on Urge Reduction in Smokers

7) Influence of affective and cognitive judgments on autonomic parameters during inhalation of pleasant and unpleasant odors in humans


Rigor Mortis; An Examination of Muscle Function
Name: William Ca
Date: 2002-11-12 22:57:56
Link to this Comment: 3712



Biology 103
2002 Second Paper
On Serendip

Soon after the time of death, a body becomes rigid. This stiffening is the result of a biochemical process called Rigor Mortis, Latin for "stiffness of death". (1) This condition is common to all deceased humans, but is only a temporary state. The slang term "stiff", used to refer to a dead person, originates from rigor mortis. Within hours of the time of death, every muscle in the body contracts and remains contracted for a period of time. Before I am able to explain the biological cause of this condition, I must first describe the structure of muscles and the process of muscle contraction.

Muscles have many different levels of organization, beginning with the individual muscle fibers. Muscle fibers are a combination of many cells but have structures similar to those of an individual cell. The organelles found in a normal cell are also found in muscle fiber, but are given different names: the plasma membrane is the sarcolemma, the endoplasmic reticulum is the sarcoplasmic reticulum, mitochondria are sarcosomes, and the cytoplasm is called sarcoplasm. Muscle fibers are composed of contractile units called sarcomeres. Sarcomeres are connected end to end along the length of the muscle fiber. The components of an individual sarcomere are thick and thin filaments: myosin and actin, respectively. The ends of a sarcomere are called Z lines. Rows of actin extend from these Z lines but do not meet in the middle. Spaced in between the rows of actin, and not connected to either Z line, are the myosin filaments. Strands of actin take on a double-helix shape and are wrapped by a long protein strand called tropomyosin, dotted with small protein complexes called troponin. Underneath the tropomyosin lie the myosin binding sites, where myosin is able to bind to the actin. Along a strand of myosin are "heads" that protrude towards the actin. Actin and myosin are the central actors in muscle contraction. (2)

Muscle contraction begins in the brain with a nerve impulse sent down the spinal cord to a motor neuron. The action potential started in the brain is passed on to the muscle fibers through an axon where it is carried into a neuromuscular junction. (2) The neuromuscular junction, also referred to as the myoneural junction, releases acetylcholine when the action potential reaches the junction. When the acetylcholine comes into contact with receptors on the surface of the muscle fiber, a number of transmembrane channels open to allow sodium ions to enter. (3) This influx of sodium ions creates an action potential within the fiber which triggers a release of calcium ions from the sarcoplasmic reticulum.

Calcium ions filter throughout the sarcomeres and bind to the troponin complexes, causing a shift in the tropomyosin structure and exposing the myosin binding sites on the actin. A "power stroke" follows, in which the myosin heads release the ADP and Pi that hold them in a cocked-back position and pivot laterally, moving the actin filament with them. Finally, ATP binds to the myosin heads, causing them to detach from the actin. Upon release from the actin, the ATP is broken down into ADP and Pi, providing the energy to return the myosin to its cocked position and renewing the cycle. (4)
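The cross-bridge cycle just described can be sketched as a tiny state machine. This is purely an illustration of my own (the state names and flags are invented labels, not standard terminology), but it shows why the cycle stalls when no ATP is available:

```python
# A toy state machine for the cross-bridge cycle (state names and flags
# are invented labels for illustration, not standard terminology).

def cross_bridge_step(state, atp_available, calcium_present):
    """Advance one myosin head through a single step of the cycle."""
    if state == "cocked_detached":
        # Calcium must expose the binding sites before the head can attach.
        return "bound" if calcium_present else "cocked_detached"
    if state == "bound":
        # Power stroke: ADP and Pi are released and the head pivots.
        return "power_stroke_done"
    if state == "power_stroke_done":
        # ATP is required to detach the head from the actin; without it
        # the head stays bound (this stalled state is rigor mortis).
        return "cocked_detached" if atp_available else "power_stroke_done"
    raise ValueError(state)

# With ATP present, the cycle keeps turning over:
state = "cocked_detached"
for _ in range(6):
    state = cross_bridge_step(state, atp_available=True, calcium_present=True)

# Without ATP, the head reaches the post-power-stroke state and stays there:
rigor = "cocked_detached"
for _ in range(6):
    rigor = cross_bridge_step(rigor, atp_available=False, calcium_present=True)
print(rigor)  # power_stroke_done -- the head cannot detach
```

With ATP the head returns to its cocked state every third step; without it, the cycle stalls in the attached position, which is exactly the mechanism behind rigor mortis.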

The relaxation of a muscle depends upon the termination of the action potential that begins at the neuromuscular junction. An enzyme within the muscle fiber destroys the acetylcholine, stopping the action potential that acetylcholine produces. Calcium ions are therefore no longer released from the sarcoplasmic reticulum; in fact, the already-free calcium ions are pumped back into it. Finally, the myosin and actin can no longer bind, and thus the muscle cannot contract, because the binding sites on the actin are exposed only in the presence of calcium ions. (2)

The supply of ATP is central to the continuing process of muscle contraction. ATP originates from three sources: the phosphagen system, the glycogen-lactic acid system, and aerobic respiration. In the phosphagen system, muscle cells store a compound called creatine phosphate in order to replenish the ATP supply quickly. The enzyme creatine kinase cleaves the phosphate from this compound, and the phosphate is added to ADP. This source of ATP can sustain muscle contraction for only 8 to 10 seconds. The glycogen-lactic acid system uses the muscles' supply of glycogen. Through anaerobic metabolism, the glycogen is broken down, creating ATP and the byproduct lactic acid. This method does not require oxygen and is able to supply more ATP than the phosphagen system, but at a slower rate. Finally, aerobic respiration breaks glucose down into carbon dioxide and water in the presence of oxygen. The glucose comes from the muscles, the liver, food, and fatty acids. This method yields the most ATP and sustains contraction for the longest periods, but is the slowest. (5)
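As a rough illustration, the three ATP sources and their time scales can be sketched in a few lines of code. The 8-to-10-second phosphagen limit comes from the text above; the 90-second cutoff for the glycogen-lactic acid system is my own assumed value for illustration, not a figure from the sources:

```python
# A rough sketch of the three ATP sources described above. The 10-second
# phosphagen limit comes from the text; the 90-second cutoff for the
# glycogen-lactic acid system is an assumed, illustrative value.

def dominant_atp_source(seconds_into_contraction):
    """Return which energy system chiefly supplies ATP at a given time."""
    if seconds_into_contraction <= 10:
        return "phosphagen (creatine phosphate)"   # fastest, ~8-10 s
    elif seconds_into_contraction <= 90:
        return "glycogen-lactic acid (anaerobic)"  # slower, no oxygen needed
    else:
        return "aerobic respiration"               # slowest, most ATP

print(dominant_atp_source(5))    # phosphagen (creatine phosphate)
print(dominant_atp_source(60))   # glycogen-lactic acid (anaerobic)
print(dominant_atp_source(600))  # aerobic respiration
```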

With all of this information on how muscles work, it is now possible to explain the process behind rigor mortis. Death terminates aerobic respiration because circulation has ceased. (6) The muscles therefore rely on the phosphagen and anaerobic systems to acquire ATP, and as stated above, these sources provide only a small amount. This lack of ATP prevents the myosin heads from detaching from the actin. Meanwhile, calcium ions leak into the muscle fiber from the extracellular fluid and from the sarcoplasmic reticulum, which can no longer pump the ions back. (6) The ions perform their task as if the body were alive, shifting the tropomyosin and troponin off the myosin binding sites. The muscle contracts when the myosin pivots, but the lack of ATP prevents it from detaching, and the muscle remains contracted. This process occurs in every muscle as the body becomes rigid.

Rigor mortis usually sets in within four hours, first in the face and generally the smaller muscles. The body reaches maximum stiffness within twelve to forty-eight hours, though this time may vary with environmental conditions – cooler conditions inhibit rigor mortis. (1) Rigor mortis is only a temporary condition. During the process, the body accumulates lactic acid through anaerobic respiration. The lactic acid lowers the pH of the muscles, degrading the contracted muscle tissue. (7) The body loses its rigidity as the muscles decay. In conclusion, rigor mortis is the stiffening of the body after death due to a lack of ATP. Only temporary, the condition is sensitive to the environment and is believed to occur in all humans.


References

1)"How does Rigor Mortis work?"

2)"Contraction and Rigor Mortis"

3)"Muscles"

4)"HowStuffWorks 'How Muscles Work'"

5)"HowStuffWorks – How Exercise Works"

6)"Chapter 10: Muscle Tissue"

7) "Conversion of Muscle to Meat"


What's Owl Got To Do With It?
Name: Catherine
Date: 2002-11-13 02:59:04
Link to this Comment: 3713



Biology 103
2002 Second Paper
On Serendip

Owls are notoriously wise birds: they know how many licks it takes to get to the center of a Tootsie Roll pop, they are Winnie the Pooh's best advisors, Harry Potter's messenger friends, and even Bryn Mawr's semi-mascots. Being so advanced, they are also a part of animal adaptation and evolution. Because I know next to nothing about owls, I decided to undertake some research on snowy owl reproduction. The question I set out to answer was: "How do snowy owls reproduce, and how does their reproduction reflect adaptation through evolution?" I knew there must be a clever way for these intelligent old birds to produce offspring; I just did not know what that method was.
Snowy owls' reproduction has many phases, including courting, mating, egg-laying, nest-building, taking care of the young, and the young leaving the parents' nest. Each of these illustrates how evolved and adapted to their environment owls are.
Male snowy owls apparently have a mating call and ritual procedure which involve courtships beginning in midwinter, and which last through March and April, distanced from the breeding grounds. To attract the females, males will take off in exaggerated flight patterns while hooting loud, booming, repeated calls, or they will stand in open-wing postures, with the wings closest to the females angled slightly towards them, attempting to receive some attention. Their mating cries can be heard up to six miles away, in the tundra region where the snowy owls reside. When the males see the females, they will swiftly snatch a gift lemming with their claws. They will land and place it somewhere visible to the females. Males may then push the gift towards females, or spread their wings and waddle around the victim, concealing it. To satisfy still uninterested females, the males may take off for more lemmings. They often feed their catches to the females. On the ground, males will "bow, fluff their feathers, and strut around with wings spread and dragging on the ground." (6)
Perhaps these rituals are not as "sophisticated" as those of humans, but this does seem like survival of the fittest to me. Owls have a proper courtship, conducted away from reproductive responsibilities. Males must display their assets and give presents in order to win the females over. Females clearly have criteria for their partners, and evolution has taught them that they need partners who surpass others of their kind.
When the females are finally content with the males' courtship attempts, the couple fly off, soaring up and down through the sky. The males sometimes swoop to catch more lemmings and pass their game to the females. They begin mating and breeding in April to May. Like all birds, snowy owls lay eggs.
The egg-laying process is itself a sign of evolution. Female snowy owls typically lay five to eight white eggs in a clutch, and sometimes up to fourteen, depending on the lemming population. About every four years there is a predictable thinning of lemming numbers, and the owl pair may not breed at all. If a first clutch fails to hatch, the couple probably will not renest for the rest of the year. The number of offspring thus tracks the fickle prey population, which is an agile adaptive system indeed. In this way, the snowy owls achieve almost one hundred percent nesting success.
Eggs are laid about every other day, so that the older, stronger chicks will have an advantage if the food supply runs short later (they will consume most of the food their parents bring, and may even slay younger siblings and eat them). The females incubate for about thirty-one to thirty-four days, keeping the eggs warm, while the males guard the nest and do all of the hunting and bringing in of food. The survival-of-the-fittest ideal is thus built into competition within the species, while the parents protect the still-unhatched eggs from outside predators. Offspring may compete with each other, even to the death, but intruders are not allowed to interrupt the growing process.
To build a nest for their eggs, snowy owls make a hollow on the exposed, snow-free, dry tundra ground with shallow talon scrapes, approximately three to four inches deep and one foot across, on top of an elevated rise or mound. Gravel bars and abandoned eagle nests are occasionally used. The nests are almost exclusively made on the ground, lined with moss, lichens, scraps of vegetation, and their own feathers. Sites are near good hunting areas, commanding a view of the surroundings. Some areas are used only once for breeding, but others are occupied for several years at a time. Territories around the nests range anywhere from one to six kilometers and do overlap with other pairs' regions.
Clearly, this is a well "thought-out" plan for the parents to keep their young safe and sound. They even consider where they will have easy access to food, but will be difficult to find. Evolution has made this a habit, or else the owls would not survive.
Snowy owl eggs hatch one by one, over an interval of about a month. Owls have adapted to life in the shell: chicks develop a temporary "egg tooth" used to crack through the shell. The chicks are blue-grey while they stay in their hole in the ground, and will be covered in snowy-white down about three weeks after hatching. At about the same time (sixteen to twenty-five days), their primary wing feathers grow in, and they may begin to wander away from the nest. But this is before they can fly, so both parents feed and tend to the young until then.
Both the male and female owls will feed, protect, and bring up their chicks until the young are ready to fly away and hunt on their own. Nestling owls take about two lemmings per day, but a family of snowy owls may eat up to one thousand five hundred lemmings before the owlets are able to scatter. Because they are so defensive, snowy owls may aggressively attack intruders up to one kilometer from their nest sites. Males will sometimes fight in midair, and females may defend their territory or potential mate against other owls of their sex. Males may also defend their young using a "crippled bird" act to lure predators away from the nest. They have developed scenarios in order to survive.
Owlets fledge in about forty-three to fifty-seven days, by which time they are also able to search and hunt for food themselves. The young clearly require an entire summer's worth of special care by the owl parents. Adaptation has made snowy owls smart reproducers with wise habits and precautions.

References

1) Enchanted Learning
2) Lady Wild Life's Endangered Wildlife
3) Minnesota Department of Natural Resources
4) Oregon Zoo Animals
5) Ross Park Zoo
6) The Owl Pages , Information About Owls
7) Tribune-Review
8) University of Michigan


Why Can My Professor Not Match His Clothing: Is Co
Name: Margot Rhy
Date: 2002-11-16 22:49:22
Link to this Comment: 3759



Biology 103
2002 Second Paper
On Serendip

Every day when I go to my first class, my professor's wardrobe offends me. He somehow thinks that he can get away with wearing brown pants, navy socks, and black shoes in the same outfit. Eventually, I had to ask myself whether my judgments are really just too harsh or whether my opinions have scientific support. If my professor is not disgusted by his clothing choices while I am, is color just a subjective experience? This question led to my investigation of the true meaning and essence of color. Where does color come from chemically and physically, and how do our eyes perceive it? Do colors really have relationships, ones my professor seemingly does not understand, and can science explain them?

Color is a difficult quality to explain because it is the one characteristic that can mark the difference between two objects exactly the same in all other physical traits, such as size, shape, and texture (2). Putting words to the difference between an item that is red and one that is yellow is harder than describing the difference between an item that is tall and one that is small. It is almost impossible to avoid purely subjective and emotional words when describing color. Also, we only know how to assign a specific word to a color because we are trained to. How else can we know to give red the word "red" if a kindergarten teacher never holds up those color flashcards? Therefore, in order to understand what color really is, it is necessary to understand how it is produced.

Colors can be defined by two different processes. The first is the physical splitting of white light; the other is the interaction between the electrons within a molecule and light. White light is a mixture of many colors, and when it hits, for example, a prism, it splits into its different components in a flat spectrum. Each component of the light has its own wavelength, thus yielding what we perceive as color (1).

A chemistry-based approach to defining color deals with the energy of electrons inside a molecule. An electron can be excited from a lower orbital to a higher orbital by absorbing a specific wavelength of light, and "the loss of this wavelength from the balanced white light source results in color (1)." Seeing pigment color is a process dependent on the actual molecules of the object, how those molecules interact with light, and how our eyes perceive that interaction. This can be explained by the thought experiment of turning out the lights on a red chair: did the red chair become any less red? The answer is yes. Even though turning off the lights does not change the molecular structure of the chair, red can only exist because of the light; the electrons in the chair's molecules can only produce color by interacting with light. The chair is not red because its molecules create red. The molecules in the chair's pigment absorb all other wavelengths and reflect certain wavelengths back (1). Furthermore, this process only matters when our eyes process the light. This is the step in which our visual system captures those wavelengths, processes them through our retinas, and interprets them in our brains, allowing us to give one word to that whole experience: "red." However, both processes send the same wavelengths, so the eyes and brain know that red means red, whether we see it in a rainbow or in the pigment of the chair (1). The eyes and brain can do this because of the nature of our biological visual system.

The biological process that enables us to see color is a subjective experience. It begins with "the stimulation of the light receptors in the eyes," leading to the conversion of light stimuli or images into signals, and then the "transmission of electrical signals containing vision information to the brain through the optic nerves (5)." We are capable of seeing color because of light-sensitive photoreceptors in the retina of the eye. There are two kinds of photoreceptors in the retina: rods and cones. Rods respond to the amount of light, while cones are sensitive to different colors (2). There are three types of cones, each sensitive to a different range of wavelengths in the visible spectrum: long-wavelength or "red" (R) cones, most sensitive to greenish-yellow wavelengths; middle-wavelength or "green" (G) cones, most sensitive to green wavelengths; and short-wavelength or "blue" (B) cones, most sensitive to blue-violet wavelengths (6). From this information arises the "trichromatic color theory," which says the primary colors are red, blue, and green (6). Basic neural programming transforms the outputs of these three cone types into four channels of chromatic color signals and a colorless channel that determines brightness. Our perception of color therefore comes from the amount and type of light being absorbed by each cone type (2). Our color vision also follows some basic rules: stimulation of the R and G cones gives the perception of red and green; when these two cone types are stimulated about equally, we perceive yellow; and stimulation of the B cones creates the perception of blue (6). What makes this process subjective is the fact that the molecules that make up my eyes and brain are not the same molecules that make up everyone else's. Therefore, no two people can perceive color exactly the same way. Even so, the idea of complementary colors can exist.
Newton made the first arguments for this.

Newton formed a color wheel and, although this may not sound like much, it was revolutionary. His experiments and writings argue strongly against Aristotelian theory, because he wanted to make clear that hue could be conceived and described separately from light and dark (4). To systematize this idea, he used the spectrum as a reproducible color reference to identify and name the hues in nature. However, Newton had to overcome the idea of colors existing in just a flat spectrum, considered only to be near or far from one another. By thinking in terms of a circle, Newton automatically discovered that colors can be linked together and have relationships. He did not, however, just bend the spectrum into a circle to form relationships between colors. He specified the rules for a color's placement on the wheel with geometry and physics. He determined that "saturated hues are on the circumference of the circle, white or gray is at the center, complementary colors are opposite each other on the circle, and the color of a mixture is located at the 'center of gravity' of all of the hues in the mixture weighted by their brightness (3)." Through these ideas, Newton gave words that eradicated the subjectivity in the process of linking colors together; he gave a color's placement on the wheel physical justification. However, Newton relied on the principles of light to make predictions about how both types of color mix. This was misleading because he did not take into account that pigment color does not work the same way as light coloration (3).

I am arguing, though, that there really is a connection between pigment colors in the form of a color wheel too. Yes, pigment color cannot be clearly defined, because seeing it really is a subjective process dependent on ever-changing light and on our different biological systems. However, these concerns do not erase the scientific reasons that uphold the existence of complementary pigment colors.

Ewald Hering argued for non-arbitrary, scientific relationships between pigment colors when he proposed his own color wheel, based entirely on the subjective experience of color (6). Although he understood the trichromatic color theory, Hering was not satisfied with it. That theory cannot explain why yellow is psychologically just as primary as red, blue, or green; nor can it explain why we can visualize mixing red with yellow to get orange, but not red with green to get red-green (6). He devised a color wheel of his own to answer these questions. By saying that red, blue, yellow, and green are the four fundamental hues, contrasted in opposing pairs (blue against yellow, red against green), Hering made the connection between the subjectivity of our perception and the existence of complementary colors for pigment. He justified their relationship as opposites through the fact that they can be mixed to form any color that appears on the spectrum (6). It turns out that these opponent pairs, and not the raw R, G, and B cone responses, are the better framework for describing the discrimination between two very similar colors and for predicting the hues in a color mixture. This is because the "translation from receptor responses to opponent codings happens in the retina: the brain never 'sees' the trichromatic outputs (6)." So the four colors red, blue, yellow, and green, and not the three "primary" colors of the color receptors, led to a color model that respects how color is more than just our differing perceptions. This allows for judgments that can be made consistently over time by more than one person, and for color theory. Hering devised a color system that acknowledges our biologically subjective experience and looks beyond it to describe what else is going on in the relationships between colors.

Hering's color wheel only began to tap into the connections between pigment colors that exist for mathematical and scientific reasons. His is not the only color wheel, but to explain all of them and trace their evolution requires far more space than I have here. Hering's color wheel alone, however, gives me enough support to justify my opinions of my professor's wardrobe. Argue as he might that color is purely subjective because it is an experience dependent upon light and molecules, how colors connect is not subjective. Colors have relationships that are upheld by science. The new questions that arise are how much more the brain and psychology play a part in determining color relationships, why we tend to associate feelings with colors, and how subjective more advanced color theory is.

References

1) Dr. K.D. Luckas. Chemistry 104 Laboratory Manual: Supplement for the Major's Section. Bryn Mawr College; 2001.
2) RIT Munsell Laboratory, FAQ section on the RIT Munsell Color Science Laboratory website
3) Page that discusses "Mixing with a Color Wheel"; the color section of this website is a good source for information about all aspects of color
4) "Color Psychology", page that discusses "Color Psychology"
5) Molecular Expressions website, page that discusses light and color
6) Opponent Processing of Color, page that discusses "Light and the Eye"


Sleep: It Does a Lot More than You Think
Name: Meredith S
Date: 2002-11-24 13:08:48
Link to this Comment: 3859



Biology 103
2002 Second Paper
On Serendip

Your doctor and your mother always recommended getting at least eight hours of sleep a night. Everyone knows that without the proper amount of sleep, the mind will be groggy the next day and many more mistakes will be made, which is why you should get a full night's rest before taking a test, or a little nap before a long drive. But scientists are beginning to realize that sleep is not just a mental recharger; it is important for the body as well. When a person sleeps, the body and mind are working just as hard as when the person is awake: correcting chemical imbalances, assuring proper blood sugar levels for the next day, and maintaining memory (1). Before electricity, people would generally go to sleep when the sun set and rise when it rose, assuring that they got enough sleep to maintain a healthy mind and body. But in a highly industrialized nation where the light bulb has expanded the working day to 24 hours rather than 12, it is becoming apparent that more and more people are sleep deprived. And with that deprivation, more and more scientists are realizing, comes not only a mental deficiency but also a physical one.

It is not quite clear what physically happens in the body during sleep. Although scientists can read brainwaves on an EEG, they are not sure what exactly the brain is doing, although they acknowledge that dreaming is a large part of it. When the body is sleeping, the brain cycles through four different stages, culminating in REM (Rapid Eye Movement) sleep. At different stages, the brain is active in different ways, as seen on EEG readings. In the first stage of sleep, the body begins to relax, the heart rate slows, and people often feel as though they are falling or otherwise weightless. As the body slips into the second and third stages of the cycle, it is very apparent that the brain is not acting in the same way (i.e., emitting the same brain waves) as when the body is awake, but the activity is nevertheless still there. "This is where your body performs daily maintenance and healing, and where deep restful sleep occurs" (2).

If the body does not go through enough sleep cycles, it cannot fully heal itself, leaving it sluggish the next day. Signs of sleep deprivation include reduced energy, greater difficulty concentrating, diminished mood, and greater risk of accidents, including fall-asleep crashes. Work performance and relationships can suffer too, and pain may be intensified by the physical and mental consequences of lack of sleep (3). Thus, staying up all night to study for a test or finish a presentation is actually more detrimental than originally thought. Even everyday tasks, such as driving a car or answering the phone, are affected by lack of sleep, making people who work under such conditions a danger to themselves and others. Memory is also affected: during sleep, the brain may recharge its energy stores and shift the day's information from temporary memory to regions of the brain associated with long-term memory (3).

Scientists are increasingly recognizing the physical effects of lack of sleep. Sleep deprivation weakens the immune system, preventing the body from warding off infections and viruses, and it also upsets the chemical balances within the body. Men who are normally healthy start to show effects of aging after only a few nights of less than adequate sleep. In a study done at the University of Chicago, Dr. Eve Van Cauter found that "after four hours of sleep for six consecutive nights, healthy young men had blood test results that nearly matched those of diabetics. Their ability to process blood sugar was reduced by 30 percent, they had a huge drop in their insulin response, and they had elevated levels of a stress hormone called cortisol, which can lead to hypertension and memory impairment" (4). Such physical effects were unheard of before this study, and as a result, scientists are now looking into connections between lack of sleep and obesity.

One such consideration is how the body regulates sleep itself. The body is governed by what is called the circadian rhythm, a natural internal clock that resets itself every 24 hours (5). This clock triggers the release of different chemicals in the body, depending on whether it judges that the body needs to sleep or be awake. It is most easily set by direct, or as scientists are now discovering, indirect light. It is harder to sleep with a light on than without, and scientists are now realizing that this is because of the circadian rhythm. What this means is that every time you turn on a light, you reset the rhythm just a little, so that the individual cells within the body do not release chemicals or produce the necessary proteins at the right time. Resetting the rhythm also means that the body is working overtime, leaving it more out of balance and less efficient. Thus, not only are the necessary chemicals imbalanced, but the body will age faster as it is forced to work longer and longer hours without being able to restore itself.

This discovery, in connection with the dietary habits of many industrialized nations, could possibly help to explain another factor in obesity. The invention of the light bulb made the once unproductive and dark night as valuable and bright as day. Now, people can work 24 hours a day, making industry and the lives that run it more crowded and hectic. More and more people are trying harder and harder to fit more into their days, and as a result, sleep is often slighted. The ultimate effect of this new lifestyle is more stress and a greater use of artificial light, which is now proving to reset the Circadian Rhythm as much as exposure to the sun. This means that in highly industrialized countries, in which artificial lights can make the night as bright as the day, people tend to be more sleep deprived(6). Scientists have shown that shining lights on rats causes them to wake earlier than if a light had not been shone. The same is true of humans(7). When the body awakens too early, it cannot fully restore itself, and its chemical imbalances remain. Thus, while people think that they are waking up because the body has had enough sleep, it is really because the body's Rhythm is off. As a result, these people think that they are getting enough sleep, when in actuality they are hurting the body more by offsetting its own natural clock and the natural processes that occur during sleep.

Sleep is a major part of our lives; this is evident from the fact that most scientists agree the average person needs between seven and ten hours of sleep a day – almost one third of an entire lifetime spent sleeping. Once thought to be necessary for the brain's functioning alone, it is becoming more and more apparent that the body needs sleep just as much as, if not more than, the brain. Besides restoring energy, sleep maintains the chemical balances that create better moods and ensures that the body is working at its best to ward off disease and even obesity. Living in a country that now forces the night to be just as industrially productive as the day also affects how much each person sleeps, regardless of when they try to go to bed. The body sets its own natural clock by comparing itself to light, be it the sun or now artificial light from light bulbs. As a result, the body can become confused about when it is supposed to perform the actions necessary during sleep. Before the invention of electricity, the body and brain could easily set their own Rhythm, maintaining themselves and warding off the now apparent physical effects of too little sleep. Now that individuals have more control over their body's natural processes via artificial means, it is more important than ever to realize that sleep affects not just the mind but also the body.


References

1)http://www.sciencedaily.com/releases/1999/03/990316063522.htm
2)http://home.attbi.com/~rnagle557/dream_sleepscience.htm
3)http://www.fda.gov/fdac/features/1998/sleepsoc.html
4)http://abcnews.go.com/sections/2020/2020/2020_010330_sleep.html
5)http://home.attbi.com/~rnagle557/dream_sleepscience.htm
6)www.ivillagehealth.com
7)http://www.sciencedaily.com/releases/1999/03/990316063522.htm


Smallpox: Vaccination Decisions
Name: Brie Farle
Date: 2002-12-12 17:19:31
Link to this Comment: 4062

Smallpox: Vaccination Decisions

Everywhere you turn, there exist foreboding speculations about smallpox. Smallpox may be the biggest threat to Americans concerned about bioterrorism. The last case of smallpox in the United States was in 1949, and routine vaccinations ended in 1972. Therefore, most Americans born after 1972 are completely unprotected, and completely uninformed, about smallpox. (1)

On Friday, December 13th, President Bush will announce plans to vaccinate Americans for smallpox. Most Americans are quick to accept new vaccinations, such as the ever popular flu shot, in order to prevent getting sick; so why is it different this time?

Why has smallpox re-emerged as a target for vaccinations, even though the deadly disease was eradicated a quarter-century ago? If it poses even a potential threat, why aren't we immediately going back to the days of total vaccination? (1).

What is Smallpox?

It is believed that smallpox originated over 3,000 years ago in India or Egypt. For centuries, repeated epidemics swept across continents, devastating populations. As late as the 18th century, smallpox killed every 10th child born in Sweden and France. During the same century, every 7th child born in Russia died from smallpox. (2). Historically, smallpox is known for killing 30 percent of its victims and leaving survivors with permanent scars over large areas of their body, especially the face. (1)

Smallpox is an acute contagious disease caused by variola virus, a member of the orthopoxvirus family. Variola virus is relatively stable in the natural environment. It is transmitted from person to person by infected aerosols and air droplets spread in face-to-face contact with an infected person. The disease can also be transmitted by contaminated clothes and bedding, though the risk of infection from this source is much lower. In a closed environment, the airborne virus can spread within buildings via the ventilation system and infect persons in other rooms or on other floors in distant and apparently unconnected spaces. (2)

In the absence of immunity induced by vaccination, human beings appear to be universally susceptible to infection with the smallpox virus. (2) Infection can be prevented if a person is vaccinated within four days of exposure to smallpox, before symptoms even appear; but after that, there is no treatment.

Re-Emergence of Smallpox

Thanks to a worldwide immunization program, the last naturally acquired case of smallpox was recorded in Somalia in 1977. However, while smallpox was being eliminated, U.S. and Soviet laboratories were developing the virus as a biological weapon. Experts worry that scientists shared weaponized strains of the virus with nations such as Iraq and North Korea. (1)

During the Iraq crisis in 1990-91, U.S. military personnel were inoculated against a variety of biological threats; but not against smallpox. "There wasn't a concern in the first Gulf War," said Dr. Sue Bailey, former assistant secretary of defense for health affairs. Now, she said, "there is intelligence that tells us this is a higher risk." (1)

Bioterrorism experts paint frightening scenarios like these: Terrorists release weaponized smallpox into the air in crowded places, or a dozen people on a suicide mission infect themselves with smallpox and, when they are at their most contagious, walk around airports, infecting others. (1)

Vaccination Information

Edward Jenner's demonstration, in 1798, that inoculation with cowpox could protect against smallpox brought the first hope that the disease could be controlled. He believed that successful vaccination produced lifelong immunity to smallpox. (2) In the early 1950s, 150 years after the introduction of vaccination, an estimated 50 million cases of smallpox occurred in the world each year. This figure fell to around 10–15 million by 1967 because of vaccination.

When the World Health Organization (WHO) launched an intensified plan to eradicate smallpox, in 1967, smallpox threatened 60% of the world's population, killed every fourth victim, scarred or blinded most survivors, and eluded any form of treatment. (2)
Smallpox was finally pushed back to the horn of Africa and then to a single last natural case, which occurred in Somalia in 1977. In 1978, a fatal laboratory-acquired case occurred in the United Kingdom. The global eradication of smallpox was certified in December 1979, based on intense verification activities in countries, and was subsequently endorsed by the World Health Assembly in 1980. (2)

Vaccination Concerns:

Jenner's work was monumental in helping the eradication of smallpox, but his predictions about the vaccine's potency were incorrect; vaccination usually prevents smallpox infection for just over ten years.
In December 1999, a WHO Advisory Committee on Variola Virus Research concluded that, although vaccination is the only proven public health measure available to prevent and control a smallpox outbreak, current vaccine supplies are extremely limited. (2)
A WHO survey conducted in 1998 indicated that approximately 90 million declared doses of the smallpox vaccine were available worldwide. Storage conditions and potency of these stocks are not known. (2)

Furthermore, existing vaccines have proven efficacy but also have a high incidence of adverse side-effects. Scientists say the smallpox vaccine, based on decades-old technology, presents a risk of side effects that include death. Based on studies from the 1960s, experts estimate that 15 out of every 1 million people vaccinated for the first time will face life-threatening complications, and one or two will die. Reactions are less common for those being revaccinated. Using these data, vaccinating the nation could lead to nearly 3,000 life-threatening complications and at least 170 deaths. (2)
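Projections like those above follow from straightforward scaling of per-million rates. A minimal sketch of the arithmetic, under assumptions: the function name and the campaign size are illustrative, not from the article, and real projections also adjust for revaccinees and medical exclusions.

```python
def expected_adverse_events(population, rate_per_million):
    """Scale a per-million adverse-event rate up to a whole population."""
    return population * rate_per_million / 1_000_000

# Rates are from the 1960s studies cited above; the campaign size is an
# assumed round number for illustration only.
first_time_vaccinees = 200_000_000
complications = expected_adverse_events(first_time_vaccinees, 15)  # 3000.0
deaths_low = expected_adverse_events(first_time_vaccinees, 1)      # 200.0
```

The same scaling, applied to smaller first-phase groups (a half million military personnel, ten million responders), is how phase-by-phase risk estimates are produced.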

Four main complications are associated with vaccination, three of which involve abnormal skin eruption. Eczema vaccinatum occurred in vaccinated persons or unvaccinated contacts who were suffering from or had a history of eczema. In these cases, an eruption occurred at sites on the body that were at the time affected by eczema or had previously been so. These eruptions became intensely inflamed and sometimes spread to healthy skin. Symptoms were severe. The prognosis was especially grave in infants having large areas of affected skin. (2)

Progressive vaccinia occurred only in persons who suffered from an immune deficiency. In these cases the local lesion at the vaccination site failed to heal, secondary lesions sometimes appeared elsewhere on the body, and all lesions spread progressively until the patient died, usually 2–5 months later. As vaccination ceased in most countries prior to the emergence of HIV/AIDS, the consequences of the currently much larger pool of persons suffering from immunodeficiency were not reflected in recorded cases of progressive vaccinia. (2)

Generalized vaccinia occurred in otherwise healthy individuals and was characterized by the development, from 6–9 days after vaccination, of a generalized rash, sometimes covering the whole body. The prognosis was good. (2) Postvaccinial encephalitis, the most serious complication, occurred in two main forms. The first, seen most often in infants under 2 years of age, had a violent onset, characterized by convulsions. Recovery was often incomplete, leaving the patient with cerebral impairment and paralysis. The second form, seen most often in children older than 2 years, had an abrupt onset, with fever, vomiting, headache, and malaise, followed by such symptoms as loss of consciousness, amnesia, confusion, restlessness, convulsions and coma. The fatality rate was about 35%, with death usually occurring within a week. (2)

Historically, the live virus in the smallpox vaccine killed one or two people out of every 1 million who were vaccinated, and many others suffered debilitating side effects. The risk is heightened for those who have suffered from eczema or other skin diseases, as well as those whose immune systems have been compromised, such as HIV patients, transplant recipients and cancer patients. There are more people with those conditions today than there were 30 years ago, so mortality rates could be higher. This means that if every U.S. resident were vaccinated, 300 or more might die as a result. (2) Predictions are nearly impossible to make because of the high number of people with weakened immune systems, and the amount of individual undiscovered cases of HIV and AIDS.

Conclusion

President Bush's plan has three phases. First, a half million members of the armed forces and another half million healthcare workers will get vaccinated. Next, 10 million emergency response workers: emergency-room workers, police, firefighters, and ambulance crews will be vaccinated. The final phase, offering the vaccine to the public, will occur as soon as the FDA can license the vaccine. This will probably be in 2004. In case of a smallpox bioterror attack, the vaccine would be made widely available without licensing. (3)

"Smallpox eradication was a global campaign, and populations were protected by vaccination in every country. However, during the campaign, different forms of smallpox occurred, and different vaccines and vaccination techniques were used. The duration of protection can be influenced by the potency of the vaccine and the inoculation procedure used. These factors make it difficult to give firm, precise estimates that are relevant today, where populations no longer have widespread immunity, either from vaccination or from having survived the disease (patients who survived smallpox were immune for life)." (2)

With President Bush making decisions to vaccinate Americans, it is evident that we should be concerned about an outbreak of smallpox. In the case that smallpox is used as a bioweapon, emphasis must be placed on preventing epidemic spread. In doing so, it should be kept in mind that smallpox patients are not infectious during the early stage of the disease but become so from the first appearance of fever and remain so, though to a lesser degree, until all scabs have separated. Also, immunity develops rapidly after vaccination against smallpox. (1)
Isolation is essential to break the chain of transmission. In the case of a widespread outbreak, people should be advised to avoid crowded places and follow public health advice on precautions for personal protection. (2)

When the smallpox vaccination is licensed and offered voluntarily to the public, the urge to get vaccinated against the virus may be too hasty. According to the WHO, the risk of adverse events is sufficiently high that vaccination is not warranted if there is little or no real risk of exposure. (2) The decision whether or not to be vaccinated will not be an easy one for any individual. It would be helpful to know how imminent the threat of smallpox bioterrorism is. However, we are not always granted that knowledge, due both to governmental restrictions and to the potential for a completely unexpected attack.

That we have not immediately turned back to the notion of total vaccination demonstrates that the world has changed since the first Gulf War. This change is due not only to the rise of terrorism, but also to the fall of the Soviet Union, the spread of AIDS, and a fuller appreciation of medical risks. (1)


1)Smallpox: What you Need to Know, An informative site explaining how Bush's decisions will affect Americans; includes a great visual guide to the virus.

2)Communicable Disease Surveillance and Response, WHO Fact Sheet on Smallpox, More information than you know what to do with regarding Smallpox. Lists of facts and detailed information.

3)President Offers Smallpox Vaccine to All, An informative article about Bush's plan.


The Sun: A Silent Killer That We All Indulge In
Name: Anastasia
Date: 2002-12-13 13:04:49
Link to this Comment: 4085



Biology 103
2002 Third Paper
On Serendip

Sunburn is the inflammation of the skin caused by actinic rays from the sun or artificial sources. Moderate exposure to ultraviolet radiation is followed by a red blush, but severe exposure may result in blisters, pain, and constitutional symptoms. As ultraviolet rays penetrate the skin, they break down collagen and elastin, the two main structural components of the skin, a process that results in wrinkles caused by sun damage (7). In addition, the sun damages the DNA of exposed skin cells. In response, cells release enzymes that excise the damaged parts of the DNA and encourage the production of replacement DNA. At the same time, the production of melanin increases, which darkens the skin. Melanin, the pigment that gives skin its color, acts as a barrier to further damage by absorbing ultraviolet light. A suntan results from the skin's attempt to protect itself (8). Light-skinned people and infants are extremely susceptible to ultraviolet rays because they lack sufficient skin pigmentation to protect the skin from continuous damage.

The ultraviolet radiation in sunlight is divided into three bands: UVA (320-400 nanometers), which can cause skin damage and may cause melanomatous skin cancer; UVB (280-320 nanometers), stronger radiation that increases in the summer and is a common cause of sunburn and most common skin cancer; and UVC (below 280 nanometers), the strongest and potentially most harmful form. Much UVB and most UVC radiation is absorbed by the ozone layer of the atmosphere before it can reach the earth's surface (8). The depletion of the ozone layer is increasing the amount of ultraviolet radiation that can pass through it. The radiation that does in fact pass through the ozone layer is mostly absorbed by window glass or impurities in the air.

Even though it is dangerous, sunlight has good qualities. A small amount of sunlight is necessary for good health. Vitamin D is produced by the action of ultraviolet radiation on ergosterol, a substance present in the human skin and in some lower organisms, like yeast (3). The treatment or prevention of some skin disorders often includes exposure of the body to natural or artificial ultraviolet light. The radiation also kills germs and is widely used to sterilize rooms, exposed body tissues, blood plasma, and vaccines.

Ultraviolet radiation can be detected by the fluorescence it induces in certain substances. It may also be detected by its photographic and ionizing effects. The long-wavelength, "soft" ultraviolet radiation, which lies just outside the visible spectrum, is often referred to as black light (3). Low-intensity sources of this radiation are often used in mineral prospecting and in conjunction with brightly colored fluorescent pigments to produce unusual lighting effects.

The knowledge of ultraviolet radiation and the effects it has on skin has greatly increased in recent years. Repeated sunburn is now considered a major risk factor for melanoma. Melanoma is the most virulent type of skin cancer and the type most likely to be fatal, and its incidence is increasing around the world. There also appears to be a hereditary factor in some cases. Although light-skinned people are the most susceptible, melanomas are also seen in dark-skinned people. Melanomas arise in melanocytes, the melanin-containing cells of the epidermal layer of the skin. In light-skinned people, melanomas appear most frequently on the trunk in men and on the arms or legs in women. In African Americans, melanomas appear most frequently on the hands and feet (1). It is recommended that people examine themselves regularly for any evidence of the characteristic changes in a mole that could raise a suspicion of melanoma. These include asymmetry of a mole, a mottled appearance, irregular or notched borders, and oozing or bleeding or a change in texture (2). Surgery performed before the melanoma has spread is the only effective treatment for melanoma.

Basal and squamous cell carcinomas are the most common types of cancer. Both arise from epithelial tissue. Light-skinned, blue-eyed people who do not tan well but who have had significant exposure to sun rays are at the highest risk. Both types usually occur on the face or other exposed areas. Basal cell carcinoma typically is seen as a raised, sometimes ulcerous nodule. The nodule may have a pearly appearance. It grows slowly and rarely spreads, but it can be locally destructive and disfiguring. Squamous cell carcinoma is typically seen as a painless lump that grows into a wart like lesion. It may also arise in patches of red, scaly sun-damaged skin called actinic keratoses (1). If it spreads, it can lead to death.

Basal and squamous cell carcinomas are easily cured with appropriate treatment. The lesion is usually removed by scalpel excision, freezing, or micrographic surgery. Micrographic surgery is the most complicated of the three: thin slices of the lesion are removed and examined for cancerous cells under a microscope until the samples come back clear. If the cancer arises in an area where surgery would be difficult or disfiguring, radiation therapy may be an option.

The National Weather Service's daily UV index predicts how long it would take a light skinned American to get a sunburn if exposed, unprotected, to the noonday sun, given the geographical location and the local weather. It ranges from 1 (about 60 minutes before the skin will burn) to a high of 10 (about 10 minutes before the skin will burn) (7). Before going out into the sun, whether it is to walk the dog or lay out on the beach, it is important to know what degree of sun intensity you are up against. Also, no matter what you might be doing, if you are going to be in the sun it is essential to use some form of protection.

The easiest and most successful strategy for protection from the harmful effects of sunlight is avoidance. Studies of UV intensity have concluded that 30% of the total daily UV flux hits the earth between 11 AM and 1 PM (4). A good strategy would be to plan activities, trying to avoid this peak exposure time. "A useful rule of thumb is that if your shadow is shorter than you, the risk of sunburn is substantial," (4).

A second extremely important skin damage prevention method is applying sunscreen to exposed body parts before sun exposure occurs. Sunscreens block or absorb UV light. Zinc oxide, the white opaque cream that most lifeguards wear on their noses, is an excellent form of sunscreen that blocks UV light entirely. The first commercial sunscreen was developed in 1928, and contained benzyl salicylate and benzyl cinnamate (10). The most common absorption chemical in sunscreen during the 1950's and 60's was PABA (para-aminobenzoic acid) (9). It has since fallen out of favor because it absorbs the relevant wavelengths of UV light less efficiently than more recent active ingredients (10). Today salicylates and cinnamates are found in most UVB protectants.

All sunscreens are labeled with an SPF. The SPF acts like a multiplying factor. If your skin would normally be fine after spending ten minutes in the sun, then you should apply an SPF ten sunscreen to any exposed body parts. Your skin should be fine for one hundred minutes in the sun. In order for sunscreen to work, it must be applied evenly, enough must be used, and it must stay on the skin. It should be applied about half an hour before going out into the sun, in order for it to bind to the skin (10).
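The multiplying-factor rule can be written out directly. A minimal sketch, assuming ideal conditions (even, sufficient application that stays on the skin); the function name is illustrative:

```python
def protected_minutes(unprotected_burn_minutes, spf):
    """Approximate safe sun time with sunscreen: the SPF multiplies the
    time skin could spend in the sun unprotected before burning."""
    return unprotected_burn_minutes * spf

# Skin that burns after 10 unprotected minutes, wearing SPF 10:
print(protected_minutes(10, 10))  # 100
```

In practice this is an upper bound: sweat, water, and uneven application all reduce the effective protection below the labeled multiple.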

Another strategy that should not be taken for granted is wearing protective clothing. Clothing is generally a good UV blocker, although lighter fabrics, worn in the summer in order to stay cooler, may not have a great protective value in comparison to heavier fabrics such as denim (5). Jevtic showed that a cotton T-shirt has an SPF value of around 15, which decreases when the fabric becomes wet. "Interestingly, a cotton T-shirt may actually increase in SPF value when it is washed a few times due to shrinkage in the hole size of the fabric mesh," (1). In order to test your clothes to see just how protective they really are, hold a clothing item up to a strong light source such as a light bulb. If you can see images through it, most likely the SPF value of the item is less than 15. If light gets through, but you can't really see through it, it probably has an SPF value between 15 and 50. If it completely blocks the light, it probably has an SPF value of over 50 (1). Hats are also a great option for protective clothing. They cover not only the head, but the neck as well, which gets almost continual sun exposure, even in the winter months. Hats have even been proven to reduce the risk of multiple skin cancers.

More than half of all new cancers are skin cancers. More than one million new cases of skin cancer will be diagnosed in the United States this year. About 80% of the new skin cancer cases will be basal cell carcinoma, while only 16% are squamous cell carcinoma and 4% are melanoma. An estimated 9,600 people will die of skin cancer this year: 7,400 from melanoma and 2,200 from other skin cancers. One person dies of melanoma every hour. Melanoma is the fifth most common cancer in men and the sixth most common cancer in women (2). With statistics like these, which were taken from the American Cancer Society's 2002 Facts and Figures, I hope you think twice before going out into the sun unprotected.

In conclusion, it is true that the greater the skin pigmentation the better as far as sun protection goes. It does not follow that intentional tanning specifically to achieve an increase in protective pigmentation is the best sun protection strategy. Recent evidence suggests that tanning only occurs after DNA has been damaged. DNA damage is the trigger for the tanning response, meaning that a person does not begin to tan until after they have already caused damage to themselves. In addition, tanning with high intensity UVA, which is used in tanning parlors, is more harmful to the skin than tanning with natural sunlight (6). From this, one can conclude that there really is no safe level of sun exposure.


References

1)Sun Damage and Prevention, helpful hints on protection

2)Skin Cancer Fact Sheet, important skin cancer facts everyone should know

3)Hidden Sun Damage, things you might not know

4)Protect Yourself From the Sun, how to do it right

5)Think the Sun is Less Dangerous in Winter than in Summer? Think Again!, did you know

6)Tanning Salon Exposure Can Lead to Skin Cancer, the real truth about tanning salons

7)Malignant Melanoma Fact Sheet, what you need to know about melanoma

8)An Introduction to Skin Cancer, a basic overview on skin cancer

9)How Sunscreen Works, behind the science of sunscreen

10)Sunscreens & Sunburns, one helps, one hurts


Fire and Ice ... and Darkness
Name: Laura Bang
Date: 2002-12-13 16:37:30
Link to this Comment: 4089

Biology 103
2002 Third Paper
On Serendip

Fire and Ice ... and Darkness:

Dark Matter, Dark Energy, and the End of the Universe

Laura Bang

     "Astronomers have dark imaginations." (6) Throughout the past century, as new technology and new theories gave science a new view of space, astronomers became aware that their conceptions of the universe did not agree with new observations. When astronomers tried to determine the mass of the universe, they found conflicting answers. To solve this problem, scientists imagined a kind of matter that we cannot see, and they decided to call it "dark matter." (1) In addition, while trying to measure the rate at which the expansion of the universe was slowing down, scientists found that instead the universe's rate of expansion was speeding up. This led scientists to imagine a kind of "dark energy." (6) Dark matter and dark energy are both important in determining the mass and density of our universe-and these, in turn, are important in determining the fate of our universe. (5)

     One of the most intriguing mysteries for astronomers today is that approximately 90% of our universe is invisible. Astronomers decided to call this invisible matter "dark matter." (1) It all began when astronomers were trying to determine the mass of galaxies. There are two possible methods for this calculation: a) using the brightness of a galaxy to calculate the mass, or b) looking at how fast the stars in a galaxy are moving -- the faster a galaxy is spinning, the more mass it contains. (1) When astronomers in the 1930s actually calculated these numbers using both of the above methods, however, their answers were different -- even though both methods should have yielded the same answers. (1) This would not have been so much of a problem if the difference between the answers had been small, but in fact the answers were hugely different, leading astronomers to conclude that there must be a lot of "dark matter" in our universe that we simply cannot see (other than its gravitational effects). (1)
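     The rotation-based method rests on Newtonian dynamics: for a star on a roughly circular orbit, setting gravitational attraction equal to centripetal force gives the mass enclosed within the orbit, M = v²r/G. A minimal sketch of that estimate (the function name and the sample orbit values are illustrative assumptions, not figures from the essay):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(orbital_speed, radius):
    """Mass (kg) enclosed within a circular orbit of the given
    speed (m/s) and radius (m): M = v^2 * r / G."""
    return orbital_speed ** 2 * radius / G

# A roughly Sun-like galactic orbit: v ~ 220 km/s at r ~ 2.6e20 m (~8.5 kpc)
mass_kg = enclosed_mass(2.2e5, 2.6e20)  # ~1.9e41 kg, on the order of 1e11 solar masses
```

When masses computed this way come out far larger than the mass implied by a galaxy's brightness, the difference is attributed to dark matter.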

     The idea of dark matter may seem incredible at first. How can nearly 90% of our universe be invisible to us? To help clear this up, close your eyes and picture a city at night. In this city you are looking at a skyscraper. Several of the windows are lit, but most of them are not since it is after normal office hours. You can only see the windows that are lit, yet you are sure of the existence of the other windows that make up the rest of the building. This is our universe: the lit windows are the stars and other matter that we can see, while the dark windows are the dark matter of our universe. (1)

     What exactly is all this dark matter, anyway? There are three possible contributors to the dark matter problem: MACHOs, WIMPs, and neutrinos (strange names, but fascinating things).

     MACHO stands for "Massive Astronomical Compact Halo Object." (2) MACHOs are "halo" objects because they exist on the outer rims -- or "halos" -- of galaxies. These are the "heavyweight" components of dark matter and they consist of "massive dark bodies such as planets, black holes, asteroids or failed stars (brown dwarfs)." (2) These objects do not give off their own light and are not near enough to reflect the light of light-emitting stars, so they appear "invisible" when viewed from large distances. (2) Recent estimates, however, suggest that MACHOs could account for only about 20% of the dark matter. (2) The rest, then, is left up to WIMPs and neutrinos.

     "A WIMP is a Weakly Interacting Massive Particle." (3) Scientists have not found any actual WIMPs as of yet, however, but they think that there are millions of WIMPs flying around all the time. In spite of the fact that they are labeled "weak," scientists believe that they are actually quite strong-able to pass through solid objects without slowing down or stopping. Because of this ability to pass through solid objects, several research teams around the world are looking for WIMPs in underground laboratories. Why underground? Since most particles flying through the air are not able to pass through solids, scientists have a better chance of finding WIMPs underground where, after passing through the earth's rocky surface, they will in theory be able to see evidence of WIMPs without the interference of other particles. Right now, however, scientists are still looking for these elusive particles, which they speculate could account for about 90% of dark matter. (3)

     The third possible contributors to the "missing matter" problem are neutrinos. A neutrino is "a tiny elementary particle, smaller than an atom with no electric charge and no mass. All this particle [does is] carry energy as it zip[s] along at the speed of light." (4) These particles have an interesting origin: a scientist by the name of Wolfgang Pauli made them up in order to make his calculations work out. After further speculation, however, scientists agreed that neutrinos do in fact exist. A 1998 study also discovered that a specific type of neutrino did in fact have a small mass, which allows this particle to be a possible contender for the missing matter of our universe. (4) Astronomers believe that an abundance of neutrinos could account for around 25% of dark matter. (4)

     Astronomers are still working on fully understanding the dark matter of our universe, but while working on this problem they discovered another problem: the universe is still expanding. After so many billions of years of expansion since the Big Bang that created it, the universe should be slowing down, but it's not -- what's more, it's speeding up. Astronomers were mystified when they first discovered this, and in order to make sense of it they are now speculating on the existence of a kind of "dark energy." (6)

     According to the laws of gravity and the gravitational pull exerted by each object in the universe, after expanding for billions of years the universe should slow down. Going along with this idea, astronomers attempted to calculate the rate at which the universe's expansion was slowing down. (6) Instead, while looking at the light produced by two distant supernovas, they found that the expansion rate is increasing. In order to make sense of this new information, astronomers came up with the idea of dark energy. (6)

     The possibility of dark energy came as a surprise to scientists. Michael S. Turner of the University of Chicago summed it up: "For 70 years, we've been trying to measure the rate at which the universe slows down. We finally do it, and we find out it's speeding up." (6) Yet, as with many new discoveries, finding out that they were wrong only added to the excitement. Andreas J. Albrecht of the University of California, Davis, stated, "This is the most exciting endeavor going on ... right now." (6) Scientists have only just begun to study dark energy, but they do know that dark energy plays a key role in how our universe will end and other such mysteries of deep space. (6)

     Dark energy could be said to be a kind of "antigravity," but a more accurate way to describe it is to imagine it as "the flip side of ordinary gravity." (6) One property of dark energy would be negative pressure. Something that has negative pressure resists being stretched, "as a coiled spring does: pull on the spring and it pulls back." (6) Therefore, while normal gravity pulls things together, dark energy pushes things outward, causing the increased expansion rate of the universe. (6)

     There are two leading candidates for what dark energy could be. One is called "vacuum energy," which arises from complicated theories of physics and empty space (also called a vacuum; hence the name), and the other is called "quintessence," which posits that other dimensions contribute to dark energy. (6)

     A brief look at the properties of vacuum energy reveals that it could be related to quantum theory. (6) Quantum theory holds that a vacuum "seethes with energy as pairs of particles and antiparticles pop in and out of existence." (6) In addition, the Russian astrophysicist Yakov B. Zeldovich found in 1967 that "the energy associated with this nothingness [a vacuum] has negative pressure." (6)

     Quintessence, on the other hand, has to do with multiple dimensions. We live in four dimensions that we can perceive: three spatial dimensions, which give us depth and the world around us, plus the fourth dimension of time. (6) Andreas J. Albrecht and Constantinos Skordis of the University of California, Davis, proposed that "the repulsive force [of dark energy] may come from other, unseen dimensions or even from other universes beyond our own." (6)
     Since all of this has yet to be confirmed, several current studies hope to determine a) whether the universe's expansion really is accelerating, and b) whether vacuum energy or quintessence is responsible for that acceleration. (6) With their imaginings of dark matter and dark energy, and how they relate to the end of the universe, scientists seem quite morbid. So how exactly do dark matter and dark energy relate to the end of the universe?

     There are three possibilities for the future of the universe. The first possibility is the "Big Freeze": the universe will continue expanding forever, and as the stars burn out and drift ever farther apart, the planets would eventually freeze, deprived of any life-giving heat. The second possibility is the "Big Crunch": gravity will eventually pull the universe back together, resulting in all the planets and stars eventually colliding with each other. The third possibility is that the universe will reach equilibrium and come to a halt, neither expanding nor contracting. This all depends on how much of a force dark energy is exerting on the universe, and also on the density of the universe. (5)

     In order to determine the density of the universe, astronomers need to determine how much dark matter there is. The symbol scientists use for the density of the universe is the last letter of the Greek alphabet, omega (which means "the end"). The critical density is omega=1; this is the density needed for the universe to come to equilibrium. If omega<1, then the universe will continue expanding toward the Big Freeze. If, on the other hand, omega>1, then gravity will pull the universe back inward ending in the Big Crunch. "So our destiny depends on our density" (5) -- it is interesting to note that "destiny" and "density" are anagrams of each other (isn't language awesome?). The most recent estimate of the universe's density is omega approximately equals 0.3, which means that, as far as we know right now, the universe is heading toward the Big Freeze. (5)
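The three outcomes above follow directly from comparing omega to the critical density. As a purely illustrative sketch (the function name and the tolerance used to decide "equilibrium" are my own choices, not anything from the sources), the rule can be written in a few lines of Python:

```python
def fate_of_universe(omega, tolerance=1e-9):
    """Classify the fate of the universe by its density parameter omega.

    omega < 1 -> eternal expansion (the "Big Freeze")
    omega = 1 -> equilibrium (expansion coasts to a halt)
    omega > 1 -> gravitational recollapse (the "Big Crunch")
    """
    if omega < 0:
        raise ValueError("a density ratio cannot be negative")
    if abs(omega - 1.0) <= tolerance:
        return "equilibrium"
    return "Big Freeze" if omega < 1.0 else "Big Crunch"

# The recent estimate quoted above, omega ~ 0.3:
print(fate_of_universe(0.3))   # Big Freeze
```

Note that this toy rule considers density alone; as the essay points out, the push of dark energy complicates the picture.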

     However, as it turns out, none of that really matters because in approximately four billion years the Sun will expand and obliterate Earth; and at about the same time, the nearby galaxy Andromeda will crash into our Milky Way galaxy. (5) So right now it seems that our "world will end in fire," "but if it had to perish twice" it looks as though "ice ... would suffice." (7)

Some say the world will end in fire,
Some say in ice.
From what I've tasted of desire
I hold with those who favor fire.
But if it had to perish twice,
I think I know enough of hate
To say that for destruction ice
Is also great
And would suffice.

~ Robert Frost (7)


References

1) Dark Matter

2) MACHOs

3) WIMPs

4) Neutrinos

5) The End of the Universe

6) A Dark Force in the Universe (April 7, 2001)

7) Frost, Robert. "Fire and Ice."


The Science of Attraction
Name: Mahjabeen
Date: 2002-12-14 16:35:13
Link to this Comment: 4093



Biology 103
2002 Third Paper
On Serendip

Attraction. Such a powerful word. There is something so incredibly attractive about this word or maybe it's just growing up knowing the significance of this word that makes the word itself so attractive. So what is attraction? Butterflies in the stomach, racing of the heart, goose bumps on the skin or shivers down the spine? This paper will look at the "scientific" factors that make a man feel attracted to a woman or vice versa.

What makes human attraction so fascinating are all the elements associated with it. It is not the simple mating ritual performed by most, if not all, living and reproducing creatures on planet earth. The processes leading to human attraction often give rise to feelings of love and hence companionship, emotions largely limited to the human realm.

Scientists are trying to break down this enormously complex phenomenon of attraction by coming up with a number of feasible reasons to explain why we are attracted to some people while we don't spare even a glance at others. Theories have included proportionate figures, facial symmetry, pheromones, upbringing and genetics. Some theories, such as the idea that men feel attracted to women according to their level of fertility, sound quite chauvinistic but are to some extent true, and the same logic applies in reverse: women, like men, might tend to go for men who are more fertile.

Evolutionary fitness might be a criterion for attractiveness. According to evolutionary psychologists, many of the traditional and universal qualities which we link to sex appeal are grounded not merely in assimilated social and cultural traditions, as we have been told, but are deeply rooted in our basic physiological make-up: an unconscious, innate drive to do our part for the survival of the species. If so, the features which draw us to dates and mates would appear to reflect reproductive and parenting potential. How, then, might we differentiate between those which are inborn and those instilled by our cultures? (1)

"Judging beauty has a strong evolutionary component. You're looking at another person and figuring out whether you want your children to carry that person's genes," says Devendra Singh, a professor of psychology at the University of Texas. The scientific properties of attraction (to whatever extent they are involved) can be explained by the simple will to produce viable offspring, also known as healthy kids.

Beyond this underlying principle of attraction, one begins to wonder how, and on what level, one can judge the fitness of another person. Certainly, a person smitten for the first time at a bar doesn't ask for a genetic sequence and specifics about that special someone's immune system before approaching him or her. Yet some of that information is received and interpreted at a subconscious level. (2)

Lust leads to attraction. Lust is governed by testosterone and estrogen, says anthropologist Helen Fisher of Rutgers University. Testosterone is not confined only to men; it has also been shown to play a major role in the sex drive of women. Although the reproductive parts are often ascribed credit (or blame) for human sexual attraction, many scientists believe that sexual attraction begins in a pea-sized structure called the hypothalamus, deep in the primitive part of the human brain. This tiny bundle of nerves sets off an exciting chain of events when one person perceives another to be sexually attractive. The hypothalamus instantly notifies the pituitary gland, which rushes hormones to the sex glands. The sex glands in turn promptly react by producing estrogen, progesterone, and testosterone. Within seconds, the heart pounds and muscles tense; he or she feels dizzy, light-headed, and the tingling of sexual arousal. This chemical-driven high induces moods which swing from omnipotence and optimism to anxiety and pining. A malfunctioning hypothalamus can have bizarre effects on one's romantic love life, including irrational and distorted romantic choices, obsessions, idealization, and separation anxiety. The height of romantic passion creates illusions of well-being, feelings of possessiveness, and happily-ever-after fantasies within the psyche of the new lover. (3)

Fisher believes the volatile phase of romantic attraction is caused by changes in signaling within the brain involving a group of neurotransmitters called monoamines: dopamine (also activated by cocaine and nicotine), norepinephrine or adrenaline (which makes the heart race and the body sweat), and serotonin, which can actually send us temporarily insane. Next come the hormones oxytocin and vasopressin, which forge the bonds of attraction by bringing attachment into the picture. Oxytocin is released by the hypothalamus during childbirth and also helps the breasts express milk; it helps cement the strong bond between mother and child. It is also released by both sexes during orgasm, and it is thought that it promotes bonding when adults are intimate. The theory goes that the more sex a couple has, the deeper their bond becomes. Vasopressin is an important controller of the kidney, and its role in long-term relationships was discovered when scientists looked at the prairie vole. In prairie vole society, sex is the prelude to a long-term pair bonding of a male and female. Prairie voles indulge in far more sex than is strictly necessary for the purposes of reproduction, and it was thought that the two hormones released after mating, vasopressin and oxytocin, could forge this bond. In an experiment, male prairie voles were given a drug that suppresses the effect of vasopressin. The bond with their partner deteriorated immediately as they lost their devotion and failed to protect their partner from new suitors. (4)

When it comes to choosing a partner, are we at the mercy of our subconscious? Researchers studying the science of attraction draw on evolutionary theory to explain the way humans pick partners. It is to our advantage to mate with somebody with the best possible genes. These will then be passed on to our children, ensuring that we have healthy kids, who will pass our own genes on for generations to come.

When we look at a potential mate, we are assessing whether we would like our children to have their genes. There are two ways of doing this that are currently being studied, pheromones and appearance. (5)

Human pheromones are a hot topic in research. They are odorless chemicals detected by an organ in the nose. Pheromones are known to trigger physical responses, including sexual arousal and defensive behavior, in many species of insects, fish and other animals. There has long been speculation that humans may also use these chemicals to communicate instinctive urges. Women living together often synchronize their menstrual cycles because they secrete an odorless chemical in underarm sweat.

Pheromones are already well understood in other mammals, especially rodents. These animals possess something called a 'vomeronasal organ' (or VNO) inside their noses. They use it to detect pheromones in the urine of other rats and use this extra sense to understand social relationships, identify the sex of fellow rats and find a mate.

In human embryos these organs exist but they appear to perform no function after birth. Now, scientists at Rockefeller University in New York and Yale University in Connecticut believe they have found a gene which may create pheromone receptors. A receptor is an area on a cell that binds to specific molecules. Called V1RL1, the gene resembles no other type of mammalian gene and bears a strong similarity to those thought to create pheromone receptors in rats and mice. (6)

In 1995, Claus Wedekind of the University of Bern in Switzerland asked a group of women to smell some unwashed T-shirts worn by different men. What he discovered was that women consistently preferred the smell of men whose immune systems were different from their own. This parallels what happens with rodents, which check out how resistant their partners are to disease by sniffing their pheromones. So it seems we are also at the mercy of our lovers' pheromones, just like rats.

At the University of Chicago, Dr Martha McClintock has shown in her own sweaty T-shirt study that what a woman wants most is a man who smells similar to her father. Scientists suggest that a woman being attracted to her father's genes makes sense. A man with these genes would be similar enough that her offspring would get a tried and tested immune system. On the other hand, he would be different enough to ensure a wide range of genes for immunity. (7)

Alarmingly, scientists have found that the oral contraceptive pill could stop a woman producing pheromones and undermine her ability to pick up the right chemical
signals from men, hence making women choose men with whom they cannot produce children. Scientists believe pheromones may help people choose biologically compatible mates. (8)

Appearance could be another indicator of the quality of a person's genes. Research suggests that there are certain things we all look for, even if we don't know it.

It is thought that asymmetrical features are a sign of underlying genetic problems. Numerous studies in humans have shown that men in particular go for women with symmetrical faces. The preference in women for symmetry is not quite so pronounced. Women are also looking for a man's ability to offer food and protection. This might not be indicated in their genes, but in their rank and status, for example. (9)

Consistent with evolutionary theory, many of these sex-stereotypical traits reflect what visually appear to be signs of reproductive potential. The small jaw preferred in females and the heavy jaw and chin preferred in males reveal the effects of the female hormone estrogen and the male hormone androgen, respectively. With evolutionary theory in mind, it should not be surprising that men find visual cues to attractiveness more relevant in selecting a mate than women do.

Studies have shown men prefer women with a waist to hip ratio of 0.7; this applies whatever the woman's overall weight. This ratio would seem to make sense as an indicator of a woman's reproductive health. When women age their waist tends to become less pronounced as they put on fat around the stomach. This coincides with them becoming less fertile. The "hourglass" figure, research shows, is dominantly preferred in a woman rather than any other body shape.

Interestingly, scientists have found that female reproductive capacity shows a positive correlation with a sharp contrast between waist and hips. Preferred female facial features (wide-set large eyes, small nose and jaw) are imitative of youth and untapped reproductive potential. Similarly, the muscular, angular T-shaped male figure, assertive behaviors, and deep voice most universally preferred by women are visually indicative of higher levels of the male hormone testosterone. (10)

It's interesting how many married couples look quite similar. Studies have shown that more than anything we prefer somebody who looks just like we do. Research has uncovered that there is a correlation in couples between their lung volumes, middle finger
lengths, ear lobe lengths, overall ear size, neck and wrist circumferences and metabolic rates. The latest studies indicate that what people really, really want is a mate that looks like their parents. Women are after a man who is like their father and men want to be able to see their own mother in the woman of their dreams.

At the University of St Andrews in Scotland, cognitive psychologist David Perrett studies what makes faces attractive. He has developed a computerized morphing system that can endlessly adjust faces to suit his needs. Students in his experiments are left to decide which face they fancy the most. Perrett has taken images of students' own faces and morphed them into the opposite sex. Of all the faces on offer, this seems to be the face that the subject will always prefer. They can't recognize it as their own; they just know they like it. Perrett suggests that we find our own faces attractive because they remind us of the faces we looked at constantly in our early childhood years - Mom and Dad. Even the pheromone studies are now showing a preference for our parents' characteristics, where we prefer smells which remind us of our parents. (11)

Perhaps such a genetic predisposition to feel attracted to people with similar facial structures, or to those resembling our parents' features, explains why the majority of humans tend to stick to their own races, cultures and backgrounds. I would also conclude that attraction is not only an integration of chemistry and genetics but also of feelings and emotions which can sometimes be quite inexplicable. While science attempts to answer all these questions, it must be taken into account that there may remain queries for which there is no scientific explanation. It is not surprising at all that such extensive research has been done on attraction, since it is one of the governing factors of everyday life. Attraction, however, should not be confused with love, since love takes up an entirely different though related dimension. While elements such as pheromones, facial symmetry and genetics may be able to explain attraction, much more research must be conducted if the emotion of love is to be explained, and even then researchers might find themselves quite empty-handed. After all, love is not based on physical attraction alone; mental and emotional attraction must also be considered if research is to be conducted in the field of love. And while most scientists deal with heterosexual attraction, homosexuality and bisexuality and the circumstances behind them remain open to interpretation and discussion.

Nevertheless, ongoing research on the chemistry and biology behind human attraction and love will continue to make new discoveries and shed some light on why the boy
next door suddenly seems more appealing than Hugh Jackman.

References


(1) Evolutionary theory of Sexual Attraction

(2) The Science of Attraction

(3) Evolutionary theory of Sexual Attraction

(4) Manipulating the Chemistry of Attraction

(5) BBC:The Science of Love

(6) Secrets of Human Attraction

(7) Sensual Signals

(8) The Magic of Sexual Attraction

(9) What makes you fancy someone?

(10) Evolutionary theory of Sexual Attraction

(11) BBC-Hot topics-Love-Attraction

For Fun: Faceprints


Pheromones
Name: Elizabeth
Date: 2002-12-14 20:57:18
Link to this Comment: 4094



Biology 103
2002 Third Paper
On Serendip

Often, animals wish to send messages to one another without making a sound.
One method of transmission for such messages is through pheromones: strong chemical
signals received by nerve cells in the nose, including the vomeronasal organ, another
structure within the nose (2). Pheromones are detected only by members of the same
species, and the signals are interpreted by the hypothalamus region of the brain. The
presence of pheromones has been confirmed in many types of insects and other animals,
but researchers are still unsure of whether or not mammals, and in particular humans,
transmit or respond to pheromones. Also, researchers are still working to pinpoint the
exact function of pheromones in humans. Although also linked to communications
regarding territory or food location, scientists believe pheromones in most animals are
primarily linked to sexual attraction. Therefore, the discovery of certain types of
pheromones in humans has allowed a huge market to develop based on these mysterious
chemicals. Such products promise to make the wearer more attractive to the opposite sex.
However, as research on pheromones is relatively young, it remains to be seen
whether one can manipulate one's sexual attractiveness with the aid of bottled
pheromones.

The discovery of the first type of pheromone took place in 1956. A German team
of researchers identified and isolated a powerful sexual attractant in female silkworms
which caused curious effects in the males of the species. When sensing the presence of
the pheromone, named bombykol after the species name of the silkworm moth, a male
moth would begin a frenzied mating dance. Researchers determined that this pheromone,
although odorless, must communicate a strong signal of sexual availability from the
female to the male, thus initiating reproduction. Scientists studied the chemical makeup
of bombykol extensively, determining that the substance consists of a primary alcohol,
unlike other moth pheromones, which were chemically similar to fatty acids. Females
have a reserve of the chemicals which produce the bombykol pheromone in their sex
glands and, when hoping to attract a mate, they release part of their reserve (1).
Pheromones are extremely powerful. In fact, researcher Lewis Thomas estimated "it has
been soberly calculated that if a single female moth were to release all the bombykol in
her sac in a single spray, all at once, she could theoretically attract a trillion males in the
instant" (3). Moths are not the only creatures to communicate by pheromones.
Pheromone secretions of the same compound produced by silkworm moths have been
found in samples of elephant urine. These pheromones only appear in a female
elephant's urine just before ovulation, announcing her fertility to the surrounding males.

Of course, human scientists were curious to see if such a powerful sexual
attractant was part of their reproductive ritual. If indeed it was, many hoped to
manipulate the effects of pheromones to improve their love lives. The first evidence of
pheromones in humans came in 1971, courtesy of a ground breaking study by
biopsychologist Martha K. McClintock (3). McClintock ran a study of women living in
college dormitories, through which she discovered that groups of women living together
gradually develop a synchronized menstrual cycle. Some have theorized that this
synchronism was intended to foster genetic diversity, as one man would be unable to
impregnate every woman in a prehistoric tribe if those women were only fertile at the
same times. During a series of follow-up tests to this study, McClintock attempted to
determine whether this curious effect was triggered by pheromones, and if so, whether
pheromones could affect the length of a woman's ovulation and menstrual cycle. In
order to do so, McClintock devised a complicated experiment which required test
subjects to wear a gauze pad in their armpit. From these pads, McClintock harvested
perspiration, masked its odor, and dabbed the solution under other test subjects' noses.
The results showed that this mixture did indeed affect the menstruation cycle of the
subject, but only if administered within a few days prior to ovulation. If the perspiration
came from a woman who had yet to ovulate that month, the solution shortened the
subjects' period by a couple of days. However, if the sample came from an ovulating
woman, the test subjects' period was delayed by a day or so. The control group exhibited
no change. This study seemed to prove the existence of human pheromones, but left
many questions unanswered, especially regarding the chemical makeup of human
pheromones, the function of such chemical messages, and whether or not males emit
sexual signals to their prospective mates (4).


A large cosmetics industry has developed with hopes of cashing in on the
speculation that pheromones act as sexual attractants in humans. Pheromone products
claim to enhance one's popularity with members of the opposite sex by increasing the
amount of pheromones one emits with a simple topical application of concentrated
chemicals whose makeup is similar to the chemical structure of animal pheromones (5).
Encouraged by studies which hypothesize that those who emit an abundance of "sex
pheromones" tend to be more attractive to members of the opposite sex, consumers buy
these products as a new, biological approach to the age old quest to attract mates. Man
has used scent as an aphrodisiac for centuries, but this market becomes a little trickier
when pheromones are involved, due to their inherent lack of scent. Past research has
proven that humans react to strong and distinctive chemical hormones called
androstenones, present in both genders, but primarily associated with males. In turn,
many popular fragrances, such as musk and other perfumes, derive their scents in part
from the scents of androstenones. However, these compounds, unlike pheromones,
derive their power from an identifiable odor (6). Consumers are less likely to buy an
odorless attractant, unless significant scientific research solidifies its value. Nevertheless,
many products have appeared on the market which claim to use pheromones to attract
lovers. Most such products, whether aimed at attracting men or at attracting women,
are based on androstenol and closely related androstene compounds.
The effectiveness of such products is debatable, but their cost is uniformly high.
Although no side effects have been recorded from the use of topical pheromones, making
pheromone products seem safe enough for casual human use, it is undoubtedly a huge
waste of money to buy a product which may or may not deliver its intended effect.
Unlike insects and other animals, who exhibit highly predictable behavior, humans are
much less uniform in their reactions to stimuli. There is some evidence that humans
react in varying degrees to the presence of pheromones; a low reaction to pheromones
may be due to malfunctions in the vomeronasal organ. Effectiveness also
depends on the concentration of the pheromone in the solution. Those products which
boast a higher concentration of pheromones have a better chance of attracting those
members of the opposite sex who react strongly to pheromones. However, the products
with the highest concentration of pheromones also come with the highest price tag.

Although great advances have been made in the study of pheromones, it is still
too early to market effective pheromone products commercially. Before such products
can provide reliable results, scientists must pinpoint the chemical structure of human
pheromones and identify their exact function in human beings. Researchers have been
able to discover such information regarding pheromones in insects and other animals, so,
given the proper amounts of time and funding, they should be able to do the same for
humans. Although humans exhibit far less uniform behavior than insects, which makes it
difficult to predict the exact reactions of every human to pheromones, it would not be
impossible for scientists to devise a
theory regarding the likely outcome of exposure to certain pheromones. Such a discovery
would help regulate the cosmetic pheromone industry, which in turn would make their
products more useful for humans.

References

1) About Pheromones
2) Study finds proof that humans react to pheromones
3) Secret Sense in the Human Nose: Pheromones and Mammals
4) Nailing Down Pheromones in Humans
5) Pheromones (Human Pheromones)
6) Scent as Aphrodisiacs


Lou Gehrig's Disease
Name: Kathryn Ba
Date: 2002-12-15 10:01:29
Link to this Comment: 4098



Biology 103
2002 Third Paper
On Serendip

Most people have a "clumsy day" every now and then, when no matter how hard the person tries, he or she cannot avoid tripping or dropping things. What if "clumsy days" happened on a regular basis, and in addition to dropping and tripping over everything, the person experienced severe muscle fatigue, cramping, slurred speech, and/or periods of uncontrollable laughing or crying? This situation merits a visit to the doctor's office, and for a little over 5,600 people in the United States every year, it ends in a diagnosis of Amyotrophic Lateral Sclerosis (ALS), commonly known as Lou Gehrig's disease. The baseball player Lou Gehrig brought national attention to the disease when he was diagnosed with it in 1939 (1). This essay will examine the symptoms associated with ALS, the three types of ALS, and possible causes and treatment options for the disease. A discussion will follow, examining the applicability and actual benefits of current treatment options.

"Amyotrophic" literally means "no muscle nourishment." "Lateral" refers to the area in a person's spinal cord where portions of the nerve cells that nourish the muscles are located, and "sclerosis" describes the scarring or hardening in this region. The muscles are not nourished because as motor neurons degenerate, they cannot send impulses to the muscle fibers that normally result in muscle movement. If muscles do not receive messages to function they begin to waste away, or atrophy, leading to a variety of complications, paralysis, and ultimately death. Because ALS only attacks motor neurons, the senses of sight, touch, hearing, taste and smell are not affected. Many patients are not impaired in their minds and thoughts, which remain sharp despite the progressive degeneration of the body (1). All patients diagnosed with ALS eventually die, although survival times differ. Half of all ALS patients die within 18 months of diagnosis, 80% die within five years of diagnosis, and only 10% live more than ten years. Patients with ALS have a higher chance of surviving for five years if they are diagnosed between the ages of 20 and 40. The average age of onset is 55 years (2).

Many complications arise because of an ALS patient's immobility. These include, but are not limited to: joint stiffness and pain, shortening of the muscles or connective tissue around a joint that prevents the joint's normal range of movement, pressure sores or ulcers, poor circulation, urinary tract infections, constipation, and aggravation of respiratory problems. Another symptom, depression, is also very common. People suffering from ALS often are homebound or embarrassed about their disease and become socially isolated. In addition, one's response to immobility often includes symptoms of depression, such as feelings of despair, irritability, anger, and constant sadness (3).

Classic ALS accounts for 90% to 95% of ALS cases in the United States. This type of ALS is called sporadic (SALS) because it cannot be traced to ancestors with the disease. Familial ALS (FALS), which refers to the occurrence of the disease more than once in a family lineage, accounts for 5% to 10% of all cases. The third type, Guamanian ALS, was identified in the 1950s, when an extremely high incidence of ALS was observed in Guam and the Trust Territories of the Pacific (4).

The cause of all forms of ALS remains elusive, although a gene has been identified that accounts for about 20% of FALS cases (only about 2% of all ALS cases). Several theories attempt to explain what causes this disease in the remaining 98% of ALS patients, and glutamate excitotoxicity is one of the most popular. This theory suggests that an excess of glutamate, a naturally occurring chemical in the brain that accounts for approximately 30% of all neurotransmissions, triggers a series of events that ultimately ends in cell death. Excess glutamate is toxic to neurons because it over-stimulates specific neuronal metabolic functions. When this occurs, motor neurons take in too much calcium, which disrupts many cellular functions and leads to cell death. One drug, riluzole, has been developed to help ALS patients by reducing the amount of glutamate released when nerve cells signal (2).

A newly identified mutation involving a protein called EAAT2, which normally deactivates and recycles glutamate, may contribute to or cause almost half of SALS cases. Researchers first found that many ALS patients have little or no EAAT2 in certain areas of the brain and spinal cord, causing an excess of glutamate that leads to the death of motor neurons. Further study indicated that the mutation occurred when the nerve cells were processing the genetic code for EAAT2 into RNA. Problems arose during splicing: instead of the useless stretches of the code being cut out and the active stretches pasted together at specific spots, the cutting and pasting occurred randomly. The abnormal versions of RNA either "produced a useless version of EAAT2 or suppressed production of normal EAAT2." Over half of the ALS patients in the researchers' study had this mutation, and it occurred only in areas where motor neurons were dying: in the spine and the muscle control areas of the brain (5).

Damage to an enzyme called superoxide dismutase (SOD1) on chromosome #21, which normally detoxifies free radicals, may result in FALS. Free radicals are highly charged destructive molecules that damage elements of a cell's membrane, proteins or genetic material. Normally functioning SOD1 breaks down free radicals, but when it becomes damaged, it is no longer able to perform this function. It may malfunction as a result of a genetic mutation or because of the chemical environment of the nerve cells (2).

Another theory suggests that the existence of large clumps of proteins, called protein aggregates, on the motor neurons of ALS patients may cause the disease. Protein aggregates have been found both in patients with SALS and FALS, and in animals that have been genetically engineered to have a mutation in the SOD1 gene. It is not clear if the excess protein causes motor neurons to die or if it is the "byproduct from overwhelmed cells attempting to repair incorrectly folded proteins" (2).

In addition to the theories about internal factors that may lead to ALS, one theory contends that exposure to certain environmental toxins contributes to the onset of ALS. These may include: exposure to agricultural chemicals; environmental lead and manganese; brain, spinal cord, and peripheral trauma; dietary deficiencies or excesses; damage to DNA; and exposure to electric shock. Airline pilots and electrical utility workers have also been found to have a higher incidence of ALS. Conflicting results and failures to reproduce these studies have led to criticism of this theory (2). One might wonder whether this theory could lead the general public to develop an "ALS phobia." One such popular "phobia," for example, is the belief that using deodorant causes cancer. Theories that are not supported by concrete data confirmed by numerous scientific studies are not only a waste of time to consider but reckless, in that they promote unfounded fears. It would be unfortunate if potential airline pilots and electrical utility workers chose other professions in order to avoid the onset of ALS. One must also keep in mind that even where a correlation exists between certain environmental exposures and ALS, it does not mean that those exposures cause the disease.

The primary treatment options for ALS involve treating the complications associated with the disease. The drug riluzole, which has been shown to prolong the survival of ALS patients, is also used (1). More recently, gene therapy has been explored as a way to delay the onset of ALS. In a study using mice genetically engineered to develop FALS, scientists found that a gene called Bcl-2 may delay the onset of the disease. Two strains of mice were bred, one carrying mutations that produce FALS and the other carrying Bcl-2, which is known to protect against cell death. Offspring of these strains that inherited both the ALS mutation and Bcl-2 developed the disease significantly later in life, and actually lived longer, than offspring that inherited only the ALS mutation. Offspring that had Bcl-2, regardless of whether they had ALS, had healthier motor neurons than offspring without it. This study suggests that gene therapy with Bcl-2 may be one possible treatment option for ALS patients (6).

Although advances in determining the cause of ALS and in finding possible treatment options are promising, one must also use caution. For example, before believing that gene therapy with Bcl-2 will be an effective treatment option, a clinical study in which mice with ALS receive gene therapy must be conducted. If gene therapy is effective in delaying ALS in mice, a clinical study must then be completed with humans. It is possible that Bcl-2 may not delay ALS as effectively in humans as in mice. And the reality remains that even if Bcl-2 could delay ALS in human patients, there is still no cure for this disease.

In order for researchers to develop an effective treatment for ALS patients, further data must be collected to determine what causes this disease. Even though it is important to understand that damage to SOD1 may cause FALS, one must keep in mind that damage to this enzyme accounts for only about 2% of ALS cases (1). The cause of ALS for the remaining 98% of patients with this disease must also be determined. Since SOD1 might not provide the link necessary to discover the etiology of ALS for the majority of patients, researchers must continue to search for other explanations in light of this finding. Perhaps the existing theories about the cause of ALS, considered collectively, might provide a solid foundation from which to reach a more valid explanation of the disease. If and when an explanation is found, researchers will be better equipped to find a treatment and possibly a cure. Until then, patients suffering from ALS and their families must remain optimistic that an explanation of the cause, effective treatments, and a cure will be found.


References


1)The ALS Association's Website, general information about ALS

2)The ALS Survival Guide, a thorough resource about ALS

3)Preventing and Treating Complications of Immobility, an article by Pam A. Cazzolli, R.N., on the ALS Network Website.

4)Amyotrophic Lateral Sclerosis (ALS or "Lou Gehrig's Disease"), an article from focus on depression.com

5)"Gene-Reading Problem Linked to Lou Gehrig's Disease" , an article from docguide.com

6)"Science Gene Therapy in Mice Delays Onset of Lou Gehrig's Disease (ALS)", an article from docguide.com


Lunar Menstruation
Name: Catherine
Date: 2002-12-15 13:57:26
Link to this Comment: 4101



Biology 103
2002 Third Paper
On Serendip

Every woman goes through the process of menstruation, yet few know exactly what is going on until they reach the point of wanting to become pregnant. It might seem simple enough to assume that each woman's experience of her cycle is entirely her own, but this is not so. Menstruation is a complicated subject with many unexplained coincidences. One is that women's menstrual cycles may have strong ties to the moon and its phases, a possibility that offers much insight into theories of evolution.
What matters about women's natural process for this paper is that for most women, menstruation occurs about every twenty-eight days. This is when "blood and other products of the disintegration of the endometrium are discharged from the uterus." (2) Fourteen to sixteen days before the onset of a period, ovulation has already occurred and fertilization of an egg is possible. But by the time menstruation begins, women usually can no longer conceive.
Interestingly enough, I had read a few years ago in a book that a group of women were surveyed and asked which type of men they preferred during various times of their female cycles. A majority of women seemed to favor more feminine "pretty boys" during most times in their cycles, but during full moons, they were more inclined to choose "masculine men". This seemed to be a bizarre finding and coincidence, until I recently decided to do some extensive research on the whole topic.
I investigated online for any information about the moon relating to women's menstrual cycles, thinking that the book's findings could possibly have something to do with women's anatomy. What I found were many ideas and theories, dating back to ancient civilizations. The most straightforward relation between women and the moon was that "The two [women's and moon's] cycles last for roughly the same amount of time." (3) But in more complicated conjectures, many made claims that
"In the absence of man-made light, a woman's menstrual cycle will synchronize with the phases of the moon. When this happens, ovulation occurs when the moon is full and menstruation starts with the start of a new moon." (7)
"Think back to when we lived tribally thousands of years ago with no artificial lighting. In these natural surroundings it was highly probably that women ovulated together on the full moon and bled on the dark moon. Thus they usually gave birth at the Full moon, creating more individuals with this particular lunar fertility blueprint." (12)
"But of course these days, we live in the world of artificial light." (12) There seemed to be evidence of analogous ideas about women and the moon across all borders. "Throughout all cultures, the magic of creation resides in the blood women gave forth in apparent harmony with the moon, and which sometimes stayed inside to create a baby." (6)
"It has been shown that calendar consciousness developed first in women because their natural body rhythms corresponded to observations of the moon. Chinese women established a lunar calendar 3000 years ago. Mayan women understood the great Maya calendar was based on menstrual cycles. Romans called the calculation of time menstruation, meaning knowledge of the menses. In Gaelic, menstruation and calendar are the same word." (6)
There are many more, but it is apparent just from this sample that the correlation between the two cycles seems too strong to be mere coincidence. Obviously, it seems, "Woman is fertile during certain phases of the moon," (1) as long as there is no artificial light around. Even now,
"... the body seems to prefer that it stay in sync with the Moon's lunar synodic cycle—even to the point that it will alter its own menstrual cycle in order to do so." (8)
"... women tend to menstruate in the full of the moon with a diminishing likelihood of menses onset as distance from full moon increases." (4)
But what led people to this conclusion that artificial lighting affects women's menstrual cycles?
"In the days before electricity, women's bodies were influenced by the amount of moonlight we saw. Just as sunlight and moonlight affect plants and animals, our hormones were triggered by levels of moonlight. And, all women cycled together. Today, with artificial light everywhere, day and night, our cycles no longer correspond to the moon." (6)
That sounds entirely possible, because humans must have strong ties to their environment. At the same time, this seems impossible to test because of other factors. Women are exposed to artificial lighting all of the time, and even when they are not, they are influenced by other women around them.
"Women who live together experience a synchronization of menstrual cycles as a result of being exposed to chemicals contained in their sweat. A study found that if the sweat of one woman was placed under the nose of another woman on a regular basis, their periods would synchronize within three months, even if the women did not physically meet or come near each other." (7)
Hormones, underweight and overweight problems, and stress can also influence menstruation. The theory may have been applicable in the days before man-made lighting and weight-conscious fashion trends, but now it is nearly untestable.
Out of all of this information, one person supposed that perhaps
"the human female, being more intelligent and perhaps aware of her environment, adapted to a cycle close to that of the moon, while lower animals did not." (10)
"The corresponding estrus cycles of some other mammals are twenty-eight days for opossums, eleven days for guinea pigs, sixteen to seventeen days for sheep, twenty to twenty-two days for sows, twenty-one days for cows and mares, twenty-four to twenty-six days for macaque monkeys, thirty-seven days for chimpanzees, and only five days for rats and mice." (10)
I have a slightly different concept to introduce. Merging all of the information I gathered, perhaps human females have a broader evolutionary reason for cycling the way they do. If women really are more attracted to "manlier men" during the full moon, when they are naturally supposed to conceive best, perhaps this is because of evolution. In accordance with the idea of "survival of the fittest," women would choose the most capable and strongest men, for better maintenance and protection by their mates during pregnancy. Their offspring would also be more likely to inherit their fathers' innate traits and learn the characteristics that would help them survive in a world of fierce competition. With everything tying together so neatly, this seems entirely possible.
The human race has come a long way since the time of ancient civilizations, when unexplained ideas were set aside as magic and the supernatural. But it seems as though artificial lighting, dietary trends, and other modern developments may have diminished the workings of survival of the fittest in the human population. Perhaps our technologies and inventions, which tamper with our natural biological rhythms, are not doing us as much of a service as we believe. The lunar-menstrual cycle idea is only one of many areas that illustrate this point.


References

1) Astrological Fertility

2) Hormones of the Reproductive System: Females

3) La Luna

4) Lunar Influences on the Reproductive Cycle in Women

5) Menarche, Menstruation, Menopause

6) Menstrual Cycles: What Really Happens In Those 28 Days?!

7) Menstruation and Sex

8) Pregnancy, Conception, Birth Timing and the Moon

9) The Menstrual Cycle

10) The Straight Dope

11) Well-Woman

12) Why Do Women Bleed Together?


Bubonic Plague
Name: Diana Fern
Date: 2002-12-16 13:55:08
Link to this Comment: 4111



Biology 103
2002 Third Paper
On Serendip

I recall sitting on the couch at my home in New Mexico, back in high school not so long ago, turning on the local news and hearing: "In tonight's news, a Taos man was diagnosed with the bubonic plague today, and is in critical condition." As the news reporter presented the figures on how many plague cases had occurred that year, I thought to myself: "GOOD GOD! Bubonic plague, in this day and age? In my state?" I began to worry, thinking about each dead mouse I had had to extract from my dog's mouth or sweep out of the garage. Bubonic plague conjured images in my head of entire European villages succumbing to a gruesome death during the Dark Ages, as I think it does in most people's minds.
Although deaths associated with bubonic plague are rare today, the disease still exists, both in the third world and in the southwestern United States. Despite the fact that bubonic plague takes fewer lives than many airborne diseases, no single epidemic has gripped the human imagination as the bubonic plague, or "Black Death," did and does. The epidemic spawned witch trials and religious fanaticism, and in 14th-century Europe it was associated with the ending of the world, yet it is nonexistent in Europe today. What caused the plague? What exactly is the plague? How did this worldwide epidemic become mostly eradicated? Could the plague be reintroduced on a mass scale by bio-warfare? As it has for hundreds of years, the bubonic plague continues to try the medical community as it reappears around the world.
The bubonic plague has loomed in historical memory for hundreds of years. Most notable in the western hemisphere was the spread of plague in Europe around 1349. Bubonic plague, or the "Black Death," originated in Asia; it is speculated that the plague traveled along the Silk Road from China, with caravans carrying it to Europe (1).
The horrific result of the plague was the decimation of two thirds of Europe's population in the space of two years. With the panic, chaos, and desperation of the widespread epidemic came various proposed explanations for its existence. Among the more unfortunate was the view of the plague as a punishment from God. Women, Jews, and lepers were cast as the harbingers of the plague because of their supposedly poor standing in God's eyes; these people not only suffered the plague, they suffered the accusations and violence of their fellow countrymen. The plague inspired whole works of literature and art, and even saints, as the European population tried to make some sense of the horrible deaths that permeated life and society; it inflamed the imagination and tried the limits of medieval science and medicine. Only in the sixteenth century was the plague correlated with sanitation hazards, finally lending some discredit to the divine pollution theory.
With the awareness that bubonic plague spread under unsanitary conditions came its slow dissipation. Rats were associated with plague even from the earliest times, as villagers noted that large numbers of rats would be found dead before an outbreak among humans. The work of Yersin and Kitasato was essential to the discovery that fleas, engorged with infected rats' blood, would transfer the plague to a human host, spreading the bubonic plague. Yersin is cited as the discoverer of the bacterium, which bears his name: Yersinia pestis (2). For additional information on the bacterium and images go to: (http://www.cdc.gov/ncidod/dvbid/plague/bacterium.htm).
The initial symptoms of bubonic plague are flu-like in nature and include chills and fever. This initial stage occurs 2 to 6 days after being bitten. It is followed by a painful swelling of the lymph nodes, or buboes, hence the origin of the name bubonic plague. Lesions usually appear at the site of the fleabite; the skin becomes encrusted and at times full of pus. The victim is usually weak, disoriented, and nauseous (3).
Fortunately, there are preventative measures against the plague; a vaccine is available for those working in the field or in close contact with the disease. For someone exposed to the plague who has not been vaccinated, there are antibiotics such as the tetracyclines and chloramphenicol (2). Although these treatments exist, untreated plague has proved fatal in many cases in the United States and abroad.
In 1996 there was a resurgence of plague in India that terrified the South Asian country. When plague strikes such large, densely populated areas, where rat populations surge due to unsanitary conditions, it can seem unstoppable. Yet the plague is not restricted to the third world; the northern areas of New Mexico and Arizona have also seen cases. On November 7, 2002, a New Mexican couple was diagnosed with the plague in New York City. Fortunately the couple was diagnosed early as having been infected in New Mexico, so no one in New York was at risk of contracting the plague bacteria (4).
Although natural cases of bubonic plague are rare, the plague may be poised to make as terrifying a debut as it did hundreds of years ago: it is being viewed as a weapon of biological warfare. Biological warfare is seen as an unfortunately effective and low-cost tool for zealot groups who wish to inflict as many deaths as possible at the lowest cost. Biological agents are also far harder to detect than conventional arms or nuclear weapons used in terrorism (5). The plague has gripped the imagination for hundreds of years, and the general populace would fear an outbreak, so the panic factor would also make it a desirable weapon of choice. Yet the fact that it is a bacterium, and therefore treatable with antibiotics, would make it a less advantageous weapon of choice than some other communicable diseases. Even so, the U.S. Department of Energy has seen it as enough of a threat to study the plague as a potential weapon in order to prepare for a worst-case scenario.
Although the plague does not have the same grip on mankind that it did in medieval Europe and Asia, it still retains the ability to frighten and to conjure up horrific images. The bubonic plague has resurfaced as an unfortunate tool of bio-warfare, and the panic it would cause among an unsuspecting population would be devastating. One need only look at a modern example such as India to see the desperation and horror that the plague can cause. Fortunately, the preventative and treatment measures are effective if one is diagnosed in time; yet the plague is rare, and if one is unaware and leaves the disease untreated it can be fatal. As a resident of northern New Mexico, I have learned to take precautions; in this area we know the plague is nothing to be trifled with, and I can only hope the rest of the nation is aware of this fact as well.

References


1)The Role of Trade in Transmitting the Black Death, a source documenting the spread of the plague in Europe during the 1400s
2)Plague Home Page, at the Centers for Disease Control and Prevention; the CDC is a government-run informational website on rare communicable diseases
3)Bubonic Plague, at the National Organization for Rare Disorders; NORD is an informational website that educates the public about rare diseases and prevention
4)Bubonic Plague Suspected in NYC Visitors, at CNN/Health, CNN newsgroup
5)Biological Terrorism: Legal Measures for Preventing Catastrophe, at Encyclopedia Britannica, an educational source with links to journals and newsgroups


Magic Mushrooms
Name: Roseanne M
Date: 2002-12-16 18:11:53
Link to this Comment: 4113



Biology 103
2002 Third Paper
On Serendip

What's so 'Magic' about Magic Mushrooms?
Roseanne Moriyama
Biology 103 12/16/02
Prof. Grobstein
"I began to have the sensation that trees were sucking me in via the wind and I was drawn into this grove of trees. The day was absolutely beautiful and everything looked fresh and new. I said something about how good this stuff was and there really had to be something bad to it or everyone would be on it. I wondered around the house with the feeling that there was something I had to do but couldn't quite figure out what it was. I became catatonic and couldn't relate to anyone. I also pulled my shirt away from my body a bit and my stomach seemed to come out with it. I began to pray for my sober mind back and was experiencing muscle contractions and tremors. I wished for someone to take me to the hospital, but I couldn't talk. I took a drink from the gatorade bottle I felt myself being sucked into the opening. It slowly faded after about 8 hours and I was euphoric in my sobriety." 1
This was written by a teen while 'tripping,' the term used for being under the influence of the drug known as the 'Magic Mushroom,' 'Shrooms,' or 'Liberty Caps.' He (and many others on this site) gives a thorough description of the drug's effects...

From the Aztecs to the Native Americans to the Chinese, people have used many kinds of drugs for medicine, leisure, tradition, and more. From before recorded history up to this modern 21st century, drugs have been prevalent in society. In recent times 'intense' drugs have been strictly prohibited, and even legal drugs have been used mainly for medical purposes. During the sixties especially, however, drugs were widely used among teens and young adults for psychedelic experiences. Many smoked weed as a 'common everyday drug,' but for people who wished for a 'trip beyond life,' Magic Mushrooms were a popular choice. One account, drawing on classical Hinduism, lists four possible intentions behind such use: a) increased personal power, intellectual understanding, sharpened insight into self and culture, improvement of life situation, accelerated learning, professional growth; b) duty, help of others, providing care, rehabilitation, rebirth for fellow men; c) fun, sensuous enjoyment, esthetic pleasure, interpersonal closeness, pure experience; d) transcendence, liberation from ego and space-time limits, attainment of mystical union. 1 Reading this list, taking the drug may seem very enticing; however, along with the 'dope trips' people hope for, there is the risk of a 'bad trip' that is supposedly nothing less than a dreadful nightmare. I am conducting this research in hopes of learning what effects these mushrooms have on people and what makes them the popular and amusing drug people recommend by constantly saying: "...if you're going to try any drug in this world, its got to be Magic Mushrooms." 1

Shrooms are known to be intense and dangerous because of their strong reactions and the long-term negative effects of taking them regularly (2-3 times a month is said to be all right 3 ). You would therefore think, "Obviously it is illegal to sell, purchase, or consume Magic Mushrooms." Yes, this is true in the United States. However, in certain areas of Tokyo, shrooms are quite visibly sold on the streets. I was asked many times in Japan whether I was interested in purchasing shrooms. This shocked me, knowing it was a drug. I immediately looked into the situation and found out that in Japan, shrooms were legal to sell or purchase until last year. The catch, however, is that it is illegal to CONSUME the drug but legal to sell or purchase it. Why buy and sell but not consume? This is all still a mystery to me. Therefore, for my last webpaper, I have decided to do research on this intense and very popular drug that I COULD have purchased in Tokyo.

From reading personal diaries and stories by various consumers, I grew increasingly terrified of the effects. In the extroverted transcendent experience, the consumer is ecstatically fused with external objects (e.g., flowers, other people). In the introverted state, the consumer is ecstatically fused with internal life processes (lights, energy waves, bodily events, biological forms, etc.). Either state may be negative rather than positive, depending on the consumer's setting. For an extroverted experience, the consumer brings to the 'trip' candles, pictures, books, incense, music, or recorded passages to guide his or her awareness in the desired direction. An introverted experience requires eliminating all stimulation: no light, no sound, no smell, and no movement. The most common hallucinatory effects are as follows: a) red/green/blue blips (CEV or OEV): a layer of red, green, and blue blips (like looking at a TV set from up close) is superimposed on everything. b) pixelization (OEV): everything is composed of separate little bits, like pixels on a computer screen. c) tracers (OEV): moving objects that contrast sharply with their background (the tip of a lit incense stick against a dark room, a ball flying against the blue sky, etc.) leave colorful trails. d) red shift (OEV): everything looks as if seen through glasses with red-dyed lenses. e) melting (OEV): objects act as if made of plastic being heated, distorting and flowing downwards. f) entities (CEV, rarely OEV): encounters with other beings are a recurring feature of high-dose trips. Some common types include the "mantid," an alien-looking insect-headed creature that tends to appear extremely intelligent, aware, and neutral or negative towards the tripper (it can be green or grayish-white), and the "DMT elf," a gnome-like, playful, funny, and usually friendly entity. 1 These are only a few of the listed hallucinatory effects; it is also noted that hallucinations vary from person to person and with the dosage taken.
"Angie and I had the greatest trip ever...they kicked in after an hour. We went outside to sit on a bench and we started sharing our trip...the clouds! They became overwhelming, powerful...the sky was all blue and there was this one big black cloud that totally zoomed in on us, it was beautiful since we saw rays of light coming from behind it...then we started talking and laughing that didn't stop for the next 5 hours. It was like life didn't matter. The trees, the sky, the grass- they all looked so different, so much more amazing. The purple/pink sky and the GREEN grass... everything was just beautiful...it was just great... I will do mushrooms again." 2

Although most 'trips' are said to be mind-blowing experiences (such as the one quoted above), some consumers take a 'trip to hell' that can be horrific. The mushrooms can cause physical or psychosomatic disturbances, and the negative effects include: nausea at the beginning (which invariably wears off by the time the hallucinations start), odd and often scary physical sensations like liquid skin or distorted body proportions, trouble breathing, severe anxiety and paranoia, the feeling of having just excreted in one's pants, and/or the feeling of sinking into the ground or even into oneself. Consumers may start to feel as if worms were crawling inside their stomachs, the roof were collapsing, and/or the sheets that cover them were trying to eat them.

In conclusion, Magic Mushrooms have both scared and intrigued me with their effects. I personally don't have the guts to try shrooms, knowing the effects of 'bad trips.' Yet reading personal experiences made me curious to feel 'out of this world' sensations and see colors so vibrant they make the world ever more beautiful: the feeling of never wanting to go back to reality because it feels like you're dreaming, where everything is an illusion and reality at the same time. Hallucinating and seeing 'aliens' or other fantastic figures seems interesting too. However, judging from the first quotation in this essay, among the many on the web, it is frightening to be conscious (for what feels like forever) in a nightmare. Just the thought of worms slithering inside of me is horrific; I almost faint on campus every time it rains and the worms come out of the soil. Nevertheless (according to the voyagers), the chances of going on a 'bad trip' are slim. Hence people continuously take the chance of entering a beautiful and joyful dream, whereas I would never risk tripping knowing I could be interacting with worms!


Some of the Types of Magic Mushrooms 4 :
-Psilocybe stuntzii: a 'magic' mushroom that grows in wood chips, as does Galerina autumnalis, a deadly poisonous mushroom.
-Psilocybe stuntzii: (classic) a species indigenous to the Pacific Northwest of North America.
-Psilocybe cyanescens:: a potent species widespread through western Europe and prolific in the Pacific Northwest of North America.
-Psilocybe cyanofibrillosa: a mild species common in rhododendron gardens from Northern California to British Columbia.
-Psilocybe azurescens: (a new species) contains up to 2% psilocybin, elevating it to the status of the most potent species in the world. Native to the Pacific Northwest of North America.
-Psilocybe semilanceata: (the Liberty Cap) a species common throughout the British Isles, France, Germany, Holland and Italy. Favoring sheep and cattle pastures.
-Psilocybe pelliculosa: a relatively weak woodland Psilocybe which favors abandoned logging roads in the Pacific Northwest of North America.
-Psilocybe silvatica: (rare) a close relative of P. pelliculosa, reported only from Washington and Oregon.

Footnotes:

1) All About Magic Mushrooms: good personal experiences on this site
2) Magic Mushrooms Net: effects of taking mushrooms
3) Magic Mushrooms: mushroom species



HGH: Cure for Depression?
Name: Diana DiMu
Date: 2002-12-17 13:20:08
Link to this Comment: 4124

HGH: Cure for Depression? Biology 103
2002 Third Paper
On Serendip

HGH: Cure for Depression?

Diana DiMuro

HGH, Human Growth Hormone, is most often associated with treating growth disorders or problems associated with aging. While several medical conditions involve a deficiency of HGH and improper growth development, the majority of links that pop up when typing "Human Growth Hormone" into a web browser deal with "anti-aging benefits." I began my research on HGH after receiving a spam email offering "Free injections of HGH!" I wondered why on earth someone would want such a thing and decided to do some more research. Many websites seem dubious, advertising "Real HGH! Don't Be Fooled!" or "FDA Approved HGH!" I continued to research what HGH specifically is and how it affects the human body, to gain a better understanding of why it has become a popular commodity. While there is plenty of information on the uses of HGH to reduce the effects of aging, my research led me in another direction. After reading many of the "benefits" of HGH, I became curious whether HGH or any synthetic form of it had ever been used to treat depression. Could HGH be used to reduce symptoms of depression when its reported benefits already include loss of fat, increased muscle mass, elevated mood, better memory retention, and improved sleep? In my research, I hope to find whether HGH could be used as a means to treat depression and whether any treatments using HGH or similar synthetically made medications are already in use. Before delving into that scientific pursuit, I felt it important to do some background research on HGH itself and why it is so important.

 

Before explaining what HGH is, it will help to first understand what a hormone is:

Hormones are tiny chemical messengers that help our body do different tasks. Hormones are made up of amino acids. Hormones are produced by the endocrine glands and then sent all over the body to stimulate certain activities. For example, Insulin is a well known hormone that helps our body digest food. Our growth, digestion, reproduction, and sexual functions are all triggered by hormones.(3)

What is HGH?

HGH is produced in the anterior section of the pituitary gland deep in the brain. It is made up of 191 amino acids, making it large for a hormone; in fact, it is the largest protein created by the pituitary gland. Chemically, it is somewhat similar to insulin. It is secreted in short pulses during the first hours of sleep and after exercise, and it remains in the circulation for only a few minutes.

What is IGF-1?

IGF-1 stands for Insulin-like Growth Factor 1; it is also known as Somatomedin-C. As important as HGH is, it does not last long in our bloodstream, and it is extremely difficult to measure HGH in blood serum. However, the body binds most of the growth hormone in the liver and converts some into Somatomedin-C, another protein hormone also called Insulin-like Growth Factor-1 (IGF-1). IGF-1 is the most important growth factor that is produced. Since Somatomedin-C remains in the blood stream for 24-36 hours, a blood sample identifying Somatomedin-C is a more dependable indicator of competent HGH production. Normal Somatomedin-C blood levels in adults range from 200 to 450 ng/ml (nanograms per milliliter), yet one-third of individuals over 50 years of age show abnormal levels of less than 200 ng/ml. During the growth spurt of youth, HGH levels are at their maximum and Somatomedin-C measures well over 600-800 ng/ml. For normal men and women under 40, fewer than 5% have levels below 250 ng/ml; after 40, many men and women have the same amount of HGH as an octogenarian.
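As a rough illustration only (not a diagnostic tool), the reference ranges quoted above can be expressed as a small classifier. The function name and thresholds are taken directly from the figures in this paragraph, assuming readings in ng/ml:

```python
def classify_igf1(level_ng_ml, age):
    """Loosely bucket a Somatomedin-C (IGF-1) reading against the ranges
    quoted in the text: 200-450 ng/ml is the normal adult range, readings
    below 200 ng/ml are described as abnormal, and 600-800 ng/ml is typical
    of the growth spurt of youth. Illustrative sketch only."""
    if level_ng_ml < 200:
        return "below normal adult range (abnormal per the text)"
    elif level_ng_ml <= 450:
        return "within normal adult range (200-450 ng/ml)"
    elif age < 20 and level_ng_ml >= 600:
        return "consistent with the growth-spurt range (600-800 ng/ml)"
    else:
        return "above normal adult range"
```

For example, a reading of 300 ng/ml in a 45-year-old falls in the normal adult range, while 150 ng/ml would be flagged as below it.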

When one's Somatomedin-C level falls below the adult normal range, his/her muscle and bone strength and energy levels most likely will decrease. Tissue repair, cell re-growth, healing capacity, upkeep of vital organs, brain and memory function, enzyme production, and revitalization of hair, nails, and skin will also diminish. While aging and decreasing growth hormone levels go 'hand-in-hand' those who lose their pituitary production of HGH due to surgery, infection or accident, instantly suffer many profound, ill effects. In those who have no pituitary function, there is a shift in body composition whereby body fat increases by 7-25% while lean body mass decreases similarly. Muscle strength and muscle mass are noticeably reduced. Bone density studies indicate long bone density and spinal bone density decrease as significantly as if the individual had aged 15 years. Pronounced weight gain of 30-50 pounds occurs when HGH wanes. Furthermore, there are negative effects on cholesterol; triglyceride levels increase while high-density cholesterol (HDL), a 'good cholesterol', decreases. Increased risk of cardiovascular disease may be related to vascular wall thickening and changes associated with decreased cardiac output. Such insufficiencies may contribute to these people reporting a rapid decline in exercise capacity and early deaths from heart disease. They also report an impaired sense of well being and symptoms of fatigue, social isolation, depression and a lack of the ability to concentrate. (2)  

What is Recombinant Growth Hormone (GH)?

Recombinant growth hormone is a biosynthetic hormone, identical to human growth hormone, that is synthesized in the lab. Creating an exact replica of HGH was not an easy task.

First, scientists needed to isolate HGH. Once they achieved this step, they could study the DNA make-up of the protein. Scientists quickly realized that making recombinant GH would be no easy task, since they had to accurately reproduce a 191-amino-acid hormone. In 1986, Eli Lilly created a 191-amino-acid hormone that was an identical match to the HGH produced by the pituitary gland. The drug is called Humatrope and is the most widely used recombinant growth hormone today. (3)

Bone Density

One of HGH's most dramatic effects is on the connective tissue, muscle, and healing potential of the skeletal system. Healing of fragile skin with ulcers and of fractured bones that would not heal, along with profound gains in muscle strength, have been noted. Not only does the skin look younger with fewer wrinkles, some report a re-growth of hair on the head. Growth hormone, DHEA, and testosterone are clearly anabolic hormones: they build tissue. With increased age, our bodies break down tissue faster than we can repair it; this is called catabolism. HGH therefore tends to reverse the catabolic state. The potential role of HGH in the maintenance of the skeleton lies in its ability to make and repair these tissues: HGH stimulates osteoblast (bone) and fibroblast (supporting tissue) proliferation.

Other anabolic effects include a gain of muscle and renewed appetite, better exercise capacity, increased lung capacity, and faster wound healing. Many report that their "old age spots" disappear within two months of HGH therapy. (2)

Numerous scientific studies have shown that restoring levels of HGH in aging individuals can have dramatic effects. One landmark study, published in 1990 in The New England Journal of Medicine, found that 12 men who took HGH had an increase in lean muscle and bone density and a decrease in fat, while nine men who didn't take it experienced none of these changes.(1)

Positive Effects of HGH Replacement

  1. Get Lean: Loss of fat and increase in muscle mass combine for up to a 20-pound shift in body composition. This equates to a general feeling of physical well-being, a stronger libido, and improved self-image.
  2. Get Energetic: Without the afternoon cravings for sweets, caffeine, stimulants, or nicotine, HGH patients have more energy. This improves both their self-image and their general state of health (because they have the energy to exercise).
  3. Get Smart: An interesting yet unproved side effect of HGH has been the return of mental acuity and a "sharp" memory. Since HGH improves the vascular and intracellular nutrient support for cells, it is not surprising that this has been reported by many individuals. (2)

If you look at all the studies that have been done on HGH injections, you get the following list of benefits:

  • 8.8% increase in muscle mass on average after six months, without exercise
  • 14.4% loss of fat on average after six months, without dieting
  • Higher energy levels
  • Enhanced sexual performance
  • Re-growth of heart, liver, spleen, kidneys and other organs that shrink with age
  • Greater cardiac output
  • Superior immune function
  • Increased exercise performance
  • Better kidney function
  • Lowered blood pressure
  • Improved cholesterol 
  • Stronger bones
  • Faster wound healing
  • Younger, tighter, thicker skin
  • Hair re-growth
  • Wrinkle removal
  • Elimination of cellulite
  • Sharper vision
  • Mood elevation
  • Increased memory retention
  • Improved sleep

(3)

Are there any negative aspects of taking HGH injections?

  • Extremely Expensive
    A year's supply of HGH injections can cost anywhere between $3,000 and $10,000. Insurance will not cover the injections because you are not treating a "classified disease".
  • Available by prescription only
    Recombinant GH is a drug that is available by prescription only. Therefore, even if you had $20,000 a year to spend, you would need to get a prescription.

Possible Negative Side Effects
Anytime you introduce a large amount of a foreign hormone into the body there is the risk of side effects. In one study, it was found that some of the patients suffered from carpal tunnel syndrome and gynecomastia (enlarged breasts). (3)

Side Effects with Low Dose HGH Replacement


The dose of recombinant HGH is an important consideration in the therapy of acquired HGH-deficiency. Large, pharmacological doses of HGH are often associated with the clinical signs of HGH excess, including fluid retention, carpal tunnel syndrome, and hypertension. However, with smaller, physiologic doses, such symptoms are not noted. At a dose of 0.03 mg/kg/week, many patients demonstrated only minor side effects, including slight fluid retention and mild joint pain; there was only one reported incident of carpal tunnel syndrome. In all cases, further reduction of the HGH dosage resulted in the elimination of side effects. In another recent study using a smaller dose, 0.01 mg/kg was administered three times per week without any reported side effects. Multiple studies support the conclusion that low-dose HGH replacement is associated with minimal side effects.
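As a small arithmetic sketch (the 70 kg body weight is hypothetical, chosen only for illustration), the two low-dose regimens mentioned above can be compared by computing the weekly totals they imply:

```python
def weekly_dose_mg(weight_kg, mg_per_kg_per_week):
    """Total weekly HGH dose implied by a per-kilogram weekly figure."""
    return weight_kg * mg_per_kg_per_week

# Regimen A: 0.03 mg/kg/week, for a hypothetical 70 kg patient
dose_a = weekly_dose_mg(70, 0.03)        # about 2.1 mg per week

# Regimen B: 0.01 mg/kg administered three times per week
dose_b = weekly_dose_mg(70, 0.01 * 3)    # works out to the same weekly total
```

Notably, the two regimens imply the same total weekly dose for a given body weight; the second simply splits it into three smaller injections.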

Is it possible to take HGH orally?

Many people look for a way to take HGH without getting an injection. However, HGH is a delicate and complex 191-amino-acid hormone, and it cannot survive digestion. Therefore, even if a company wanted to break the law and sell HGH as a pill, spray, or powder, it would not work, because the HGH would break down before it ever reached the bloodstream. (3)

Now, a new generation of products sold via the Internet and in health stores as dietary supplements, and therefore not regulated as drugs by the FDA, claim to produce the same effects at a fraction of the price -- about $1,000 per year. These are formulations of amino acids that allegedly trigger the release of HGH in the body. One such product is GHR-15, although there are many other "growth hormone releasers" on the market. One such internet advertisement for "growth hormone releasers" reads as follows:

Research indicates that the best way to elevate HGH levels is to stimulate the body to produce more HGH. Studies have shown that an old pituitary gland has the same capacity to produce HGH as a young pituitary gland. If we can find a way to stimulate our pituitary gland, we will have the best of all worlds. You are not introducing a foreign GH, so you eliminate the side effects. Also, our body is very good at self-regulating, it will not produce an excessive amount of HGH which could be harmful. In effect, your body knows best what the correct dosage of HGH is to release for your body. (3)  

The idea behind these growth hormone releasers is actually based on scientific studies showing that certain amino acids can trigger the production of HGH from the pituitary. However, consumers should be cautious, says Ronald Klatz, MD, president of the American Academy of Anti-Aging Medicine, a society of more than 7,000 physicians and scientists involved in anti-aging research. These formulations of amino acids have to be very specific, and it is unclear whether many of the products out there are using the correct type and combination of these compounds, says Klatz. Clinical studies proving effectiveness are lacking. One manufacturer acknowledges that he had no studies; another says clinical studies have begun in Brazil.

Edward Lichten, MD, senior attending physician at Providence Hospital in Southfield, Mich., treats many patients in his private practice with injected HGH. When Lichten followed some of his patients who used four of the health food store products, he could see no improvement in their symptoms or their blood levels of HGH. Before buying from a health food store or via the Internet, Klatz advises, ask if the company has solid scientific evidence published in reputable medical journals about the product's effectiveness. (1)

 

After reading the benefits of HGH and HGH Therapy, I became curious whether any form of HGH or other kinds of hormone therapy had been used in treating symptoms of depression. In order to gain a better sense of where my questions might lead, I first looked into already established treatments of depression and their side effects. After researching what is typically used to treat depression and how people react to such treatments, I hoped to learn how Hormone Therapy could be placed into discussion.

 

Treatments for Depression:

The most common treatment for depression includes the combination of antidepressant medicine and psychotherapy (called "therapy" for short, or "counseling").

Psychotherapy is sometimes called "talking therapy." It is used to treat mild and moderate forms of depression. A licensed mental health professional helps people with depression focus on behaviors, emotions, and ideas that contribute to depression, and understand and identify life problems that are contributing to their illness to enable them to regain a sense of control. Psychotherapy can be done on an individual or group basis and include family members and spouses. It is most often the first line of treatment for depression.(6)

 

How are Medications Selected?

The type of drug prescribed will depend on your symptoms, the presence of other medical conditions, what other medicines you are taking, the cost of the prescribed treatments, and potential side effects. If you have had depression before, your doctor will usually prescribe the same medicine you responded to in the past. If you have a family history of depression, medicines that have been effective in treating your family member(s) will be considered. Usually you will start taking the medicine at a low dose. The dose will be gradually increased until you start to see an improvement (unless side effects emerge).

Examples of effective and safe medications commonly prescribed for depression or depression-related problems are listed in the table below:

Selective serotonin reuptake inhibitors (SSRIs)
  Drugs (brand names): fluoxetine (Prozac), paroxetine (Paxil), sertraline (Zoloft), fluvoxamine (Luvox), citalopram (Celexa)
  Conditions treated: Depression -- serotonin is a brain chemical thought to affect mood states, especially depression. SSRIs help increase the amount of serotonin to level the patient's mood.

Tricyclic antidepressants (TCAs)
  Drugs (brand names): amitriptyline (Elavil), desipramine (Norpramin), nortriptyline (Pamelor), protriptyline (Vivactil), clomipramine (Anafranil), imipramine (Tofranil), doxepin (Sinequan), trimipramine (Surmontil)
  Conditions treated: Depression (clomipramine is used to treat OCD)

Monoamine oxidase inhibitors (MAOIs)
  Drugs (brand names): tranylcypromine (Parnate), phenelzine (Nardil), isocarboxazid (Marplan)
  Conditions treated: Depression -- MAOIs increase the concentration of chemicals in particular regions of the brain that aid communication between nerves. MAOIs are usually prescribed for people with severe depression.

Azapirones
  Drugs (brand names): buspirone (BuSpar)
  Conditions treated: Generalized anxiety

Benzodiazepines
  Drugs (brand names): alprazolam (Xanax), lorazepam (Ativan), diazepam (Valium)
  Conditions treated: PMS, panic disorder

Lithium
  Conditions treated: Bipolar disorder, recurrent depression

Mood-stabilizing anticonvulsants
  Drugs (brand names): carbamazepine (Tegretol), valproate (Depakote), lamotrigine (Lamictal), gabapentin (Neurontin)
  Conditions treated: Bipolar disorder

Other medications
  Drugs (brand names): amoxapine (Asendin), bupropion (Wellbutrin), venlafaxine (Effexor), nefazodone (Serzone), mirtazapine (Remeron), trazodone (Desyrel), maprotiline (Ludiomil)
  Conditions treated: Depression
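To make the generic-to-brand pairings above easier to use programmatically, they can be sketched as a small lookup table. This is my own illustration (the dictionary name and helper function are not from the source), and only a few rows are shown:

```python
# A few generic-to-brand pairs from the medication table above.
BRAND_NAMES = {
    "fluoxetine": "Prozac",
    "paroxetine": "Paxil",
    "sertraline": "Zoloft",
    "phenelzine": "Nardil",
    "bupropion": "Wellbutrin",
}

def brand_of(generic):
    """Return the brand name for a generic drug, or None if it is not listed."""
    return BRAND_NAMES.get(generic.lower())
```

For example, `brand_of("Fluoxetine")` returns "Prozac", while an unlisted drug returns None.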

What are the Side Effects? 

Keep in mind that sometimes the benefits of the medicines outweigh the potential side effects. Some side effects decrease after you have taken the drug for a while.

Some common side effects of SSRIs include:

Agitation
Nausea or vomiting
Diarrhea
Sexual problems including low sex drive or inability to have an orgasm
Dizziness
Headaches
Insomnia
Increased anxiety
Exhaustion

Some common side effects of tricyclic antidepressants include:

Dry mouth
Blurred vision
Increased fatigue and sleepiness
Weight gain
Muscle twitching (tremors)
Hand shaking
Constipation
Bladder problems
Dizziness
Increased heart rate

It is important to note that you should not drink alcoholic beverages while taking antidepressant medicines, since alcohol can seriously interfere with their beneficial effects. (7)

Hormone replacement therapy (HRT) in women: Depression is more common in women than in men. Changes in mood with premenstrual syndrome (PMS) and premenstrual dysphoric disorder (PMDD), after childbirth and following menopause are all linked with sudden drops in hormone levels. Hormone replacement is a treatment currently used to relieve symptoms of menopause such as night sweats and hot flashes. By using HRT, women can help prevent osteoporosis and possibly reduce memory loss. There are many advantages to using HRT for relieving symptoms of menopause, and while they may, in the future, be found to help depression in some women, these hormones can actually contribute to depression. (6)

 

Discussion:

 

My question, then, is if Hormone Replacement Therapy (HRT) is used for women, could it also be used for men? HRT is typically used to treat women's symptoms of menopause; however, in many cases it has also proved a beneficial treatment for depression in women. (8) With this in mind, is it too far-fetched to consider HRT for men as a viable treatment for serious depression? Could low doses of HGH be used to treat both men and women with depression? In theory, the many negative side effects commonly associated with SSRIs and other types of antidepressants far outnumber the negative side effects associated with HGH therapy. The use of HGH injections has yielded such positive results as mood elevation, higher energy levels, enhanced sexual performance, superior immune function, increased exercise performance, increased memory retention, and improved sleep. (3) However, there is currently little research on whether men and women suffering from depression exhibit lower levels of HGH. There is also little research on the effects of low-dosage HGH therapy on patients under the age of 40. It would be extremely interesting to investigate whether patients suffering from severe depression exhibit lower levels of HGH or other essential hormones. Sufficient research on the effects of HGH injections on middle-aged or younger patients would also need to be conducted before further research could be done. While older patients exhibit significantly lower levels of HGH, making HGH therapy a more viable option for them, treating younger patients with the same therapy could prove more harmful than beneficial. Would younger patients (patients 20-30 years old or younger) exhibit the same positive effects from HGH therapy as older patients? There is currently little research to answer this question, nor is there enough research to support using HGH or other hormones as a definitive treatment for depression in either men or women.
Currently, Hormone Replacement Therapy is most typically associated with women, as a viable treatment for menopause and osteoporosis. (8) Treatment for depression is a less accepted benefit of hormone therapy for women. In fact, some HRT, such as the use of estrogens (examples include Premarin and Prempro), is linked to causing depression in women. Hormone replacement therapy for men is most often discussed in terms of testosterone therapy. Signs of low testosterone in men may include decreased sex drive, erectile dysfunction (ED), depression, fatigue, and reduced lean body mass. Men may also have symptoms similar to those seen during menopause in women: hot flashes, increased irritability, inability to concentrate, and depression. If prolonged, a severe decrease in testosterone levels may cause loss of body hair and increased breast size; bones may become more brittle and prone to osteoporosis, and testes may become smaller and softer. (9) While Hormone Replacement Therapy is used for both men and women to help relieve and counteract the symptoms of aging, menopause, and osteoporosis, it is not typically associated with treating depression. Is this because depression is still stigmatized by current society? Growing research in the field of depression, and its ongoing acceptance as a part of life, may yield new research and acceptance in different fields of its treatment. Further research into Hormone Replacement Therapy, and perhaps more specifically into the use of Human Growth Hormone, may in time help establish new practices for the treatment of depression. Future research in this field may yield more information, not only on the broader treatment of depression, but on its specific treatment among the sexes and among varying age groups. HRT may yield new treatments for men and women of various age groups that produce better results or fewer side effects than the currently used SSRIs and antidepressants.

 

WWW Sources:

1.)  Growing Younger with Hormones?

2.) US Doctor – Growth Hormone  

3.) Advice HGH

4.) HGH MD

5.) Your Guide to Depression: Medical Information from the Cleveland Clinic  

6.) Treatment Options for Depression

7.) Depression Medicines

8.) Information on Menopause

9.) Testosterone Therapy


Why we can't walk past the refrigerator!
Name: Melissa Af
Date: 2002-12-17 13:26:52
Link to this Comment: 4125


Why We Can't Walk Past the Refrigerator!

Biology 103
2002 Third Paper
On Serendip

When I am hungry, I eat. When I am not hungry, I eat. When I am tired, I eat. When I am energetic, I eat. When I am happy, I eat, and when I am sad, I also eat. Food is one of man's viscerogenic needs, but I believe that in present times eating has moved beyond a necessity to a pastime. In the early development of the human race, humans were hunter-gatherers. These early humans faced periods of scarcity, which pushed them to eat a lot when food was available so that they had reserves of fat for the lean periods. (4) Although mankind has evolved and the problem of scarcity is not as great as it was millions of years ago, present-day humans seem to retain the hunter-gatherer mentality: eat as much as possible in case there is no food later. As always, we have turned to science to explain why we behave in a way that we think is unacceptable. Can science help us curb our eating habits, and if so, can we really alter these habits without any harm to ourselves and future generations? In this paper, I will use obesity as an example of one of the numerous problems that we have turned to science to solve at any expense.

A billion people in the world are overweight, and 22 million of these individuals are children under the age of 5. The World Health Organization (WHO) lists obesity and problems associated with obesity, like heart disease and high blood pressure, among the top 10 global health risks. (4) In economics, students learn that man has unlimited wants but limited resources. Since 1 billion of the world's population is overweight, I wonder about the rate at which we are using our resources to produce food and whether we are replacing the resources that we have used to satisfy our appetites. Furthermore, we must consider not only the land, trees, plants and animals that we are using to produce these commodities but also the negative effects of the production processes, such as deforestation and air, water and soil pollution. When we eat our next McDonald's Happy Meal, we will not consider the health problems that we are embracing as we open our mouths to bite and chew: problems like heart disease, high blood pressure and, particularly in the African-American community, diabetes.

We think of cancer and HIV as the major illnesses affecting mankind; we do not think of obesity as a major killer of humans. Obesity itself does not kill, but problems that arise from being overweight, such as heart attacks, kill many people yearly. People believe that everyone is preaching about being thin because "thin is in," that is to say, we equate beauty with thin thighs and small waists in women and a slender figure in men. Although it is true that the standard of beauty is someone who is slender, we cannot ignore that obesity places a person at risk for heart attack, diabetes, discomfort in physical activity and so on. Nor can we ignore the alarming rates of illnesses such as anorexia nervosa and bulimia, which affect people who use their weight to exert some kind of control over their lives. People wish to be accepted by others because of their looks, and people fear dying early from the complications of obesity. As a result of these concerns over obesity, researchers are looking for some component in our body that can be used to control how much a person eats, because finding it would improve the physical, mental and emotional health of millions worldwide, make the discoverers instantaneously famous and make drug companies even wealthier.

Researchers are fuelled by dreams of fame and success, so they devote much time and many resources to studying obesity, mainly its causes, so that it can be controlled. In 1994, researchers at Rockefeller University identified a hormone, Leptin, that they hoped would be the key to controlling appetite. The researchers believed that fat people might lack Leptin, so giving them Leptin would make them lose weight. However, to the great disappointment of many, this is not the case. After more research, scientists discovered that obese people have lots of Leptin and that giving them more had little effect; having too little Leptin turned out to matter more than having too much. Fat cells make Leptin, which tells the brain when fat stores are too low and more should be manufactured. The brain sends a signal to decrease the metabolism if the fat stores are too low, and so weight is gained. (4) This is why people have problems dieting. When people diet they usually reduce their fat intake, so their stores of fat are depleted. When this occurs, the fat cells make more Leptin, which tells the brain to slow the metabolism down. A person who is dieting is thus trying to counteract the body's urge to compensate for the loss of fat, which is what makes dieting so difficult. The fact that dieting is difficult because our body creates hormones to make us eat should encourage mankind to consider that obesity may not be as abnormal as society has led us to believe. Instead of looking for a way to completely stop people from gaining weight, we should probably look for a way to create food and meals that are lower in empty calories and richer in the nutrients that we need. I suggest this because people who are inclined by their bodies to eat more may eat more unhealthy foods because of our "fast-food and junk-food" society, and they could actually be healthier if we had food that satisfied the nutritional needs of our bodies.

Now that Leptin has been found not to be the wonder drug for decreasing appetite for which so many people had hoped, undeterred scientists have embarked on a new pursuit to find the real component that alters a person's appetite. Dr. Bloom and his research team at Hammersmith Hospital at the Imperial College School of Medicine in London discovered Peptide YY3-36 (PYY), a gut hormone that is made in response to the consumption of food. PYY circulates to the brain, where it stops the urge to eat. Food in the intestine starts PYY production; PYY is absorbed into the bloodstream and travels to the brain. The arcuate nucleus in the hypothalamus of the brain sorts the signals it receives from the body to determine whether the person should eat more or stop eating. PYY acts on two types of neurons: neurons that make you feel full and neurons that make you feel hungry. PYY turns on the neurons that make you feel full and turns off the neurons that make you feel hungry. (4) This is why PYY is such an interesting hormone: it tells the body not to eat. However, we should not think that we have solved the problem of obesity. First of all, PYY was only discovered in August, which means the finding is barely 4 months old. Also, there has not yet been a published article on PYY in a medical journal that would allow experts to analyze the observations made about it and see whether they hold up. Hopefully the Leptin experience has taught us that we should not prematurely praise and accept a discovery because it gives the answer that we hope to hear.

We are ready to welcome with open arms any drug that combats obesity. Why are we so ready to adopt PYY or Leptin? I believe that humans wish to control obesity not only for the health risks that it poses to individuals but for the lack of control over ourselves that obesity points out to us. People have used drugs like Fen-phen and Meridia, and dietary supplements like Metabolife's, to combat obesity. Metabolife's supplements contained ephedra, an herb with amphetamine-like effects that has been connected to the deaths of over 100 people. Fen-phen has caused hundreds of cases of deadly primary pulmonary hypertension and heart-valve damage. Meridia increases the likelihood of suffering from high blood pressure and stroke. Obesity sufferers use diet drugs to avoid problems like high blood pressure and stroke, yet Meridia increases the occurrence of these very illnesses; it has been linked to 19 deaths. (1) People use these diet drugs to change their lifestyles in a quick and painless fashion. Since so many people are overweight and desperate to lose the weight, drug companies are preying on this vulnerability to sell products that have many side effects, because people are willing to accept the risks in order to lose the weight. We are faced with two of the seven deadly sins: gluttony and greed. More should be done to monitor these wonder drugs that become available so soon after they are discovered. The public should be skeptical about how well and how long these products have been tested, because the drug companies seem to be in a great rush to get them on the market. People should consider the side effects related to taking these drugs, because such side effects are a high price to pay for losing weight.

People are looking to researchers and drug companies to help curb their obesity. However, they are not considering another approach: a change in lifestyle. Fast food companies market fast food as a quick meal that won't interfere with your daily life. As heavily as the fast food industry markets, those who are concerned with a healthy lifestyle should market the healthy life just as hard, as something desirable and attainable. Television stations should broadcast as many anti-junk food ads as there are fast food ads. This could work like the campaign against smoking in the 1960s, which was so effective that the tobacco industry willingly pulled its cigarette ads from television. Another potentially effective method of campaigning for a healthier lifestyle is to put warning labels on food stating the number of calories a person would gain by consuming a particular product.(1) These methods may not change a person's lifestyle permanently, but they will make people more aware of what they are putting into their bodies.

If people begin to think about what they are eating, they would be moving beyond the "pre-contemplation" stage of the "Stages of Change" model. This model describes the five levels of motivational readiness that a person must pass through to successfully change a health behavior. The second stage is "contemplation," when you intend to change but not anytime soon. The third stage is "preparation," when you say that you will change next month. In the "action" stage you have recently changed your behavior, and in the "maintenance" stage you have continued the changed behavior for at least six months. People should consider this model when they are deciding to change their eating habits.(2) However, if overeating is a biological disorder, then it is possible that no psychological endeavor will bring about a change in eating. Still, the power of the mind should not be underestimated.

As with all things in life, we look to science to provide us with a reason for our conduct and to tell us how to fix our behavior if we find it unacceptable. We want researchers to spend long hours and expend great energy finding a way to curb our eating habits. Then we want drug companies to make these products readily available for our use. However, we never stop to consider that perhaps "Mother Nature" has a plan for us: to eat; and by trying to go against what is ingrained in us, we run the risk of upsetting the course of evolution and affecting future generations of humans. Another question we need to ask ourselves is, since there is diversity in life, shouldn't there be diversity in how we look, so that some of us may be fat and some of us may be thin? We all have to die someday; why don't we consider heart disease, hypertension and diabetes as simply different ways of dying? Another consideration: if we alter one hormone to curb our eating, is it not possible that our bodies could develop another hormone to compensate for the change? I don't believe that obesity can be solved by taking a pill or an injection so that we don't eat. Perhaps instead of looking for a gene that causes us to eat, we should consider looking for a gene that causes us to appreciate what we have.


References

Web Resources
1) Alternet.org
2)
3)

Non-Web Resources
4) Denise Grady. "Why we eat and eat (and eat and eat)." New York Times, Science Times, Tuesday, November 26, 2002.


Infant Iron Deficiency
Name: Katie Camp
Date: 2002-12-18 10:15:17
Link to this Comment: 4134


<mytitle>

Biology 103
2002 Third Paper
On Serendip

Iron deficiency is caused by an inadequate supply of iron in a person's system. Iron, as part of the hemoglobin in red blood cells, carries oxygen to the brain and throughout the body. In infants, iron deficiency poses many threats. Infants, still developing vital connections in their brains and between systems in their bodies, require a significant supply of iron. Diagnosing, treating, and preventing iron deficiency in infants is vital to these developments. Recent studies have shown that children with iron deficiency in the first year of life "lag behind their school peers" (1) in "mental and physical development" (7). Iron deficiency is easily prevented and treated, but it is necessary to be educated about its symptoms and threats. Although there are many different ways to treat iron deficiency, the most sensible is prolonging the period of breast feeding after birth and establishing a balanced diet. In this paper I will tackle the general issues of infant iron deficiency and the many treatments available, as well as provide support for the claim that "breastfeeding is best."

Iron deficiency is sometimes present in newborn infants or develops after the first four to six months of life. Infants are born with reserves of iron for their first few months of life, and these reserves depend on the mother's own iron status. Since about "50% of pregnant women" are iron deficient (8), some infants are born without sufficient iron stores. When the mother is not iron deficient, full-term infants are normally born with 75 mg of iron per kilogram of body weight. During the first year, infants "almost triple their blood volume...and...require the absorption of 0.4 to 0.6 mg of iron daily" (5). If their diets do not support such iron absorption, their stores are depleted and the blood cannot carry enough oxygen to the brain. Premature infants have iron stores of about 64 mg per kilogram of body weight. Because of this inadequacy, their diets require about 2.0 to 2.5 mg of iron per day to supplement their stores. The rate at which an infant becomes iron deficient therefore depends both on whether the infant is carried to full term and on the mother's iron status. It remains important, however, in every situation that the infant somehow establish stable stores, so as to promote basic development.
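To put these per-kilogram figures in perspective, here is a minimal arithmetic sketch. The birth weights used below are illustrative assumptions, not from the sources; the mg-per-kilogram reserves and daily absorption requirements are the figures quoted above.

```python
# Rough comparison of iron reserves at birth, using the per-kilogram
# figures quoted in the essay. Birth weights are illustrative assumptions.

def iron_reserve_mg(weight_kg, mg_per_kg):
    """Total iron reserve at birth, in milligrams."""
    return weight_kg * mg_per_kg

term_reserve = iron_reserve_mg(3.5, 75)     # full-term: ~75 mg/kg (essay figure)
preterm_reserve = iron_reserve_mg(2.0, 64)  # premature: ~64 mg/kg (essay figure)

# Daily absorption required (essay figures): 0.4-0.6 mg for term infants,
# 2.0-2.5 mg for premature infants, so the smaller preterm reserve must
# also stretch across a much higher daily demand.
print(f"term: {term_reserve:.1f} mg, preterm: {preterm_reserve:.1f} mg")
```

The point of the comparison is simply that a premature infant starts with a smaller absolute reserve while needing several times more absorbed iron per day, which is why its stores run out sooner.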

It is clear that in the infancy stage of life it is necessary to absorb a certain amount of iron in order to support the body's iron stores. The primary cause of inadequate iron absorption in infants is an inadequate diet. Since breast milk is the general diet of an infant, it is necessary by six months to begin supplementing that diet. "Breast feeding without complementing with iron rich foods becomes an increasing risk factor for iron deficiency" (4). Often, though, the mother stops breast feeding her child completely, depleting that source of easily absorbed iron; it is mostly replaced with cow's milk, which is not normally high in iron. Some cases of iron deficiency are "due to parasitic infections and repeated attacks of malaria" (3). Most often these cases occur in developing countries in regions of Africa, Asia and Latin America. Specifically, the "parasitic infestations causing iron deficiency are hookworm (Ancylostoma and Necator) and Schistosoma" (3). Parasites prevent the absorption of iron into the blood by absorbing it for themselves.

Iron deficiency has a variety of detrimental effects on infants. The lack of iron often means a lower birth weight and slower weight gain due to a state of anorexia that can ensue (5). Because iron, in the form of the hemoglobin in red blood cells, is responsible for transporting oxygen throughout the body, low iron means the infant may develop a low tolerance for exercise and physical activity. The immune system is also weakened by iron deficiency, and it is important to limit an iron deficient infant's exposure to infections because of this increased risk (6). Finally, a recently probed issue is the effect of iron deficiency on the mental and intellectual development of a child. Studies have shown that iron deficiency "during the first few sensitive months of life can lead to long-term delays in mental and physical development" (7). Dr. Shabib's article mentions a correlation between the lack of iron and a decreased attention span (5). Attention deficit disorders often bring slower learning and increased difficulty in school. These infants grow up to be participants in society who take on leading and involved positions. This "mental impairment" (2) is a convincing reason to treat iron deficiency.

It is extremely important to treat iron deficiency quickly and prevent further anemia so that such results do not follow children past their infant stages. Immediate treatments after birth, prescribed supplements, and dietary decisions all play a role in the treatment of infant iron deficiency. "Late clamping of the cord" (3) immediately after birth allows additional blood to flow from the placenta to the child. This method adds approximately 50 mg of iron to the infant's reserves, thus dealing with iron deficiency at birth. After birth, and after the depletion of the infant's body iron stores, a tool for treating iron deficiency is regulating the child's diet. First, foods rich in iron are important, like liver and dark green leafy vegetables. More important, however, is the intake of foods that "enhance iron absorption" (3). Examples are animal products and fruits and vegetables that contain vitamin C. It is necessary to avoid products like "tea or coffee, and calcium supplements...or [take] 2 hours after meals" (3), which counteract the bioavailability of the iron. In addition to introducing iron rich foods into the diet of infants, iron supplements can be prescribed to fill in the gaps of a diet. In developing countries, where cases of iron deficiency based on parasites and malnutrition of the child and mother are more likely, supplemental treatments make sense.
In areas of high prevalence of iron deficiency anemia, 400 mg ferrous sulphate (2 tablets) per day or once a week, with 250 µg folate for 4 months is recommended for pregnant and lactating women. In areas of low prevalence 1 tablet of ferrous sulphate daily may be sufficient, but in these areas another approach is to give iron therapy only if anemia is diagnosed or suspected. (3)

These treatments require additional care and are useful in cases where a mother cannot provide her child with adequate iron resources either during birth or after. However, the most popular solution to iron deficiency seems to come from breast feeding. It has often been noted that there is a strong connection between when an infant is weaned and its development of iron deficiency. Much depends on when a mother deems it appropriate to stop feeding her child breast milk relative to the period in which many infants deplete their body iron stores. This is because there is a huge difference between the content of breast milk and the cow's milk or soy-based product that is substituted for it. First, breast milk "is a developmental fluid...[that] changes with baby's needs" (2). In terms of the infant's need for iron, it contains 0.3 to 0.5 mg of iron per liter, of which fifty percent is absorbed by the infant; only ten percent of the 1.0 to 1.5 mg per liter of cow's milk iron is absorbed. In the case of soy milk or fortified milk products that contain 12 to 13 mg of iron per liter, only four percent is absorbed. The higher absorption of breast milk iron is due to its greater "bioavailability." Cow's milk contains "high concentrations of calcium, phosphorous, and protein in conjunction with the low concentration of ascorbic acid," which may be "responsible, in part, for the poor absorption of iron from cow's milk" (5).
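These absorption figures can be turned into a rough per-liter comparison. The sketch below uses the midpoints of the concentration ranges quoted above; the midpoints are my own simplifying assumption, while the concentrations and absorption fractions are the essay's figures.

```python
# Iron actually absorbed per liter = concentration (mg/L) * absorbed fraction.
# Midpoints of the quoted ranges are used as illustrative inputs.

def absorbed_mg_per_liter(concentration_mg_l, fraction):
    """Milligrams of iron an infant absorbs from one liter of milk."""
    return concentration_mg_l * fraction

breast = absorbed_mg_per_liter(0.4, 0.50)      # 0.3-0.5 mg/L, 50% absorbed
cow = absorbed_mg_per_liter(1.25, 0.10)        # 1.0-1.5 mg/L, 10% absorbed
fortified = absorbed_mg_per_liter(12.5, 0.04)  # 12-13 mg/L, 4% absorbed

print(f"breast: {breast:.2f} mg/L, cow: {cow:.3f} mg/L, fortified: {fortified:.2f} mg/L")
```

On these numbers, breast milk delivers more usable iron per liter than unfortified cow's milk despite its lower total iron content, which is exactly the bioavailability point the essay is making.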

Infant iron deficiency obviously has a variety of effects, some of which cause longer-term problems. The most convincing of these issues, in building an argument for increased awareness and more comprehensive treatment, is the theory that infant iron deficiency leads to diminished mental development. Although I have suggested many solutions to the problem of depleted body iron stores in children, breast feeding is the most universal and natural treatment for iron deficiency. In developing countries, malnutrition may prevent a mother from providing enough iron to her child during pregnancy and breast feeding. In general, though, it is a simple solution to breast feed an infant for as long as possible before weaning it to other milk sources. In addition to a breast milk diet, adding iron rich foods is perhaps the most complete way to establish stable iron stores during the first stages of an infant's life.


References

1) Iron Deficiency in Babies, brief overview of infant iron deficiency.

2) Introduction to Clinical Science-Newborn Nutrition, additional syllabus notes that detail specifics about infant iron deficiency.

3) Postpartum care of the mother and newborn: a practical guide-Chapter 4 Maternal Nutrition, information about causes of iron deficiency, including effect of parasites, as well as detailed list of iron rich food sources and supplement information.

4) Iron Deficiency in Children, general information about parasitic malabsorption and advantage of breast feeding.

5) Meeting the Iron Needs of Infants and Young Children, editorial by Saudi Dr. Shabib detailing symptoms and threats of iron deficiency, body iron stores, and difference between breast milk and substitutions.

6) Iron Deficiency Anemia, Yale-New Haven Hospital general information website on iron deficiency, explains what iron is used for, etc.

7) Preventing Iron-Deficiency Anemia During Infancy, another source that mentions the long term affects on the mental and physical development of a child because of iron deficiency.

8) Iron Deficiency Anemia, general description and information.


Once an Addict, Always an Addict?: Understanding
Name: Lydia Parn
Date: 2002-12-18 21:55:24
Link to this Comment: 4138

<mytitle>

Biology 103
2002 Third Paper
On Serendip

Remarkable advances in the neurosciences are creating the ability to predict and alter human behavior in ways that would have been unimaginable a few years ago. A current goal of neuroscience is to understand the mechanisms that mediate the transition from occasional, controlled drug use to the loss of behavioral control that defines chronic use (1). The question of addiction concerns the process by which drug-taking behavior, in certain individuals, can quickly evolve into drug-seeking behaviors that take place at the expense of most other activities (2).

Addiction, also known as substance dependence, can be defined as repeated self-administration of a substance, despite attempts to refrain from use and knowledge that the substance abused has effects detrimental to health or social concerns (2). Many factors contribute to the development of addiction; drug dependence does not just happen. A person's initial decision to use a drug is influenced by genetic, psychosocial and environmental factors. Once the drug has entered the body, however, the drug can promote continued drug-seeking behavior by acting directly on the brain. Drug addiction appears to be the result of a series of neurochemical changes in the activity of neurons of the brain (3).

Understanding the neurochemical changes that drugs induce in the brain provides insight into the causes of drug abuse, its neurochemical basis, and foundations for improved treatments. The brain is composed of billions of neurons and a large number of chemicals. Specific areas of the brain are involved in the control of physiological and psychological functions. The activities of neurons and their interactions across different areas of the brain control our homeostatic functions, our very being. Because neurotransmitter levels shift in response to changes in physiological and psychological functions, subtle changes in neuronal activity can produce major changes (4).

Neurons release neurotransmitters that act on specific proteins called receptors; activation of these receptors then elicits a response. Small disturbances in the activity of particular neurons can have considerable effects on a person's mood. The level of a neurotransmitter can be increased either by its release from nerve endings or by inhibition of the process that terminates its action (4). The major neurotransmitters involved in the actions of drugs of abuse are primarily dopamine and serotonin. Most drugs of abuse increase the release of dopamine in areas of the brain believed to play a role in the rewarding properties of drugs (5). Drugs of abuse are pharmacologically diverse and alter neurotransmitter dynamics in various ways. Stimulants such as cocaine and amphetamines induce the release of dopamine from dopamine-containing neurons and block the uptake of neurotransmitters into neurons. Opioids such as heroin activate opioid receptors on neurons and result in dopamine release. Ecstasy increases serotonin release. Crack induces a greater degree of psychological dependence than its parent compound cocaine because it has better access to the brain and is absorbed faster (4).

So drugs do cause short-term surges in dopamine levels and other messengers in the brain that temporarily signal pleasure and reward. But the brain quickly adapts to this rush; pleasure circuits become desensitized, to the extent that the brain can suffer withdrawal after a binge (5). In recent years, however, researchers have moved away from studying short-term effects and begun to focus on the long-term consequences of drug use in order to better understand drug relapse. After long-term use, many drugs no longer produce feelings of euphoria, in part because of desensitization or tolerance. So why do drug addicts continue to use drugs after the drugs no longer produce pleasure and the addicts are trying to abstain? To find out, researchers are seeking changes in parts of the brain that help control motivation, looking for changes that persist weeks and even years after the last drug exposure.

Addiction seems to rely on some of the same neurological mechanisms that underlie learning and memory, and cravings are triggered by memories and situations associated with past drug use. Recent studies have revealed a "convergence between changes caused by drugs of addiction in reward circuits and changes in other brain region mediating memory" (5). Both learning and drug exposure reshape synapses, initiating surges of molecular signals that turn on genes and change behavior in lasting ways. Understanding these processes could speak to the core clinical problem of drug addiction and help conquer relapse in ex-addicts. The key clinical issue is "understanding how associative memories are laid down that change the emotional value of drugs and created deeply ingrained responses to those cues [that trigger relapse]" (3).

When it comes to kicking the habit, the process of withdrawal is the easy part; it is only after the body detoxifies itself that the real challenge begins. Even ex-addicts with the strongest resolve, and plenty of external support, struggle to refrain from use and experience cravings years after the last hit. Though each drug of abuse has its individual effects, all attack the brain's dopamine reward circuit. Long-term abuse reduces the number of receptors that respond to dopamine. Since dopamine fuels motivation and pleasure, and is crucial to memory and learning, this loss of receptors correlates with memory problems and a lack of motor coordination. Once the brain becomes less sensitive to dopamine, it becomes "less sensitive to natural reinforcers" (6). The pleasure of seeing a friend or taking a walk does not always hold the same value for heavy drug users; sometimes the only stimuli still strong enough to activate the motivation circuit are more drugs. Understanding how drugs rearrange one's motivational priorities can help explain why addicts often partake in senseless activities, and show why ex-addicts often relapse into chronic drug taking (6).

A better understanding of the neurological changes that occur in the brains of drug addicts may point to improved ways to treat them. However, viewing addiction purely as a brain disease or a chemical imbalance assumes that choice and will play no role in the addict's actions. While it is necessary to further study the long-term effects of drug abuse on the brain, it is also crucial to understand that for an addict to recover successfully, one's own resolve and self-determination to stop drug use is central.

References

1) Is Drug Addiction a Brain Disease? Satel, Sally L. and Frederick K. Goodwin, Program on Medical Science and Society, Ethics and Public Policy Center, Washington D.C., 1998.

2) The Psychology and Neurobiology of Addiction: An Incentive-Sensitization View. Robinson, Terry E. and Kent C. Berridge, Department of Psychology, The University of Michigan, 2000.

3) The Neuroscience of Addiction. Koob, George F., Pietro Paolo Sanna and Floyd E. Bloom, Department of Neuropharmacology, The Scripps Research Institute, 1998.

4) Beyond The Pleasure Principle. Helmuth, Laura. Science Magazine, Volume 294, 2 November 2001.

5) The Neurobiology of Addiction: An Overview. Roberts, Amanda J. and George F. Koob. National Institutes of Health, National Institute on Alcohol Abuse and Alcoholism, 1997.

6) Neuroscience: Implications for Treatment. Petrakis, Ismene and John Krystal. National Institutes of Health, National Institute on Alcohol Abuse and Alcoholism, 1997.


Gender: Biological or Cultural?
Name: Anne Sulli
Date: 2002-12-19 02:20:54
Link to this Comment: 4143


<mytitle>

Biology 103
2002 Third Paper
On Serendip

What does it mean to be male or female? How, if at all, do males and females exhibit different behavioral activity? Can these differences be adequately measured? Are they culturally or biologically influenced? These questions attempt to locate the true nature of sex and gender, challenging the accepted notion that the two naturally correlate. Sex—a biological concept—refers to the different reproductive functions exhibited by males and females (1). Gender, which is often mistakenly used as a synonym for sex, is a psychological concept (1); it points to the behavioral differences between men and women. Recent studies, particularly within the feminist community, have explored gender identity and roles in society. The current, popular view within many scholarly circles asserts that gender differences are merely cultural and societal constructions (2). Theorists such as Judith Butler and Adrienne Rich claim that a true and inherent gender does not exist—that accepted gender norms are compelled by traditional structures, attitudes, and institutions (2). Scholar Anne Fausto-Sterling, in fact, argues that "Male and female babies are born. But those complex, gender-loaded individuals we call men and women are produced" (4). Such arguments are well-informed, and in many ways viable and legitimate. Still, others would argue that biology also plays a significant role in gender development. It seems that the best and most thorough way to understand gender is to consider all possible influences—both biological and environmental. Biology indeed contributes an important and inimitable voice to the discussion of gender development.

There are obvious external differences between male and female anatomy—differences in genitalia, for instance. Important internal distinctions exist as well such as separate gonadal tissues (ovarian or testicular), hormonal balances (estrogen and androgen), and reproductive organs (3). In addition to these primary sex traits, a person will also later develop certain secondary sex characteristics (facial or bodily hair, i.e.) (3). Sex is essentially determined at the moment of conception when a female egg, bearing an X-chromosome, is fertilized by a sperm carrying either an X or a Y chromosome. Except for this single chromosomal difference, male and female embryos remain indistinguishable from one another (3). Yet it is this small difference that will, after approximately seven weeks of growth, ignite a chain of biological developments that differentiate the two sexes (3). Some of these differences will gear the developing being toward certain gendered behaviors.

A plethora of gender differences exist between males and females—differences which are often assumed to be inherent and natural. Clearly, not all of these distinctions are innate, yet it seems that some are more valid and consistent than others (1). Indeed, biological influences and contributions to sex differentiation can offer a foundation for these divisions. The other contributing factor is the environmental component. Social and cultural influences that enforce gender norms are frequently imposed at birth (1). Direct teaching, observation, treatment by parents, toys and activity assignment, clothing, and enforced personality traits all manipulate the development of a child's assumed gender role (1). It is the biological force, however, that dictates a person's physiological, neural, and hormonal makeup—and many of the distinct behavioral differences among males and females emerge from these factors.

Men and women clearly function in accordance with their own unique physiology. Women, for example, have both a lower metabolic rate (10% lower after puberty) and a higher percentage of body fat than do men (2). Men possess more muscle mass—because they tend to convert food to energy rather than fat—and denser and sturdier bones, tendons, and ligaments (2). Men have more sweat glands and can thus release heat quickly. Women, rather, have more insulation, energy reserves (and thus greater endurance) due to their higher content of subcutaneous fat (2). Men can circulate more oxygen than women due to their larger windpipes, lung capacity, and hearts (2). These characteristics signify a higher potential for activity, especially movement that involves short bursts of strength, in males. The limitations in locomotive activity which women experience can also be attributed to their bodies' capacity for pregnancy (5). Because they are programmed for child-bearing, women are equipped with a smaller range and capacity for physical performance (5). Furthermore, females have mammary glands (which provide nutrition and immune system codes to children) that also hinder locomotive movement (5). Such distinct physical makeup undoubtedly programs men and women for different activities. These characteristics also provide support for several of the gender stereotypes against which many argue; namely those involving physical size, strength and the abilities that arise from these traits.

Additionally, differences between the male and female nervous systems may also offer insight into seemingly controversial gender differences. Men seem to have fewer sensory nerve endings in their skin and therefore possess a higher tolerance for pain (2). The common "two-point discrimination test," in which the subject is to differentiate between two closely positioned pricks on the skin, also indicates that females are more sensitive to touch (2). Along with this sense, women prove to be more responsive and sensitive in their senses of hearing, smell, and taste (2). What do these results imply? Perhaps the female's highly sensitive nervous system and her acute senses underlie the stereotype which claims that women are more perceptive, aware, and responsive than men (2). These qualities, then, suggest that women possess better communication skills and can maintain more successful social interactions with others. The different physical makeup of men and women may thus lend support to several of the personality and physical gender "stereotypes" that exist today.

The differences in male and female hormone types and proportions are also important factors in gender behavior. Both sexes possess androgens and estrogen, but at different levels (2). With the onset of puberty, male and female hormonal makeup becomes drastically distinct. After puberty, the male testosterone level is fifteen times greater than that of a female, and females possess approximately eight to ten times the male level of estrogen at this time (2). The existence and ratios of these hormones affect all organ systems—heart and respiratory rates, for example, are particularly influenced (2). Hormones also play a key role in the distinct responses to stress which men and women exhibit (2). Initially, both sexes display the same reaction: bursts of adrenaline which increase heart rate, blood pressure, responsiveness, alertness, and energy level (2). But prolonged and chronic stress elicits disparate responses between men and women. Females begin to release more estrogen (which in large amounts can sedate one's system) and cortisol, while reducing serotonin levels—which are crucial for normal sleep patterns (2). Women also experience a reduced level of norepinephrine, which is necessary for one's sense of well-being (2). These responses suggest that, under heavy stress, women are more likely to suffer depression. The male system, conversely, reacts by increasing testosterone levels (2). In addition, androgen compounds affect the male system in a way that causes it to be hyper-reactive (2). Aggression and sexual impulses are consequently heightened (2). These responses, dictated by sex-specific hormones, also provide evidence for consistent and distinct gender behavior.

In order to understand more fully the differences between biologically and culturally driven influences on gender, it is useful to observe deviations from "standard" gender and sex patterns. For example, males with Klinefelter's syndrome (in which a male possesses an extra X chromosome) and other disorders that lower testosterone levels will often assume typical "female" traits (2). These characteristics include a longer life expectancy and a higher verbal aptitude (2). Additionally, males whose mothers were treated with DES—a synthetic estrogen—tend to be less aggressive, with a lower tendency to exhibit stereotypical male attributes (2). Likewise, some women who are given androgens during pregnancy (so as to prevent miscarriage) have female babies that become "tomboyish" and more active, displaying results on aptitude tests similar to those obtained by males (2). These cases suggest that hormonal makeup is extremely influential, and it can often be the harbinger of one's future gender roles and identity.

Evolutionary psychology, a field which attempts to connect evolutionary development with current behavioral patterns among the sexes, also contributes an important voice in this dialogue. Because women are the child-bearers of any population, for example, they are more "evolutionary important" than men (5). A loss of males in a given population, therefore, would not be extremely detrimental to the survival of the next generation. This theory argues that because men are not burdened by an "evolutionary pressure," they can behave more "courageously," displaying "risk-taking" behavior (5). Women, on the other hand, are more valuable to the population and tend to behave more conservatively (5). Another aspect of this theory attempts to locate the origin or reason that women exhibit compassion and self-sacrifice more often than men. According to theorist Daniel Pouzzner, women display "other-centeredness," while men are more "self-centered." Female pregnancy is the root of this difference. Pouzzner argues that pregnancy is "an arrangement in which the female is parasitized by a separate organism" (5). This situation forces women to focus their concern and attention onto others, namely their children. Conversely, in evolutionary history, a male was often uncertain as to which offspring was his own (while the female was, of course, always aware) (5). A man's instinct to protect and care for others is thus far lower than that of a woman. Evolutionary forces clearly play a role in gender development.

Although cultural and societal forces undoubtedly shape our notions of gender, it is important also to consider the biological contributions to gender development. That is, while gender stereotypes may certainly be false, some are perhaps more consistent and can be traced to a biological or evolutionary origin. It is also vital that we view biology as only one voice in a larger conversation: biology does not enforce stereotypes or imprison individuals within certain categories. It merely adds another perspective to the mysterious and ambiguous nature of gender. Indeed, we cannot study gender, or any other issue, without considering the influences and contributions of all fields. To gain the deepest understanding, it is crucial to approach the ideas of gender development and construction with an open mind, observing the matter through every available lens.


References

1)Gender,

2)The Biological Basis for Gender-Specific Behavior,

3)Deciphering the Language of Sex,

4)"Anne Fausto-Sterling's. . . ,

5)The Evolutionary Psychology of Human Sex and Gender,


Nonverbal Learning Disabilities
Name: Kyla Ellis
Date: 2002-12-19 10:14:55
Link to this Comment: 4147



Biology 103
2002 Third Paper
On Serendip

"Learning Disabilities" is a term that gets used a lot these days. Even young children are familiar with the phrase, because it is used to reason why someone may not seem as smart as they are, or might have behavioral problems. The truth of the matter is, however, that learning disabilities are more common than we think they are, and that often times they go unnoticed or un diagnosed because the person who is suffering from them does not have behavioral problems, or has learned to "get by" in school work and so blends in with the academic community. There are many different kinds of learning disabilities, ranging from dyslexia, to spatial development disorder, to Attention Deficit/Hyperactive Disorder. Statistically, children with these conditions are more likely to do poorly in school and less likely to pursue higher education. Though we don't know fully the cause or fool-proof treatment of many of these disorders, we are learning, and have been able to develop many processes to help students with these disabilities have the same opportunities to learn as children not affected. In this paper, I wanted to find out the definition of a non-verbal learning disability, as well as the cause and possible treatment.

Children with nonverbal learning disability (NLD) are often assumed to be precocious as toddlers because they have a very easy time developing their vocabulary, memory skills, and apparent reading ability. As the child starts preschool, parents may notice that he or she has trouble interacting with other preschoolers, learning self-help skills, or adapting to new situations. These problems are often dismissed with little thought. The child usually ends up floating through early elementary school without much problem in the academic realm, perhaps occasionally mixing up an addition sign and a subtraction sign, or some other small detail.

When children enter the upper elementary grades, they are expected to handle more things on their own. This is where the child with NLD begins to have trouble. They get lost, forget to do homework, seem unprepared for class, have difficulty following directions, struggle with math, can't read their textbooks, can't write an essay, continually misunderstand both their teachers and their peers, and are often anxious in public and angry at home. Teachers will complain that the child is lazy, rude, and uncooperative, but in reality the child is frustrated because the classroom is not suiting their needs: they do, in fact, have a learning disability (1).

Children with nonverbal learning disorders (NLD) often seem awkward and have a very hard time with both fine and gross motor skills. Riding bikes, kicking soccer balls, and tying shoes are all very challenging tasks, almost impossible to master. Children will often "talk their way through" the simplest of motor activities. They learn little from experience or repetition and cannot generalize information (2). Students frequently "shut down" when faced with academic pressures and performance demands that require more than they feel they can do. Comprehension skills are weaker than those of the other children in their grade (6). Many words that they hear, read, and use are "empty," in that the words hold no meaning for them; they are simply repeating what they have heard.

NLD i